AI vs Manual Amazon PPC: A Real Case Study (Tropeza, 2024)
In August 2024, the same Amazon brand ran AI-managed Sponsored Products campaigns alongside a parallel set of manually-managed campaigns. Same account, same month, same product catalog. The AI side returned 3.1 RoAS at 32.7% ACoS; the manual side returned 1.8 RoAS at 56.6% ACoS. It is a real, side-by-side Amazon PPC case study where the difference is clean: cheaper clicks, higher conversion rate, lower ACoS, more orders, more revenue. The numbers below are the entire point.
Ownership disclosure. I am the founder of Daniks.AI, the AI-native Amazon PPC automation tool that managed the auto side of this case study. I built it for my own listings first: the Daniks cookware brand that reached Top-1 in Germany and is currently Top-20 in the USA. Tropeza is one of the customer brands now running on Daniks.AI, not my brand. They sell artificial plants and trees on Amazon US, and they ran the manual and auto sides simultaneously on their own seller account. The dashboard data below is from their account, published with permission. Read this Amazon PPC case study with that context.
What was tested
The setup was deliberately simple: a single Amazon US seller running two parallel PPC tracks on the same brand portfolio for the full month of August 2024.
- Track A (Auto): Daniks.AI managing Sponsored Products campaigns end-to-end: campaign creation, keyword harvesting, bid adjustments, negative-keyword management, and target-ACoS optimization. The AI agent was set to a fixed ACoS target and given autopilot authority.
- Track B (Manual): the brand’s in-house operator running parallel Sponsored Products campaigns with the same product catalog, using the playbook most experienced sellers run: manual bid adjustments, manual keyword research, manual negative-keyword harvesting from the search-term report, manual placement modifiers.
Both tracks pulled from the same Amazon Ads inventory, defended the same brand on the same marketplace, and ran for the same 31 days. What was not equalized: Track A received roughly four times the ad spend Track B did, because the brand’s allocation logic gave more budget to the campaigns hitting their target ACoS, which Track A was, and Track B mostly was not. That is itself part of the story; more on it below.
Key Takeaways
- On a 31-day run on the same Amazon brand, Daniks.AI delivered 3.1 RoAS vs 1.8 RoAS for manually-managed PPC, a 72% efficiency advantage at scale.
- ACoS dropped from 56.6% to 32.7%, a 23.9 percentage-point reduction, which on most brands is the difference between an unprofitable PPC program and a profitable one.
- CPC was 11% cheaper on the AI side ($0.68 vs $0.76) and conversion rate was 73% higher (2.48% vs 1.43%); the two effects compound.
- Volume scaled cleanly: Daniks.AI drove 3.5× the impressions and 8× the orders of the manual side at 4× the spend, so the marginal dollar bought more.
- This is one Amazon PPC case study, single brand, single month, directional, not statistically conclusive. Read the caveats section before drawing universal conclusions.
The brand: Tropeza on Amazon US
Tropeza is a US Amazon seller in the artificial plants category: indoor artificial trees, fiddle leaf figs, and decorative greenery for home and office. Mid-tier price point, year-round demand, durable goods, no certification overhead. The kind of brand that runs cleanly on the platform and can absorb a properly-optimized PPC program. That is also why it makes a useful case study: the test conditions are not exotic. If the auto-vs-manual gap shows up on a normal year-round private-label brand, the result is more transferable than a comparison run on a heavily seasonal or heavily branded SKU.

You can see the Tropeza brand store on Amazon for the catalog. The case-study comparison below is for the entire brand’s Sponsored Products spend across both tracks, not a single ASIN.
The headline numbers
The dashboard view from the Daniks.AI Amazon App, Aug 1 – Sep 1, 2024:
| Metric | Daniks.AI (Auto) | Manual (Human) | Edge |
|---|---|---|---|
| Impressions | 419,631 | 121,318 | Auto +246% |
| Clicks | 4,509 | 977 | Auto +361% |
| Spend | $3,069 | $746 | Auto +311% |
| CPC | $0.68 | $0.76 | Auto -11% |
| Orders | 112 | 14 | Auto +700% |
| Sales | $9,383 | $1,317 | Auto +612% |
| RoAS | 3.1 | 1.8 | Auto +72% |
| ACoS | 32.7% | 56.6% | Auto -23.9 pp |
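If you want to recompute the table's derived metrics yourself, the arithmetic is simple. A minimal sketch using only the raw dashboard figures above (clicks, spend, orders, sales):

```python
# Raw dashboard figures from the table above (Aug 1 - Sep 1, 2024).
auto = {"clicks": 4509, "spend": 3069.0, "orders": 112, "sales": 9383.0}
manual = {"clicks": 977, "spend": 746.0, "orders": 14, "sales": 1317.0}

def derived(t):
    """Compute the four per-unit efficiency metrics from raw totals."""
    return {
        "cpc": t["spend"] / t["clicks"],    # cost per click
        "cvr": t["orders"] / t["clicks"],   # conversion rate
        "roas": t["sales"] / t["spend"],    # return on ad spend
        "acos": t["spend"] / t["sales"],    # advertising cost of sales
    }

a, m = derived(auto), derived(manual)
print(f"Auto:   CPC ${a['cpc']:.2f}, CVR {a['cvr']:.2%}, RoAS {a['roas']:.1f}, ACoS {a['acos']:.1%}")
print(f"Manual: CPC ${m['cpc']:.2f}, CVR {m['cvr']:.2%}, RoAS {m['roas']:.1f}, ACoS {m['acos']:.1%}")
```

Every derived number in the table falls out of those four ratios; nothing in the comparison depends on anything you cannot recompute from the raw totals.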
Two things to notice before walking through the metrics one by one. First, the AI side won on every per-unit efficiency metric (CPC, CVR, ACoS, RoAS), not just on volume. Volume could be explained by budget allocation; efficiency cannot. Second, the gap is the kind of gap that decides whether a brand’s PPC program is funding growth or quietly draining margin.
Click economics: the AI bought clicks for less
CPC on the auto side came in at $0.68 vs $0.76 on manual, the AI was buying impressions and clicks for 11% less than the human operator. Why does that show up?
Manual PPC depends on someone reading the search-term report on a cadence, usually weekly, sometimes daily for the most active operators, and adjusting bids in batch. Between adjustments, bids drift. Some keywords get overbid because the operator has not had time to walk them down. Others get underbid because the auction shifted overnight. The CPC you actually pay is the average of those drift periods.
A live AI agent does not have a cadence; it has a frequency. Bids respond to live auction signals (competing bids, time-of-day patterns, recent conversion data) on a near-continuous loop. The result, as the dashboard shows, is a CPC that hovers closer to the auction’s true clearing price for your specific bid strategy. Eleven percent does not sound like much in isolation. Across 4,509 clicks, it is the difference between a $3,069 month and a roughly $3,427 month: about $358 of margin recovered with no ACoS cost.
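For intuition about what a continuous bidder is iterating on, here is the textbook target-ACoS bid ceiling that most automated bidders are some elaboration of: bid at most target ACoS × order value × expected conversion rate. This is a toy illustration, not Daniks.AI's actual algorithm, and the inputs below are just the auto side's dashboard averages:

```python
def target_acos_bid(target_acos, avg_order_value, expected_cvr):
    """Textbook bid ceiling for a target-ACoS strategy.

    Expected revenue per click is avg_order_value * expected_cvr;
    bidding target_acos of that keeps spend/sales at the target."""
    return target_acos * avg_order_value * expected_cvr

# Illustrative inputs from the auto side's dashboard averages:
aov = 9383 / 112   # ~$83.78 average order value
cvr = 112 / 4509   # ~2.48% conversion rate
bid = target_acos_bid(0.35, aov, cvr)
print(f"max bid at 35% target ACoS: ${bid:.2f}")
```

The point of a continuous agent is not the formula, which any operator knows; it is re-evaluating the `expected_cvr` input every few hours instead of once a week, so the ceiling tracks the auction instead of last week's report.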
If you want the manual playbook the human side was running here, my Amazon PPC strategy guide walks through the three-phase campaign structure I still use as a baseline reference for any account.
Conversion rate: where the AI agent really won
CPC tells one part of the story. Conversion rate tells the more decisive part.
- Auto CVR: 112 orders / 4,509 clicks = 2.48%
- Manual CVR: 14 orders / 977 clicks = 1.43%
The AI side converted clicks to orders at 1.73× the rate of the manual side. That gap is bigger than the CPC gap, and it is more interesting because it is not about bid math; it is about which clicks you bought in the first place.
When the AI agent harvests converting search terms into exact-match campaigns and aggressively negative-matches non-converters, the next batch of clicks comes from a tighter, more-qualified inventory. The compounding loop runs faster than a human can: a weekly STR review pushes converting terms once a week; a continuous AI agent does it every few hours. By the time the manual side has cleaned up last week’s report, the AI side has already shifted spend toward the keywords that are paying back this morning.
The compounding effect is the headline math: 11% cheaper clicks buy roughly 12% more clicks per dollar, and each of those clicks converts at 1.73× the rate, so the auto side earned close to twice the orders per dollar of spend. The 72% RoAS gap is that same compounding, damped slightly by the manual side’s higher average order value.
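To see the compounding explicitly, multiply the two per-unit gaps into orders per ad dollar. A sketch using the table's figures:

```python
# Per-dollar compounding of the two per-unit gaps (figures from the table).
auto_cpc, manual_cpc = 3069 / 4509, 746 / 977   # $ per click
auto_cvr, manual_cvr = 112 / 4509, 14 / 977     # orders per click

clicks_per_dollar_gain = manual_cpc / auto_cpc   # cheaper CPC -> more clicks per $
cvr_gain = auto_cvr / manual_cvr                 # better click quality -> more orders per click
orders_per_dollar_gain = clicks_per_dollar_gain * cvr_gain

print(f"clicks per dollar: {clicks_per_dollar_gain:.2f}x")
print(f"conversion rate:   {cvr_gain:.2f}x")
print(f"orders per dollar: {orders_per_dollar_gain:.2f}x")
```

The orders-per-dollar gain comes out larger than the 72% RoAS gain because RoAS folds average order value back in, and the manual side's average order happened to be bigger.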
ACoS and RoAS: the profitability gap
Two metrics, same underlying story:
- ACoS dropped from 56.6% to 32.7%, a 23.9 percentage-point reduction, or about a 42% relative improvement.
- RoAS rose from 1.8 to 3.1, a 72% improvement.
ACoS and RoAS are reciprocals scaled differently, so they are saying the same thing in different vocabulary. What matters is what those numbers mean for the brand’s P&L.
A 56.6% ACoS on a typical private-label product, after accounting for Amazon’s fee stack (referral fees, fulfillment fees, storage, returns), usually leaves the seller losing money on PPC-driven sales. The campaigns are running, the dashboard shows revenue, the brand looks active, and the underlying economics are quietly negative.
A 32.7% ACoS on the same product is inside the band most operators target: 18-35%, depending on category margin. Below 35%, the campaign is funding inventory turnover and review velocity. Above 50%, it is taxing the rest of the business.
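Where a given ACoS lands on your P&L depends entirely on your pre-ad margin: the break-even ACoS is just the profit left after product cost and Amazon's fees, as a fraction of price. A sketch with hypothetical unit economics; the numbers are illustrative, not Tropeza's:

```python
def break_even_acos(price, cogs, referral_rate, fulfillment_fee):
    """ACoS at which a PPC-driven sale nets exactly $0.

    Profit remaining after COGS and Amazon fees, before ad spend,
    expressed as a fraction of the sale price."""
    referral = price * referral_rate
    profit_before_ads = price - cogs - referral - fulfillment_fee
    return profit_before_ads / price

# Hypothetical mid-tier home-decor unit: $40 price, $12 COGS,
# 15% referral fee, $7 FBA fulfillment fee.
be = break_even_acos(price=40.0, cogs=12.0, referral_rate=0.15, fulfillment_fee=7.0)
print(f"break-even ACoS: {be:.1%}")
```

With these assumed numbers the break-even sits at 37.5%, so a 56.6% ACoS loses money on every ad-driven sale while 32.7% clears the bar with margin to spare. Run the same arithmetic on your own fee stack before deciding which band you are in.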
The auto side was profitable PPC. The manual side, on its own, was a margin leak.
Honest caveats, read these before drawing big conclusions
Case studies are useful exactly because they show what happened on a real account. They are easy to misread because they are not controlled experiments. Four caveats specific to this Amazon PPC case study:
1. Single brand, single month. The mathematical rigor of “n=1 over 31 days” is what it is. The pattern is consistent with what I see across hundreds of brands now running on the platform, but a single dashboard view is directional, not predictive. Different categories, different price points, and different competitive densities produce different gaps.
2. Budget allocation was not equal. The auto side spent $3,069; the manual side spent $746, a 4× difference. The brand’s allocation logic routed more money to whichever campaigns were hitting their target ACoS, which the auto side was. RoAS and CPC and CVR are unit-normalized, so the efficiency comparison remains fair. But if you want to ask “what would manual look like with 4× the budget,” this dataset does not answer that question. The honest answer is: probably better than 1.8 RoAS, probably not as good as 3.1, because the manual side was bottlenecking on operator attention, not on budget.
3. No A/B isolation. The auto and manual tracks ran on different campaign sets, not on the same SKU/keyword pairs. A clean A/B would alternate the same keywords between auto and manual on alternating weeks. This was a “two strategies operating side-by-side on the same brand portfolio” comparison, useful, but not isolated.
4. Halo effects in the same account. When the auto side drove more impressions on the brand’s catalog, the manual side may have benefited from increased branded search velocity. If the auto side were turned off, the manual side’s numbers might have been worse. The gap might be wider in isolation, not narrower, but it could go either direction.
Where manual PPC still has a place
The point of this Amazon PPC case study is not “fire your operator.” It is “where the AI agent does best, let it; where the human does best, keep them.”
What an experienced human still does better than any agent I have used:
- Brand defense and competitor-name negative-matching. “Do not bid on this specific competitor’s branded term, ever” is a rule no AI can derive from data alone; you have to teach it.
- Launch-week keyword strategy. New ASINs with no conversion history confuse most AI agents. A human shaping the first 14 days of campaigns matters more than the agent does.
- Coupon, deal, and Prime Day overlay strategy. The AI optimizes bid math; humans still set the strategic moments where the bid math should change.
- Long-tail relevance vetting. Auto can negative-match a non-converting term, but a human catches when an “irrelevant” term is actually a long-tail variant worth keeping for SEO halo.
The pattern that worked for Tropeza, and that I see working across most accounts: the AI runs the bid math 24/7, the human runs the strategic overlay, and the human checks the AI’s keyword decisions weekly rather than daily. That is the division of labor where the case-study numbers above hold up at scale.
What this means if you are evaluating PPC automation
The honest read on this Amazon PPC case study, from the founder of the tool that ran the auto side:
- If your current ACoS is above 50% and you have not read your search-term reports more than three times in the last month, you are bottlenecked on operator attention. An AI agent will close most of that gap inside three weeks once it has clean conversion data to work from.
- If your ACoS is already in the 25-35% band, the gap will be smaller. You will see a CPC efficiency improvement and probably an RoAS lift in the 10-25% range, meaningful, not transformative.
- If your ad spend is under $5K/month, you can run manually with weekly STR reviews and stay competitive. Below that scale, the time the AI saves is not worth the subscription cost. Above $10K/month, the math flips hard.
- If you are evaluating tools, do not take any single case study, including this one, as the answer. Run a parallel two-week test against your own incumbent setup and watch the daily ACoS. The pattern will be obvious by week three or it will not be there at all.
For the full feature breakdown of what was running on the auto side here, see my Daniks.AI review. For the manual playbook the human side was running, see the Amazon PPC strategy guide and the setting-up-PPC-campaigns tutorial.
Frequently asked questions
What is a realistic ACoS on Amazon PPC for a private-label brand?
Most operators target 18-35% ACoS depending on product margin. High-margin SKUs (40%+ gross margin) can sustain 35%+ ACoS profitably. Thin-margin SKUs need ACoS below 20%. Anything above 50% is usually a margin leak unless the campaign is intentionally subsidizing review velocity on a launch.
Did Daniks.AI win because it had four times the budget?
No. The per-unit efficiency metrics (CPC, CVR, ACoS, RoAS) are normalized per click or per dollar, so the comparison remains fair at any spend level. CPC was 11% cheaper and CVR was 73% higher on the auto side regardless of total spend. Budget allocation explains volume, not efficiency.
How long does it take an AI agent to outperform manual on a fresh account?
In my experience across hundreds of brands, the auto side stabilizes in two to three weeks once it has clean conversion data to work from. Sellers who toggle autopilot off every time ACoS spikes in the first ten days never see the benefit; sellers who let the agent collect three weeks of data see the kind of gap shown in this case study.
Can I run AI and manual PPC on the same account at the same time?
Yes. This Tropeza Amazon PPC case study is exactly that setup, auto and manual running in parallel on different campaign sets. Halo effects exist (branded search benefits from total impression volume across both tracks) but the per-campaign metrics remain attributable to whichever side was running them.
Is this case study representative of all Amazon brands?
Tropeza is a year-round, mid-priced, durable-goods private-label brand, the kind of category most likely to benefit cleanly from PPC automation. Heavily branded categories with strong organic ranking need less PPC to begin with. Heavily seasonal categories produce noisier monthly numbers. Treat this as one representative data point, not a universal benchmark.
Where can I see the full feature breakdown of Daniks.AI?
I have a separate hands-on review at /reviews/daniks-ai-review that covers the actual feature set, pricing tiers, and the cases where the tool falls short. The case study above is about results on one specific account; the review covers the tool itself.
What to do this week
If you operate a private-label Amazon brand spending $5K+/month on Sponsored Products and your ACoS is north of 40%, run a parallel two-week test against any AI PPC tool (Daniks.AI, Perpetua, Helium 10 Adtomic, whichever you can trial). Allocate a clean campaign set to the AI side, leave your existing campaigns running on the manual side, and track daily ACoS, CPC, and CVR for the full two weeks. By day 14 the pattern will be visible in your own dashboard, on your own brand, which is the only Amazon PPC case study that actually matters for your account.
For the operator-level walkthrough of the metrics above, plus the brand strategy I run alongside any automation tool, subscribe to @AmazonFBAGirl on YouTube. Comments on the videos are how I learn what to cover next, so leave one with the brand category you are evaluating; I read every one.
Ekaterina Rubtcova
Amazon seller since 2018 · Founder of Daniks cookware · Founder of Daniks.AI
My Daniks cookware reached Top-1 in Germany and is currently Top-20 in the USA. To run its PPC I built Daniks.AI — now used by hundreds of Amazon brands. On this blog I share how I actually operate, no courses, no upsells.
Related articles
Daniks.AI vs Manual Amazon PPC: Fornel Case Study (Dec 2025)
Real Amazon PPC case study from Dec 2025, Daniks.AI auto vs strong manual on the Fornel highchair brand: 5.8 vs 5.2 RoAS, 44% higher conversion rate.
The Ultimate Amazon PPC Strategy Guide for 2026
Master Amazon PPC advertising with proven strategies for keyword targeting, campaign structure, ACoS optimization, and bid management.