Don't make this mistake when testing new campaigns

AffLift More guides Visit site

This post is targeted at new affiliates struggling to make their first campaigns profitable. When you don't have the experience to make decisions based on data, the best way to get started is to follow instructions and recommendations.

When testing campaigns, one of the most common (and easy to understand) suggestions is to follow a rule defined by the offer payout and traffic source.

At affLift, that rule is usually 10x, but depending on the traffic source, your budget, the number of offers and landers, etc., it could be lower or way higher (think 20-100x).

I'm sure you've heard of it while going over the hundreds of follow-alongs and tutorials in this forum.
If not, this is how it works:

  • Let's say you have an offer that pays $1 per lead.
  • So you multiply that by 10, and that's your initial testing budget.
  • $10 to test if an offer has potential or not.
  • Sounds good, right?

Easy to follow. And then how do you decide if you should trash it or continue spending? Based on the ROI.

  • Positive ROI = Crank that green shit up.
  • -20% = Very good. Keep optimizing, and you'll be in profit soon. Maybe tomorrow.
  • -50% = It has potential. Keep optimizing for a few days to a week, and you might see profits. It depends on your data.
  • -60% = It's going to be challenging to reach profits.
  • -70% or lower = Trash it and go back to the drawing board.
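Those levels can be encoded as a tiny decision helper. A hedged sketch in Python (the function names and the exact cutoffs between the listed levels are my own; the list above only gives point values):

```python
# Hypothetical helpers encoding the ROI rule of thumb above.

def roi(revenue: float, spend: float) -> float:
    """ROI as a percentage: (revenue - spend) / spend * 100."""
    return (revenue - spend) / spend * 100

def verdict(roi_pct: float) -> str:
    """Map an ROI percentage to the rough action from the list above."""
    if roi_pct >= 0:
        return "scale it up"
    if roi_pct >= -20:
        return "keep optimizing, profit soon"
    if roi_pct >= -50:
        return "has potential, optimize for a few days"
    if roi_pct > -70:
        return "challenging"
    return "trash it"

print(verdict(roi(12.0, 10.0)))  # +20% ROI
print(verdict(roi(2.2, 10.0)))   # roughly -78% ROI
```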



Again, it's easy to follow the rules from guides while you build experience. The reality is that most of the campaigns you test initially will have a -80% ROI, or maybe even -100%, with no data to work with. That's fine for a while, since in the meantime you're learning tracking, how traffic sources and ad formats work, etc.

But eventually, you want to make money, and a bunch of -80% ROI campaigns isn't going to cut it after a few weeks. You test something, spend $10, make $1-2 back, throw it in the recycle bin, and do it again, and again, and again. That's how 70% of the follow-alongs posted here end up (including some of mine). In many cases, going back to the drawing board makes sense, as you probably tested a terrible offer or a vertical that doesn't work in that ad format.

But there are a lot of cases where you should spend a bit more before archiving that campaign. Another 10x or 20x can sometimes be the difference between a -80% ROI today and -30% tomorrow, but only if your campaign meets one condition.

Which key parameter should you check to see if your campaign has potential even with -80% ROI?

Well, it's not just 1 parameter. It could be 2, 3, or even 4 different parameters. But in most cases, you need to check websites (site IDs) to see if you should spend another 10x before throwing in the towel.

The thing you need to check is distribution.
It all depends on how distributed your traffic was during the initial test.

A real-life example

Here's an example. I launched a campaign for a download offer that pays around $0.30 per conversion. My initial budget was $10 per day, although I could have started with $5 per day on a very tight budget.

After 24 hours, I checked my tracker, and this is what I got:

[Screenshot: campaign stats from the tracker after 24 hours]

As you can see, after spending $10, I got $2.20 back, with 8 conversions and a -77% ROI.

An ROI at this level is, in my book, complicated to optimize to profits, and after checking the following filters, I might launch a new test instead of spending more on this campaign:

  • Operating System
  • Browser
  • WEB (desktop) vs. Mobile
  • ISP if it's a Mobile content offer
  • Websites IDs (or groups of websites if available)

So I went over each of those parameters, as long as they applied to the offer, and checked whether the traffic was evenly distributed. If the offer only works on Android mobile devices, I don't need to check the device type or OS.

But the key almost always rests in the website IDs or sometimes in groups of websites (some traffic sources have both).

So I went over the detailed stats by website ID in this campaign, and here's how it looks:

[Screenshot: campaign stats broken down by website ID]

On the right side, you can see the first 2 website IDs spent $5.60 and $3.50 respectively.
The rest of the websites spent just a few cents each, yet we have 8 conversions.

$5.60 + $3.50 = $9.10

So literally 90% of my traffic was wasted on these 2 sites, and both are unprofitable.
In a case like this, where traffic wasn't evenly distributed between websites, running another round of data makes a lot of sense.

If you take those 2 websites away, it's like I just spent $1 and got $2 back, or a 100% ROI. That sounds much better than -77% ROI, right?

Throwing away this campaign would have been a wrong decision, yet I see people doing this while testing the waters.
If most websites had spent $0.5 or less each, this campaign would go straight to the trash can.
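The check above is easy to automate. Here's a sketch in Python; the site IDs and per-site numbers are made up to roughly mirror the example (two placements eating ~90% of spend while the cheap long tail converts), and the 80%/top-2 thresholds are my own assumptions:

```python
# Sketch of the site-ID distribution check: is spend concentrated in a
# few placements, and what does ROI look like without them?

def dominated_by_few(stats: dict, top_n: int = 2, share: float = 0.8) -> bool:
    """True if the top_n sites took more than `share` of total spend."""
    spends = sorted((s["spend"] for s in stats.values()), reverse=True)
    return sum(spends[:top_n]) / sum(spends) > share

def roi_excluding(stats: dict, exclude: set) -> float:
    """ROI (%) recomputed with the excluded site IDs stripped out."""
    spend = sum(s["spend"] for k, s in stats.items() if k not in exclude)
    revenue = sum(s["revenue"] for k, s in stats.items() if k not in exclude)
    return (revenue - spend) / spend * 100

# Hypothetical data: two spend hogs plus nine long-tail placements.
site_stats = {
    "site_a": {"spend": 5.6, "revenue": 0.1},
    "site_b": {"spend": 3.5, "revenue": 0.1},
}
for i in range(9):  # long-tail sites, a few cents of spend each
    site_stats[f"tail_{i}"] = {"spend": 0.1, "revenue": 0.22}

if dominated_by_few(site_stats):
    clean = roi_excluding(site_stats, {"site_a", "site_b"})
    print(f"ROI without the 2 spend hogs: {clean:.0f}%")
```

When the check fires, the campaign deserves another round of data (after blocking or capping the hogs) instead of the trash can.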

Spending another $10 tomorrow could mean I have a profitable campaign or maybe start at -20% ROI, which is way easier to optimize to profits.

Go ahead and do it. And even if you have similar results tomorrow, don't forget to check all the essential parameters to see whether your traffic was evenly distributed.

Usually, you only need to do this 2 to 3 times before these bad sites with huge volume disappear and you get a better chunk of data. It might happen again every few weeks, even in a profitable campaign. But as long as you analyze your data every now and then, filtering by different parameters, you'll be able to notice it and keep the campaign running.

Do I have to care about this problem in all of my campaigns?

No. There are a few cases where you don't need to worry about this.

  1. If you set a budget cap per website (say, 2x the payout or something similar), your traffic will be distributed more evenly. I'm not fond of this option because it also caps some good placements that could have gotten more conversions initially.
  2. If you use RBO (rule-based optimization), either within the traffic source or via an external tool + API, the rules will pause all bad placements after they hit your target budget.
  3. If you're running Smart CPA campaigns, the algorithm should fix itself and optimize those bad-performing placements. However, some algorithms aren't as strict, and helping them pause the bad apples manually can give you a slight advantage.
  4. If you're running raw CPA campaigns, the risk is on the traffic source's side, not yours.

Many newbies will test the waters with CPM or Smart CPM campaigns. It's the most popular bidding method available in traffic sources and sometimes the easiest way to get enough volume while testing.

Don't pause your campaigns before analyzing your data, even if you have a terrible ROI in the first or second run. Always check if your traffic was evenly distributed.

That's all for today!
