This is a question that really needs to be demonstrated to answer properly.
There are several different variables that come into play here depending on what kind of business model you're in, what audiences you run, the size of the audience you're trying to test, the amount of budget you're spending, and things like that. All of those things affect overlap.
What I'm going to do is give you some testing models and then walk you through the factors I look at in determining which testing model I'm going to use. I'm going to go into my own account here.
Now, I’m in the middle of changing a bunch of stuff out, so I’m only running a few campaigns in this particular account right now. What I’m going to do is pull this back to lifetime because one of the campaigns has been running for quite a while. So, you can see here I’ve spent about $30,000 on this particular campaign.
This is a proven campaign. It's been hitting my KPIs for a long period of time. I recently scaled it from $200 a day to $300. I had let it sit at $200 a day for, I think, about 100 days straight before I scaled it. This campaign literally has not been touched since I launched it.
Now, let's say I want to test different messaging to this audience. If we're going to test different messaging to the same audience, we have to do it in a different campaign. The first step I take in that situation is to duplicate the campaign. The next thing I want to do is look at the audience size and how much budget I'm running.
In here, we dig a little bit deeper. This audience is a 1% lookalike combo; it's actually a whole bunch of lookalikes combined together into one group. These are all lookalikes that I had already tested individually, across a lot of previous tests, and they consistently convert for me.
That's why they're combined into one like that at this point. It wasn't something I started that way; what I teach in Bootcamp 2.0 is to test them individually first. You can combine them afterward, right?
And this is one of the models that Facebook says works really well in their Power 5 document, which I put out all over the place.
Now, this audience is pretty big. It doesn't give us the estimate right here because I've got these exclusions on, so if I just remove them, bloop, bloop, that should do it. Yep, here we go. So, this is 5.5 million people. That's a big audience, and if we use my rule of thumb, where we can spend $10 per day for every 10,000 people in the audience, then in theory this audience should support up to a $5,500-a-day budget without any problems holding KPI.
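That rule of thumb is just simple arithmetic, and a minimal sketch makes it easy to apply to any audience. The function name and the `dollars_per_10k` parameter are illustrative, not anything from Facebook's tooling:

```python
def max_daily_budget(audience_size: int, dollars_per_10k: float = 10.0) -> float:
    """Rule of thumb from above: roughly $10/day of budget
    for every 10,000 people in the audience."""
    return audience_size / 10_000 * dollars_per_10k

# The 5.5-million-person audience from the example:
print(max_daily_budget(5_500_000))  # 5500.0 -> supports up to ~$5,500/day
```

So a $300/day budget on this audience is nowhere near the ceiling, which is the point of checking the audience size before duplicating.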
That allows me to then look at what my current budget is, which is $300 a day. Now, what I’m going to do is I’m going to look at some individual days. I’m going to first start by looking at yesterday and what I’m trying to see is, how is Facebook pacing the budget? So, if we look here, they actually spent a little bit more than my budget yesterday. I’m going to look at the day before that. It was a little bit under, so it’s kind of up and down right now.
And the reason I'm looking at this number is because of that pacing indicator. This is right about when I scaled it, so I think that's really what the issue is here. I bet if I go back to the 6th, it only spent $200. Okay, no, it was still under; it's still pacing up as I scaled it. We can see that the 5th was the last day at $200. I scaled it on the 6th, which is why it didn't quite spend the full amount. Over the course of the last few days, it's brought it up to where yesterday it was pacing over.
That's normal behavior as you scale up. So, what I'm going to do is monitor it over the next couple of days. I look at a trailing three-day indicator: if the budget is pacing over, meaning it's spending slightly over or right at the full amount on two out of three days or more (60% of the time or more), that indicates Facebook wants to give you more traffic from that audience.
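The trailing three-day check above can be sketched in a few lines. The function name, window size, and threshold are my own labels for the rule as described; the actual daily spend numbers would come from your Ads Manager reporting:

```python
def pacing_headroom(daily_spend, daily_budget, window=3, threshold=0.6):
    """Trailing-window pacing check: if spend met or exceeded the set
    daily budget on ~60% or more of the last `window` days, Facebook is
    trying to deliver more traffic from that audience (headroom exists)."""
    recent = daily_spend[-window:]
    days_at_or_over = sum(1 for spend in recent if spend >= daily_budget)
    return days_at_or_over / len(recent) >= threshold

# Example: budget is $300/day; last three days spent $310, $290, $305.
print(pacing_headroom([310, 290, 305], 300))  # True: 2 of 3 days at/over budget
```

A `True` here is the signal that a duplicate testing campaign is unlikely to run into overlap trouble.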
That tells you that when you create a duplicate campaign, you're really not going to have any issues with overlap. That's one way we know we can avoid having to deal with it. And again, overlap has to do with the amount you're spending: when you spend too much and you've got multiple versions of audiences going, you create overlap.
That's why there are budget indicators, audience indicators, and how-the-budget-is-being-spent indicators that you have to look at before you decide how much you're going to put into this and whether you're going to duplicate or not. Because if you've got a campaign going and, let's say, you're spending $5,000 a day on it and it's profitable (I don't know what you're spending right now, but you told me your campaign is profitable), creating a whole bunch of tests to try to improve your results when you're already profitable can create overlap and screw up the whole thing.
That's why you've got to be careful with these things. But if the budgets are low, it's fine.
For any of your testing, you should always use a lower budget on the campaign; start it at $100 a day, for example. That way you avoid a lot of overlap.
Now, that's the first step you want to take. What you do inside that campaign is going to depend on how many ad sets are running and things like that. That's another variable we have to account for. There's the possibility of running ad set budgets within the campaign, but I'm going to assume we're using CBO here, right?
Once you've got that campaign duplicated, you can run the same audience or audiences that are in that campaign at a much lower budget, and you can run different creative with different messaging to them, because you're going to get fresh optimization in the new campaign. You can target the same audiences and you shouldn't have to worry about overlap at all.
Now, if you have performance issues, here's what they've done: you can use this Inspect link right here…okay, I had just one day selected, so I widened it out to this month. If you use this Inspect link, which is a new thing they've put in here, you can look at your auction competition, your audience saturation, and your auction overlap.
This audience right here, right now only has a 2.23% overlap with any other audiences that are running in my account right now.
What Facebook says is that you want to keep this under 25%, because you can't avoid overlap entirely. It's really, really hard. I'll go and look at some of my other campaigns, some old cold ones, so you can see. This one has minimal overlap, but it's still 0.54%. It's very difficult to eliminate, but you can really easily see what your auction overlap is.
Really, the best way to test different messaging is to duplicate the campaign, use the same audiences, lower the budget, and put new creative with your different messaging into that campaign. Then you come in at the ad set level, use Inspect, and look to see whether that ad set has over 25% overlap. If it does, you need to work on your exclusions, or maybe test in a different account if you need to test the exact same audiences. That's the way to eliminate the overlap issues that would otherwise occur within one account.
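The decision rule above boils down to one threshold check on the overlap number the Inspect tool reports. This is just a sketch of that rule; the function name and return strings are my own, and the 2.23% and 0.54% figures are the readings from the walkthrough:

```python
def overlap_action(auction_overlap_pct: float, limit: float = 25.0) -> str:
    """Decision rule from above: Facebook's Inspect tool reports auction
    overlap per ad set; staying under ~25% is the target."""
    if auction_overlap_pct > limit:
        return "tighten exclusions, or test the same audiences in a different account"
    return "overlap acceptable, leave the test running"

print(overlap_action(2.23))   # the campaign from the walkthrough: acceptable
print(overlap_action(0.54))   # the old cold campaign: acceptable
print(overlap_action(31.0))   # over the 25% line: act on it
```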
There you go! Hopefully that helps.
To the victor belong the spoils,