You’re Creative Testing COMPLETELY Wrong


Are you running separate testing and scaling campaigns and struggling to scale your Facebook ad account? After generating over a BILLION dollars in revenue through Facebook ads, I cracked the code when it comes to creative testing. In this video, I’ll show you how to ACTUALLY test properly.

Resources
► Education Resources:
- The Facebook Ads MBA Program: https://bit.ly/MBASignUpLink
- Disrupter School & Private Community (Trial for $100): https://bit.ly/DisrupterSchoolEnroll

► Work with me directly & Newsletter
- Consulting: https://bit.ly/DisrupterConsulting
- Newsletter: https://bit.ly/DisrupterNewsletterSignUp


Chapters:
0:00 The Secret to Successful Creative Testing Revealed
01:04 The Key to Advertising Success: The Relationship Between Testing and Scaling
06:48 How to Stay Ahead of the Game with Effective Facebook Ad Testing and Scaling Strategies

The popular approach to testing and scaling advertising campaigns is WILDLY flawed and always leads to suboptimal results. The common practice of having separate testing and scaling campaigns is problematic because a high percentage of the budget gets spent on ads that ultimately don't perform well. Also, the criteria used to determine what constitutes a winning ad are often extremely loose, which causes the majority of advertisers to prematurely declare winners.

Instead of the traditional approach, let's use a different method of testing and scaling ads: one with far fewer moving parts, far less reliance on luck, and more meaningful data gathered from each ad. This methodology focuses on gathering data and insights that can be applied to the overall advertising strategy, rather than simply identifying one-off winners. We can use machine learning and trend analysis tools to gain a deeper understanding of how ads are performing and how they impact the bottom line.

In the traditional "ABO running dozens of ads in a testing campaign" approach, only the winning ad ever gets a real creative test, and only once it is moved over to the scaling campaign. The test doesn't actually start until the ad has the opportunity to spend meaningful amounts of money and gather consistent levels of data. Short-term wins are not significant in this out-of-date style because they cannot be scaled sustainably.

Our much better approach to creative testing involves launching far fewer creative tests and skipping traditional A/B testing. Instead, the focus is on being more insightful with the available data. This means no separate testing campaign, and no spending 80 to 90% of the budget just to find out what gets lucky so that a test can actually begin.

To implement this approach, the ad account structure needs to be optimized. This means creating dynamic creative optimization (DCO) ad sets, each tightly focused on a specific audience as defined by the creative. The ad sets should compete for spend and be given the chance to earn a considerable amount of spend so they can gather significant data.
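To make that structure concrete, here's a minimal sketch using Facebook's official facebook_business Python SDK. This is not code from the video: the access token, account ID, campaign ID, pixel ID, and targeting are placeholder assumptions, and the budget is deliberately left on the campaign so sibling ad sets have to compete for spend.

```python
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adset import AdSet

FacebookAdsApi.init(access_token="<ACCESS_TOKEN>")  # placeholder token
account = AdAccount("act_<AD_ACCOUNT_ID>")          # placeholder account ID

# One DCO ad set, tightly scoped to the audience the creative speaks to.
# No ad-set budget here: with campaign budget optimization, sibling ad
# sets compete for spend, and only ads that earn spend keep getting it.
ad_set = account.create_ad_set(params={
    AdSet.Field.name: "DCO | Broad US | Hook: social proof",
    AdSet.Field.campaign_id: "<CAMPAIGN_ID>",  # CBO campaign holds the budget
    AdSet.Field.is_dynamic_creative: True,     # enables dynamic creative
    AdSet.Field.optimization_goal: AdSet.OptimizationGoal.offsite_conversions,
    AdSet.Field.billing_event: AdSet.BillingEvent.impressions,
    AdSet.Field.promoted_object: {
        "pixel_id": "<PIXEL_ID>",              # placeholder pixel
        "custom_event_type": "PURCHASE",
    },
    AdSet.Field.targeting: {"geo_locations": {"countries": ["US"]}},
    AdSet.Field.status: AdSet.Status.paused,   # review before spending
})
```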

Another critical aspect is measuring the right metrics, especially the incremental impact on other channels such as search, email, and organic social. These metrics should be monitored over an extended period to determine the real results.
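A simple way to monitor that is to compute blended CPA across every channel instead of trusting Facebook's in-platform CPA alone. A minimal sketch; the channel names and numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ChannelStats:
    spend: float      # spend attributed to this channel, in account currency
    conversions: int  # purchases (or your primary conversion event)

def blended_cpa(channels: dict[str, ChannelStats]) -> float:
    """Blended CPA: total spend across ALL channels / total conversions."""
    total_spend = sum(c.spend for c in channels.values())
    total_conversions = sum(c.conversions for c in channels.values())
    if total_conversions == 0:
        raise ValueError("no conversions recorded yet")
    return total_spend / total_conversions

stats = {
    "facebook": ChannelStats(spend=5_000.0, conversions=120),
    "search":   ChannelStats(spend=1_200.0, conversions=60),
    "email":    ChannelStats(spend=0.0,     conversions=45),  # owned channel
}
print(f"Blended CPA: ${blended_cpa(stats):.2f}")  # $6200 / 225 ≈ $27.56
```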

In our simpler system, which uses a single dynamic creative, we let Facebook determine which combination of the ad's assets works best for the end-user experience. If the ad doesn't earn any spend, it's not good enough; if it earns spend, it might be a winner. With dynamic creative, most of the budget is spent on the winning ad, unlike the previous approach, where most of the money was spent on losers.
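In Marketing API terms, "one dynamic creative" is an asset feed: several bodies, titles, and images handed to Facebook in a single creative so it can assemble and rank the combinations itself. A hedged sketch continuing the SDK example above; every field value is a placeholder, not the author's actual assets:

```python
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adcreative import AdCreative

account = AdAccount("act_<AD_ACCOUNT_ID>")  # assumes FacebookAdsApi.init(...) ran

# Multiple asset variants in ONE creative: Facebook tests the combinations.
creative = account.create_ad_creative(params={
    AdCreative.Field.name: "DCO asset feed | Broad US",
    AdCreative.Field.object_story_spec: {"page_id": "<PAGE_ID>"},
    AdCreative.Field.asset_feed_spec: {
        "images": [{"hash": "<IMAGE_HASH_1>"}, {"hash": "<IMAGE_HASH_2>"}],
        "bodies": [{"text": "Primary text A"}, {"text": "Primary text B"}],
        "titles": [{"text": "Headline A"}, {"text": "Headline B"}],
        "link_urls": [{"website_url": "https://example.com"}],  # placeholder
        "call_to_action_types": ["SHOP_NOW"],
        "ad_formats": ["SINGLE_IMAGE"],
    },
})
```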

We must be able to understand how a test impacts the entire omnichannel marketing picture. If an ad is winning, that means it's earning more spend and scaling our efficiency... meaning our overall blended CPA comes down. When that happens, the test continues by increasing the budget incrementally, rather than disrupting the campaign by "scaling it to the moon." A winning creative test can run for weeks or months, and once the point of maximum efficiency is reached, the winning post ID can be moved into our control environment, and another test can be launched to improve on the ads already there. With this approach, ad fatigue can be all but eliminated, and business growth can continue almost indefinitely.
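The video doesn't publish exact thresholds, so here is an illustrative version of "increase the budget incrementally": modest steps gated on how blended CPA compares to your target. The 20% raise, 15% trim, and 15% tolerance band are assumptions for the sketch, not the author's numbers.

```python
def next_budget(current: float, blended_cpa: float, target_cpa: float) -> float:
    """Incremental scaling rule (illustrative thresholds, not from the video)."""
    if blended_cpa <= target_cpa:
        return round(current * 1.20, 2)  # efficient: let the winner earn more spend
    if blended_cpa <= target_cpa * 1.15:
        return current                   # borderline: hold, keep gathering data
    return round(current * 0.85, 2)      # inefficient: trim, don't kill the test

# Example: target CPA $30, blended CPA $26 -> raise $100/day to $120/day.
print(next_budget(100.0, 26.0, 30.0))  # 120.0
```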

The bottom line:
- Testing like this leverages Machine Learning.
- Testing like this is extremely stable and scalable.
- Testing like this lets you understand the direct impact of every ad on your ecosystem.
- Testing like this stabilizes the front end of your business, making testing after the click far more valuable.

Instead of constantly changing the quality, volume, and type of traffic while trying to get lucky... we can scale with confidence much faster, and with a lot less work.
