We've learned what A/B tests are and when you can use them. As is the case for many online advertising platforms, Facebook lets you run A/B tests as part of the campaigns you run on its platform. In this video, we'll take a look at how that works.

Let's go back to CALLA & IVY, the flower business in Amsterdam. Imra is planning a campaign on Facebook to increase traffic to her website, so that people can become familiar with her online purchase options. She had a planning meeting with her team, and her designers showed her the images from her last in-shop photoshoot. There were lots of good pictures, and the team had a lively discussion about which images to use for the ads. The photographer really liked a close-up of a few flowers in a bouquet, but some other people on the team thought that a full bouquet would be more representative of what CALLA & IVY actually sells. Imra told her team that rather than guessing at what would work best, they should test it.

This is exactly the type of question you can answer with an A/B test. If Imra creates an A and a B version of the ad, with the image being the only difference between them, she can get an objective answer to the question of which image works better in this ad.

Here's how you can set up an A/B test on Facebook. After Imra has set up her campaign, she can indicate that she'd like to run an A/B test by selecting the campaign in her Ads Manager dashboard and clicking A/B Testing. This makes the selected campaign the A version in the test. Facebook will then prompt you to change one variable in your campaign to create the B version, and it automatically shows the variables you can change based on the specifics of your campaign. In our example, Imra would choose to change the image; all the other aspects of her campaign remain the same. Facebook then creates a new ad set, and Imra is prompted to upload the new image for version B. In this case, that's the close-up flower picture.

You then have to give your test a name, and you can adjust the budget. Note that the budget will be split 50-50 over the two ad sets, so you should make sure that the budget is big enough to see meaningful results. You then set the test schedule, and you also tell Facebook how you will determine the winner. This will depend on your goal and how you're tracking success. For Imra, the campaign is focused on generating traffic to her website, and she's optimizing her campaign for link clicks. She decides that, ultimately, the test should help her understand which image generates the most results for the amount she's spending, so cost per result is the right metric to help her determine the winner.

Note that there's an indication here of the estimated power of the test. This is the likelihood that the test can detect a difference between your ads if there actually is a difference to detect. To have a good test, the power should be at least 80 percent: roughly speaking, if there really is a difference and you repeated the test, you would find it in 80 percent of cases. If the decision you're about to make is a really important one, and one that you plan to base many decisions on, I suggest you go for a power that's over 90 percent. You can increase the power of your test by increasing the budget or the time you're running the ads. Both will increase the number of ad impressions, which will give you more actions to compare in each group. Imra has now completed all the steps to create this test.
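To make the idea of power a bit more concrete, here is a minimal sketch of the standard two-proportion calculation behind it. Facebook estimates the power of your test for you inside Ads Manager, so this is only a generic statistical illustration: the function and the click-through rates for the two images are assumptions for the example, not figures from the actual campaign.

```python
# Hypothetical sketch: power of an A/B test comparing two click-through rates,
# using a standard two-proportion z-test approximation (not Facebook's method).
from scipy.stats import norm

def ab_test_power(ctr_a, ctr_b, impressions_per_variant, alpha=0.05):
    """Approximate power of a two-sided test comparing two click-through rates."""
    n = impressions_per_variant
    p_pooled = (ctr_a + ctr_b) / 2
    # Standard error under the null (no difference) and under the alternative
    se_null = (2 * p_pooled * (1 - p_pooled) / n) ** 0.5
    se_alt = (ctr_a * (1 - ctr_a) / n + ctr_b * (1 - ctr_b) / n) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    effect = abs(ctr_a - ctr_b)
    return norm.cdf((effect - z_crit * se_null) / se_alt)

# Assumed numbers: the full-bouquet ad clicks at 1.0%, the close-up at 1.3%.
for n in (10_000, 30_000, 60_000):
    print(f"{n:>6} impressions per variant -> power ~ {ab_test_power(0.010, 0.013, n):.0%}")
```

Notice how the power climbs as the number of impressions per variant grows, which is exactly why adding budget or extending the schedule makes the test more reliable.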
As you've probably seen during the test creation, there are other variables Imra could have tested. First, she could have varied other parts of the creative, like the text or the call to action. She could also have kept the ads the same but changed the target audience, which could help her understand how effective the ads are at reaching different audiences or demographics. For example, Imra could add or remove certain regions, or compare an interest-based core audience with a custom audience. You could also vary delivery optimization. This allows you to compare campaigns with and without campaign budget optimization; it's basically comparing whether your manual distribution of the budget across ad sets is better than letting the machines do the work through campaign budget optimization. Finally, you can vary the placements you choose for the ads. For example, you can choose automatic placements and compare them to placements you specifically pick.

It's best practice to vary only one variable at a time, and you should make sure that your test has enough power so you get a reliable result. To achieve that, make sure you have enough budget for the campaign and that you run the campaign long enough, ideally two weeks or longer. If you can, it's a good idea to keep testing as you build new campaigns: use the winner of your test as the new baseline and test it against another version to see whether you can further optimize your campaign.

Let's go back to Imra. Her test results came back, and she learned, to the team's surprise, that the ad with the close-up image was the winner. In the campaign she ran with this test, she had targeted people in cities in the Netherlands, but Imra was wondering what would happen if she targeted the same ad to people in cities in Belgium. Shipping costs to Belgium were the same for her, so if she got better results in Belgium, she might spend more of her budget there. Imra decided to run her traffic-focused campaign again. She used the image with the close-up of the flowers for her ads and set up an A/B test. This time, she chose to vary the audience for her ad: she targeted the A ad set to people in cities in the Netherlands and the B ad set to people in cities in Belgium, and she kept everything else the same. When the results came back, A was the winner with the lowest cost per result. In other words, Imra got more traffic to her site from people in the Netherlands than from people in Belgium for the budget she spent. As soon as Imra got those results, she decided to keep the campaign running, targeting people in the Netherlands only.

As you can see from this example, A/B testing is a really great tool to have in your toolbox as an advertiser. It helps you answer simple questions you may have, and it can take the guesswork out of building an ad. Since A/B tests are built into most ad platforms, it's usually really easy to set them up and make them part of your campaigns. In our next video, let's take a look at what the results of an A/B test look like.
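As a side note, the winner logic Imra relied on comes down to simple arithmetic: divide what each ad set spent by the results it produced and pick the lower number. The sketch below shows that comparison with hypothetical spend and click figures; in practice, the real numbers come straight out of the Ads Manager report.

```python
# Minimal sketch of the "cost per result" comparison, with assumed example
# numbers (these are illustrative, not figures from Imra's actual campaign).
ad_sets = {
    "A: Netherlands cities": {"spend_eur": 250.00, "link_clicks": 520},
    "B: Belgium cities":     {"spend_eur": 250.00, "link_clicks": 410},
}

for name, stats in ad_sets.items():
    cost_per_result = stats["spend_eur"] / stats["link_clicks"]
    print(f"{name}: {cost_per_result:.2f} EUR per link click")

# The ad set with the lower cost per result is declared the winner.
winner = min(ad_sets, key=lambda k: ad_sets[k]["spend_eur"] / ad_sets[k]["link_clicks"])
print("Winner (lowest cost per result):", winner)
```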