Digital marketing is not an exact science. There is no secret formula or combination of image and text that will always work no matter the client or situation. That isn’t to say there aren’t ad sets that are typically better performers and other sets or styles that will almost never work. A perfect ad, however, cannot be determined through past experience and knowledge alone.
But while a single ad style won’t crush it for every client, a strategy that promotes consistent testing will lead to success in almost every situation. Creative testing is crucial for all ad types: video, display, search, native, and more. Since our agency focuses primarily on search ads, I will discuss the strategy for maximizing the testing procedure on that channel.
Search ads consist of several different parts, including 3 headlines, 2 description lines, and 2 display paths. There are also a few different types of ads on the Google platform (some are also available in Bing), including the standard expanded ad format (as described above), a dynamic search ad, and a responsive search ad. Each ad type has its own function, but testing is a bit limited with dynamic ads, since Google itself determines most of the ad text a user will see. The majority of the testing we do revolves around expanded and responsive ads.
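To make the anatomy of an expanded text ad concrete, here is an illustrative sketch in Python. It assumes Google's published character limits (30 per headline, 90 per description line, 15 per display path segment); the `ExpandedTextAd` class and its `problems` helper are hypothetical, not part of any official API.

```python
from dataclasses import dataclass

# Illustrative only: a plain container mirroring the parts of an
# expanded text ad, with assumed character limits (30/90/15).
@dataclass
class ExpandedTextAd:
    headlines: list[str]     # up to 3, 30 characters each
    descriptions: list[str]  # up to 2, 90 characters each
    paths: list[str]         # up to 2, 15 characters each

    def problems(self) -> list[str]:
        """Return a list of limit violations (empty means the ad fits)."""
        issues = []
        if len(self.headlines) > 3:
            issues.append("at most 3 headlines")
        if len(self.descriptions) > 2:
            issues.append("at most 2 description lines")
        if len(self.paths) > 2:
            issues.append("at most 2 display paths")
        issues += [f"headline too long: {h!r}" for h in self.headlines if len(h) > 30]
        issues += [f"description too long: {d!r}" for d in self.descriptions if len(d) > 90]
        issues += [f"path too long: {p!r}" for p in self.paths if len(p) > 15]
        return issues

ad = ExpandedTextAd(
    headlines=["Spring Sale", "Free Shipping", "Shop Now"],
    descriptions=["Save on every order this week.", "No code needed at checkout."],
    paths=["deals", "spring"],
)
assert ad.problems() == []  # all components within the assumed limits
```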
During an initial campaign launch, it is considered best practice to tailor Headline 1 specifically to the ad group it is associated with. With ad relevancy and expected CTR being 2 of the 3 known factors of quality score, beginning with the best potential for high quality scores will lead to lower initial costs and better starting ad position. Less tailored, more generic ad copy is not ideal as a final state, but it can also serve as an initial setup: beginning with fewer, less specific ads runs the risk of lower quality scores but produces a standardized starting block for ad copy testing.
It is important to understand the difference between a regular expanded text ad and a responsive one. Expanded text ads have set ad copy, with the 3 different headlines and 2 different description lines. Responsive ads (still in beta) allow advertisers to input up to 15 headline options as well as up to 4 description line options, with the ability to pin any of the items into a specific headline or description line location. With this type of ad, different ad sets are created in real time based on different combinations of the ad copy components. The result is a ton of variation and testing among all of the utilized components.
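To get a sense of just how much variation that is, here is a quick back-of-the-envelope count, assuming Google serves up to 3 of the 15 headlines and 2 of the 4 description lines per impression (the pool sizes come from the ad format; the served counts are an assumption for illustration):

```python
from math import comb, perm

headline_pool, description_pool = 15, 4   # assets the advertiser supplies
shown_h, shown_d = 3, 2                   # assumed assets served per impression

# Which assets appear together, ignoring position:
unordered = comb(headline_pool, shown_h) * comb(description_pool, shown_d)

# Counting position as well (when no assets are pinned):
ordered = perm(headline_pool, shown_h) * perm(description_pool, shown_d)

print(unordered)  # 2730 distinct asset combinations
print(ordered)    # 32760 distinct orderings
```

Even the conservative count runs into the thousands, which is why a single responsive ad can cover far more ground than any manageable set of expanded text ads.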
Whether campaigns are relatively new or have been running for several months, the process for ad copy testing is relatively similar. During each round of testing, it is crucial to determine the main components to test: headlines, description lines, even display paths. In my personal opinion, it is best to stick to testing between 3 and 6 sets of options; any fewer can be a wasted opportunity and slow down future testing, while any more can dilute data collection and take too long to achieve significant data readings. Testing can also be centered around achieving higher quality score metrics (and thus potential scale opportunity) or better CTRs and CVRs.
During a main testing round, most comparison should be done with expanded text ads only, either via Google Ads experiments (when using only 2 ads, this can be the best way to push 50% of traffic either way) or simply by keeping ad rotation on the “rotate indefinitely” setting and analyzing the data a bit more manually at the end of the testing period. When utilizing responsive search ads, the main issue to note is that while responsive ads allow for larger tests with several headlines and descriptions in rotation, the data you receive on each variation is limited to volume metrics. Hopefully, more data will be made available once this ad type is moved out of the beta phase. But at the moment, it is considered irresponsible to test with only responsive ads. A good way to balance out the lack of data is to use responsive ads first to determine which ad variations Google serves more (as these should be the best sets), and then to create expanded text ads using those variations for proper, data-based tests.
Figuring out how long to test ad copy for can be tricky. If a test is running across a campaign or account, data can accumulate quickly enough to be significant in just a week or two, whereas smaller, more limited tests can take months. In the end, you can pair set timeframes with statistical significance calculations. After working in the space for a number of years, it becomes easier to spot winners by comparing certain key metrics, most notably CTRs and CVRs. Click-through rates are going to be the best direct comparison of the quality of ad copy, but looking at conversion metrics will help determine whether the ads represent the product you are offering and are in fact driving qualified users to your site.
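One common way to run that significance check on CTRs is a two-proportion z-test. The sketch below, in plain Python with no third-party libraries, is a minimal illustration (the function name and the sample click/impression numbers are made up for the example):

```python
from math import sqrt, erf

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test comparing the CTRs of two ads.

    Returns the two-sided p-value; a small value (commonly < 0.05)
    suggests the CTR difference is unlikely to be random noise.
    """
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF (built from erf)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical test: ad A at 120 clicks / 4,000 impressions (3.0% CTR)
# vs. ad B at 90 clicks / 4,000 impressions (2.25% CTR)
p = ctr_z_test(120, 4000, 90, 4000)  # roughly 0.036: below the common 0.05 bar
```

The same function works for CVRs by swapping impressions for clicks and clicks for conversions; the point is simply to confirm a winner before retiring the loser.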
Ad copy testing is an ongoing process that should be prioritized in the running of a successful ad campaign. While there are many intricate details and different ways of going about the testing process, it is a worthy investment in performance and campaign freshness. As with any other process in the marketing space, it eventually becomes easier to determine and utilize ad copy variations that promote success and account growth.