As part of the Personalization Pioneers event series, Nathan Richter, VP Program Strategy & Insights at Dynamic Yield, hosted a webinar with Melanie Kyrklund, Global Experimentation Lead at Specsavers, and Chase Bruch, Director of Analytics at Blue Acorn iCi. The webinar, “How to Drive Big Learnings and Big Wins with Optimization Analytics,” focused on four topics:
- Identifying high-impact personalization opportunities
- How to approach prioritization
- Test design and evaluation to create success
- Program reporting and communication
Watch the recap here or read the summary below.
Opportunity Identification: Where to Start and Which Methods to Use
Identifying opportunities for testing requires understanding customer needs and digging into existing data. When working with our optimization clients, Chase Bruch focuses on the end-to-end shopping funnel to uncover pain points and design tests that address them. Regardless of the analytics platform used by the client, the number one priority is making sure the data is accurate and actionable.
Melanie from Specsavers uses a three-pronged approach to identify testing opportunities:
- User Needs: Identify the needs relevant to your business and match them with the functionality that meets those needs.
- Growth Opportunities: Evaluate the performance of the marketing journeys the business is driving.
- Behavioral Analysis: Compare segments of users and use the data to see how you can trigger desired behaviors.
Test Prioritization: Scoring Methods and Balancing the Big Swings with Low Effort
Once she has a backlog of tests, Melanie takes a data-driven approach to prioritization. “I shy away from subjective dimensions and rely on data,” said Melanie. It’s important to map out all of the funnel points and determine how an uplift would translate to an increase in revenue. Melanie created a scoring system to prioritize tests based on the potential increase in revenue and how well the test aligns with the optimization program’s and business’s goals.
You also need to determine the resources needed to perform a test. For example, testing copy on the homepage will take fewer development hours than testing a new payment option at checkout. “Everyone wants to go big, but from my experience, it’s about balancing smaller, quick wins and larger tests,” noted Chase. Larger tests typically require more resources, so you need to weigh those costs against the potential benefit of the test.
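Melanie's exact scoring model isn't detailed in the webinar, but the two ideas above (score potential revenue impact and goal alignment, then weigh that against effort) can be sketched roughly as follows. The fields, weights, and example numbers here are illustrative assumptions, not Specsavers' actual system:

```python
# Hypothetical test-prioritization score: an illustrative sketch, not the
# speakers' actual model. All fields and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    projected_revenue_uplift: float  # estimated yearly incremental revenue
    goal_alignment: int              # 1 (weak) to 5 (strong) fit with program goals
    dev_hours: float                 # estimated build effort

def priority_score(idea: TestIdea) -> float:
    # Reward revenue potential and strategic fit; penalize development effort.
    impact = idea.projected_revenue_uplift * idea.goal_alignment
    return impact / max(idea.dev_hours, 1.0)

backlog = [
    TestIdea("Homepage copy test", 50_000, 3, 8),
    TestIdea("New payment option at checkout", 200_000, 5, 120),
]
for idea in sorted(backlog, key=priority_score, reverse=True):
    print(f"{idea.name}: {priority_score(idea):,.0f}")
```

In this toy backlog, the cheap homepage copy test outscores the bigger checkout test once effort is factored in, which mirrors Chase's point about balancing quick wins against larger swings.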
Download Blue Acorn iCi’s “Amplify the Customer Experience with Analytics” white paper to learn how you can use analytics to enhance the entire customer journey.
Test Design and Evaluation: Primary KPIs vs. Everything Else
When asked, “What is your preferred methodology for test design?” during the webinar, 87% of the audience answered A/B tests. “It’s the bread and butter of testing,” as Chase puts it. A/B testing focuses on finding a statistically significant lift for the variant compared to the control. It’s a practical, data-driven approach to testing changes to the site and finding an optimal solution.
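The webinar doesn't prescribe a specific significance test, but a common way to check whether a variant's lift over the control is statistically significant is a two-proportion z-test, sketched here with made-up conversion numbers:

```python
# Generic two-proportion z-test for an A/B test - an illustrative sketch,
# not the specific methodology discussed in the webinar.
from math import sqrt, erf

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: control converts 480 of 10,000 visitors,
# variant converts 550 of 10,000.
z, p = ab_test_z(480, 10_000, 550, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```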
While A/B testing is the standard approach most companies use to test two different variants, it takes time to reach valid results. Bandit testing allows companies to test multiple variations in a shorter period of time. For example, if you’re testing messaging for Father’s Day, you likely won’t have enough time to test ideas until you reach statistical significance. Bandit testing helps you make the best decision faster.
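The webinar doesn't name a particular bandit algorithm; one widely used option is Thompson sampling, which serves each visitor the variant with the best random draw from its posterior, so traffic shifts toward the stronger variant as evidence accumulates. A minimal sketch, under that assumption and with invented conversion rates:

```python
# Thompson-sampling bandit for choosing among message variants - an
# illustrative sketch; the webinar does not prescribe a specific algorithm.
import random

random.seed(42)  # reproducible simulation

variants = ["msg_a", "msg_b", "msg_c"]
stats = {v: {"wins": 0, "losses": 0} for v in variants}
# True conversion rates are unknown in practice; invented here for simulation.
true_rates = {"msg_a": 0.04, "msg_b": 0.06, "msg_c": 0.05}

def choose_variant() -> str:
    # Draw a plausible conversion rate from each variant's Beta posterior
    # (wins + 1, losses + 1), then serve the variant with the best draw.
    draws = {v: random.betavariate(s["wins"] + 1, s["losses"] + 1)
             for v, s in stats.items()}
    return max(draws, key=draws.get)

for _ in range(5_000):  # simulate visitors during a short campaign
    v = choose_variant()
    if random.random() < true_rates[v]:
        stats[v]["wins"] += 1
    else:
        stats[v]["losses"] += 1

most_served = max(stats, key=lambda v: stats[v]["wins"] + stats[v]["losses"])
print("Most-served variant:", most_served)
```

Because the bandit reallocates traffic as it learns, most visitors end up seeing the better-performing message well before a fixed-horizon A/B test would have concluded, which is the appeal for time-boxed campaigns like Father's Day.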
In preparation for running a test, it’s key to build out a measurement plan. While many tests will have the same primary KPI, such as revenue or conversion rate, there are many secondary metrics that can be used to support your test. For example, if you’re testing personalized product recommendations on the checkout page, your primary KPI would be conversion rate, but a secondary metric could be average order value.
Program Reporting: Communicating Wins and Key Learnings
The long-term success of an optimization program largely relies on support from key stakeholders. Melanie socializes the optimization program across her organization to ensure leadership understands the problems driving the experiments, the yearly incremental revenue each experiment drives, and ultimately the return on investment.
Chase uses data visualization to show clients the performance of their experiments, how they’re impacting their KPIs, and if there was a return on the experiment investment. By using a storytelling approach, stakeholders can easily digest and understand the information, no matter where they are in their analytical maturity.