Advanced A/B testing: Make more profit, learn more about customers
Open your analytics real-time view and watch the users tick by. Every visit to your site is an opportunity to learn more about your market, and most of these opportunities are wasted.
One of the products I’ve been building over the past year is a site that helps people learn how to lipread. Like every other site of mine, I improve it continuously, using qualitative methods to generate ideas and quantitative methods to measure them. A profitable tool for this started out as traditional A/B testing; over time, I came up with ways to improve on that methodology.
In this article, I'd like to introduce two improvements over traditional A/B testing, in the hope that other people can profit from them as well.
Using feature vectors instead of variants:
Feature vectors are a set of bit fields describing which state the application renders. This includes calls to action, chrome text (the little snippets of wording such as "register" and "submit", both of which can usually be vastly improved), available affordances (things the user is allowed to do, typically rendered as menu options or buttons), graphics, designs, and so on.
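As a minimal sketch, a feature vector can be packed into a single integer, one bit per feature. The feature names below are illustrative, not taken from the article:

```python
# Sketch of a feature vector encoded as bit fields.
# Each bit controls one aspect of the rendered application;
# all feature names here are hypothetical examples.

FEATURES = ["cta_long_copy", "register_text_v2", "show_pricing_menu", "hero_graphic_b"]

def encode(enabled):
    """Pack a set of enabled feature names into an integer bit field."""
    vector = 0
    for i, name in enumerate(FEATURES):
        if name in enabled:
            vector |= 1 << i
    return vector

def is_enabled(vector, name):
    """Check a single feature bit in the vector."""
    return bool(vector & (1 << FEATURES.index(name)))

vector = encode({"cta_long_copy", "hero_graphic_b"})
# is_enabled(vector, "cta_long_copy")    -> True
# is_enabled(vector, "show_pricing_menu") -> False
```

Storing the vector with each analytics event later lets you slice any metric by any feature state.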
(Managing this across identities, i.e. providing a consistent experience to the same person across different computers and devices, is tricky.)
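One common way to get consistency once a stable identifier exists is deterministic assignment: hash the account ID together with the feature name, so the same person lands in the same state on every logged-in device. This is a sketch of that idea, not the article's implementation; anonymous visitors on multiple devices remain the hard part:

```python
import hashlib

def assign_variant(user_id, feature_name, num_variants):
    """Deterministically map (user, feature) to a variant index.

    Hashing means no assignment table is needed, and the same
    inputs always yield the same variant.
    """
    digest = hashlib.sha256(f"{feature_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_variants

# Same user and feature -> same variant, on any device:
assert assign_variant("user-42", "cta_copy", 3) == assign_variant("user-42", "cta_copy", 3)
```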
Unlike canonical A/B tests, a feature vector can have more than two states. This lets you keep an inventory of text and content in place and test which version works best (bearing in mind that the number of data points required for confident results scales linearly with the number of distinct states being tested).
This allows for many different levers, all connected to money-making activities:
- Randomly enrolling people into specific features. There are many good ways to do this: a coin flip (true randomness), a multi-armed bandit, and so on. The general idea is to test the full possibility space of "all apps" in the market, using metrics close to money-making operations as the feedback loop.
- Enabling beta features for enthusiastic users, either manually or by exposing a small proportion of overall traffic to them. This measures whether it makes sense to go down that road, or whether the feature is stillborn.
- Measuring cohorts: onboarding (the first experience customers are exposed to) is critically important. Hence, it's profitable to create multiple landing pages for different segments of the market and drive personalized traffic to each. By marking the first landing page users are enrolled into, segment-specific conversion rates can be measured, which informs marketing and customer development.
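The first lever above mentions two enrollment strategies; here is a rough sketch of both, a plain coin flip and an epsilon-greedy multi-armed bandit. "Success" stands in for whichever money-proximate metric feeds the loop:

```python
import random

def coin_flip_enroll(num_variants, rng=random):
    """True randomness: every variant gets an equal share of traffic."""
    return rng.randrange(num_variants)

def epsilon_greedy_enroll(successes, trials, epsilon=0.1, rng=random):
    """Multi-armed bandit (epsilon-greedy flavor): mostly exploit the
    best-performing variant so far, explore a random one occasionally.

    successes[i] / trials[i] is the observed success rate of variant i.
    """
    if rng.random() < epsilon or not any(trials):
        return rng.randrange(len(trials))  # explore
    rates = [s / t if t else 0.0 for s, t in zip(successes, trials)]
    return rates.index(max(rates))  # exploit
```

The bandit shifts traffic toward winners while the test is still running, which costs less revenue than a fixed 50/50 split but makes classical significance testing harder to apply.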
Measuring across the entire funnel instead of single hit points:
Most websites and applications have multiple goals to funnel attention toward, not necessarily in linear dependency on each other. On a content site, you typically want to measure whether people actually read your thoughts, then convert them to a newsletter subscription or further relevant items of interest. On product sites and apps, the conversion funnel is canonically trial - registration - subscription - cancellation.
Traditional A/B testing usually focuses on one specific improvement at one of these steps. This is vulnerable to trade-off side effects: it's usually very easy to improve the registration rate at the expense of conversion further down the funnel.
Measuring each feature point's interaction with each business-critical KPI captures this intuition by making the trade-offs explicit.
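One way to make those trade-offs explicit is to cross-tabulate each variant against every funnel step, so a lift in registration that hurts subscription is visible in the same table. A sketch with illustrative event names and made-up data:

```python
from collections import defaultdict

def funnel_rates(events, steps):
    """Per-variant conversion rate at each funnel step.

    events: list of (variant, reached_steps) pairs, one per user,
    where reached_steps is the set of funnel steps that user completed.
    """
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for variant, reached in events:
        totals[variant] += 1
        for step in reached:
            counts[variant][step] += 1
    return {
        variant: {step: counts[variant][step] / totals[variant] for step in steps}
        for variant in totals
    }

events = [
    ("A", {"trial", "registration"}),
    ("A", {"trial"}),
    ("B", {"trial", "registration", "subscription"}),
    ("B", {"trial"}),
]
rates = funnel_rates(events, ["trial", "registration", "subscription"])
# rates["A"]["registration"] -> 0.5
# rates["B"]["subscription"] -> 0.5
```

Reading the whole row per variant, rather than one cell, is what keeps a registration win from hiding a subscription loss.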
Case study: landing page design optimization:
Recently, I was working on optimizing the landing and onboarding flow for lipreading.org; according to analytics, the landing page loses about 20% of visitors. Granted, there might be some self-qualification here, but I suspected the value presentation was weaker than the actual value proposition.
So I made a few changes, and they had an impact all along the conversion funnel, like so:
Knowing the impact at each and every step makes it easier to discover the chain of inference.
I hope this was useful for folks looking to improve their sales.
Until next time,
About the author: Joel runs a product incubation and consulting company in San Francisco. If you'd like to read more by him, subscribe to the product development mailing list.