Uber One A/B Test Fail

A deeper look into First1000's post on Uber One's subscription page.

I recently stumbled upon an article from First1000 discussing Uber's decision-making process in selecting a paywall design for their subscription product, Uber One. The crux of the discussion revolved around the concept of "information hierarchy" and how it influences user decisions. While the breakdown was insightful, there was a glaring omission: the absence of any critique of how the A/B test itself was executed. Let's dive deeper.

[Image: Uber One A/B testing marketing fail]

Understanding A/B Testing's Role

A/B testing isn't just about picking two designs and seeing which one garners a better reaction. It's a meticulous process that measures two versions of a webpage or app against each other to determine which one performs better on a specific objective, be it click-through rate, conversions, or another specific metric.
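
For illustration, here's a minimal sketch in Python of what "measuring against a specific metric" looks like in practice: a two-proportion z-test on conversion rate. The counts are entirely made up; plug in whatever your analytics actually logs.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts -- swap in your own logged impressions and sign-ups.
impressions_a, signups_a = 10_000, 520   # paywall design A
impressions_b, signups_b = 10_000, 460   # paywall design B

rate_a = signups_a / impressions_a
rate_b = signups_b / impressions_b

# Pooled two-proportion z-test: is the gap bigger than random noise would produce?
pooled = (signups_a + signups_b) / (impressions_a + impressions_b)
se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (rate_a - rate_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```

If p lands comfortably under your threshold (0.05 is the usual default), the winner is probably real; if not, you're looking at noise dressed up as a result.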

A/B Testing When First Launching a Product

When you first launch a product, it’s important to A/B test themes and concepts rather than specifics. Using Uber One as an example: if they were just launching this subscription service for the first time, they should be focused on testing two completely different themes to determine what direction to take.

What do I mean by this?

I would be testing completely different screens and messaging. Does it work with a 7-day free trial using …, or does a 70% off coupon work better? Would you run a listicle format, or simply the coupon and a CTA? The possibilities are endless, but the results should give you a direction for the next test.

Looking at the A/B test Uber One is running, they seem to have already settled on a concept as the “hero” and are instead looking to optimize it further. That means they are testing specific messaging, images, or colors within the concept to improve overall conversion. When doing this, you have to be incredibly methodical about how you set up your test.
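
Part of being methodical is how users get assigned to variants in the first place. Here's a minimal sketch, with hypothetical names since Uber's actual experimentation stack isn't public, of deterministic bucketing so the same user always sees the same variant for the life of the test:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user id together with the experiment name keeps assignments
    stable across sessions and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for this experiment.
print(assign_variant("user_12345", "uber_one_paywall_hero"))
```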

Where the Original Post Misses the Mark

The article presents two designs, A and B, and tells us that design A won. However, the rationale behind 'why' remains speculative. The focus on information hierarchy, while crucial, cannot be the sole factor driving this decision. Several questions remain unanswered:

  1. Metrics: What specific metrics was Uber using to judge the success of one design over the other? Was it sign-ups, click-through rate, time spent on the paywall, or something else?

  2. Testing Environment: Was the testing environment controlled? Were there external factors that could have influenced user decisions during the testing phase?

  3. Audience Segmentation: Were the designs shown to random users, or were they segmented based on behavior, demographics, or other criteria?

  4. Duration: How long was the test run? Initial results are often misleading, and a longer test duration could yield different results. A quick sample-size estimate (see the sketch after this list) tells you roughly how long the test actually needs to run.
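
On that last point, duration isn't a guess: you can back into it from a sample-size estimate. Here's a rough sketch, assuming a made-up 5% baseline conversion rate and a 1-point lift you'd care about detecting; none of these numbers come from Uber.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_cvr, min_lift, alpha=0.05, power=0.8):
    """Rough per-variant sample size to detect an absolute lift in
    conversion rate with a two-sided test at the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p1, p2 = baseline_cvr, baseline_cvr + min_lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (min_lift ** 2)
    return ceil(n)

# Hypothetical: 5% baseline CVR, want to reliably detect a 1-point lift.
n = sample_size_per_variant(0.05, 0.01)
print(f"~{n:,} users per variant")
# Divide by daily paywall traffic per variant to get a minimum run time.
```

Stop the test before you've hit that number and the "winner" is mostly luck.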

The Vital Role of Proper A/B Testing

Omitting a thorough analysis of A/B testing from the discussion leaves readers without a comprehensive understanding of the decision-making process. Proper A/B testing is rooted in scientific methodology, ensuring that results are not only accurate but also repeatable.

While information hierarchy plays a significant role in influencing user decisions, it's just one piece of the puzzle. It's essential to consider other elements like color psychology, page load times, and the clarity of the call-to-action. But, above all, the importance of rigorous and methodical A/B testing can't be overstated. It ensures that design decisions are data-driven, not just based on gut feelings or speculative theories.

When I view Design A, I notice many changes from Design B that could be counterproductive to the end goal of Uber One subscription sign-ups.

Uber One Subscription A/B Test

Here’s a list of design changes I noticed.

  • Color added to the “1 month free” offer

  • Color added to the icons

  • Color changed on the “Save $25…” text

  • Line spacing between the bullet points

  • Different logo fonts

  • Font sizing

Do you see any more changes? Let me know in the comments.

Now why is it a problem to run an A/B test with so many different changes? If version A won, who cares, right?

Wrong!

What if adding color to “1 month free” was actually a negative? Now you’ve reduced the overall conversion rate. Had you chosen to run the test on line spacing first, you could identify whether that alone improved the CVR (conversion rate). If it did, why? Is it perhaps the “Save $25 Every Month” above the CTA?

This gives you a direction to then say: let’s play with the messaging above that CTA, or with the CTA itself, to optimize it further. Is less more, or…?
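
One way to keep yourself honest here is to keep a backlog of single-variable tests and only carry a change forward once it has won on its own. A minimal sketch, with entirely hypothetical test names and values (not Uber's actual roadmap):

```python
from typing import Optional

# Each test changes exactly one thing against the current best page.
experiment_backlog = [
    {"name": "line_spacing",  "control": "tight",                "variant": "relaxed"},
    {"name": "cta_message",   "control": "Save $25 every month", "variant": "1 month free"},
    {"name": "benefit_icons", "control": "monochrome",           "variant": "colored"},
]

def next_test(completed: dict) -> Optional[dict]:
    """Return the next single-variable test that hasn't been run yet,
    so every new variant differs from the current winner by one change."""
    for test in experiment_backlog:
        if test["name"] not in completed:
            return test
    return None

# line_spacing already won with "relaxed"; cta_message is up next.
print(next_test(completed={"line_spacing": "relaxed"}))
```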

My guess is that the color changes were splitting hairs at best; however, that doesn’t mean you should arbitrarily throw them into an A/B test. They can absolutely poison your results when mixed with the other changes you’ve made.

I have personally seen a color change do exactly that. I have also seen cases where changing imagery had a big enough impact to negatively affect how the text on the page performed. So why would you change imagery, color, and everything else all at the same time?


In conclusion, while the initial post offers some insights into the role of information hierarchy, it falls short of providing a comprehensive view of the design decision-making process. Proper A/B testing, with its emphasis on empirical data and systematic methodology, is critical to making informed and impactful design choices. Happy marketing, folks. Until next time!
