A/B Testing
A practical UX and optimisation method for making controlled, data-driven decisions by comparing real user behaviour across design variations.
How to use A/B testing to compare live variations, measure performance reliably, and optimise based on evidence rather than opinion.
Quick take
If you want to know what actually works, test one version against another.
What it is
A/B testing is a UX and optimisation method where two or more variations of a design are tested against each other to see which performs better.
Users are split into groups, with each group seeing a different version of the same experience.
Performance is measured using defined metrics such as conversion rate, clicks, or task completion.
The focus is on behaviour and outcomes, not opinions.
The goal is to make data-driven decisions by identifying which version delivers better results.
A/B testing is most useful when you need objective evidence of what improves outcomes in a live environment.
When to use it
Use this method when you want measurable improvement.
It is most useful when:
- You are optimising a live product with enough traffic to reach reliable results
- The decision can be isolated into clear, comparable alternatives
- Success can be measured with behavioural metrics such as conversion rate or task completion
It is less useful when:
- Traffic is too low to reach statistical significance
- You are still exploring the problem rather than choosing between defined options
A/B testing is most often used for optimisation work in live environments.
Key takeaway
Use A/B testing when the decision can be isolated into clear alternatives and measured with reliable behavioural data.
How to run it
Set up properly.
Before you start, be clear on the hypothesis you are testing, the variations being compared, and the success metrics.
Only test one meaningful change at a time.
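As a sketch, the setup can be as simple as a plan written down before any traffic is split. The fields and values below are illustrative, not tied to any specific tool:

```python
# Illustrative test plan, fixed before the experiment starts.
experiment = {
    "hypothesis": "A shorter checkout form will increase completion rate",
    "variants": {
        "A": "current three-step checkout",  # control
        "B": "single-page checkout",         # the one meaningful change
    },
    "success_metric": "checkout_completion_rate",
    "minimum_run_days": 14,  # decided in advance, not mid-test
}
```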
Run the method.
A/B testing is controlled and data-driven.
Split users into groups. Show each group a different version. Run the test for a defined period. Collect performance data. Ensure conditions remain consistent.
Avoid changing variables mid-test.
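One common way to keep the split stable is deterministic hashing, so the same user always sees the same version for the life of the test. A minimal Python sketch, with illustrative names:

```python
import hashlib

def assign_variant(user_id: str, experiment_name: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant.

    Hashing the user ID together with the experiment name means each
    user always lands in the same group, and different experiments
    split users independently of each other.
    """
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always sees the same version.
assert assign_variant("user-42", "checkout-test") == assign_variant("user-42", "checkout-test")
```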
Capture and make sense of it.
The value comes from measurable results.
After the test: compare performance between variations, assess statistical significance, identify the winning version, and apply learnings to future tests.
Use this to drive continuous improvement.
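For a binary metric such as conversion, significance is often assessed with a two-proportion z-test. A minimal sketch of that calculation (one common approach, not the only one):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, visitors_a: int, conv_b: int, visitors_b: int) -> float:
    """Two-sided z-test for a difference between two conversion rates.

    Returns the p-value: the probability of seeing a difference this
    large if both versions actually performed the same.
    """
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (conv_b / visitors_b - conv_a / visitors_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: 200/4000 conversions for A vs 250/4000 for B.
print(f"p-value = {two_proportion_z_test(200, 4000, 250, 4000):.3f}")  # ≈ 0.015
```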
What to look for
Focus on:
- Clear differences in performance between variations
- Statistical significance, not just raw numbers
- Results that stay consistent across the full test period
Where it goes wrong
Most issues come from:
- Testing more than one meaningful change at a time
- Changing variables mid-test
- Stopping before results reach statistical significance
- Running with too little traffic to detect a real difference
If the test isn’t controlled, the results are meaningless.
What you get from it
Done properly, this method gives you:
- Objective evidence of which version performs better
- Measurable improvement against defined metrics
- Learnings you can apply to future tests
Key takeaway
It helps you prove what works.
Get in touch
If this sounds like something you need, we can help you run A/B tests that deliver measurable improvements and remove guesswork from prioritisation.
No assumptions. No opinions. Just data that proves what works.
FAQ
Common questions
A few practical answers to the questions that usually come up around this method.
What is A/B testing in UX?
It is a method for comparing two or more versions of a design to see which performs better.
When should you use A/B testing?
Use it when optimising live products with sufficient traffic.
What can you test?
Layout, content, CTAs, moments of delight, and interactions.
How long should a test run?
Long enough to reach reliable, statistically significant results.
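As a rough guide to "long enough", the standard two-proportion sample-size formula shows how the required traffic depends on your baseline rate and the smallest lift you care about. A planning sketch with illustrative numbers:

```python
from statistics import NormalDist

def required_sample_size(baseline: float, expected: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough per-variant sample size to detect a lift in conversion rate.

    Standard two-proportion formula; treat the result as a planning
    estimate, not a guarantee.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = baseline * (1 - baseline) + expected * (1 - expected)
    return int((z_alpha + z_beta) ** 2 * variance / (expected - baseline) ** 2) + 1

# Detecting a lift from 5% to 6% needs ≈ 8,155 users per variant.
print(required_sample_size(0.05, 0.06))
```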
Does A/B testing improve UX?
Yes. It helps you optimise based on real user behaviour.