Split Testing
A practical UX and optimisation method for comparing fundamentally different experience directions with controlled live traffic.
How to use split testing to compare distinct versions of an experience, measure performance reliably, and choose the stronger direction with confidence.
Quick take
If you want to compare completely different approaches, split your traffic and measure what wins.
What it is
Split testing is a UX and optimisation method where users are divided into groups and shown entirely different versions of a page, delight moment, or experience.
Unlike A/B testing, which usually tests small changes, split testing often compares significantly different designs or structures.
Each version is hosted separately, and traffic is distributed between them.
Performance is measured using defined metrics such as conversion rate, engagement, or task completion.
The focus is on identifying which overall approach performs better, so you can make an informed decision when choosing between fundamentally different directions.
Split testing is most useful when the decision is between distinct strategic directions, not minor interface tweaks.
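The traffic split described above can be sketched as deterministic bucketing. This is a minimal illustration, not a production router: the variant names, weights, and function names are all hypothetical.

```python
# Minimal sketch of sticky traffic splitting between two separately
# hosted versions. Variant names and the 50/50 weights are illustrative.
import hashlib

VARIANTS = [("control", 0.5), ("redesign", 0.5)]

def assign_variant(user_id: str) -> str:
    """Hash the user ID into [0, 1] and pick a variant by cumulative weight.

    Hashing (rather than random choice) keeps assignment sticky: the same
    user sees the same version every time for the life of the test.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    cumulative = 0.0
    for name, weight in VARIANTS:
        cumulative += weight
        if bucket <= cumulative:
            return name
    return VARIANTS[-1][0]  # guard against floating-point rounding
```

In practice the split usually happens at the router, CDN, or experimentation platform level, but the principle is the same: assignment must be consistent per user and proportional across the population.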
When to use it
Use this method when you are comparing big changes.
It is most useful when:
- you are choosing between fundamentally different designs, structures, or strategic directions
- a redesign or new approach needs behavioural evidence before committing to it
It is less useful when:
- you are testing minor interface tweaks, where A/B testing is usually the better fit
- traffic is too low to reach statistical significance in a reasonable timeframe
Split testing is often used in optimisation and redesign scenarios.
Key takeaway
Use split testing when you need behaviour evidence to choose between materially different experience approaches.
How to run it
Set up properly.
Before you start, be clear on the variations you are testing, the success metrics, and how traffic will be split.
Ensure each version is stable and functional.
Run the method.
Split testing is controlled and comparative.
Create separate versions of the experience. Divide traffic between them. Run the test over a defined period. Collect performance data. Keep conditions consistent.
Avoid introducing additional changes during the test.
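The data-collection step above can be sketched as a minimal in-memory tracker. All names here are illustrative; a real test would record these events in your analytics tooling rather than in process memory.

```python
# Minimal sketch of collecting performance data per version during a
# split test. The in-memory store and function names are illustrative.
from collections import defaultdict

results = defaultdict(lambda: {"visitors": 0, "conversions": 0})

def record_visit(variant: str) -> None:
    """Count one visitor entering the test for a given version."""
    results[variant]["visitors"] += 1

def record_conversion(variant: str) -> None:
    """Count one completed goal action for a given version."""
    results[variant]["conversions"] += 1

def conversion_rate(variant: str) -> float:
    """Conversions as a share of visitors for one version."""
    data = results[variant]
    return data["conversions"] / data["visitors"] if data["visitors"] else 0.0
```

The same structure extends to other defined metrics, such as engagement events or task completion, as long as every version is measured the same way under the same conditions.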
Capture and make sense of it.
The value comes from clear comparison.
After the test: compare performance across versions, assess statistical significance, identify the stronger approach, and apply learnings to future design decisions.
Use this to guide major direction.
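Assessing statistical significance, as described above, can be sketched as a standard two-proportion z-test on the conversion counts. This is a simplified illustration using only the standard library; the inputs are hypothetical, and a real test should also fix its sample size and duration in advance.

```python
# Hedged sketch of a two-sided, two-proportion z-test for comparing
# conversion rates between two versions. Inputs are illustrative.
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # identical, degenerate samples: no detectable difference
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference between versions reflects a real effect rather than chance; above it, the test has not shown a reliable winner.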
What to look for
Focus on:
- conversion rate, engagement, and task completion for each version
- whether the difference between versions is statistically significant
- which overall approach performs better, not which small detail
Where it goes wrong
Most issues come from:
- introducing additional changes while the test is running
- versions that are unstable or not functionally equivalent
- inconsistent conditions between versions
- calling a winner before reaching statistical significance
If the test isn’t controlled, the outcome is unreliable.
What you get from it
Done properly, this method gives you:
- behavioural evidence for choosing between materially different directions
- a clear, like-for-like comparison of how each version performs
- confidence that the chosen direction is backed by data, not opinion
Key takeaway
Split testing helps you choose the right path with confidence.
Get in touch
If this sounds like something you need, we can help you run split tests that give you clear direction and confidence in your decisions.
No guesswork. No assumptions. Just data that shows what works.
FAQ
Common questions
A few practical answers to the questions that usually come up around this method.
What is split testing in UX?
It is a method for comparing completely different versions of an experience.
When should you use split testing?
Use it when testing major design or structural changes.
How is it different from A/B testing?
Split testing compares larger, more distinct variations.
What can you test?
Layouts, full pages, delight moments, or product directions.
Does split testing improve UX?
Yes. It helps validate big decisions with real data.