Technical terms don’t need to be scary or complicated. We’ve compiled a list of technical terms related to A/B testing that you may often come across, along with a simple explanation for each.
Term | Description |
---|---|
A/B Test (Split Test) | A method of comparing two versions of a webpage or app to determine which one performs better by presenting them to different groups of users simultaneously. It helps identify which version drives more desired actions, such as clicks or conversions. |
Control Group | The group of users in an A/B test that is exposed to the current or existing version (control) of a webpage or app. Their behavior (A) is used as a baseline to compare against the variant (B) group. |
Variant Group | The group of users in an A/B test that is exposed to the new or modified version (variant) of a webpage or app. Their behavior (B) is compared to the control (A) group to assess the impact of the changes. |
Conversion Rate | The percentage of users who take a desired action, such as making a purchase or signing up, out of the total number of users exposed to a particular version of a webpage or app. It measures the effectiveness of a design or content change. |
Sample Size | The number of users or visitors included in an A/B test for each group (control and variant). A larger sample size typically provides more reliable and statistically significant results. |
Statistical Significance | A measure that indicates whether the differences in performance between the control and variant groups are likely due to chance or are genuinely significant. A higher level of statistical significance implies greater confidence in the test results. |
Hypothesis Testing | The process of formulating a hypothesis (e.g., “Changing the button color will increase click-through rates”) and then using statistical analysis to determine if the observed results support or reject the hypothesis. |
Multivariate Testing | An A/B testing method that involves testing multiple variations of different elements simultaneously. It helps identify the best combination of changes to optimise user interactions. |
Segmentation | The practice of dividing the user base into different groups (segments) based on characteristics such as demographics, behavior, or location. A/B tests can be conducted separately for each segment to tailor optimisations. |
Cookie-Based Tracking | A technique that uses cookies to track and remember user interactions and group assignment in A/B tests. This ensures that a user consistently sees the same version of a webpage or app during their visit, even if they navigate to different pages. |
Null Hypothesis (H0) | The default assumption in A/B testing that there is no significant difference between the control and variant groups. It serves as a baseline to test whether any observed differences are statistically significant or if they could have occurred by chance. If the null hypothesis is rejected, it suggests that the changes made in the variant group had a meaningful impact. |
Confidence Interval | A range of values calculated from A/B test data that provides a level of confidence about where the true effect lies. For example, a 95% confidence interval might suggest that the actual impact of a change is likely to fall within a certain range, helping decision-makers gauge the uncertainty of results. |
Bucketing | The process of assigning users to either the control group or the variant group in an A/B test. It’s often done randomly to ensure that the groups are comparable, and it helps prevent bias in user selection. Bucketing is typically based on factors like cookies or user IDs. |
Bounce Rate | The percentage of users who visit a webpage or app but leave without interacting with it further. A high bounce rate may indicate that the content or design of a page needs improvement, making it an essential metric to monitor in A/B testing for user engagement. |
Duration of Test | The length of time an A/B test is conducted. It’s essential to run tests for a long enough duration to gather sufficient data and account for factors like day-of-week or seasonality. A short test duration may lead to inconclusive or misleading results. |
Call to Action (CTA) | A CTA is a prompt that instructs or encourages the user to take a specific action, often highlighted through buttons or links, driving engagement on a website or in an application. |
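To make the statistical terms above concrete, here is a minimal sketch (not tied to any particular tool) of how conversion rates, hypothesis testing, and statistical significance fit together. It uses a two-proportion z-test, a common choice for comparing control and variant conversion rates; the numbers are hypothetical.

```python
import math

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test comparing the conversion rates of
    control (A) and variant (B). Returns the z-statistic and p-value."""
    p_a = conversions_a / n_a                      # control conversion rate
    p_b = conversions_b / n_b                      # variant conversion rate
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return z, p_value

# Hypothetical data: 200/5000 conversions in control, 250/5000 in the variant.
z, p = two_proportion_z_test(conversions_a=200, n_a=5000,
                             conversions_b=250, n_b=5000)
# Reject the null hypothesis (H0: no difference) at the 5% level when p < 0.05.
```

If the p-value falls below your chosen significance threshold (commonly 0.05), the observed difference is unlikely to be due to chance alone, and the null hypothesis is rejected.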
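Bucketing is often implemented by hashing a stable identifier (such as a user ID from a cookie), so the same user always lands in the same group without the server having to remember the assignment. A rough illustration, with hypothetical names:

```python
import hashlib

def bucket(user_id: str, experiment: str, variant_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'variant'.

    Hashing user_id together with the experiment name means a given user
    always sees the same version, and different experiments get
    independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF   # map hash to [0, 1]
    return "variant" if fraction < variant_share else "control"

# The same user is always bucketed the same way for a given experiment.
assert bucket("user-42", "cta-colour") == bucket("user-42", "cta-colour")
```

Because the assignment is a pure function of the identifier, it also survives page navigation, matching the behavior described under cookie-based tracking above.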
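A confidence interval for the lift (variant rate minus control rate) can be computed directly from the test data. This sketch uses the standard normal approximation with z = 1.96 for a 95% interval; the figures are hypothetical.

```python
import math

def lift_confidence_interval(conversions_a, n_a, conversions_b, n_b, z=1.96):
    """Approximate 95% confidence interval (z = 1.96) for the
    difference in conversion rates, variant minus control."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

low, high = lift_confidence_interval(200, 5000, 250, 5000)
# If the whole interval sits above 0, the variant's lift is
# significant at roughly the 95% confidence level.
```

A wide interval signals high uncertainty, usually a sign that the sample size is too small or the test duration too short.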
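Sample size and test duration go hand in hand: you can estimate up front how many users each group needs before the test can detect a given lift. The sketch below uses a textbook two-proportion formula at roughly 5% significance and 80% power; the baseline rate and minimum detectable effect are hypothetical inputs you would choose for your own test.

```python
import math

def sample_size_per_group(p_base, mde, z_alpha=1.96, z_power=0.84):
    """Rough per-group sample size to detect an absolute lift `mde`
    over a baseline conversion rate `p_base`, at ~5% significance
    (two-sided, z_alpha=1.96) and ~80% power (z_power=0.84)."""
    p_var = p_base + mde
    p_bar = (p_base + p_var) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_base * (1 - p_base)
                                       + p_var * (1 - p_var))) ** 2
    return math.ceil(numerator / mde ** 2)

# Detecting a 1-percentage-point lift over a 4% baseline
# requires several thousand users in each group.
n = sample_size_per_group(p_base=0.04, mde=0.01)
```

Dividing the required sample size by your daily traffic gives a rough minimum test duration, which should also be rounded up to cover full weekly cycles.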
Users who test with Simple AB Test have an easy time setting up their variants right from within our dashboard. All it takes is one line of code, and you’re good to go!