Thanks to a project led by Fyber’s Data Science team, we’ve just made split testing for your app even easier. While Multi-Testing with Fyber is already a simple three-step setup, we wanted to help you answer “what’s next?” The Multi-Testing dashboard interface now populates with a recommended test duration in real time as you create your variants and enter traffic splits.
Test with confidence knowing that your results over the recommended test duration are valid and statistically significant for your experiment. Our team developed a formula based on statistical assumptions about your current and historical data with Fyber, alongside a multiplier derived from the number of users in each variant group. Our internal testing has shown that the winning formula consistently delivers >90% confidence that results are statistically significant, enough to support a sound conclusion of cause and effect rather than mere correlation. This confidence level indicates that applying the winning variant’s settings across 100% of that placement’s traffic should drive a beneficial impact on yield that is consistent with the results of the Multi-Test.
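To illustrate the kind of statistical reasoning involved (this is a textbook two-proportion sample-size calculation, not Fyber’s actual formula, which also incorporates your historical data with Fyber), here is a sketch of how significance level, power, and effect size determine how many users each variant needs, and how the smallest variant’s DAU then translates that into days. All rates and defaults below are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(p_control: float, p_variant: float,
                         alpha: float = 0.10, power: float = 0.90) -> int:
    """Users needed per variant to detect the difference between two
    conversion rates at the given significance level and power.
    Standard two-proportion z-test sizing; illustrative only."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

def recommended_days(sample_size: int, dau_smallest_variant: int) -> int:
    """Days for the smallest variant to accumulate the required sample,
    assuming each day contributes roughly its DAU in fresh observations."""
    return ceil(sample_size / dau_smallest_variant)

# Hypothetical example: detect a lift from a 5% rate to a 6% rate.
n = required_sample_size(0.05, 0.06)
days = recommended_days(n, dau_smallest_variant=500)
```

Note how the recommended duration falls as the smallest variant’s DAU rises: the sample size is fixed by the statistics, and DAU only controls how fast you collect it.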
Guidance around the recommended test duration ensures that results are trustworthy, while helping you strike a balance between other crucial considerations for testing:
- Your risk tolerance (in case your test variant performs worse than your control group)
- Scale of your user base (the fewer DAUs in the smallest test variant, the longer the recommended test duration)
- Product development roadmap and schedule (ensuring that results are available when decisions have to be made)
For example, if the test variant has a small percentage of traffic allocated to it that tops out at 500 DAU, the recommended number of days to run the test may exceed 60 days. Such a lengthy timeframe may not align with internal initiatives, which often require concrete next steps on a timeline closer to engineering sprints.
On the other hand, if the test variant has a larger percentage of traffic allocated to it, say 100k DAU, the test can likely reach conclusive results within a week, giving you plenty of time to roll out the change to 100% of traffic and move forward with your next ad monetization experiment.
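The inverse relationship in the two scenarios above can be sketched with simple arithmetic. The 50,000-user target below is purely hypothetical (Fyber’s recommendation is computed from your own current and historical data), but it shows why a 500-DAU variant can need months while a 100k-DAU variant concludes in days:

```python
from math import ceil

# Hypothetical target: users needed per variant for a conclusive result.
# Illustrative only; not Fyber's actual recommendation logic.
REQUIRED_USERS_PER_VARIANT = 50_000

def days_to_conclusion(variant_dau: int) -> int:
    """Roughly how many days a variant needs to accumulate the target,
    assuming each day contributes its DAU in fresh observations."""
    return ceil(REQUIRED_USERS_PER_VARIANT / variant_dau)

small_variant_days = days_to_conclusion(500)      # low-traffic variant
large_variant_days = days_to_conclusion(100_000)  # high-traffic variant
```

Doubling the smallest variant’s traffic allocation roughly halves the wait, which is why the dashboard recomputes the recommendation as you adjust traffic splits.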
Our test duration recommendation feature builds on Fyber’s drive to deliver superior results, efficiency and transparency through our technology and consultative service from our team to yours. This feature allows you to maintain full control: pause or stop your tests at any time through your Fyber publisher dashboard, regardless of our test duration recommendation. We’re on your side to ensure decisions and changes are based on statistically sound tests that will help you drive growth for your apps.
For more information about Multi-Testing use cases and best practices, contact your account manager to get the FairBid Multi-Testing Handbook.