
      Bias Variance Tradeoff in Analysis of Online Controlled Experiments

      Preprint


          Abstract

          Many organizations utilize large-scale online controlled experiments (OCEs) to accelerate innovation. Having high statistical power to accurately detect small differences between control and treatment is critical, as even small changes in key metrics can be worth millions of dollars or indicate dissatisfaction for a very large number of users. For large-scale OCEs, the duration is typically short (e.g., two weeks) to expedite changes and improvements to the product. In this paper, we examine two common approaches for analyzing usage data collected from users within the time window of an experiment, which can differ in accuracy and power. The open approach includes all relevant usage data from all active users for the entire duration of the experiment. The bounded approach includes, for each user, only the data from a fixed observation period (e.g., seven days) starting from the first time that user became active in the experiment window.
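
          The difference between the two approaches amounts to how each user's usage records are windowed. Below is a minimal sketch in Python (pandas) illustrating both inclusion rules, assuming a hypothetical events table with user_id and event_time columns; the paper itself does not prescribe this implementation.

          ```python
          import pandas as pd

          def open_analysis(events: pd.DataFrame, start, end) -> pd.DataFrame:
              """Open approach: keep all usage events from active users
              over the entire experiment window [start, end]."""
              in_window = (events["event_time"] >= start) & (events["event_time"] <= end)
              return events[in_window]

          def bounded_analysis(events: pd.DataFrame, start, end,
                               observation_days: int = 7) -> pd.DataFrame:
              """Bounded approach: for each user, keep only events within a fixed
              observation period (e.g., seven days) after that user's first
              activity inside the experiment window."""
              in_window = events[(events["event_time"] >= start) & (events["event_time"] <= end)]
              # Approximate each user's exposure time by their first in-window event.
              first_active = in_window.groupby("user_id")["event_time"].transform("min")
              cutoff = first_active + pd.Timedelta(days=observation_days)
              return in_window[in_window["event_time"] <= cutoff]
          ```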


          Author and article information

          Date: 10 September 2020
          arXiv: 2009.05015
          Record identifier: dd22fd17-69d4-43f6-8210-3936a40a6550
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          MSC class: 62K99
          arXiv subjects: stat.AP, cs.SE
          Keywords: Software engineering, Applications
