In most experiments, researchers collect data not all at once but sequentially over time. This allows them to observe outcomes early and to adapt the treatment assignment to reduce the costs of inferior treatments. This talk discusses multi-armed-bandit-type adaptive experimental designs and algorithms for balancing exploration of treatment effects against exploitation of better treatments. By design, bandits break the usual asymptotics and make inference difficult. We show how a batched bandit design allows for valid confidence intervals, and we evaluate the coverage of the batched bandit estimator in Monte Carlo simulations. In a real-world application, we investigate elements of a survey invitation message targeted at businesses, adaptively implementing a full factorial experiment with five elements.
Our results indicate that personalizing the message, emphasizing the authority of the sender, and pleading for help increase survey starting rates, whereas stressing strict privacy policies and changing the location of the survey URL have no response-enhancing effect. As a tool for researchers, we introduce the Stata package bandits, which facilitates running Monte Carlo simulations to assist the design and implementation of experiments before data are collected, running one's own bandit experiments interactively, and analyzing adaptively collected data. The package implements three popular treatment assignment algorithms: ε-first, ε-greedy, and Thompson sampling, and it supports estimation, inference, and visualization.
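To illustrate the exploration-exploitation trade-off behind two of the algorithms named above, here is a minimal sketch of ε-greedy and Thompson sampling for Bernoulli-reward arms. This is a generic illustration, not the syntax of the Stata bandits package; the two arms and their success rates (0.10 and 0.15) are hypothetical.

```python
import random

def epsilon_greedy(successes, trials, epsilon=0.1):
    """Explore a uniformly random arm with prob. epsilon; else exploit the best empirical mean."""
    if random.random() < epsilon:
        return random.randrange(len(trials))
    means = [s / t if t > 0 else 0.0 for s, t in zip(successes, trials)]
    return max(range(len(means)), key=means.__getitem__)

def thompson(successes, trials):
    """Sample each arm's success rate from its Beta(1+s, 1+f) posterior; play the largest draw."""
    draws = [random.betavariate(1 + s, 1 + (t - s)) for s, t in zip(successes, trials)]
    return max(range(len(draws)), key=draws.__getitem__)

# Simulate a two-arm Bernoulli bandit with hypothetical true rates 0.10 and 0.15.
random.seed(42)
true_p = [0.10, 0.15]
succ, tri = [0, 0], [0, 0]
for _ in range(5000):
    arm = thompson(succ, tri)           # swap in epsilon_greedy(succ, tri) to compare
    tri[arm] += 1
    succ[arm] += random.random() < true_p[arm]
print(tri)  # draws should increasingly concentrate on the better arm
```

Because the assignment probabilities adapt to past outcomes, the resulting data are not i.i.d., which is exactly why standard confidence intervals can fail and a batched design is needed for valid inference.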
Date
September 12, 2024, 1 p.m. to 2 p.m. (CET)
Venue
Institute for Employment Research
Regensburger Straße 104
90478 Nürnberg
Room Re100 E10
or online via MS Teams
Registration
Researchers who would like to participate, please send an e-mail to IAB.Colloquium@iab.de