Building Behavioral Experimentation Engines
Human behavior is an enormously complex topic, given the variation between individuals and the many ways our environment can influence us. This complexity, combined with the difficulty and expense of running experiments on humans, means that many aspects of how we react and learn remain poorly understood. This thesis argues that the rise of online software and new machine learning algorithms gives us a new way to study these behavioral questions at scale. First, by designing software people want to use, we can collect data far faster and more cheaply than ever before. Second, by identifying the objectives scientists implicitly maximize when choosing which experiments to run, we can invent new algorithms that optimize these objectives directly, automatically altering the software and measuring user responses. Third, these algorithms must account for implementation difficulties absent from the laboratory, where participants are paid to take part. Four example algorithms, each designed to automatically run experiments on online software in pursuit of a different scientific objective, are presented. These algorithms can maximize subject outcomes such as learning while discovering which factors most influence performance; efficiently discover the most general form of a scientific result while intelligently following many simultaneous chains of research; identify and sample the experimental conditions that yield the most surprising user behavior; and respect the implicit tradeoff between uncovering new scientific knowledge and providing good outcomes for users. Finally, no algorithm can make progress if the underlying software fails to collect useful data, so the thesis closes with a case study of designing software deployment methods that collect better-quality data on algebra learning and social incentives.
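The abstract does not name the specific algorithms used. As one illustration of the final tradeoff it describes, between uncovering new knowledge and providing good outcomes for users, the sketch below uses standard Beta-Bernoulli Thompson sampling, which embodies exactly that exploration/exploitation balance. The condition names and success rates are hypothetical, and this is a minimal sketch of the general technique, not the thesis's own method.

```python
import random

# Hypothetical experimental conditions (illustrative only, not from the thesis).
conditions = ["hint_on_demand", "worked_example", "no_support"]

# Beta(1, 1) priors over each condition's rate of a binary user outcome
# (e.g., whether the user solved the next problem).
posterior = {c: {"successes": 1, "failures": 1} for c in conditions}

def choose_condition():
    """Thompson sampling: draw a plausible success rate for each condition
    from its posterior and assign the incoming user to the highest draw.
    Uncertain conditions get sampled often (exploration: new scientific
    knowledge), while reliably good conditions dominate over time
    (exploitation: good outcomes for users)."""
    draws = {c: random.betavariate(p["successes"], p["failures"])
             for c, p in posterior.items()}
    return max(draws, key=draws.get)

def record_outcome(condition, success):
    """Update the chosen condition's posterior with the observed outcome."""
    key = "successes" if success else "failures"
    posterior[condition][key] += 1

# Simulated deployment: the true rates are unknown to the algorithm.
true_rate = {"hint_on_demand": 0.6, "worked_example": 0.7, "no_support": 0.4}
for _ in range(1000):
    c = choose_condition()
    record_outcome(c, random.random() < true_rate[c])

print(posterior)  # The best condition accumulates most of the assignments.
```

After enough users, most assignments concentrate on the best-performing condition while the posteriors over the others quantify what has been learned about them, which is the tradeoff the abstract describes.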