Building Behavioral Experimentation Engines

dc.contributor.advisor: Popović, Zoran
dc.contributor.author: Liu, Yun-En
dc.date.accessioned: 2015-09-29T18:00:48Z
dc.date.available: 2015-09-29T18:00:48Z
dc.date.issued: 2015-09-29
dc.date.submitted: 2015
dc.description: Thesis (Ph.D.)--University of Washington, 2015
dc.description.abstract: Human behavior is an incredibly complex topic, given the variation between individuals and the many ways our environment can influence us. This complexity, combined with the difficulty and expense of running experiments with human subjects, means that many aspects of how we react and learn remain poorly understood. This thesis argues that the rise of online software and new machine learning algorithms gives us a new way to study these vast behavioral topics. First, by designing software people want to use, we can collect data far faster and more cheaply than ever before. Second, by identifying the objectives scientists implicitly maximize when choosing which experiments to run, we can invent new algorithms that maximize those objectives directly, automatically altering the software and measuring user responses. Third, these algorithms must account for implementation difficulties absent from the laboratory, where users are paid to participate. Four example algorithms, each designed to automatically run experiments on online software in service of a different scientific objective, are presented. These algorithms can maximize subject outcomes such as learning while discovering which factors most influence performance; efficiently discover the most general form of a scientific result while intelligently following many simultaneous chains of research; identify and sample the experimental conditions that yield the most surprising user behavior; and respect the implicit tradeoff between uncovering new scientific knowledge and providing good outcomes for users. Finally, no algorithm can make progress if the software fails to collect useful data, so a case study of designing new software deployment methods to collect better-quality data on algebra learning and social incentives is presented.
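The tradeoff the abstract describes between uncovering new knowledge and providing good outcomes for users is the classic explore/exploit problem. As a minimal illustration (not the thesis's actual algorithms), a Thompson-sampling loop over experimental conditions shows the idea: each user is assigned the condition whose Beta-posterior draw looks best, so traffic shifts toward the condition that helps users while the alternative is still sampled enough to learn its effect. The condition names and success rates below are invented for the sketch.

```python
import random

def thompson_sample(successes, failures):
    """Pick the condition whose Beta(successes+1, failures+1) draw is highest."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

# Hypothetical deployment: condition 1 truly helps users more often.
true_rates = [0.3, 0.6]
successes, failures = [0, 0], [0, 0]

random.seed(0)
for _ in range(2000):
    arm = thompson_sample(successes, failures)   # explore/exploit in one step
    if random.random() < true_rates[arm]:        # observe the user's outcome
        successes[arm] += 1
    else:
        failures[arm] += 1

# The better condition ends up with the bulk of the traffic, yet the
# worse one was sampled enough to estimate its effect.
print(successes[1] + failures[1], "vs", successes[0] + failures[0])
```

Because the posterior draws are random, the allocation never hard-commits to one condition, which is what lets such an engine keep producing scientific knowledge while still serving most users the better experience.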
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Liu_washington_0250E_14882.pdf
dc.identifier.uri: http://hdl.handle.net/1773/33692
dc.language.iso: en_US
dc.rights: Copyright is held by the individual authors.
dc.subject: Educational Games; Human-Computer Interaction; Machine Learning
dc.subject.other: Computer science
dc.subject.other: Educational technology
dc.subject.other: computer science and engineering
dc.title: Building Behavioral Experimentation Engines
dc.type: Thesis

Files

Original bundle

Name: Liu_washington_0250E_14882.pdf
Size: 5.19 MB
Format: Adobe Portable Document Format