Title: Revealing structure in trained neural networks through dimensionality-based methods
Author: Farrell, Matthew Stuart
Advisor: Shea-Brown, Eric
Date issued: 2020-10-26
Year: 2020
File: Farrell_washington_0250E_22191.pdf
URI: http://hdl.handle.net/1773/46368
Description: Thesis (Ph.D.)--University of Washington, 2020
Format: application/pdf
Language: en-US
Rights: CC BY
Keywords: artificial neural networks; connectomics; deep learning; dimensionality; representation learning
Subjects: Neurosciences; Artificial intelligence; Applied mathematics
Type: Thesis

Abstract:
Neural networks trained by machine learning optimization methods are currently being analyzed to shed light on brain function. While exciting progress is being made, the complicated nature of the network models typically considered has made "opening the black box" a significant challenge. In this thesis I approach the problem by starting with network models and tasks that can be understood more easily, but that capture fundamental elements of more complex models. I reveal new aspects of the behavior of these models through the lens of effective dimensionality, which quantifies the number of axes needed to describe data. Through this investigation a new idea of "dimensionality balance" emerges, whereby neural networks trained with stochastic gradient descent automatically strike a balance between increasing dimensionality (to more easily distinguish between different objects) and decreasing dimensionality (to build invariance across different examples of the same object). Mathematical analysis reveals the core mechanisms that may underlie these effects, and experiments with the image classification network VGG indicate that this balance is a general phenomenon. Finally, I demonstrate how and why dimensionality reduction methods can be used to extract information from network weights in a simple model, laying out guiding principles for extracting insights from the recent explosion of brain connectomics data.
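The abstract's notion of effective dimensionality, the number of axes needed to describe data, is commonly quantified as the participation ratio of the eigenvalue spectrum of the data covariance matrix. A minimal sketch of that measure follows; this is a standard proxy from the literature and is an assumption here, since the thesis itself may use a different definition.

```python
import numpy as np

def participation_ratio(data):
    """Effective dimensionality of `data` (samples x features) as the
    participation ratio of the covariance eigenvalues l_i:
        PR = (sum_i l_i)^2 / sum_i l_i^2.
    PR is near the ambient dimension when variance is spread evenly
    across axes, and near 1 when one axis dominates."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative values from roundoff
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
# Isotropic Gaussian data spreads variance over all 10 axes: PR close to 10.
iso = rng.normal(size=(2000, 10))
# Rank-1 data varies along a single direction: PR close to 1.
low = rng.normal(size=(2000, 1)) @ rng.normal(size=(1, 10))
print(participation_ratio(iso))
print(participation_ratio(low))
```

Under this measure, "dimensionality balance" corresponds to training pushing PR up across object classes while pulling it down within a class.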