Implementing Binary Neural Networks.

dc.contributor.advisor: Patel, Shwetak
dc.contributor.author: Fromm, Joshua Wolff
dc.date.accessioned: 2020-04-30T17:40:00Z
dc.date.available: 2020-04-30T17:40:00Z
dc.date.issued: 2020-04-30
dc.date.submitted: 2020
dc.description: Thesis (Ph.D.)--University of Washington, 2020
dc.description.abstract: The recent renaissance of deep neural networks has led to impressive advancements in many domains of machine learning. However, the computational cost of these neural models increases in line with their performance, with many state-of-the-art models only being able to run on expensive high-end hardware. The need to efficiently deploy neural networks to commodity platforms has made network optimization a popular field of research. One particularly promising technique is network binarization, which quantizes the weights and activations of a model to only one or two bits. Although binarization offers theoretical operation count reductions of up to 32X, no actual measurements have been reported. This is a symptom of the gap between theory and implementation of binary networks that exists today. In this work, we bridge the gap between abstract simulations and real, usable, high-speed networks. To do so, we identify errors in the existing literature, develop novel algorithms, and introduce Riptide, an open source system that can train and deploy state-of-the-art binary neural networks to multiple hardware backends.
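The abstract's core idea — quantizing weights and activations to ±1 so that a dot product reduces to XNOR plus popcount on packed bits — can be illustrated with a small sketch. This is not code from the thesis or from Riptide; it is a hypothetical NumPy illustration of the general binarization technique, using the identity dot = 2·(matching bits) − n for ±1 vectors.

```python
import numpy as np

def binarize(x):
    """1-bit quantization: map each float to +1 or -1 by sign."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def pack_bits(b):
    """Pack a ±1 vector into a bit array: +1 -> bit 1, -1 -> bit 0."""
    return np.packbits((b > 0).astype(np.uint8))

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two ±1 vectors of length n from their packed bits.
    XNOR marks positions where the signs agree; then
    dot = matches - (n - matches) = 2 * matches - n."""
    xnor = np.bitwise_not(np.bitwise_xor(a_bits, b_bits))
    matches = 0
    for i, byte in enumerate(xnor):
        for j in range(8):                 # packbits stores MSB first
            if i * 8 + j < n and (int(byte) >> (7 - j)) & 1:
                matches += 1
    return 2 * matches - n
```

A single 64-bit XNOR plus popcount replaces 64 float multiply-accumulates, which is the source of the up-to-32X operation-count reduction the abstract cites (real hardware kernels, as the thesis argues, must be measured rather than assumed).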
dc.embargo.terms: Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Fromm_washington_0250E_20903.pdf
dc.identifier.uri: http://hdl.handle.net/1773/45421
dc.language.iso: en_US
dc.rights: none
dc.subject: Computer Vision
dc.subject: Machine Learning
dc.subject: Quantization
dc.subject: Systems
dc.subject: Artificial intelligence
dc.subject.other: Electrical engineering
dc.title: Implementing Binary Neural Networks.
dc.type: Thesis

Files

Original bundle

Name: Fromm_washington_0250E_20903.pdf
Size: 1.73 MB
Format: Adobe Portable Document Format