Privacy-Preserving Transfer Learning for Human Activity Recognition

Authors

Melanson, David

Abstract

Neural network models yield high accuracy in different application domains, making them an impactful tool for machine learning system developers. In the context of Human Activity Recognition (HAR), the data a deep neural network was originally trained on commonly does not translate optimally to new users. To remedy this, one can apply Transfer Learning (TL) to personalize the model to new end-users, leading to significant accuracy improvements. Such TL is, by design, performed on personal information of the end-users that, if not protected, can be (mis)used by application developers beyond the professed scope. We propose a cryptography-based solution for privacy-preserving TL to personalize a Convolutional Neural Network (CNN) for HAR with the data of end-users, without requiring the end-users to reveal their sensitive data in an unencrypted manner, and without requiring the owner of the CNN to disclose their trained model parameters or any other sensitive or proprietary information to anyone in plaintext. To this end, we use techniques from Secure Multi-Party Computation (MPC) to encrypt all data and model parameters, and to allow all parties to compute functions over that encrypted data. To demonstrate the effectiveness of our privacy-preserving solution, we compare the accuracy and runtimes against the TL algorithm in the clear, i.e., when no measures to protect privacy are in place. We performed tests on two datasets using two different MPC-based protocols to ensure the security of the data. Our tests not only demonstrate significant increases in accuracy, but also show that our approach may be fast enough to use in practice.
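The abstract does not name the specific MPC protocols used, but a common building block behind "computing functions over encrypted data" in MPC is additive secret sharing: each party holds a random-looking share of a value, no single share reveals anything, yet parties can compute on shares jointly. The sketch below is purely illustrative of that primitive (a two-party additive scheme over a ring), not a reconstruction of the thesis's actual protocols; `MOD` and the share/reconstruct helpers are assumptions for the example.

```python
import secrets

MOD = 2 ** 32  # ring size; real systems scale fixed-point model weights into this ring


def share(x, n=2):
    """Split integer x into n additive shares that sum to x mod MOD.

    Each individual share is uniformly random, so it leaks nothing about x.
    """
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares


def reconstruct(shares):
    """Recombine shares to recover the secret value."""
    return sum(shares) % MOD


# Secure addition: each party adds its local shares; no communication
# and no decryption is needed to obtain shares of x + y.
x_shares = share(12)
y_shares = share(30)
z_shares = [(a + b) % MOD for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```

Multiplication of shared values (needed for CNN layers) requires interaction between parties, which is where concrete MPC protocols differ and where most of the runtime cost measured in this kind of work arises.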

Description

Thesis (Master's)--University of Washington, 2021
