
A Genetic Attack Against Machine Learning Classifiers to Steal Biometric Actigraphy Profiles from Health Related Sensor Data.

Authors :
Garcia-Ceja, Enrique
Morin, Brice
Aguilar-Rivera, Anton
Riegler, Michael Alexander
Source :
Journal of Medical Systems; Oct 2020, Vol. 44, Issue 10; 1 Black and White Photograph, 1 Diagram, 5 Charts, 3 Graphs
Publication Year :
2020

Abstract

In this work, we propose the use of a genetic-algorithm-based attack against machine learning classifiers with the aim of 'stealing' users' biometric actigraphy profiles from health-related sensor data. The target classification model uses daily actigraphy patterns for user identification. The biometric profiles are modeled as what we call impersonator examples, which are generated based solely on the target classifier's prediction confidence scores, obtained by repeatedly querying it. We conducted experiments in a black-box setting on a public dataset that contains actigraphy profiles from 55 individuals. The data consists of daily motion patterns recorded with an actigraphy device. These patterns can be used as biometric profiles to identify each individual. Our attack was able to generate examples capable of impersonating a target user with a success rate of 94.5%. Furthermore, we found that the impersonator examples have high transferability to other classifiers trained with the same training set. We also show that the generated biometric profiles closely resemble the ground-truth profiles, which can lead to sensitive data exposure, such as revealing the time of day an individual wakes up and goes to bed. [ABSTRACT FROM AUTHOR]
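The attack described in the abstract can be illustrated with a minimal sketch: a genetic algorithm that evolves candidate profiles using only the confidence score returned by a black-box classifier as its fitness signal. Everything below is an assumption for illustration — the toy softmax "target model", the 24-feature hourly profile, and the specific GA operators (tournament selection, uniform crossover, Gaussian mutation) are not taken from the paper's implementation.

```python
# Hedged sketch of a confidence-score-guided genetic attack on a
# black-box classifier. The target model, feature encoding, and GA
# operators are illustrative assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 24      # e.g. one mean activity count per hour of the day
N_USERS = 10         # hypothetical number of enrolled users
TARGET_USER = 3      # index of the user we try to impersonate

# Toy stand-in for the target model: softmax over negative distances to
# per-user prototype profiles. The attacker never sees these internals,
# only the score vector returned by query_black_box().
prototypes = rng.uniform(0, 1, size=(N_USERS, N_FEATURES))

def query_black_box(profile):
    """Return the classifier's per-user confidence scores for one profile."""
    logits = -np.linalg.norm(prototypes - profile, axis=1)
    scores = np.exp(logits - logits.max())
    return scores / scores.sum()

def fitness(profile):
    # Fitness = confidence assigned to the target user; this scalar is
    # the only feedback the attack needs.
    return query_black_box(profile)[TARGET_USER]

def evolve(pop_size=50, generations=200, mutation_rate=0.1):
    pop = rng.uniform(0, 1, size=(pop_size, N_FEATURES))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Tournament selection: keep the better of two random candidates.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, N_FEATURES)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation on a random fraction of the genes.
        mutate = rng.random(children.shape) < mutation_rate
        children[mutate] += rng.normal(0.0, 0.1, size=mutate.sum())
        pop = np.clip(children, 0, 1)
    best = max(pop, key=fitness)
    return best, fitness(best)

impersonator, confidence = evolve()
```

Because the fitness function is just the returned confidence score, the same loop applies unchanged to any classifier that exposes per-class probabilities, which is consistent with the black-box setting the abstract describes.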

Details

Language :
English
ISSN :
0148-5598
Volume :
44
Issue :
10
Database :
Complementary Index
Journal :
Journal of Medical Systems
Publication Type :
Academic Journal
Accession number :
146224563
Full Text :
https://doi.org/10.1007/s10916-020-01646-y