UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning

buir.contributor.author: Çiçek, A. Ercüment
buir.contributor.orcid: Çiçek, A. Ercüment | 0000-0001-8613-6619
dc.citation.epage: 124
dc.citation.spage: 115
dc.contributor.author: Erdoğan, Ege
dc.contributor.author: Küpçü, Alptekin
dc.contributor.author: Çiçek, A. Ercüment
dc.date.accessioned: 2023-02-26T12:34:15Z
dc.date.available: 2023-02-26T12:34:15Z
dc.date.issued: 2022-11-07
dc.department: Department of Computer Engineering
dc.description: Conference Name: 21st Workshop on Privacy in the Electronic Society, WPES 2022
dc.description: Date of Conference: 7 November 2022
dc.description.abstract: Training deep neural networks often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. Split learning aims to address this concern by distributing the model between a client and a server. The scheme supposedly provides privacy, since the server cannot see the clients' models and inputs. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected. (2) We show that if the client keeps hidden only the output layer of the model to "protect" the private labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using various benchmark datasets and against proposed privacy-enhancing extensions to split learning. Our results show that plaintext split learning can pose serious risks, ranging from data (input) privacy to intellectual property (model parameters), and provide no more than a false sense of security. © 2022 Owner/Author.
dc.identifier.doi: 10.1145/3559613.3563201
dc.identifier.isbn: 9781450398732
dc.identifier.uri: http://hdl.handle.net/11693/111765
dc.language.iso: English
dc.publisher: Association for Computing Machinery
dc.relation.isversionof: https://dx.doi.org/10.1145/3559613.3563201
dc.subject: Data privacy
dc.subject: Label leakage
dc.subject: Machine learning
dc.subject: Model inversion
dc.subject: Model stealing
dc.subject: Split learning
dc.title: UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
dc.type: Conference Paper
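
The abstract above describes the split learning setting and an inversion/stealing attack mounted by a server that knows only the client-side architecture. The PyTorch sketch below is a minimal illustration of that idea, not the authors' implementation: a toy client network and a random 28x28 input stand in for the private client side, and the server alternately optimizes a dummy input and a same-architecture surrogate model so that the surrogate's output matches the intermediate activations it observes. Names such as make_client_net, surrogate, and x_dummy, as well as the layer sizes, learning rates, and iteration count, are illustrative assumptions; the paper additionally regularizes the reconstructed input (e.g., with a total-variation term), which is omitted here.

import torch
import torch.nn as nn

# Hypothetical client-side architecture; in the attack the server only needs
# to know this architecture, not the client's trained parameters.
def make_client_net():
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    )

client_net = make_client_net()           # held by the client; parameters unknown to the server
x_true = torch.rand(1, 1, 28, 28)        # private client input (MNIST-sized, randomly generated here)
with torch.no_grad():
    z_observed = client_net(x_true)      # intermediate activations the server receives during split learning

# Server side: a surrogate model with the same architecture, plus a trainable dummy input.
surrogate = make_client_net()
x_dummy = torch.rand_like(x_true, requires_grad=True)

opt_input = torch.optim.Adam([x_dummy], lr=0.01)
opt_model = torch.optim.Adam(surrogate.parameters(), lr=0.001)

for step in range(2000):                 # iteration count is illustrative, not tuned
    # Alternate between refining the dummy input and the surrogate parameters,
    # minimizing the distance to the observed activations (plain MSE here).
    for opt in (opt_input, opt_model):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(x_dummy), z_observed)
        loss.backward()
        opt.step()
    with torch.no_grad():
        x_dummy.clamp_(0.0, 1.0)         # keep the reconstruction in a valid pixel range

# After convergence, x_dummy approximates the private input (model inversion) and
# surrogate behaves similarly to the client model (model stealing), which is the
# risk the abstract highlights.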

Files

Original bundle

Name: UnSplit_Data_Oblivious_Model_Inversion_Model_Stealing_and_Label_Inference_Attacks_Against_Split_Learning.pdf
Size: 6.64 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission