UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
buir.contributor.author | Çiçek, A. Ercüment | |
buir.contributor.orcid | Çiçek, A. Ercüment|0000-0001-8613-6619 | |
dc.citation.epage | 124 | en_US |
dc.citation.spage | 115 | en_US |
dc.contributor.author | Erdoğan, Ege | |
dc.contributor.author | Küpçü, Alptekin | |
dc.contributor.author | Çiçek, A. Ercüment | |
dc.date.accessioned | 2023-02-26T12:34:15Z | |
dc.date.available | 2023-02-26T12:34:15Z | |
dc.date.issued | 2022-11-07 | |
dc.department | Department of Computer Engineering | en_US |
dc.description | Conference Name: 21st Workshop on Privacy in the Electronic Society, WPES 2022 | en_US |
dc.description | Date of Conference: 7 November 2022 | en_US |
dc.description.abstract | Training deep neural networks often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. Split learning aims to address this concern by distributing the model between a client and a server. The scheme supposedly provides privacy, since the server cannot see the clients' models and inputs. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and obtain a model functionally similar to the client model, without being detected. (2) We show that if the client keeps hidden only the output layer of the model to "protect" the private labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using various benchmark datasets and against proposed privacy-enhancing extensions to split learning. Our results show that plaintext split learning can pose serious risks, ranging from data (input) privacy to intellectual property (model parameters), and provide no more than a false sense of security. © 2022 Owner/Author. | en_US |
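The first attack described in the abstract needs no knowledge of the client's data or trained weights: the server only observes the intermediate activations ("smashed data") sent across the split and knows the client-side architecture. A reading consistent with the abstract is a coordinate-descent procedure that alternately optimizes a candidate input and a randomly initialized clone of the client model until the clone reproduces the observed activations, yielding both a reconstructed input and a functionally similar model. The sketch below illustrates this idea in PyTorch; the toy architecture, optimizer settings, loop counts, and total-variation weight are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of an inversion/stealing attack on split learning, assuming a
# toy convolutional client and hand-picked hyperparameters (not from the paper).
import torch
import torch.nn as nn

def make_client():
    # Assumed toy client-side architecture. The attacker (the server) only
    # needs to know this architecture, not the client's trained weights.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    )

def total_variation(x):
    # Smoothness prior on the reconstructed input, common in inversion attacks.
    return ((x[..., 1:, :] - x[..., :-1, :]).abs().mean() +
            (x[..., :, 1:] - x[..., :, :-1]).abs().mean())

def unsplit_recover(smashed, input_shape, steps=100, inner=20, tv_weight=0.1):
    """Recover an input estimate and a functional clone of the client model
    from one observed intermediate activation ("smashed data")."""
    clone = make_client()                        # same architecture, random weights
    x_hat = torch.zeros(input_shape, requires_grad=True)
    opt_x = torch.optim.Adam([x_hat], lr=0.01)
    opt_w = torch.optim.Adam(clone.parameters(), lr=0.001)
    for _ in range(steps):
        for _ in range(inner):                   # update the input estimate
            opt_x.zero_grad()
            loss = ((clone(x_hat) - smashed) ** 2).mean() \
                   + tv_weight * total_variation(x_hat)
            loss.backward()
            opt_x.step()
        for _ in range(inner):                   # update the clone's weights
            opt_w.zero_grad()
            loss = ((clone(x_hat) - smashed) ** 2).mean()
            loss.backward()
            opt_w.step()
    return x_hat.detach(), clone

# Example: the honest-but-curious server records the client's output for one
# input and runs the attack (here the "client" is simulated locally).
if __name__ == "__main__":
    client = make_client()                       # stands in for the real client
    x = torch.rand(1, 1, 28, 28)                 # private input, unknown to the server
    with torch.no_grad():
        smashed = client(x)                      # what the server actually observes
    x_rec, clone = unsplit_recover(smashed, x.shape)
    print("reconstruction MSE:", ((x_rec - x) ** 2).mean().item())
```

Because the server performs its regular protocol duties and only runs this optimization offline on activations it legitimately receives, nothing in the client's view changes, which is why the abstract stresses that the attack goes undetected.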
dc.description.provenance | Submitted by Cem Çağatay Akgün (cem.akgun@bilkent.edu.tr) on 2023-02-26T12:34:15Z No. of bitstreams: 1 UnSplit_Data_Oblivious_Model_Inversion_Model_Stealing_and_Label_Inference_Attacks_Against_Split_Learning.pdf: 6966612 bytes, checksum: b7a4408f6e9c1f4b4ae1587f9e3e5f12 (MD5) | en |
dc.description.provenance | Made available in DSpace on 2023-02-26T12:34:15Z (GMT). No. of bitstreams: 1 UnSplit_Data_Oblivious_Model_Inversion_Model_Stealing_and_Label_Inference_Attacks_Against_Split_Learning.pdf: 6966612 bytes, checksum: b7a4408f6e9c1f4b4ae1587f9e3e5f12 (MD5) Previous issue date: 2022-11-07 | en |
dc.identifier.doi | 10.1145/3559613.3563201 | en_US |
dc.identifier.isbn | 9781450398732 | |
dc.identifier.uri | http://hdl.handle.net/11693/111765 | |
dc.language.iso | English | en_US |
dc.publisher | Association for Computing Machinery | en_US |
dc.relation.isversionof | https://dx.doi.org/10.1145/3559613.3563201 | en_US |
dc.subject | Data privacy | en_US |
dc.subject | Label leakage | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Model inversion | en_US |
dc.subject | Model stealing | en_US |
dc.subject | Split learning | en_US |
dc.title | UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning | en_US |
dc.type | Conference Paper | en_US |
Files
Original bundle
- Name: UnSplit_Data_Oblivious_Model_Inversion_Model_Stealing_and_Label_Inference_Attacks_Against_Split_Learning.pdf
- Size: 6.64 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.69 KB
- Format: Item-specific license agreed upon to submission