SplitGuard: Detecting and mitigating training-hijacking attacks in split learning

buir.contributor.author: Çiçek, A. Ercüment
buir.contributor.orcid: Çiçek, A. Ercüment|0000-0001-8613-6619
dc.citation.epage: 137
dc.citation.spage: 125
dc.contributor.author: Erdogan, Ege
dc.contributor.author: Küpçü, Alptekin
dc.contributor.author: Çiçek, A. Ercüment
dc.date.accessioned: 2023-02-26T11:25:54Z
dc.date.available: 2023-02-26T11:25:54Z
dc.date.issued: 2022-11-07
dc.department: Department of Computer Engineering
dc.description: Conference Name: 21st Workshop on Privacy in the Electronic Society, WPES 2022
dc.description: Date of Conference: 7 November 2022
dc.description.abstract: Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware utilization of the collective data of a group of data holders. Split learning, in particular, achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning any task of its choice, e.g. towards outputting easily invertible values. With a concrete example already proposed (Pasquini et al., CCS '21), such training-hijacking attacks present a significant risk for the data privacy of split learning clients. In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not. We experimentally evaluate our method's effectiveness, compare it with potential alternatives, and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries. © 2022 Owner/Author.
dc.description.provenance: Submitted by Cem Çağatay Akgün (cem.akgun@bilkent.edu.tr) on 2023-02-26T11:25:54Z. No. of bitstreams: 1. SplitGuard_Detecting_and_Mitigating_Training_Hijacking_Attacks_in_Split_Learning.pdf: 10482580 bytes, checksum: 6e3b83bd2b8b897b14ca6b60f7887343 (MD5)
dc.description.provenance: Made available in DSpace on 2023-02-26T11:25:54Z (GMT). Previous issue date: 2022-11-07
dc.identifier.doi: 10.1145/3559613.3563198
dc.identifier.isbn: 978-1-4503-9873-2
dc.identifier.uri: http://hdl.handle.net/11693/111764
dc.language.iso: English
dc.publisher: Association for Computing Machinery, New York, NY, United States
dc.relation.isversionof: https://dx.doi.org/10.1145/3559613.3563198
dc.subject: Data privacy
dc.subject: Machine learning
dc.subject: Model inversion
dc.subject: Split learning
dc.title: SplitGuard: Detecting and mitigating training-hijacking attacks in split learning
dc.type: Conference Paper
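The abstract above describes the split learning setup: the client runs the initial layers on its private data and sends only the intermediate activations ("smashed data") to the server, which computes the remaining layers. A minimal numpy sketch of that forward pass, under assumed illustrative layer sizes (784 → 128 → 10) that are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

# Client-side parameters: the initial layers, kept by the data holder.
W_client = rng.standard_normal((784, 128))
# Server-side parameters: the remaining layers, held by the
# (possibly malicious) server.
W_server = rng.standard_normal((128, 10))

def client_forward(x):
    # The client only ever transmits these intermediate activations
    # ("smashed data") to the server -- never the raw input x.
    return relu(x @ W_client)

def server_forward(smashed):
    return smashed @ W_server

x = rng.standard_normal((32, 784))   # a private batch of client data
smashed = client_forward(x)          # shape (32, 128), sent to the server
logits = server_forward(smashed)     # shape (32, 10), computed server-side
```

The attack surface the paper targets arises in the backward pass: the server returns gradients that update W_client, so a malicious server can steer the client model toward an invertible representation; per the abstract, SplitGuard is the client-side check that detects such steering.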

Files

Original bundle

Name: SplitGuard_Detecting_and_Mitigating_Training_Hijacking_Attacks_in_Split_Learning.pdf
Size: 10 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission