AAAI Conference Poster Presentation: Feature Space Hijacking Attacks against Differentially Private Split Learning

Authors: Laura Davis, LiveRamp

LiveRamp Engineering is proud to announce that Phil Stubbings and Greg Gawron’s coauthored paper, “Feature Space Hijacking Attacks against Differentially Private Split Learning,” has been accepted as a poster presentation at the Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22), held on February 28 and March 1, 2022. Acceptance through the rigorous review process of this top global AI research conference is a strong endorsement of the paper’s technical insights.

Here’s an overview from Greg and Phil on what the paper discusses:

Split learning and differential privacy are technologies with growing potential to help with privacy-preserving advanced analytics on distributed datasets. Attacks against split learning are an important evaluation tool and have recently been receiving increased research attention. This work’s contribution is to apply a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with differential privacy (DP), using a client-side off-the-shelf DP optimizer. The FSHA attack reconstructs the client’s private data with low error rates at arbitrarily set DP epsilon levels. We also experiment with dimensionality reduction as a potential attack risk mitigation and show that it may help to some extent. We discuss the reasons why differential privacy is not an effective protection in this setting and outline other potential risk-mitigation methods.
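For readers new to the setting, the sketch below illustrates the client side of differentially private split learning, the configuration the attack targets: the client computes “smashed” intermediate activations, the server runs the rest of the forward pass (in an FSHA attack, the attacker substitutes its own model here to steer the client’s feature space), and the client applies a DP-SGD-style update. This is a minimal, hypothetical example: the module names, layer sizes, and noise parameters are illustrative rather than taken from the paper, and the batch-level gradient clipping shown simplifies the per-example clipping that an off-the-shelf DP optimizer would actually perform.

```python
# Illustrative sketch of split learning with a client-side DP-SGD-style step.
# All names and hyperparameters here are assumptions for demonstration,
# not the paper's actual architecture or configuration.
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    """Client-side half of the split network; it holds the private data."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())

    def forward(self, x):
        return self.layers(x)

class ServerHead(nn.Module):
    """Server-side half. In an FSHA attack, the attacker replaces this with
    its own network, which trains the client to emit invertible features."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(128, 10))

    def forward(self, z):
        return self.layers(z)

client, server = ClientNet(), ServerHead()
opt = torch.optim.SGD(client.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
CLIP, SIGMA = 1.0, 1.0  # clipping norm and noise multiplier (assumed values)

def dp_step(x, y):
    opt.zero_grad()
    # In a real deployment only the smashed activations and their gradients
    # cross the client/server boundary; here both halves run in one process.
    loss = loss_fn(server(client(x)), y)
    loss.backward()
    # DP-SGD-style update on the client: clip the gradient norm, then add
    # Gaussian noise scaled to the clipping bound. (A production DP optimizer
    # clips per example; batch-level clipping is shown for brevity.)
    torch.nn.utils.clip_grad_norm_(client.parameters(), CLIP)
    for p in client.parameters():
        p.grad += torch.randn_like(p.grad) * SIGMA * CLIP / x.shape[0]
    opt.step()
    return loss.item()

x = torch.randn(32, 1, 28, 28)           # stand-in for a private batch
y = torch.randint(0, 10, (32,))
print(dp_step(x, y))
```

The paper’s finding is that the FSHA attack reconstructs the client’s private data with low error at arbitrarily set epsilon levels, i.e., regardless of how the noisy update in this step is calibrated.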

Register for the AAAI-22 conference here. An introduction to differentially private split learning concepts is available on LiveRamp’s Engineering Blog, and you can view the full paper on Cornell University’s arXiv website.


LiveRamp is hiring! Visit technical careers @ LiveRamp for information.