
Anshuman Suri

@iamgroot42

Postdoc @ Khoury | Previously Ph.D. @ UVA (David Evans) | IIITD Alum | Interested in machine learning privacy & security.

45 Followers · 93 Following · 6 Posts · Joined 22.11.2024

Latest posts by Anshuman Suri @iamgroot42

5/ But the broader message? It's time to give 'parameter access' another serious look in privacy research πŸ”¬

Find more details in the paper (accepted to TMLR), w/ Xiao & Dave

πŸ“œ openreview.net/pdf?id=fmKJf...
πŸ’» github.com/iamgroot42/a...

18.12.2024 03:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

4/ The big open question remains: how close are optimal black-box attacks to this theoretical optimum? The gap might be negligible, suggesting black-box methods sufficeβ€”or significant, showing parameter access offers better empirical upper bounds πŸ€”

18.12.2024 03:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3/ Our work challenges this assumption head-on. By carefully analyzing SGD dynamics, we prove that optimal membership inference requires white-box access to model parameters. Our Inverse Hessian Attack (IHA) serves as a proof of concept that parameter access helps!

18.12.2024 03:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
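To make the idea in 3/ concrete, here is a minimal toy sketch of an inverse-Hessian-weighted membership score on a linear-regression model. This is an illustrative assumption, not the paper's exact IHA attack: the score `g(z)ᵀ H⁻¹ g(z)` (per-example loss gradient weighted by the inverse Hessian of the training loss at the learned parameters) is only meant to show why parameter access matters, since both `g` and `H` require white-box access to θ.

```python
import numpy as np

# Toy linear regression: loss(theta) = mean_i (x_i . theta - y_i)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
theta_true = rng.normal(size=5)
y = X @ theta_true + 0.1 * rng.normal(size=100)

# "Trained" parameters: closed-form least-squares solution.
theta = np.linalg.lstsq(X, y, rcond=None)[0]

# Hessian of the mean-squared training loss w.r.t. theta: (2/n) X^T X.
H = 2.0 * X.T @ X / len(X)
H_inv = np.linalg.inv(H)

def hessian_weighted_score(x, y_val):
    """Illustrative white-box score g(z)^T H^{-1} g(z), where
    g(z) = 2 * (x.theta - y_val) * x is the per-example loss gradient.
    Computing it needs the model parameters theta and the Hessian H."""
    g = 2.0 * (x @ theta - y_val) * x
    return float(g @ H_inv @ g)

member = hessian_weighted_score(X[0], y[0])               # training point
non_member = hessian_weighted_score(rng.normal(size=5), 1.0)  # fresh point
print(member, non_member)
```

Training points tend to have small residuals at the learned θ, so their scores are typically lower than those of fresh points; a real attack would threshold this score. For large models one would use Hessian-vector products instead of forming `H⁻¹` explicitly.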

2/ Prior work (e.g., proceedings.mlr.press/v97/sablayro...) suggests black-box access is optimal for membership inference, assuming SGLD as the learning algorithm. But those assumptions break down for models trained with SGD.

18.12.2024 03:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

1/ Most membership inference attacks (MIAs) have seemingly converged to black-box settings, driven by empirical evidence and theoretical folklore suggesting black-box access was optimal. But what if this assumption missed something critical? 😨

tl;dr? It did 🧡

18.12.2024 03:39 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Temporally shifted data splits in membership inference can be misleading ⚠️ Be cautious when interpreting these benchmarks!

26.11.2024 18:17 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0