Publications
Working Papers
Fine-Tuning Large Language Models with User-Level Differential Privacy.
Zachary Charles, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Nicole Mitchell, Krishna Pillutla, Keith Rush.
Submitted (2024).
PDF
User Inference Attacks on Large Language Models.
Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu.
Submitted (2023).
PDF
Conference and Journal Publications
Efficient and Near-Optimal Noise Generation for Streaming Differential Privacy.
Krishnamurthy Dvijotham, H. Brendan McMahan, Krishna Pillutla, Thomas Steinke, Abhradeep Thakurta.
FOCS (2024).
Also presented at TPDP (2024). Oral Presentation.
PDF
Correlated Noise Provably Beats Independent Noise for Differentially Private Learning.
Christopher Choquette-Choo*, Krishnamurthy Dvijotham*, Krishna Pillutla*, Arun Ganesh, Thomas Steinke, Abhradeep Thakurta.
ICLR (2024).
Also presented at FL@FM-NeurIPS (2023).
PDF Slides Poster
Distributionally Robust Optimization with Bias and Variance Reduction.
Ronak Mehta, Vincent Roulet, Krishna Pillutla, Zaid Harchaoui.
ICLR (2024) Spotlight.
Also presented at DP4ML-ICML (2023).
PDF Code
MAUVE Scores for Generative Models: Theory and Practice.
Krishna Pillutla*, Lang Liu*, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, Zaid Harchaoui.
JMLR (2023) Best Papers Track.
Also presented at DeepMath (2023).
PDF Project Page Pip-package Code Poster
Unleashing the Power of Randomization in Auditing Differentially Private ML.
Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, Sewoong Oh.
NeurIPS (2023).
Also presented at FL@ICML (2023) and TPDP (2023).
PDF Poster Code
Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning.
Zachary Charles*, Nicole Mitchell*, Krishna Pillutla*, Michael Reneer, Zachary Garrett.
NeurIPS Datasets and Benchmarks (2023).
PDF Software Poster Slides
Modified Gauss-Newton Algorithms under Noise.
Krishna Pillutla, Vincent Roulet, Sham Kakade, Zaid Harchaoui.
IEEE SSP (2023).
Also presented at HOO-NeurIPS (2022).
PDF Poster
Statistical and Computational Guarantees for Influence Diagnostics.
Jillian Fisher, Lang Liu, Krishna Pillutla, Yejin Choi, Zaid Harchaoui.
AISTATS (2023).
ASA Student Paper Award 2023 Honorable Mention (Section on Statistical Learning & Data Science).
PDF Code
Stochastic Optimization for Spectral Risk Measures.
Ronak Mehta, Vincent Roulet, Krishna Pillutla, Lang Liu, Zaid Harchaoui.
AISTATS (2023).
ASA Student Paper Award 2023 Honorable Mention (Risk Analysis Section).
PDF Code
Federated Learning with Superquantile Aggregation for Heterogeneous Data.
Krishna Pillutla*, Yassine Laguel*, Jérôme Malick, Zaid Harchaoui.
Machine Learning Journal (2023).
Also presented at FL-NeurIPS ’22 and DistShift-NeurIPS ’22 (Spotlight).
PDF Publisher’s Page Project Page Code Slides Poster
From Enormous Structured Models to On-device Federated Learning: Robustness, Heterogeneity and Optimization.
Krishna Pillutla.
PhD Dissertation (2022).
PDF Slides
Federated Learning with Partial Model Personalization.
Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael Rabbat, Maziar Sanjabi, Lin Xiao.
ICML (2022) Spotlight.
PDF Code Slides Poster
Robust Aggregation for Federated Learning.
Krishna Pillutla, Sham Kakade, Zaid Harchaoui.
IEEE Transactions on Signal Processing (2022).
IEEE SPS Top 25 Downloaded Paper, Sep 2022 – Sep 2023.
Also presented at FL-ICML (2020) (Long Oral Presentation) and ICASSP (2023).
PDF Publisher’s Page Code (TensorFlow) Code (PyTorch) Talk video Poster Webinar
MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers.
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, Zaid Harchaoui.
NeurIPS (2021).
NeurIPS Outstanding Paper Award (Top 6 out of 9000 submissions).
PDF Pip-package Code Poster Press
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral.
Lang Liu, Krishna Pillutla, Sean Welleck, Sewoong Oh, Yejin Choi, Zaid Harchaoui.
NeurIPS (2021).
PDF Code Poster
LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes.
Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi.
NeurIPS (2021).
PDF
Superquantiles at Work: Machine Learning Applications and Efficient (Sub)gradient Computation.
Yassine Laguel, Krishna Pillutla, Jérôme Malick, Zaid Harchaoui.
Set-Valued and Variational Analysis (2021).
PDF Publisher’s Page
A Superquantile Approach to Federated Learning with Heterogeneous Devices.
Yassine Laguel*, Krishna Pillutla*, Jérôme Malick, Zaid Harchaoui.
IEEE CISS (2021).
PDF Code
A Smoother Way to Train Structured Prediction Models.
Krishna Pillutla, Vincent Roulet, Sham Kakade, Zaid Harchaoui.
NeurIPS (2018).
PDF-long PDF-short Code Documentation Poster Blog post Video summary
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares).
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, Aaron Sidford.
FSTTCS (2017).
PDF
Data Driven Resource Allocation for Distributed Learning.
Travis Dick, Mu Li, Venkata Krishna Pillutla, Colin White, Maria-Florina Balcan, Alex Smola.
AISTATS (2017).
PDF-long PDF-short
On Skewed Multi-dimensional Distributions: the FusionRP Model, Algorithms, and Discoveries.
Venkata Krishna Pillutla*, Zhanpeng Fang*, Christos Faloutsos, Danai Koutra, Jie Tang.
SIAM International Conference on Data Mining, SDM (2016).
PDF
Master’s Thesis: Data Driven Resource Allocation for Distributed Machine Learning.
Thesis Committee: Nina Balcan, Alex Smola, Christos Faloutsos.
PDF Slides