- Theory of Differential Privacy. The selected candidate will be expected to lead research in privacy-preserving data analysis and machine learning that is motivated by practice but has a strong theoretical underpinning. A background in and strong interest in differential privacy are required. Potential projects could concern private linear regression and related problems, the connection between differential privacy and properties like generalization and replicability, and various relaxations or alternative privacy notions. Successful applicants will be technically strong and have an inclination towards real-world problems. We are looking for applicants with demonstrably strong research skills, ideally with publications in top venues in machine learning or theoretical computer science (e.g., ICML, NeurIPS, ICLR, STOC, FOCS, ALT, COLT), although this is not a hard requirement. Candidates must have a Ph.D. or equivalent degree in computer science, statistics, or a closely related field. Experience implementing DP algorithms and machine learning models in Python is preferred.
- Privacy-Preserving ML (generative models, membership inference, genomic applications). The selected candidate will be expected to lead methodological and applied research probing privacy issues in the training and deployment of machine learning models, with a particular focus on generative models (e.g., GANs, VAEs, diffusion models, large language models). We seek highly motivated applicants with a background in one or more of the following areas: generative models, differentially private learning, and machine unlearning. We have a particular interest in applying these methods to genomic data, so experience working with large genomic datasets (e.g., UK Biobank, dbGaP) is a plus. Successful applicants will be technically strong and have an inclination towards real-world problems. We are looking for applicants with demonstrably strong research skills, ideally with publications in top venues in machine learning and/or top-tier interdisciplinary journals (e.g., ICML, NeurIPS, ICLR, KDD, AAAI, AISTATS, the Nature/Science family of journals, PNAS), although this is not a hard requirement. Candidates must have a Ph.D. or equivalent degree in computer science, statistics, or a closely related field. Strong programming skills (Python) and experience with machine learning and its applications are required.
Application Process. The positions are available immediately and can be renewed annually. Interested applicants should apply via this form and submit the following documents:
- Curriculum Vitae
- Link to GitHub account and/or any software developed
- Two representative publications (preprints are acceptable)
- Statement of Research (2 pages) describing prior research experience and future research plans
- Three letters of recommendation (will be solicited after the initial review)
We are currently reviewing applications for these positions. Interested candidates are encouraged to submit their applications as soon as possible, preferably by December 15, 2022. We will continue accepting applications after this deadline if the positions are not filled.
Questions can be directed to s + firstname.lastname@example.org.
Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at the Harvard John A. Paulson School of Engineering & Applied Sciences, and Lead PI on the Harvard Privacy Tools Project. Vadhan’s research in theoretical computer science spans computational complexity, cryptography, and data privacy. His honors include a Harvard College Professorship, a Simons Investigator Award, and a Guggenheim Fellowship.
Seth Neel is an Assistant Professor housed in the Department of Technology and Operations Management (TOM). He is Principal Investigator of the Trustworthy AI Lab in Harvard’s new D^3 Institute and a faculty member of the Theory of Computation group in the engineering school and of the AI@Harvard Initiative. His primary academic interest is machine learning, with a particular focus on ethical notions like fairness, (differential) privacy, and, more recently, interpretability. His work includes data deletion and machine unlearning, differentially private learning, membership inference attacks, auditing and learning under notions of subgroup fairness, and connections between model privacy, fairness, and interpretability. Fair algorithms he co-developed during his Ph.D. have been incorporated into the open-source efforts of IBM AI Research.
PhD Positions. I am looking for motivated students interested in contributing to theoretical, methodological, and applied research on privacy and on connections between privacy and other properties of algorithms (such as fairness or explainability) in machine learning. If you would like to pursue a PhD under my guidance, please apply to BOTH of the following PhD programs and mention my name in your statements and applications:
- PhD Program in Technology and Operations Management at Harvard Business School (mention my name).
- PhD Program in Computer Science at Harvard SEAS (mention Salil Vadhan & my name).