Research Engineer, Privacy
About the Team
The Privacy Engineering Team at OpenAI is committed to integrating privacy as a foundational element in OpenAI's mission of advancing Artificial General Intelligence (AGI). Our focus spans all OpenAI products and systems that handle user data, striving to uphold the highest standards of data privacy and security.
About the Role
Research Engineer, Privacy is a hybrid role based at OpenAI's San Francisco office.
Day-to-day expectations
- Design and prototype privacy-preserving machine-learning algorithms (e.g., differential privacy, secure aggregation, federated learning) that can be deployed at OpenAI scale.
- Measure and strengthen model robustness against privacy attacks such as membership inference, model inversion, and data memorization leaks—balancing utility with provable guarantees.
- Develop internal libraries, evaluation suites, and documentation that make cutting-edge privacy techniques accessible to engineering and research teams.
- Lead deep-dive investigations into the privacy–performance trade-offs of large models, publishing insights that inform model-training and product-safety decisions.
- Define and codify privacy standards, threat models, and audit procedures that guide the entire ML lifecycle—from dataset curation to post-deployment monitoring.
- Collaborate across Security, Policy, Product, and Legal to translate evolving regulatory requirements into practical technical safeguards and tooling.
What a strong candidate brings
We're looking for candidates who:
- Have hands-on research or production experience with privacy-enhancing technologies (PETs) such as differential privacy, secure aggregation, or federated learning.
- Are fluent in modern deep-learning stacks (PyTorch/JAX) and comfortable turning cutting-edge papers into reliable, well-tested code.
- Enjoy stress-testing models—probing them for private data leakage—and can explain complex attack vectors to non-experts with clarity.
- Have a track record of publishing (or implementing) novel privacy or security work and relish bridging the gap between academia and real-world systems.
- Thrive in fast-moving, cross-disciplinary environments where you alternate between open-ended research and shipping production features under tight deadlines.
- Communicate crisply, document rigorously, and care deeply about building AI systems that respect user privacy while pushing the frontiers of capability.
Compensation and logistics
- The posted annual salary range for this role is $380K - $445K.
- This is a hybrid role based in San Francisco and is open to United States residents.