Software Engineer, Inference – AMD GPU Enablement
About the Team
Our Inference team brings OpenAI's most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to use and access our state-of-the-art AI models, allowing them to do things they've never been able to do before.
What this role actually needs.
Software Engineer, Inference – AMD GPU Enablement at OpenAI in San Francisco.
Day-to-day expectations
A clear list of the work this role is designed to cover.
- Own bring-up, correctness and performance of the OpenAI inference stack on AMD hardware.
- Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.
- Debug and optimize distributed inference workloads across memory, network, and compute layers.
- Validate correctness, performance, and scalability of model execution on large GPU clusters.
- Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks.
- Collaborate with partner teams to build, integrate and tune collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs.
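For applicants newer to this part of the stack, the last bullet is worth unpacking: a sum all-reduce is the core collective that libraries like RCCL execute across many GPUs. The sketch below shows only its semantics in plain Python (the function name and list-per-rank representation are illustrative, not RCCL's API):

```python
def all_reduce_sum(buffers):
    """Return the per-rank result of a sum all-reduce.

    `buffers` holds one list per "rank" (standing in for one GPU's
    local buffer). After the collective, every rank holds the
    elementwise sum of all ranks' buffers -- this is how gradients or
    partial activations are combined when a model is parallelized
    across GPUs.
    """
    # Elementwise sum across ranks.
    reduced = [sum(vals) for vals in zip(*buffers)]
    # Every rank receives an identical copy of the reduced buffer.
    return [list(reduced) for _ in buffers]


ranks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 "GPUs", 2 elements each
print(all_reduce_sum(ranks))  # every rank ends with [9.0, 12.0]
```

In production, the interesting work is not this arithmetic but making it fast: ring and tree schedules, overlap with compute, and tuning for the interconnect topology, which is what the RCCL bullet is about.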
Turn this listing into an application plan.
A fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and LLM experience instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to the Software Engineer, Inference – AMD GPU Enablement role; it is an on-site position in San Francisco and most realistic for United States residents.
- Apply promptly if the role fits, and bookmark three similar jobs before you leave the page.
Watchouts
- The posted range is $295K–$555K, so calibrate your application around it.
- Make your United States residency clear in your positioning so the recruiter does not have to infer it.
- Show concrete examples of succeeding in on-site environments.
Ready to move on this role?
This page keeps the application flow simple while giving you enough context to decide quickly and move.