Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
We are looking for a systems-minded engineer who lives at the intersection of large-scale model inference, distributed systems, and performance optimization. This role focuses on post-training and inference infrastructure, with particular emphasis on P/D disaggregation, KV cache lifecycle management, and efficient offloading mechanisms across both inference and reinforcement learning (RL) systems.
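For context on the terminology, here is a minimal sketch of the prefill/decode split and the KV cache that links the two phases. It uses PyTorch with invented function names and is purely illustrative; it does not describe AMD's or any particular framework's implementation.

    import torch

    def attention(q, k, v):
        # Scaled dot-product attention over the full key/value history.
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        return torch.softmax(scores, dim=-1) @ v

    def prefill(prompt_states, wq, wk, wv):
        # Prefill: process the whole prompt in one pass and materialize the KV cache.
        q, k, v = prompt_states @ wq, prompt_states @ wk, prompt_states @ wv
        return attention(q, k, v), (k, v)       # output plus the KV cache handed to decode

    def decode_step(token_state, kv_cache, wq, wk, wv):
        # Decode: each new token attends to the cached keys/values plus its own.
        q, k, v = token_state @ wq, token_state @ wk, token_state @ wv
        k = torch.cat([kv_cache[0], k], dim=0)  # the cache grows by one entry per step
        v = torch.cat([kv_cache[1], v], dim=0)
        return attention(q, k, v), (k, v)

    d = 64
    wq, wk, wv = (torch.randn(d, d) for _ in range(3))
    out, cache = prefill(torch.randn(16, d), wq, wk, wv)            # 16-token prompt
    out, cache = decode_step(torch.randn(1, d), cache, wq, wk, wv)  # one generated token

In a P/D-disaggregated deployment, the two phases above run on separate workers, so the KV cache must be transferred, shared, or offloaded between them; managing that lifecycle efficiently is a core part of this role.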
THE PERSON:
You enjoy reverse-engineering modern ML infrastructure, reasoning about memory and compute tradeoffs, and turning research insights into production-grade features. You are comfortable diving into unfamiliar frameworks, understanding their architectural choices, identifying bottlenecks, and improving them through principled engineering.
KEY RESPONSIBILITIES:
- Research and deeply understand modern LLM inference frameworks, including:
  - Architecture and design tradeoffs of P/D (prefill/decode) disaggregation
  - KV cache lifecycle, memory layout, eviction strategies, and reuse
  - KV cache offloading mechanisms across GPU, CPU, and storage backends (a short offloading sketch follows this list)
- Analyze and compare inference execution paths to identify:
  - Performance bottlenecks (latency, throughput, memory pressure)
  - Inefficiencies in scheduling, cache management, and resource utilization
- Develop and implement infrastructure-level features to:
  - Improve inference latency, throughput, and memory efficiency
  - Optimize KV cache management and offloading strategies
  - Enhance scalability across multi-GPU and multi-node deployments
- Apply the same research-driven approach to RL frameworks:
  - Study post-training and RL systems (e.g., policy rollout, inference-heavy loops)
  - Debug performance and correctness issues in distributed RL pipelines
  - Optimize inference, rollout efficiency, and memory usage during training
- Collaborate with research and applied ML teams to:
  - Translate model-level requirements into infrastructure capabilities
  - Validate performance gains with benchmarks and real workloads
- Document findings, architectural insights, and best practices to guide future system design
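As a concrete illustration of the offloading items above, the sketch below moves a KV cache block between GPU HBM and pinned host memory using asynchronous copies. The helper names are invented and the example assumes a CUDA- or ROCm-enabled PyTorch build; production serving stacks layer paging, eviction policies, and storage backends on top of this basic mechanism.

    import torch

    def make_pinned_buffer(shape, dtype=torch.float16):
        # Pinned (page-locked) host memory enables asynchronous DMA copies to and from the GPU.
        return torch.empty(shape, dtype=dtype, pin_memory=True)

    def offload_block(gpu_block, host_buffer, stream):
        # Copy a KV cache block from HBM to pinned CPU memory without blocking compute.
        with torch.cuda.stream(stream):
            host_buffer.copy_(gpu_block, non_blocking=True)

    def reload_block(host_buffer, gpu_block, stream):
        # Copy a KV cache block back to HBM ahead of when decode will need it.
        with torch.cuda.stream(stream):
            gpu_block.copy_(host_buffer, non_blocking=True)

    if torch.cuda.is_available():
        copy_stream = torch.cuda.Stream()
        block = torch.randn(2, 16, 128, device="cuda", dtype=torch.float16)  # toy KV block
        host = make_pinned_buffer(block.shape)
        offload_block(block, host, copy_stream)
        copy_stream.synchronize()   # in practice, overlapped with compute via events
        reload_block(host, block, copy_stream)
        copy_stream.synchronize()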
PREFERRED EXPERIENCE:
- Strong background in systems engineering, distributed systems, or ML infrastructure
- Hands-on experience with GPU-accelerated workloads and memory-constrained systems
- Solid understanding of:
  - LLM inference workflows (prefill vs. decode)
  - Attention mechanisms and KV cache behavior
  - Multi-process / multi-GPU execution models
- Proficiency in Python and C++ (or similar systems languages)
- Experience debugging performance issues using profiling tools (GPU, CPU, memory)
- Ability to read, understand, and modify complex open-source codebases
- Strong analytical skills and comfort working in research-heavy, ambiguous problem spaces
- Direct experience with LLM inference frameworks or serving stacks
- Familiarity with:
  - GPU memory hierarchies (HBM, pinned memory, NUMA considerations)
  - KV cache compression, paging, or eviction strategies (a toy paging/eviction sketch follows this list)
  - Storage-backed offloading (NVMe, object stores, distributed file systems)
- Experience with distributed RL or post-training pipelines
- Knowledge of scheduling systems, async execution, or actor-based runtimes
- Contributions to open-source ML or systems projects
- Experience designing benchmarking suites or performance evaluation frameworks
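To make the paging and eviction item above concrete, here is a toy block table with LRU eviction written in plain Python with hypothetical names; real allocators (e.g. paged-attention-style block managers) add reference counting, prefix sharing, and asynchronous offload on top of this idea.

    from collections import OrderedDict

    class PagedKVCache:
        # Toy block table: each sequence's KV cache is split into fixed-size blocks, and
        # the least recently used block is evicted when the GPU block pool is exhausted.

        def __init__(self, num_gpu_blocks):
            self.free_blocks = list(range(num_gpu_blocks))   # physical block ids on the GPU
            self.block_table = OrderedDict()                 # (seq_id, logical_idx) -> block id
            self.evicted = {}                                # stand-in for a CPU/NVMe offload tier

        def allocate(self, seq_id, logical_idx):
            if not self.free_blocks:
                victim_key, victim_block = self.block_table.popitem(last=False)  # LRU victim
                self.evicted[victim_key] = victim_block      # would trigger a real offload here
                self.free_blocks.append(victim_block)
            block = self.free_blocks.pop()
            self.block_table[(seq_id, logical_idx)] = block
            return block

        def touch(self, seq_id, logical_idx):
            # Mark a block as recently used so it is evicted last.
            self.block_table.move_to_end((seq_id, logical_idx))

    cache = PagedKVCache(num_gpu_blocks=2)
    cache.allocate("seq0", 0)
    cache.allocate("seq1", 0)
    cache.touch("seq0", 0)
    cache.allocate("seq2", 0)   # evicts seq1's block, the least recently used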
ACADEMIC CREDENTIALS:
- Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent
LOCATION:
San Jose, CA (Hybrid). May consider other US locations.
#LI-MV!
#HYBRID
Benefits offered are described here: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's “Responsible AI Policy” is available here.
This posting is for an existing vacancy.