Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
Job Description
Kernel Performance Architect
At AMD, we build the compute engines that power AI, high-performance computing, and next-generation data centers. Our GPU platforms drive breakthroughs across machine learning, scientific computing, and large-scale distributed systems.
We are looking for a Kernel Performance Architect who can bridge hardware, compiler, runtime, and application layers to define and drive end-to-end performance strategy for AI workloads on AMD GPUs.
This role is not just about writing fast kernels — it is about understanding why they are fast, predicting performance behavior across architectures, and shaping the abstractions that make performance portable.
THE ROLE
The Kernel Performance Architect is responsible for defining, analyzing, and optimizing performance across the full stack — from GPU microarchitecture and compiler behavior to runtime systems and deep learning frameworks.
You will:
Lead performance architecture for key AI workloads
Define cross-architecture optimization strategies
Diagnose bottlenecks across kernel, memory, compiler, and runtime layers
Design performance methodologies and benchmarking frameworks
Influence abstraction design to ensure performance portability
Collaborate with compiler, runtime, and hardware teams
This is a high-impact technical leadership role requiring strong performance intuition and systems-level thinking.
THE PERSON
You are someone who:
Can look at a kernel and predict whether it will be memory-bound or compute-bound before running it
Understands how register pressure, occupancy, memory hierarchy, scheduling, and compiler decisions interact
Is comfortable reading disassembly and compiler IR
Can design experiments to isolate performance bottlenecks
Thinks in terms of architecture invariants, not just one GPU generation
Can clearly explain performance tradeoffs to both hardware and software engineers
You combine deep low-level knowledge with broad systems perspective.
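The first trait above, predicting whether a kernel is memory-bound or compute-bound before running it, is often sketched as a roofline-style check: compare the kernel's arithmetic intensity (FLOPs per byte moved) against the machine balance (peak FLOP/s over peak bandwidth). A minimal illustration, with placeholder hardware numbers rather than the specs of any particular AMD GPU:

```python
# Roofline-style classification: a kernel is memory-bound when its
# arithmetic intensity falls below the machine balance.
# All hardware numbers below are illustrative placeholders.

def machine_balance(peak_flops: float, peak_bw_bytes: float) -> float:
    """FLOPs per byte the hardware can sustain at peak."""
    return peak_flops / peak_bw_bytes

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs per byte the kernel actually performs."""
    return flops / bytes_moved

def classify(flops, bytes_moved, peak_flops, peak_bw):
    ai = arithmetic_intensity(flops, bytes_moved)
    return "compute-bound" if ai >= machine_balance(peak_flops, peak_bw) else "memory-bound"

# Example: single-precision SAXPY (y = a*x + y) over n elements:
# 2 FLOPs per element, 12 bytes moved (read x, read y, write y).
n = 1 << 20
print(classify(2 * n, 12 * n, peak_flops=50e12, peak_bw=1.6e12))  # memory-bound
```

The same check flips for a large GEMM, whose O(n³) FLOPs over O(n²) bytes push arithmetic intensity well above machine balance.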
KEY RESPONSIBILITIES
Cross-Stack Performance Analysis
Analyze performance across kernel, compiler, runtime, and framework layers
Identify root causes of bottlenecks (memory bandwidth, latency hiding, scheduling limits, interconnect constraints)
Build mental and quantitative performance models
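One example of the quantitative models mentioned above is applying Little's law to latency hiding: to sustain a target memory bandwidth with a given memory latency, the device must keep bandwidth × latency bytes in flight. A toy sketch with illustrative numbers (not any GPU's actual figures), using integer nanosecond math to stay exact:

```python
# Little's law for latency hiding: bytes that must be outstanding to
# sustain full bandwidth = bandwidth * latency. Numbers are illustrative.

def bytes_in_flight(bandwidth_bytes_per_s: int, latency_ns: int) -> int:
    # Integer math in nanoseconds avoids floating-point rounding.
    return bandwidth_bytes_per_s * latency_ns // 1_000_000_000

def loads_needed(bandwidth_bytes_per_s: int, latency_ns: int,
                 bytes_per_load: int = 16) -> int:
    # Outstanding loads required across the device, rounded up.
    return -(-bytes_in_flight(bandwidth_bytes_per_s, latency_ns) // bytes_per_load)

# 1.6 TB/s of bandwidth, 800 ns latency, 16-byte vector loads:
print(loads_needed(1_600_000_000_000, 800))  # 80000 outstanding loads
```

Models like this turn "we need more latency hiding" into a concrete budget of outstanding memory operations that kernel and scheduling decisions must meet.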
Kernel Architecture & Optimization Strategy
Define tiling, scheduling, and memory layout strategies for AI kernels
Evaluate tradeoffs between portability and peak performance
Guide implementation engineers toward correct optimization directions
Compiler & Runtime Collaboration
Work with compiler teams (LLVM, ROCm) to analyze generated code
Influence scheduling, register allocation, and code generation improvements
Align kernel design with runtime behavior (streams, synchronization, distributed execution)
Performance Methodology & Benchmarking
Design reproducible benchmarking frameworks
Define regression metrics and performance KPIs
Ensure cross-architecture comparison validity
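The skeleton of a reproducible benchmarking harness of the kind described above might look as follows; this is a hypothetical minimal sketch (warmup discarded, median reported, regression gated against a recorded baseline), not AMD's internal tooling:

```python
import statistics
import time

def benchmark(fn, *, warmup=5, reps=30):
    """Time fn, discarding warmup iterations; report the median,
    which is more robust to scheduling noise than the mean."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

def check_regression(current_s, baseline_s, tolerance=0.05):
    """Pass when the current median is within the tolerance (5% by
    default) of the recorded baseline; fail otherwise."""
    return current_s <= baseline_s * (1.0 + tolerance)

# Example: time a pure-Python stand-in for a kernel launch.
median_s = benchmark(lambda: sum(range(10_000)))
print(f"median: {median_s * 1e6:.1f} us")
```

Real GPU harnesses additionally need device-side timing and synchronization before each sample, but the statistical discipline (warmup, repetitions, robust aggregation, explicit tolerance) is the same.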
Abstraction & Portability
Contribute to abstraction layers that enable performance portability
Ensure architecture-specific optimizations remain maintainable
Identify architectural invariants across GPU generations
Technical Leadership
Mentor kernel engineers on performance reasoning
Review optimization proposals at architectural level
Publish technical insights internally and externally
REQUIRED EXPERIENCE
Strong experience in GPU kernel development and optimization (HIP, CUDA, or similar)
Deep understanding of GPU microarchitecture concepts:
Wavefront execution
Memory hierarchy
Cache behavior
Register pressure and occupancy
Instruction scheduling
Experience analyzing compiler-generated code (LLVM IR, ISA, ASM)
Proven ability to diagnose performance bottlenecks using profiling tools
Strong C++ experience in Linux environments
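The interaction between register pressure and occupancy listed above can be illustrated with a toy model: the wavefronts resident per SIMD are the minimum of a hardware cap and how many per-wave register allocations fit in the register file. The limits below are illustrative placeholders, not the figures for any specific GPU generation, and real hardware adds allocation-granularity and LDS constraints ignored here:

```python
def resident_waves(vgprs_per_thread, vgprs_per_simd=512, max_waves=10):
    """Toy occupancy model: waves per SIMD are capped both by hardware
    and by register-file capacity (granularity effects ignored)."""
    by_regs = vgprs_per_simd // max(vgprs_per_thread, 1)
    return min(max_waves, by_regs)

def occupancy(vgprs_per_thread, **kw):
    """Fraction of the hardware wave cap actually achieved."""
    return resident_waves(vgprs_per_thread, **kw) / kw.get("max_waves", 10)

# 64 VGPRs per thread: 512 // 64 = 8 resident waves -> 80% occupancy.
print(occupancy(64))   # 0.8
# Heavier register pressure at 128 VGPRs: 512 // 128 = 4 waves -> 40%.
print(occupancy(128))  # 0.4
```

Even this toy version shows why spilling a few registers can be worth it near an occupancy cliff, and why it is pointless far from one.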
PREFERRED EXPERIENCE
Experience optimizing AI workloads (e.g., attention, MoE, GEMM, convolution)
Experience with ROCm, LLVM, or GPU compiler internals
Familiarity with performance modeling or roofline analysis
Experience working across hardware and software teams
Contributions to open-source GPU performance projects
ACADEMIC CREDENTIALS
Bachelor's, Master's, or PhD in Computer Science, Computer Engineering, Electrical Engineering, or related field.
#LI-FL1
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's “Responsible AI Policy” is available here.
This posting is for an existing vacancy.