Company: AMD
Location: Santa Clara, CA
Career Level: Mid-Senior Level
Industries: Technology, Software, IT, Electronics

Description



WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.  Together, we advance your career.  



Senior Member of Technical Staff – Algorithm & Performance Optimization - GPU Libraries 

 
THE ROLE – SMTS Algorithms & Performance Engineer 

As a Senior Member of Technical Staff in the GPU Libraries group, you will provide technical leadership and strategic support across the AMD Radeon Open Ecosystem (ROCm). This role focuses on critically analyzing, reviewing, and improving GPU kernel algorithms within the Composable Kernel (CK) and MIOpen libraries, with a strong emphasis on performance tuning through both explainable heuristics and empirical benchmarking. Working collaboratively with library owners, kernel developers, and cross-functional performance engineering teams, you will drive kernel optimization strategies that translate directly into measurable gains for AI/ML and HPC workloads on AMD Instinct accelerators and Radeon GPUs. This position requires a deep understanding of GPU architecture and the ability to reason about performance from first principles, while also leveraging data-driven analytics to guide tuning decisions. 

 

THE PERSON – GPU ML Libraries & Algorithms Expert 

To be successful in this role, you will be an experienced GPU performance engineer with a proven track record of analyzing, profiling, and optimizing compute kernels, moving fluidly between algorithmic design analysis, heuristic development, and kernel instance tuning. You are driven by curiosity about why a kernel performs the way it does, and you bring an intuitive sense for GPU execution dynamics that lets you formulate and validate performance hypotheses rapidly. You thrive where rigorous analysis meets pragmatic engineering: designing explainable heuristic models, analyzing large-scale benchmark sweeps, and synthesizing the results into actionable tuning recommendations that improve entire library ecosystems. 

 

KEY RESPONSIBILITIES: 

  • Algorithm Analysis & Improvement: Critically review and improve kernel algorithms, identifying opportunities for redesign, fusion, and optimization that yield measurable performance gains across AMD GPU architectures. 
  • Design explainable heuristic models for kernel selection, tile-size determination, data layout choices, and workload-to-CU mapping — ensuring tuning decisions are interpretable, maintainable, and adaptable. 
  • Partner with teams to execute large-scale kernel benchmarking campaigns, building data pipelines and analytics workflows to process, visualize, and extract actionable insights from extensive performance datasets. 
  • Partner with teams to perform deep-dive performance investigations using AMD profiling and tracing tools (rocProf, Omniperf, Omnitrace), correlating hardware counter data with kernel behavior to identify bottlenecks in compute, memory bandwidth, LDS utilization, Matrix Core throughput, and instruction issue rates. 
  • Initiate, influence, and drive architecture, design, and documentation efforts as they arise across teams. Work closely with principal engineering staff to plan and execute technical governance activities across integrated libraries and engineering teams.  
  • Leverage AI-assisted software development tools to accelerate design, implementation, review, and documentation of complex software libraries. Establish best practices for responsible use of AI assistance, including validation, review, and traceability of generated code and technical artifacts.  

 

PREFERRED EXPERIENCE: 

  • Extensive and broad hands-on experience with C++, with relevant applied experience using CUDA, HIP, OpenMP, MPI, or OpenCL for accelerated computing on CPUs and GPUs. Familiarity with other programming languages (e.g., Python, Rust). Knowledge of or applied experience with popular AI/ML frameworks (PyTorch, TensorFlow, JAX). 
  • Proven experience with kernel performance tuning — both through principled heuristic design and through systematic empirical benchmarking. Ability to articulate why a tuning configuration works, not just that it does. Ability to reason about performance at the hardware level and translate architectural insight into kernel optimization strategies. 
  • Familiarity with Composable Kernel (CK), MIOpen, or equivalent GPU kernel libraries (e.g., CUTLASS, cuDNN, NeuronSDK).  Understanding of GEMM, convolution, attention, pooling and other core compute primitives used in AI/ML workloads. 
  • Applied experience using AI-assisted coding tools in professional software engineering workflows, including code generation, refactoring, test creation, documentation, and design exploration. 

 
ACADEMIC CREDENTIALS: 

  • Advanced degrees such as an M.Sc./M.Eng. or Ph.D. are preferred, ideally in Computer Science, Computer Engineering, Electrical Engineering, Applied Mathematics, or a related field with a focus on high-performance computing, GPU architecture, or numerical methods. 

 

LOCATION: 

California, USA or remote 



Benefits offered are described in AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

 

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position.  AMD's “Responsible AI Policy” is available here.

 

This posting is for an existing vacancy.

