
Fellow GPU Performance Optimization Engineer

Advanced Micro Devices, Inc.
$252,000.00 – $378,000.00 per year
San Jose, California, United States
2100 Logic Drive
Mar 28, 2026


WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

We are seeking a Fellow GPU Performance Optimization Engineer to join our Models and Applications team. This role focuses on maximizing performance and efficiency of large-scale AI training workloads on AMD GPU platforms. You will drive innovations across the full software-hardware stack, optimizing distributed training at scale and pushing the limits of system throughput, scalability, and utilization for generative AI workloads.

This position requires deep expertise in GPU performance analysis, distributed systems, and ML workloads, along with the ability to influence architecture, software ecosystems, and best practices across the organization.

THE PERSON:

The ideal candidate is a recognized technical leader with deep expertise in GPU performance optimization, large-scale distributed training, and system-level bottleneck analysis. You have a strong understanding of GPU architecture, interconnects, memory hierarchies, and communication patterns, and can translate this knowledge into measurable improvements in training efficiency at scale.

You are comfortable operating across layers, from kernels and runtimes to frameworks and distributed strategies, and have a track record of driving impactful optimizations and influencing technical direction.

KEY RESPONSIBILITIES:

- Lead performance optimization of large-scale AI training workloads on AMD GPU platforms across single-node and multi-node environments.

- Identify and eliminate system bottlenecks across compute, memory, and communication (e.g., kernel efficiency, memory bandwidth, network utilization).

- Optimize distributed training strategies (Data, Tensor, Pipeline Parallelism, ZeRO, etc.) for scalability and efficiency on AMD hardware.

- Drive cross-stack optimizations spanning kernels, compilers, runtimes, communication libraries, and ML frameworks.

- Develop and apply advanced profiling, benchmarking, and performance modeling methodologies.

- Collaborate with hardware, compiler, and framework teams to influence next-generation GPU architecture and software stack design.

- Contribute to and lead open-source efforts to improve ecosystem performance on AMD platforms.

- Define best practices and guide teams on performance tuning for large-scale training workloads.

- Stay at the forefront of advancements in large-scale training systems and performance optimization techniques.

PREFERRED EXPERIENCE:

- Deep expertise in GPU architecture and performance characteristics (compute units, memory hierarchy, interconnects such as PCIe/Infinity Fabric/RDMA).

- Strong experience with performance profiling tools (e.g., ROCm tools, Nsight-like systems, custom profilers) and bottleneck analysis.

- Proven experience optimizing large-scale distributed training workloads across thousands of GPUs.

- Experience with distributed training frameworks such as Megatron-LM, Torchtitan, MaxText, or equivalent.

- Strong understanding of communication libraries and patterns (e.g., NCCL/RCCL, collective ops, overlap of compute and communication).

- Expertise in ML frameworks (PyTorch, JAX, TensorFlow) with a focus on performance tuning.

- Proficiency in Python and at least one systems language (C++/CUDA/HIP), including debugging and low-level optimization.

- Experience with compiler stacks, kernel optimization, or graph-level optimization is a strong plus.

- Demonstrated technical leadership and ability to influence cross-functional teams.

ACADEMIC CREDENTIALS:

- Ph.D. in Computer Science, Computer Engineering, or a related field preferred, or equivalent industry experience with significant technical impact.

LOCATION:

- San Jose, CA

This role is not eligible for visa sponsorship.

#LI-MV1

#HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

This posting is for an existing vacancy.
