Principal GenAI Inference Optimization Engineer
Advanced Micro Devices, Inc.
$226,400.00/Yr.-$339,600.00/Yr.
United States, California, San Jose
2100 Logic Drive
Mar 28, 2026
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE

We are seeking a Principal GenAI Inference Optimization Engineer to join our Models and Applications team. This role focuses on improving the performance, efficiency, and scalability of generative AI inference workloads on AMD GPU platforms. You will contribute to optimizing latency, throughput, and cost efficiency for real-world deployment of large-scale models, working across the software-hardware stack.

THE PERSON

The ideal candidate is a strong technical contributor with expertise in GenAI inference optimization, GPU performance, and large-scale serving systems. You have a solid understanding of GPU architecture, memory systems, and communication patterns, and can apply this knowledge to improve inference efficiency. You are comfortable working across multiple layers, from kernels and runtimes to frameworks and serving systems, and can independently drive optimization efforts while collaborating with cross-functional teams.

KEY RESPONSIBILITIES

- Optimize performance of GenAI inference workloads on AMD GPU platforms across single-node and distributed environments.
- Improve latency, throughput, and cost efficiency for LLM and multimodal model serving in production.
- Analyze and resolve bottlenecks across compute, memory, and communication (e.g., kernel efficiency, KV-cache usage, memory bandwidth, scheduling).
- Contribute to cross-stack optimizations spanning kernels, runtimes, communication libraries, and inference/serving frameworks (e.g., vLLM, SGLang, Triton, or similar systems).
- Implement and evaluate inference optimization techniques such as batching strategies, quantization, prefix caching, and speculative decoding.
- Support development and optimization of scalable serving systems, including request scheduling and resource utilization.
- Develop and use profiling, benchmarking, and performance analysis tools for inference workloads.
- Collaborate with hardware, compiler, and framework teams to improve overall system performance.
- Contribute to internal tools and, where applicable, open-source projects for inference optimization on AMD platforms.
- Document best practices and contribute to performance guidelines for GenAI deployment.

PREFERRED EXPERIENCE

- Strong understanding of GPU architecture and performance fundamentals (compute, memory hierarchy, interconnects such as PCIe/Infinity Fabric/RDMA).
- Experience with GenAI inference optimization techniques (e.g., quantization, KV-cache optimization, batching).
- Hands-on experience with inference/serving frameworks such as vLLM, SGLang, Triton, TensorRT-LLM, or similar.
- Experience working on LLM or multimodal inference workloads.
- Familiarity with distributed systems and serving architectures.
- Experience with ML frameworks (PyTorch, JAX, or TensorFlow), especially for inference.
- Proficiency in Python and at least one systems language (C++/CUDA/HIP).
- Experience with profiling, debugging, and performance tuning tools.
- Ability to work collaboratively across teams and deliver impactful optimizations.

ACADEMIC CREDENTIALS

- B.S., M.S., or Ph.D. in Computer Science, Computer Engineering, or a related field preferred, or equivalent industry experience.

LOCATION

- San Jose, CA

#LI-MV1 #HYBRID

This role is not eligible for visa sponsorship. Benefits offered are described in AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here.
This posting is for an existing vacancy.