Senior Software Development Engineer - LLM Kernel & Inference Systems

Advanced Micro Devices, Inc.
$192,000.00/Yr. - $288,000.00/Yr.
Santa Clara, California, United States
2485 Augustine Drive
Jan 21, 2026


WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE

As a Senior Member of Technical Staff, you will be a technical leader in Large Language Model (LLM) inference and kernel optimization for AMD GPUs. You will play a critical role in advancing high-performance LLM serving by optimizing GPU kernels, inference runtimes, and distributed execution strategies across single-node and multi-node systems.

This role is deeply focused on LLM inference stacks, including vLLM, SGLang, and internal inference platforms. You will work at the intersection of model architecture, GPU kernels, compiler technology, and distributed systems, collaborating closely with internal GPU library teams and upstream open-source communities to deliver production-grade performance improvements.

Your work will directly impact throughput, latency, scalability, and cost efficiency for state-of-the-art LLMs running on AMD GPUs.

THE PERSON

You are a senior systems engineer with deep LLM domain knowledge who enjoys working close to the metal while keeping a strong understanding of end-to-end inference systems. You are comfortable reasoning about attention, KV cache, batching, parallelism strategies, and how they map to GPU kernels and hardware characteristics.

You thrive in ambiguous problem spaces, can independently define technical direction, and consistently deliver measurable performance gains. You balance strong execution with thoughtful upstream collaboration and maintain a high bar for software quality.

KEY RESPONSIBILITIES

  • Optimize LLM Inference Frameworks
    Drive performance improvements in LLM inference frameworks such as vLLM, SGLang, and PyTorch for AMD GPUs, contributing both internally and upstream.
  • LLM-Aware Kernel Development
    Design and optimize GPU kernels critical to LLM inference, including attention, GEMMs, KV cache operations, MoE components, and memory-bound kernels; a minimal illustrative kernel sketch follows this list.
  • Distributed LLM Inference at Scale
    Design, implement, and tune multi-GPU and multi-node inference strategies, including tensor-, pipeline-, and expert-parallel (TP/PP/EP) hybrids, continuous batching, KV cache management, and disaggregated serving.
  • Model-System Co-Design
    Collaborate with model and framework teams to align LLM architectures with hardware-aware optimizations, improving real-world inference efficiency.
  • Compiler & Runtime Optimization
    Leverage compiler technologies (LLVM, ROCm, Triton, graph compilers) to improve kernel fusion, memory access patterns, and end-to-end inference pipelines.
  • End-to-End Inference Pipeline Optimization
    Optimize the full inference stack, from model execution graphs and runtimes to scheduling, batching, and deployment.
  • Open-Source Leadership
    Engage with open-source maintainers to upstream optimizations, influence roadmap direction, and ensure long-term sustainability of contributions.
  • Engineering Excellence
    Apply best practices in software engineering, including performance benchmarking, testing, debugging, and maintainability at scale.
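
For candidates who want a concrete picture of the kernel work described above, here is a minimal, illustrative Triton sketch of a memory-bound fused residual-add kernel of the kind transformer inference stacks optimize. It is a generic teaching example with assumed names and block sizes, not AMD production code.

import torch
import triton
import triton.language as tl

@triton.jit
def fused_residual_add_kernel(x_ptr, res_ptr, out_ptr, n_elements,
                              BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous BLOCK_SIZE tile.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged final tile
    x = tl.load(x_ptr + offsets, mask=mask)
    res = tl.load(res_ptr + offsets, mask=mask)
    # Fusing the add into one kernel saves a full round trip through HBM,
    # which is what matters for memory-bound ops like residual connections.
    tl.store(out_ptr + offsets, x + res, mask=mask)

def fused_residual_add(x: torch.Tensor, res: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    # BLOCK_SIZE=1024 is an illustrative choice, not a tuned value.
    fused_residual_add_kernel[grid](x, res, out, n, BLOCK_SIZE=1024)
    return out

Triton compiles to both CUDA and ROCm backends, which is why it appears alongside LLVM and ROCm in the tooling named throughout this posting.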

PREFERRED EXPERIENCE

  • Deep LLM Knowledge
    Deep understanding of Large Language Model inference, including attention mechanisms, KV cache behavior, batching strategies, and latency/throughput trade-offs; a minimal paged KV-cache sketch follows this list.
  • LLM Inference Frameworks
    Hands-on experience with vLLM, SGLang, or similar inference systems (e.g., FasterTransformer), with demonstrated performance tuning.
  • GPU Kernel Development
    Proven experience optimizing GPU kernels for deep learning workloads, particularly inference-critical paths.
  • Distributed Inference Systems
    Experience designing and tuning large-scale inference systems across multiple GPUs and nodes.
  • Open-Source Contributions
    Track record of meaningful upstream contributions to ML, LLM, or systems-level open-source projects.
  • Programming & Debugging Skills
    Strong proficiency in Python and C++, with deep experience in performance analysis, profiling, and debugging complex systems.
  • High-Performance Computing
    Experience running and optimizing large-scale workloads on heterogeneous GPU clusters.
  • Compiler & Systems Background
    Solid foundation in compiler concepts and tooling (LLVM, ROCm, Triton), applied to ML kernel and runtime optimization.
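
To illustrate the KV cache behavior referenced in the first item above, the following minimal Python sketch shows block-table indexing in a paged KV cache, in the spirit of vLLM-style PagedAttention. Class names, shapes, and the block size are illustrative assumptions, not any framework's actual API.

import torch

BLOCK = 16  # tokens per physical KV block (illustrative)

class PagedKVCache:
    """Toy paged KV cache: logical token positions map to physical
    blocks via a per-sequence block table, so sequences grow without
    contiguous preallocation and freed blocks are reusable."""
    def __init__(self, num_blocks, num_heads, head_dim):
        self.k = torch.zeros(num_blocks, BLOCK, num_heads, head_dim)
        self.v = torch.zeros_like(self.k)
        self.free = list(range(num_blocks))
        self.tables = {}   # seq_id -> list of physical block ids
        self.lengths = {}  # seq_id -> number of tokens written

    def append(self, seq_id, k, v):
        """Append one token's K/V (each of shape [num_heads, head_dim])."""
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK == 0:  # current block is full (or first token)
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        blk = self.tables[seq_id][n // BLOCK]
        self.k[blk, n % BLOCK] = k
        self.v[blk, n % BLOCK] = v
        self.lengths[seq_id] = n + 1

    def gather(self, seq_id):
        """Materialize contiguous K/V for attention over this sequence."""
        n, blks = self.lengths[seq_id], self.tables[seq_id]
        k = torch.cat([self.k[b] for b in blks])[:n]
        v = torch.cat([self.v[b] for b in blks])[:n]
        return k, v

In production kernels the gather step is fused into attention itself so the block table is dereferenced on the fly, but the indexing scheme is the same.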

ACADEMIC CREDENTIALS

  • Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field

Benefits offered are described in "AMD benefits at a glance."

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

This posting is for an existing vacancy.
