
Senior Staff Engineer

DataDirect Networks
United States
Apr 14, 2026

Job Locations: US-Remote
Job ID: 2026-5740
Country: United States
City: Remote
Worker Type: Regular Full-Time Employee

Overview

This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences, healthcare, and financial services to autonomous vehicles, government, academia, research, and manufacturing.

"DDN's A3I solutions are transforming the landscape of AI infrastructure." - IDC

"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI storage in high-performance environments." - Marc Hamilton, VP, Solutions Architecture & Engineering | NVIDIA

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence.

Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a lasting impact at a company that is shaping the future of AI and data storage.



Job Description

DDN is seeking a highly experienced Senior Staff Engineer specializing in AI Data Path & Storage to lead hands-on development and integration of advanced storage systems with next-generation AI inference pipelines. This role involves coding, prototyping, and rapidly iterating on solutions in close collaboration with architects to design and deliver high-performance data movement architectures. You will leverage NVIDIA NIXL (Inference Xfer Library) alongside the Infinia Data Intelligence Platform to enable ultra-low-latency, high-throughput data movement across GPU, memory, and distributed storage layers, including workloads involving KV cache management and vector database retrieval. The ideal candidate brings deep expertise in distributed storage, GPU data paths, and large-scale system optimization, with a proven track record of building and shipping production-grade AI infrastructure.

Key Responsibilities

  • Lead the design and implementation of high-performance data movement pipelines using NVIDIA NIXL across GPU, CPU, and storage tiers.
  • Architect and drive integration of DDN Infinia with GPU-accelerated inference platforms for large-scale, real-time AI workloads.
  • Own end-to-end optimization of I/O paths between GPU memory and storage using technologies such as NVIDIA GPUDirect Storage, RDMA, and NVMe-over-Fabrics.
  • Define and implement multi-tier storage architectures (NVMe, SSD, object storage) optimized for inference latency, throughput, and scalability.
  • Lead development of advanced KV cache management strategies, including offloading, prefetching, and persistence across distributed storage layers.
  • Partner with AI/ML engineering teams to optimize inference performance in frameworks such as PyTorch and TensorFlow.
  • Establish benchmarking frameworks and lead performance tuning efforts for storage and data movement in production inference environments.
  • Diagnose and resolve complex system bottlenecks across storage, networking, and GPU subsystems.
  • Influence architecture decisions for distributed inference systems, ensuring scalability, resilience, and efficient data locality.
  • Drive engineering excellence through best practices in observability, performance monitoring, automation, and reliability engineering.
  • Mentor junior engineers and provide technical leadership across cross-functional teams.

Required Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 12+ years of experience in storage systems, distributed systems, or performance engineering.
  • Proven track record of architecting and delivering large-scale, high-performance infrastructure systems.
  • Deep expertise in distributed storage architectures (object storage, scalable file systems, or cloud-native storage platforms).
  • Strong understanding of Linux I/O stack, filesystem internals, and storage protocols.
  • Extensive hands-on experience with NVMe, SSD optimization, and high-performance storage environments.
  • Strong experience with RDMA, InfiniBand, or other high-speed data transfer technologies.
  • Solid understanding of GPU computing concepts and CPU-GPU data movement patterns.
  • Proficiency in Python and/or C/C++, with advanced debugging, profiling, and performance tuning skills.
  • Demonstrated ability to optimize latency-sensitive, high-throughput production systems.

Preferred Skills

  • Hands-on experience with NVIDIA NIXL or similar data movement frameworks.
  • Experience with GPU-aware storage pipelines and GPUDirect Storage.
  • Strong understanding of AI inference systems, LLM serving architectures, and KV cache optimization.
  • Experience with Retrieval-Augmented Generation (RAG) pipelines and open vector search ecosystems.
  • Background in high-performance computing (HPC) or hyperscale distributed environments.
  • Expertise in caching strategies, memory tiering, and data locality optimization.
  • Experience designing disaggregated compute and storage architectures.

What You'll Work On

  • Leading the evolution of storage systems into GPU-native data layers for AI inference
  • Building next-generation distributed AI infrastructure using NIXL and Infinia
  • Driving performance breakthroughs in real-time LLM inference at scale
  • Designing storage architectures for large-scale AI datasets and retrieval systems


DDN

Join our dynamic and driven team, where engineering excellence is at the heart of everything we do. We seek individuals who love to challenge themselves and are fueled by curiosity. Here, you'll have the opportunity to work across various areas of the company, thanks to our flat organizational structure that encourages hands-on involvement and direct contributions to our mission. Leadership is earned by those who take initiative and consistently deliver outstanding results, both in their work ethic and deliverables, making strong prioritization skills essential. Additionally, we value strong communication skills in all our engineers and researchers, as they are crucial for the success of our teams and the company as a whole.

Interview Process: After submitting your application, one of our recruiters will review your resume. If your application passes this stage, you will be invited to a 30-minute interview during which a member of our team will ask some basic questions. If you clear the interview, you will enter the main process, which can consist of up to four interviews in total:

  • Coding assessment: Often in a language of your choice.
  • Systems design: Translate high-level requirements into a scalable, fault-tolerant service (depending on role).
  • Real-time problem-solving: Demonstrate practical skills in a live problem-solving session.
  • Meet and greet with the wider team.

Our goal is to finish the main process in 2-3 weeks at most.

DataDirect Networks (DDN) is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, gender expression, transgender, sex stereotyping, sexual orientation, national origin, disability, protected Veteran Status, or any other characteristic protected by applicable federal, state, or local law.

