NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard to solve, that only we can take on, and that matter to the world. This is our life's work, to amplify human imagination and intelligence. Join us today!

As a member of the GPU/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high performance computing, and computationally intensive workloads. We seek a technology leader to identify architectural changes, or entirely new approaches, for improving HPC schedulers so they can serve many simultaneous, large multi-node GPU workloads with complex dependencies. This role offers you an excellent opportunity to deliver production-grade solutions, get hands-on with ground-breaking technology, and work closely with technical leaders solving some of the biggest challenges in machine learning, cloud computing, and system co-design.

What you'll be doing:

Design and develop enhancements to the HPC batch scheduler(s).

Work extensively with the HPC scheduler vendor on bug fixes and feature releases

Provide support to staff and end users to resolve batch scheduler issues

Build and improve our ecosystem around GPU-accelerated computing

Analyze and optimize the performance of deep learning workflows

Develop large scale automation solutions

Perform root cause analysis and suggest corrective actions for problems at both small and large scale

Find and fix problems before they occur

What we need to see:

Bachelor's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience) and 5+ years of work experience

Strong understanding of HPC batch schedulers such as Slurm, RTDA, or LSF, and of HPC workflows that use MPI

Significant experience programming in C/C++ and advanced scripting in languages such as Python, Go, and Bash

Established experience with the Linux operating system, environment, and tools

Accomplished in computer architecture and operating systems

Deep knowledge of networking protocols such as InfiniBand and Ethernet

Experience analyzing and tuning performance for a variety of HPC workloads

In-depth understanding of container technologies such as Docker, Singularity, and Podman

Flexibility/adaptability for working in a dynamic environment with different frameworks and requirements

Excellent communication, interpersonal and customer collaboration skills

Ways to stand out from the crowd:

Knowledge of MPI and high-performance computing

Background in RDMA technology

Experience in kernel programming

Contributions to open source software

Experience with deep learning frameworks like PyTorch and TensorFlow

Passionate about software development processes

Want to make what was impossible possible!

The base salary range is 148,000 USD - 419,750 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.