Seeking a Research Scientist/Engineer to design and optimize distributed training systems for large-scale multimodal AI models using thousands of GPUs.
About Luma AI

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

About the Role

The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what's possible in AI model development. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems. You will work alongside the rest of the research team to build and train cutting-edge foundation models, designed from the ground up to scale across thousands of GPUs.

Responsibilities

- Design, implement, and optimize efficient distributed training systems for models trained on thousands of GPUs
- Research and implement advanced parallelization techniques (FSDP, tensor parallel, pipeline parallel, expert parallel)
- Build monitoring, visualization, and debugging tools for large-scale training runs
- Optimize training stability, convergence, and resource utilization across massive clusters

Experience

- Extensive experience with distributed PyTorch training and parallelism strategies for foundation model training
- Deep understanding of GPU clusters, networking, and storage systems
- Familiarity with communication libraries (NCCL, MPI) and distributed-systems optimization
- (Preferred) Strong Linux systems administration and scripting capabilities
- (Preferred) Experience managing training runs across more than 100 GPUs
- (Preferred) Experience with containerization, orchestration, and cloud infrastructure
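The parallelization techniques named above (FSDP, tensor/pipeline/expert parallel) all revolve around partitioning model state across ranks. As a toy illustration of the ZeRO/FSDP-style sharding idea (illustrative only, not Luma's implementation — real FSDP operates on flattened parameter tensors via `torch.distributed`):

```python
# Toy sketch of ZeRO/FSDP-style parameter sharding: each rank owns a
# contiguous slice of the flat parameter vector and would all-gather the
# rest before a forward pass. Illustrative only, not a real implementation.

def shard_bounds(num_params: int, world_size: int, rank: int) -> tuple[int, int]:
    """Return the [start, end) slice of the flat parameter vector owned by
    `rank`. Ranks with index < remainder get one extra element, so shard
    sizes differ by at most 1 and every parameter is owned exactly once."""
    base, rem = divmod(num_params, world_size)
    start = rank * base + min(rank, rem)
    end = start + base + (1 if rank < rem else 0)
    return start, end

# Example: 10 parameters over 4 ranks -> shard sizes 3, 3, 2, 2.
shards = [shard_bounds(10, 4, r) for r in range(4)]
```

Balanced shards matter in practice because the slowest rank gates every collective; real frameworks apply the same even-split logic to padded, flattened parameter groups.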
The base pay range for this role is $187,500 – $395,000 per year.
Top Skills
Cloud Infrastructure
Containerization
CUDA
Distributed Systems
Linux
MPI
NCCL
Orchestration
PyTorch