You will work on large-scale deployment of machine learning models for real-time inference.
Our current stack includes Python, C++, TensorRT-LLM, and Kubernetes.
Responsibilities
- Develop APIs for AI inference that will be used by both internal and external customers
- Benchmark our inference stack and address performance bottlenecks throughout it
- Improve the reliability and observability of our systems and respond to system outages
- Explore novel research and implement LLM inference optimizations
Qualifications
- Experience with ML systems and deep learning frameworks (e.g., PyTorch, TensorFlow, ONNX)
- Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)
- Understanding of GPU architectures or experience with GPU kernel programming using CUDA
At Perplexity, we've experienced tremendous growth and adoption since publicly launching the world's first fully functional conversational answer engine in 2022. We've grown from answering 2.5 million questions per day at the start of 2024 to around 20 million daily queries in December 2024. We also offer Perplexity Enterprise Pro, which counts leading companies like Nvidia, the Cleveland Cavaliers, Bridgewater, and Zoom as customers.
To support our rapid expansion, we've raised significant funding from some of the most respected technology investors. Our investor base includes IVP, NEA, Jeff Bezos, NVIDIA, Databricks, Bessemer Venture Partners, Elad Gil, Nat Friedman, Daniel Gross, Naval Ravikant, Tobi Lutke, and many other visionary individuals. In 2024, our employee base grew nearly 300%, and we're just getting started.
The cash compensation range for this role is $190,000 - $250,000. Final offer amounts are determined by multiple factors, including experience and expertise, and may vary from the amounts listed above.
Equity: In addition to the base salary, equity is part of the total compensation package.
Benefits: Comprehensive health, dental, and vision insurance for you and your dependents. Includes a 401(k) plan.