Rackspace Technology
Machine Learning Operations (MLOps) Architect - GCP - (Canada)
About the Role: 100% REMOTE
We are looking for a seasoned Machine Learning Operations (MLOps) Architect to build and optimize our ML inference platform. The role demands significant expertise in machine learning engineering and infrastructure, with an emphasis on building ML inference systems. Proven experience building and scaling ML inference platforms in a production environment is crucial. This remote position calls for exceptional communication skills and a knack for independently tackling complex challenges with innovative solutions.
What you will be doing:
- Architect and optimize our existing data infrastructure to support cutting-edge machine learning and deep learning models.
- Collaborate closely with cross-functional teams to translate business objectives into robust engineering solutions.
- Own the end-to-end development and operation of high-performance, cost-effective inference systems for a diverse range of models, including state-of-the-art LLMs.
- Provide technical leadership and mentorship to foster a high-performing engineering team.
- Develop CI/CD workflows for ML models and data pipelines using tools like Cloud Build, GitHub Actions, or Jenkins.
- Automate model training, validation, and deployment across development, staging, and production environments.
- Monitor and maintain ML models in production using Vertex AI Model Monitoring, logging (Cloud Logging), and performance metrics.
- Ensure reproducibility and traceability of experiments using ML metadata tracking tools like Vertex AI Experiments or MLflow (see the experiment-tracking sketch after this list).
- Manage model versioning and rollbacks using Vertex AI Model Registry or custom model management solutions (see the deployment sketch after this list).
- Collaborate with data scientists and software engineers to translate model requirements into robust and scalable ML systems.
- Optimize model inference infrastructure for latency, throughput, and cost efficiency using GCP services such as Cloud Run, Kubernetes Engine (GKE), or custom serving frameworks.
- Implement data and model governance policies, including auditability, security, and access control using IAM and Cloud DLP.
- Stay current with evolving GCP MLOps practices, tools, and frameworks to continuously improve system reliability and automation.
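As a hedged illustration of the experiment-tracking bullet above, the sketch below logs parameters and metrics to Vertex AI Experiments with the google-cloud-aiplatform Python SDK. The project, experiment, run name, and values are placeholders invented for this example, not details from the role; this is a minimal sketch of the pattern rather than a prescribed workflow.

```python
from google.cloud import aiplatform

# Placeholder project, region, and experiment name -- assumptions for illustration only.
aiplatform.init(
    project="my-gcp-project",
    location="us-central1",
    experiment="churn-model-experiments",
)

# Give each training attempt its own run so parameters and metrics stay traceable.
aiplatform.start_run("xgb-baseline-2024-05-01")
aiplatform.log_params({"learning_rate": 0.05, "max_depth": 6, "n_estimators": 300})

# ... model training would happen here ...

aiplatform.log_metrics({"val_auc": 0.91, "val_logloss": 0.23})
aiplatform.end_run()
```

The same pattern carries over to MLflow (mlflow.start_run, mlflow.log_params, mlflow.log_metrics) if a team standardizes on that tracker instead.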
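For the versioning, rollback, and serving bullets, a deployment step might look like the following: registering a new version under an existing entry in the Vertex AI Model Registry and deploying it to an online endpoint. All resource names, the parent model ID, the artifact path, and the serving container image are placeholders, and parameters such as parent_model and version_description can vary across SDK releases, so treat this as a sketch of the approach, not the team's actual pipeline.

```python
from google.cloud import aiplatform

# Placeholder project and region -- assumptions for illustration only.
aiplatform.init(project="my-gcp-project", location="us-central1")

# Register a new version of an existing model in the Vertex AI Model Registry.
# parent_model groups this upload under a prior model resource so versions stay traceable.
model = aiplatform.Model.upload(
    display_name="demand-forecaster",
    parent_model="projects/my-gcp-project/locations/us-central1/models/1234567890",
    artifact_uri="gs://my-bucket/models/demand-forecaster/1.3.0/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
    version_description="Retrained on latest data",
)

# Deploy the new version behind an online endpoint with modest autoscaling.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
    traffic_percentage=100,
)

# Smoke-test the deployment with a single prediction request.
print(endpoint.predict(instances=[[0.2, 1.5, 3.1]]))
```

In practice a script like this would typically run as the final step of a Cloud Build or GitHub Actions pipeline, after automated validation gates pass; rolling back amounts to re-deploying a previous registry version and shifting endpoint traffic.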
Requirements:
- Proven track record in designing and implementing cost-effective and scalable ML inference systems.
- Hands-on experience with leading deep learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, LangChain, etc.
- Solid foundation in machine learning algorithms, natural language processing, and statistical modeling.
- Strong grasp of fundamental computer science concepts including algorithms, distributed systems, data structures, and database management.
- Ability to tackle complex challenges and devise effective solutions, applying critical thinking to approach problems from multiple angles and propose innovative solutions.
- Experience working effectively in a remote setting, with strong written and verbal communication skills; able to collaborate with team members and stakeholders to ensure a clear understanding of technical requirements and project goals.
- Expertise in public cloud services, particularly in GCP and Vertex AI.
Must have:
- Proven experience in building and scaling Agentic AI systems in a production environment.
- In-depth understanding of LLM architectures, parameter scaling, and deployment trade-offs.
- Technical degree: a Bachelor's degree in Computer Science with at least 10 years of relevant industry experience, or
- a Master's degree in Computer Science with at least 8 years of relevant industry experience.
- A specialization in Machine Learning is preferred.
Travel
- Travel as needed per business requirements
Sponsorship
- This role is not sponsorship eligible
- Candidates must be legally able to work in the US for any employer.
What you need to know about the Calgary Tech Scene
Employees can spend up to one-third of their life at work, so choosing the right company is crucial, not just for the job itself but for the company culture as well. While startups often offer dynamic culture and growth opportunities, large corporations provide benefits like career development and networking, especially appealing to recent graduates. Fortunately, Calgary stands out as a hub for both, recognized as one of Startup Genome's Top 100 Emerging Ecosystems, while also playing host to a number of multinational enterprises. In Calgary, job seekers can find a wide range of opportunities.

