
Tiger Analytics

Data Engineer - AWS

Posted 10 Days Ago
Remote
Hiring Remotely in United States
Senior level

Description

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Engineering, Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by market research firms including Forrester and Gartner. We are looking for top-notch talent as we continue to build the world's best analytics consulting team.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Snowflake.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
Requirements
  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience designing and building data pipelines.
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
  • Strong experience with Databricks and PySpark for data processing and analytics.
  • Solid understanding of data modeling, database design principles, and SQL and Spark SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.
Benefits

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

Top Skills

Apache Airflow
Spark
AWS
Databricks
Snowflake
SQL
