
Endpoint Clinical, Inc.

Data Engineer

Posted 13 Days Ago
Remote
Senior level

About Us:

Endpoint is an interactive response technology (IRT®) systems and solutions provider that supports the life sciences industry. Since 2009, we have worked with a single vision in mind: to help sponsors and pharmaceutical companies achieve clinical trial success. Our solutions, delivered through the proprietary PULSE® platform, have proven to optimize the supply chain, minimize operational costs, and ensure timely, accurate patient dosing. Endpoint is headquartered in Raleigh-Durham, North Carolina, with offices across the United States, Europe, and Asia.


Position Overview:

The Data Engineer plays a critical role in designing, implementing, and maintaining the data infrastructure that drives our business intelligence, analytics, and data science initiatives. In this role, the Data Engineer will work closely with cross-functional teams to ensure data is accurate, accessible, and optimized for various business needs. This position requires expertise in Databricks, SQL, Python, Spark, and other Big Data tools, with a strong emphasis on ELT/ETL processes. The engineer will collaborate with various stakeholders to ensure data quality and to build efficient, scalable data solutions.

Responsibilities:

  • Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and other big data technologies.
  • Optimize data workflows to handle large volumes of data efficiently.
  • Build and manage data warehouses and data lakes to store structured and unstructured data.
  • Utilize SQL, Python, and Spark for data extraction, transformation, and loading (ETL) processes.
  • Work closely with data analysts and data scientists to understand their data needs and ensure the availability of clean, reliable data.
  • Integrate data from various sources, ensuring consistency and accuracy across the data ecosystem.
  • Implement data quality checks to ensure data accuracy, completeness, and consistency.
  • Develop and enforce data governance policies and procedures to maintain high data quality standards.
  • Develop and support BI tools and dashboards, providing business insights and data-driven decision-making support.
  • Work with stakeholders to understand reporting requirements and deliver actionable insights.
  • Automate repetitive data processing tasks to improve efficiency and reduce manual work.
  • Continuously monitor and improve data pipeline performance, addressing bottlenecks and optimizing resources.
  • Document data processes, workflows, and architecture for future reference and knowledge sharing.
  • Ensure compliance with data security and privacy regulations, such as GDPR, HIPAA, etc.

Education:

  • Bachelor's degree in Computer Science, Software Engineering, Mathematics, or a related technical field is preferred.

Experience:

  • 8+ years of technical experience with a strong focus on Big Data technologies in one or more of these areas: software engineering, integrations, data warehousing, data analysis, or business intelligence; experience at a technology or biotech/pharma company preferred.
  • Proficiency in Databricks for data engineering tasks.
  • Advanced knowledge of SQL for complex queries, data manipulation, and performance tuning.
  • Strong programming skills in Python for scripting and automation.
  • Experience with Big Data tools (e.g., Spark, Hadoop) and data processing frameworks.
  • Familiarity with BI tools (e.g., Tableau, Power BI) and experience in developing dashboards and reports.
  • Experience with cloud platforms and tools such as Azure Data Factory (ADF) or Databricks.
  • Familiarity with data modeling and data architecture design.
  • Understanding of machine learning concepts and their application in data engineering.

Skills:

  • Keen attention to detail and bias for action.
  • Excellent organizational skills and proven ability to multi-task.
  • Ability to influence without authority and lead successful teams.
  • Strong interpersonal skills with the ability to work effectively with a wide variety of professionals.
  • Ability to lead back-end data initiatives – identify data sources, write scripts, transfer and transform data, and automate processes.
  • Ability to understand source data, its strengths, weaknesses, semantics, and formats.
  • Excellent knowledge of logical and physical data modeling concepts (relational and dimensional).

Endpoint Clinical does not accept unsolicited resumes from search firms or any other third parties. Any unsolicited resume sent to Endpoint Clinical will be considered Endpoint Clinical property, and Endpoint Clinical will not pay a fee should it hire the subject of any unsolicited resume.


Endpoint Clinical is an equal opportunity employer AA/M/F/Veteran/Disability.

 

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment, qualified applicants with arrest and conviction records.

#LI-MT


