
Senior Data Engineer
Phreesia
Full time

The Senior Data Engineer is a senior-level technical expert responsible for architecting, developing, and optimizing data pipelines and infrastructure to support enterprise data initiatives. This role collaborates closely with data scientists, analysts, software engineers, and business stakeholders to design scalable, secure, and high-performance data solutions.
The ideal candidate possesses deep expertise in data modeling, ETL processes, and cloud data platforms, and plays a key role in setting data engineering best practices and mentoring junior team members. As a strategic contributor, the Senior Data Engineer ensures the availability, integrity, and quality of data critical to business operations and decision-making.
What You’ll Do:
- Design, build, and maintain scalable ETL/ELT pipelines to ingest and transform structured and unstructured data from various sources.
- Develop performant data models and service layers to enable seamless analytics and reporting across business domains.
- Continuously improve tooling, workflows, and processes to enhance data reliability and engineer productivity.
- Implement automated testing, monitoring, and alerting pipelines to ensure data quality and minimize downtime.
- Partner with architects, product managers, and business stakeholders to gather requirements and translate them into technical solutions.
- Mentor and train a team of developers, and coordinate with the team to meet project milestones and deadlines.
What You’ll Bring:
- Bachelor's degree (BS or higher) in Computer Science or a related discipline required.
- 8+ years of hands-on experience developing enterprise-grade solutions, with at least 3 years in a PaaS environment (preferably AWS).
- Strong experience with AWS services including Lambda, API Gateway, S3, DynamoDB (or other NoSQL databases), AWS Glue, and the Glue Data Catalog, plus familiarity with infrastructure-as-code tools such as Terraform, Terragrunt, or CDK.
- Strong understanding of modern, cloud-based data platforms, particularly Snowflake.
- Expertise in building data warehousing solutions and developing ETL/ELT pipelines.
- Advanced SQL skills, with experience handling large-scale structured, semi-structured and unstructured datasets.
- Strong proficiency with dbt (models, tests, macros, documentation, deployments) and proficiency in Python and/or PySpark for data processing and automation.
- Hands-on experience with ETL/ELT tools such as dbt, Airflow, or SSIS, and familiarity with CI/CD pipelines and DevOps workflows for data engineering.
How to apply
To apply for this job you need to sign in on our website. If you don't have an account yet, please register.


