Integris Group is partnering with a leading New York City hospital to hire 5 Senior Data Engineers for full-time, permanent roles.
This is a Hybrid position (4 days onsite, 1 remote – no exceptions) offering a competitive salary, annual bonus, 401(k), PTO, and top-tier medical, dental, and vision benefits, along with excellent company perks.
Job Title: Senior Data Engineer (5 Openings)
Location: New York, NY
Position Type: Full-Time, Hybrid (4 Days Onsite | 1 Day Remote)
About the Role:
Our client is seeking five highly motivated and experienced Senior Data Engineers to join their data team and help design, build, and optimize their modern data platform. You will work with tools such as Snowflake, Fivetran, and DBT to create scalable, high-performance data pipelines and analytics solutions. This role will be instrumental in enabling advanced analytics, machine learning, and data-driven decision-making across the organization. The role is not limited to architecture: these engineers will design the data models, pipelines, version control, and governance, and then also execute and deliver that work hands-on.
Key Responsibilities:
- Design, build, and maintain robust, scalable, and secure ETL/ELT pipelines.
- Develop and optimize data models to support analytics, BI, and machine learning use cases.
- Partner closely with data analysts, scientists, and business stakeholders to translate requirements into technical solutions.
- Monitor, tune, and optimize data processing workflows for efficiency, scalability, and cost-effectiveness.
- Troubleshoot pipeline failures and performance issues quickly and effectively.
- Support production workflows through CI/CD pipelines and version control (Git).
- Ensure data quality, integrity, and governance across the data lifecycle.
- Prototype and research new tools, technologies, and architectures for improving the existing data infrastructure.
The ideal candidate has:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or equivalent experience.
- Hands-on experience with cloud-based data warehouse platforms (e.g. Snowflake).
- Experience designing and deploying data pipelines using tools such as DBT or Airflow.
- 5+ years’ experience supporting data-driven or ML applications in production environments.
- Understanding of modern data architecture, ELT/ETL best practices, and cloud data platforms (e.g., Azure, AWS, or GCP).
- Ability to analyze large datasets to identify data quality issues and other contextual insights.
- Experience in developing data models for integration and analysis that support business intelligence and data analytics initiatives.
- Experience with CI/CD and version control tools (e.g. Git).
- Proficiency with SQL and experience in at least one programming language (e.g., Python, Scala).
- A proven ability to operate in a fast-paced work environment.
- Excellent communication and organizational skills, a demonstrated team orientation, and the ability to drive projects autonomously as needed.
We’d like to see:
- Familiarity with Infrastructure as Code (IaC), and containerization (Docker, Kubernetes).
- Knowledge of data governance, security, and compliance frameworks.
- Experience in data modeling for large-scale distributed data warehouses with a strong understanding of design trade-offs and best practices.