Must-Have Technical/Functional Skills
- 8+ years of professional experience in data engineering or backend data development roles.
- Expert-level proficiency in SQL, including performance tuning, window functions, CTEs, and analytical queries.
- Strong Python skills for scripting, data processing, and building ETL logic.
- Hands-on experience with relational databases (PostgreSQL, MySQL, etc.) and data warehouses (Snowflake, BigQuery, Redshift).
- Proven experience designing and maintaining large-scale data pipelines in production environments.
- Solid understanding of data modeling techniques (dimensional modeling, normalization/denormalization).
- Strong grasp of data quality, validation, and governance practices.
- Proficiency with version control systems (e.g., Git) and agile development workflows.
- Ability to troubleshoot complex data issues and deliver scalable solutions.
Roles & Responsibilities
- Design, build, and maintain robust and scalable ETL/ELT pipelines using SQL and Python.
- Architect and implement high-performance data models that support both real-time and batch processing.
- Optimize complex SQL queries for performance, scalability, and reliability across large datasets.
- Lead data engineering efforts across projects, collaborating closely with analysts, data scientists, and software engineers.
- Own and improve data quality, validation, and governance processes.
- Monitor data pipelines and proactively address performance and reliability issues.
- Mentor and support junior data engineers through code reviews, design sessions, and technical leadership.
- Document data flows, definitions, models, and architecture for internal stakeholders and governance purposes.
- Participate in architectural decisions and contribute to long-term data platform strategy.
Salary Range: $63,400-$137,000 per year