Advance Local is looking for a Data Engineer to build and maintain data pipelines and integration solutions for the cloud data platform. This position involves the practical implementation of data ingestion, transformation, and quality processes within Snowflake, AWS, and other third-party platforms. You'll help support data accessibility, prototyping, and audience insights through technical implementations and automation. You will collaborate with data project managers, platform owners, and technical teams to deliver consistent data solutions that support analytics, audience insights, and business decision-making.
The base salary range is $100,000 - $120,000 per year.
What you’ll be doing:
Implement data integration solutions, working with platform owners across business units to ensure seamless data flow.
Build and maintain data pipelines that ingest data from various sources.
Develop scalable data preparation pipelines that serve ML modeling needs, reducing manual data engineering work by the data science team.
Build and maintain ML feature pipelines and model deployment workflows in Snowflake, enabling efficient model iteration and production deployment.
Support rapid prototyping of new data products by building flexible pipeline components and proof-of-concepts, enabling quick iteration and validation of ideas.
Develop solutions for audience modeling, leveraging advanced data engineering techniques to enhance targeting and personalization.
Collaborate with data product managers, data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
Develop data transformations and quality checks to ensure reliable, clean data for downstream analytics and business intelligence.
Develop and maintain documentation for data engineering processes and systems.
Implement monitoring, alerting, and logging for data pipelines and ML workflows to ensure reliability and quick issue resolution.
Troubleshoot and resolve data pipeline issues, escalating complex problems as needed.
Stay up to date with the latest data engineering technologies and industry trends.
Our ideal candidate will have the following:
Bachelor’s degree in computer science, data engineering, information systems, or a related field
Minimum three years’ experience in data engineering, with demonstrated proficiency in SQL and data modeling
Experience with ETL tools and data integration frameworks, and with building data pipelines in Snowflake (SQL, stored procedures, streams, tasks)
Hands-on experience with AWS services (S3, Lambda, Glue) or similar cloud data services
Experience working with data scientists to operationalize ML models and build model training/inference pipelines
Strong proficiency in Python for data processing and automation
Familiarity with version control and CI/CD practices
Understanding of audience segmentation, analytics, and business use cases
Understanding of data quality, testing, and validation approaches
Knowledge of data orchestration tools (Airflow, dbt, or similar)
Familiarity with ML workflows and model deployment patterns
Strong problem-solving and analytical abilities with attention to detail
Ability to work collaboratively in cross-functional teams
Excellent communication skills for working with both technical and non-technical stakeholders
Equal Opportunity Employer Minorities/Women/Protected Veterans/Disabled