If you’re hands-on with modern data platforms, cloud technologies, and big data tools, and you enjoy building solutions that are secure, repeatable, and fast, this role is for you.
As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines that transform raw data into actionable insights, with a focus on secure, repeatable, and high-performing solutions.
Responsibilities:
- Design, develop, and maintain secure, scalable data pipelines to ingest, transform, and deliver curated data into the Common Data Platform (CDP).
- Participate in Agile rituals and contribute to delivery within the Scaled Agile Framework (SAFe).
- Ensure quality and reliability of data products through automation, monitoring, and proactive issue resolution.
- Deploy alerting and auto-remediation for pipelines and data stores to maximize system availability.
- Apply a security-first, automation-driven approach to all data engineering practices.
- Collaborate with cross-functional teams (data scientists, analysts, product managers, and business stakeholders) to align infrastructure with evolving data needs.
- Stay current on industry trends and emerging tools, recommending improvements to strengthen efficiency and scalability.
Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience).
- At least 3 years of experience with Python and PySpark, including Jupyter notebooks and unit testing.
- At least 2 years of experience with Databricks, Collibra, and Starburst.
- Proven experience with relational and NoSQL databases, including star schema and dimensional modeling approaches.
- Hands-on experience with modern data stacks: object stores (S3), Spark, Airflow, lakehouse architectures, and cloud warehouses (Snowflake, Redshift).
- Strong background in ETL and big data engineering (on-prem and cloud).
- Experience working within enterprise cloud platforms (Cloud Foundational Services 2 (CFS2)/EDS) for governance and compliance.
- Experience building end-to-end pipelines for structured, semi-structured, and unstructured data using Spark.