Who we are
At CarGurus (NASDAQ: CARG), our mission is to give people the power to reach their destination. We started as a small team of developers determined to bring trust and transparency to car shopping. Since then, our history of innovation and go-to-market acceleration has driven industry-leading growth. In fact, we're the largest and fastest-growing automotive marketplace, and we've been profitable for over 15 years.
What we do
The market is evolving, and we are too, moving the entire automotive journey online and guiding our customers through every step. That includes everything from the sale of an old car to the financing, purchase, and delivery of a new one. Today, tens of millions of consumers visit CarGurus.com each month, and ~30,000 dealerships use our products. But they're not the only ones who love CarGurus—our employees do, too. We have a people-first culture that fosters kindness, collaboration, and innovation, and empowers our Gurus with tools to fuel their career growth. Disrupting a trillion-dollar industry requires fresh and diverse perspectives. Come join us for the ride!
Role overview
We're hiring a Data Platform Engineer II to build, automate, and support our modern data platform. You'll develop reliable ingestion pipelines, productionize best practices in infrastructure-as-code, automate guardrails into our CI/CD pipelines, and collaborate across analytics, governance, operations, and data science to deliver trustworthy, well-documented, and cost-efficient data.
What you'll do
- Own and enhance Fivetran connector configurations and destinations in Snowflake.
- Administer and deploy dbt tooling and its CI/CD pipelines, with strong documentation, tests, and source definitions, to help analytics engineers run and automate their data pipelines.
- Author and maintain Airflow DAGs for orchestration, dependency management, and SLAs.
- Provision and manage Snowflake resources (roles/RBAC, warehouses, databases, resource monitors) through Terraform.
- Implement and maintain GitHub Actions CI/CD for dbt, Airflow, and Terraform workflows (linting, testing, environment promotion).
- Contribute reusable Terraform modules and internal tooling to standardize patterns.
- Configure and tune Metaplane monitors; triage alerts; and drive issue remediation with data and analytics engineering (DAE) and data operations (DOPs).
- Implement syncs and tooling for administrative and policy-enforcement duties; maintain catalog/lineage and stewardship workflows in Secoda; and improve data discoverability and access-request flows.
- Embed data contracts and validation where appropriate; champion documentation and operational runbooks.
- Optimize Snowflake performance (clustering, caching, query tuning) and warehouse sizing.
- Manage storage/compute costs across AWS and Snowflake stacks.
- Implement least-privilege RBAC in Snowflake; automate grants and secrets management for jobs and services.
- Partner with data governance (DG) to enforce governance policies and support audit and readiness efforts.
- Work closely with DAE on data modeling standards; with DS on feature/data access and reproducibility; with DOPs on incident response and SLAs.
- Participate in an on-call rotation for platform issues; drive root-cause analysis and prevention.
What you'll bring
- 3–6 years in data/platform engineering or related backend roles.
- Strong SQL and practical Python for data tooling and Airflow operators/hooks.
- Hands-on Snowflake (RBAC, warehouses, performance tuning, resource monitors).
- Production dbt experience (tests, exposures, docs, macros, packages).
- Airflow orchestration in production (DAG design, retries, SLAs, sensors).
- Terraform for cloud + Snowflake (modular code, workspaces, state management).
- Git/GitHub workflows and GitHub Actions (lint/test/build/deploy pipelines).
- Operating on AWS (S3, IAM basics; exposure to RDS, Elasticsearch/OpenSearch).
- Experience with data quality and catalog tools (Metaplane, Secoda or equivalents).
- Comfort debugging across ingestion, transformation, and serving layers; solid observability mindset (logging/metrics/alerts).
- Clear written/verbal communication; collaborative approach with analytics and ops stakeholders.
- Ownership & scope: Delivers medium-sized projects end-to-end with minimal guidance; breaks work into iterative milestones; proactively reduces toil through automation.
- Quality bar: Merges only code with tests/docs; adds monitors/alerts with each new pipeline; writes runbooks for handoffs to DOPs.
- Collaboration: Co-designs models and SLAs with DAE/DOPs; partners with DG to keep Secoda current; unblocks DS with reliable feature data.
Working at CarGurus
We reward our Gurus' curiosity and passion with best-in-class benefits and compensation, including equity for all employees, both when they start and as they continue to grow with us. Our career development and corporate giving programs, as well as our employee resource groups (ERGs) and communities, help people build connections while making an impact in personally meaningful ways. A flexible hybrid model and robust time off policies encourage work-life balance and individual well-being. Thoughtful perks like daily free lunch, a new car discount, meditation and fitness apps, commuting cost coverage, and more help our people create space for what matters most in their personal and professional lives.
We welcome all
CarGurus strives to be a place to which people can bring the ultimate expression of themselves and their potential—starting with our hiring process. We do not discriminate based on race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, or sexual orientation. We foster an inclusive environment that values people for their skills, experiences, and unique perspectives. That's why we hope you'll apply even if you don't check every box listed in the job description. We also encourage you to tell your recruiter if you require accommodations to participate in our hiring process due to a disability so we can provide the appropriate support. We want to know what only you can bring to CarGurus. #LI-Hybrid