Job Description
Why GMF Technology?
GM Financial is set to change the auto finance industry and is leading the path of tech modernization. We have a startup mindset and preserve our small-company culture within a public company environment that offers financial stability and a decade-plus history of strong growth. We are data junkies and trust data and insights to advance our business objectives. We take our goal of zero emission, zero collision, zero congestion, and zero friction very seriously. We believe that, as an auto finance market leader, we are in the driver's seat to lead the GM EV mission to change the world. We are building global platforms in LATAM, Europe, China, the U.S., and Canada, and we are looking to grow our high-performing team. GMF comprises more than 10,000 team members globally. Join our fintech culture within a blue-chip company where we are changing the way we use technology to support our customers, dealers, and business.
As a Data Engineer I, you will apply software and data engineering practices to design, develop, and maintain scalable DataOps and MLOps solutions. You will support the full lifecycle of analytical models, focusing on automation, deployment, monitoring, and compliance. This role requires collaboration with cross-functional teams to deliver impactful data and analytics solutions.
This position is not eligible for visa sponsorship now or in the future.
Responsibilities
- Develop scalable, cloud-based DataOps pipelines for batch and streaming data.
- Ensure data quality, freshness, cost monitoring, and audit trail automation.
- Support ML model readiness and lifecycle automation.
- Implement CI/CD pipelines for software and data workflows.
- Contribute to test automation and peer reviews.
- Maintain production systems with a focus on uptime and rapid issue resolution.
- Collaborate with stakeholders across Data Science, IT, Legal, and Compliance.
- Conduct research and proof-of-concepts to improve performance and reduce costs.
- Develop success metrics and present findings to support decision-making.
- Work with Cloud Architects and SREs to develop robust solutions.
Qualifications
- Bachelor’s degree in a related field or equivalent work experience preferred.
- 0–2 years of experience in data engineering and large-scale data processing.
- Hands-on proficiency with Spark, Python, and SQL, including processing large datasets.
- Experience with Azure Databricks, Synapse, PySpark, Delta Lake, and Unity Catalog.
- Familiarity with CI/CD, DevOps, DataOps, MLOps, and LLMOps practices.
- Experience with REST APIs, feature stores, and ML lifecycle tools like MLflow.
- Understanding of containerization tools (Docker, Kubernetes, AKS).
- Knowledge of NoSQL databases (CosmosDB, MongoDB), object storage (ADLS Gen2, S3), and Agile methodologies.
- Basic understanding of IaC tools (Terraform, ARM Templates) and cloud platforms (Azure, AWS, GCP).
- Strong problem-solving and troubleshooting abilities.
- Effective communication and collaboration in Agile teams.
- Understanding of big data platforms, data lakes, and stream processing.
- Awareness of data privacy and security principles.
What We Offer: A generous benefits package available on day one, including 401(k) matching, bonding leave for new parents (12 weeks, 100% paid), tuition assistance, training, the GM employee auto discount, community service pay, and nine company holidays.
Our Culture: Our team members define and shape our culture — an environment that welcomes innovative ideas, fosters integrity, and creates a sense of community and belonging. Here we do more than work — we thrive.
Compensation: Competitive pay and bonus eligibility
Work-Life Balance: Flexible hybrid work environment, with a minimum of 2 days a week in our Irving, TX office.
#GMFJobs