The Intersect Group has partnered with a client seeking a Data Engineer.
Direct Hire (No C2C or third-party submissions)
Location: Fully Remote (Central Time zone)
- Occasional travel may be required.
- Flexible work hours may be necessary to meet project deadlines.
Interview Process: Typically 3 rounds of virtual interviews
We’re looking for a highly skilled, execution-driven Data Engineer to join our growing Data & Analytics team. In this role, you’ll be responsible for designing, building, and maintaining scalable data pipelines on the Databricks Lakehouse Platform, enabling reliable data delivery across the organization.
The ideal candidate brings strong expertise in Python, PySpark, SQL, and medallion architecture, with a proven track record of deploying production-grade pipelines in enterprise environments. You’ll work closely with the Sr. Data Architect, collaborating with business analysts, BI developers, solution architects, and offshore engineering teams to meet evolving data needs.
We’re seeking someone who is not only technically proficient but also curious and adaptable—ready to grow with us as we expand our platform to include generative AI, agent-based automation, and intelligent orchestration frameworks.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related technical field
- 3+ years of hands-on experience in data engineering, with proven production use of Databricks, Python, PySpark, and SQL
- Strong knowledge of Delta Lake, Delta Live Tables (DLT) pipelines, and medallion architecture principles
- Experience with Azure DevOps or GitHub for version control and CI/CD automation
- Demonstrated ability to collaborate with and support offshore engineering teams, ensuring accountability and delivery quality
- Interest or familiarity with Generative AI, agent-based tooling, and modern orchestration/ML workflows
- Databricks certification (Associate Data Engineer or higher) is preferred
- Experience working in Azure-based data environments is a plus
- Excellent documentation and communication skills
Other skills:
- Meticulous attention to detail with a commitment to delivering high-quality outcomes
- Proven ability to thrive in hybrid onshore/offshore team environments
- Strong organizational skills with a sense of ownership over tasks and deliverables
- Self-driven and enthusiastic about learning new tools, technologies, and frameworks
- Comfortable working under tight deadlines in a dynamic, fast-paced setting
Duties: 
- Design, build, and maintain ELT pipelines in Databricks using Delta Live Tables, Python, PySpark, and SQL, following the medallion architecture (Bronze → Silver → Gold); see the sketch after this list
- Package and deploy reusable pipeline components using Databricks Asset Bundles (DABs)
- Manage source control and CI/CD workflows across environments using Azure DevOps or GitHub
- Conduct code reviews and oversee offshore data engineering deliverables to ensure consistency, quality, and proper documentation
- Actively participate in sprint planning and design reviews, and proactively identify and resolve data issues
- Continuously explore and evaluate emerging technologies—including LLMs, Generative AI, and agent-based workflows—to enhance and automate data delivery
- Support the development and enforcement of data modeling, lineage, and governance standards using Unity Catalog and related tools
- Maintain clear, comprehensive documentation and foster effective communication across technical and business teams
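
For illustration, here is a minimal sketch of the kind of medallion-style Delta Live Tables pipeline the first duty describes. This is an assumption-laden example, not the client's actual pipeline: the source path, table names, and columns are hypothetical, and `spark` refers to the session Databricks provides inside a DLT pipeline notebook.

```python
# Minimal Bronze -> Silver -> Gold sketch using Delta Live Tables.
# Table names, columns, and the landing path below are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw orders ingested as-is via Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")       # Auto Loader incremental ingest
        .option("cloudFiles.format", "json")
        .load("/Volumes/raw/sales/orders/")         # hypothetical landing path
    )

@dlt.table(comment="Silver: cleaned, typed, and de-duplicated orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # data-quality expectation
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )

@dlt.table(comment="Gold: daily revenue aggregate for BI consumption")
def daily_revenue_gold():
    return (
        dlt.read("orders_silver")
        .groupBy(F.to_date("order_ts").alias("order_date"))
        .agg(F.sum("amount").alias("total_revenue"))
    )
```

In practice, tables like these would be packaged with Databricks Asset Bundles and promoted across environments through Azure DevOps or GitHub CI/CD, as the other duties describe.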