Location: Chicago / Hybrid
Position Overview
We are looking for a highly skilled and experienced Senior Data Engineer with expertise in designing, developing, and optimizing large-scale data processing systems and analytical solutions. You will need the ability to work both independently on assigned projects and collaboratively with other team members. As a senior-level team member, you will be involved in implementing best practices, mentoring junior team members, and driving the evolution of our data infrastructure to support complex analytics and data science initiatives.
Core Responsibilities
- Architect & Design: Take part in the design and architecture of scalable, reliable, and performant data solutions on the Databricks Lakehouse Platform.
- Develop & Implement: Build, test, and maintain complex ELT/ETL pipelines using Spark (PySpark/Scala), SQL, notebooks, DLT, and other relevant technologies to ingest, transform, and load data from diverse sources.
- Optimize Performance: Identify & resolve performance bottlenecks in Spark jobs and Databricks Workflows; optimize data storage (Delta Lake tuning, partitioning, liquid clustering) and compute usage for efficiency and cost-effectiveness.
- Data Modeling: Design and implement effective data models within the Lakehouse paradigm (e.g., Medallion Architecture, star schema) suitable for various analytical and reporting needs.
- Collaboration: Work closely with data scientists, analysts, BI developers, software engineers, and business stakeholders to understand requirements and deliver high-quality data solutions.
- Mentorship: Provide technical guidance, mentorship, and code reviews for junior and mid-level data engineers, fostering a culture of technical excellence.
- Governance & Standards: Define, document, and enforce data engineering best practices, code standards, and governance processes.
- Communication: Communicate effectively across internal and external organizations and virtual teams.
- Skill Development: Stay current with industry developments, particularly in cloud data and analytics technologies.
Required Skills & Qualifications
- Bachelor’s degree (Computer Science or a related field preferred) or equivalent experience is required
- 7+ years of development experience building data solutions
- Experience building metadata-driven ELT/ETL processes
- Experience developing and deploying Lakehouse solutions using the Medallion architecture in Azure Databricks
- Experience building data pipelines using Azure Data Factory
- Experience designing and building Lambda (batch + streaming) data architectures
- Experience working within an agile development process (Scrum, Kanban, etc.)
- Familiarity with CI/CD concepts
- Demonstrated proficiency in creating technical documentation
Good to Have (Preferred)
- Insurance & Finance domain experience
- Experience coordinating offshore teams and working in the onsite-offshore model
- Experience with BI and data analysis, including end-to-end development in data platform environments
Benefits:
Highstreet offers a comprehensive benefits package, including Medical, Dental, and Vision insurance, a 401(k) retirement plan with company matching, Paid Time Off (PTO), Life and Disability insurance, and an Employee Assistance Program (EAP). Additional voluntary benefits may be available.
Compensation:
Salary: $130,000 to $150,000 annually.