Job Description
We’re seeking a skilled Data Engineer based in Columbus, OH, to support a high-impact data initiative. The ideal candidate will have hands-on experience with Python, Databricks, SQL, and version control systems, and be comfortable building and maintaining robust, scalable data solutions.
Key Responsibilities
- Design, implement, and optimize data pipelines and workflows within Databricks.
- Develop and maintain data models and SQL queries for efficient ETL processes.
- Partner with cross-functional teams to define data requirements and deliver business-ready solutions.
- Use version control systems to manage code and support collaborative development practices.
- Validate and maintain data quality, accuracy, and integrity through testing and monitoring.
Required Skills
- Proficiency in Python for data engineering and automation.
- Strong, practical experience with Databricks and distributed data processing.
- Advanced SQL skills for data manipulation and analysis.
- Experience with Git or similar version control tools.
- Strong analytical mindset and attention to detail.
Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, or GCP).
- Familiarity with enterprise data lake architectures and best practices.
- Excellent communication skills and the ability to work both independently and as part of a team.