Job Title: Data Engineer Level II
Location/Work Structure: Cincinnati, OH
Compensation: $110,000-$120,000 + bonus
Who we are:
Vernovis is a Total Talent Solutions company that specializes in Technology, Cybersecurity, and Finance & Accounting functions. At Vernovis, we help these professionals achieve their career goals, matching them with innovative projects and dynamic direct-hire opportunities in Ohio and across the Midwest.
Client Overview:
Vernovis is partnering with a local Fortune 500 company to help identify talented Data Professionals for their growing Data Team. This is a great opportunity to join a well-established company and help transform its business.
If interested, please contact Jonathon Juriga at jjuriga@vernovis.com
What You’ll Do
- Design, build, and maintain data engineering pipelines, data models, and analytics solutions, and troubleshoot major issues as they arise.
- Review and improve data acquisition strategies; develop programs to load data into the Data Lake/Data Warehouse.
- Collaborate with the data team to leverage Google Cloud Platform for data analysis, model building, and generating reports/visualizations.
Required Experience
- Bachelor’s degree in Statistics, Mathematics, Data Science, Engineering, or a related quantitative field.
- 3+ years in Information Technology, including hands-on experience with SQL, Python, BigQuery, Cloud Composer, and Cloud Pub/Sub.
- 2+ years building and operationalizing enterprise-scale data solutions, including end-to-end production pipelines in a hybrid big data architecture.
- 1+ year working with CI/CD pipelines using code repositories (GitLab, GitHub) and deployment tools (Cloud Build, GitLab Runners).
- 1+ year in reporting, data analytics, and pipeline development.
- Cloud experience (GCP, Azure, or AWS) — GCP preferred.
- Solid understanding of enterprise technical environments, including SAP development and architecture.
Preferred Experience
- Master’s degree in a related quantitative field plus 1+ year in IT.
- Experience with medallion architecture and the Google Cloud Cortex Framework.
- Familiarity with containerization tools (Docker, Kubernetes).
- Proficiency in PySpark, Cloud Dataproc, Cloud Dataflow, Terraform, Hadoop, Hive, Apache Spark, Cloud Spanner, Cloud SQL, and Data Fusion.