Our client is looking for a Data Engineer (Level 2) to join their team! This role is responsible for data engineering pipelines, data modelling, analytics, and support for major issues. Specifically, the Data Engineer will:
- Review data acquisition strategies and develop programs to acquire data into the Data Lake/Data Warehouse from transactional SAP ERP systems like ECC, CRM, EWM, EM, BW, and Hybris, using tools like HANA SDI & SDA adaptors, SLT, file, OData, and other 3rd-party adaptors as applicable
- Work with the data team to use Google Cloud Platform efficiently to analyze data, build data models, and generate reports and visualizations
- Work with data scientists to build views for ML use cases, and with data analysts to build reports and dashboards in visualization tools like Looker or Power BI
- Design and develop database artifacts like tables, calculation views, and SQL Script procedures to stage large amounts of input and output data in the Data Lake
- Participate in software development life cycle (SDLC) tasks like design reviews and approvals, detailed functional and technical documentation, and migration of artifacts to production
Top Skills You Need To Have
- Bachelor’s degree in Statistics, Mathematics, Data Science, Engineering, or a related quantitative field
- Minimum 3 years of experience in Information Technology, including experience with SQL, Python, BigQuery, Cloud Composer, and Cloud Pub/Sub
- Minimum 2 years of experience in building and operationalizing large-scale enterprise data solutions and applications, with the ability to build production data pipelines from data ingestion to consumption within a hybrid big data architecture
- Minimum 1 year of experience working with CI/CD pipelines using code repositories like GitLab or GitHub and deployment tools like Cloud Build or GitLab Runners
- Minimum 1 year of experience in Reporting, Data Analytics and building data pipelines
- Experience with any cloud platform such as GCP, Azure, or AWS, with preference for GCP
- Proven understanding of the SAP technical environment, including development and architecture
Preferred: Master’s degree in Statistics, Mathematics, Data Science, Engineering, or a related quantitative field, with 1 year of experience in Information Technology, including exposure to:
- SQL, Python, BigQuery, Cloud Composer, and Cloud Pub/Sub
- Reporting and data analytics, including building and operationalizing large-scale enterprise data solutions and applications, and building production data pipelines from data ingestion to consumption within a hybrid big data architecture
- Working with CI/CD pipelines using code repositories like GitLab or GitHub and deployment tools like Cloud Build or GitLab Runners
- Cloud platforms such as GCP, Azure, or AWS, with preference for GCP
- The SAP technical environment, including development and architecture
- Experience with medallion architecture using the Google Cortex Framework
- Experience with containerization technologies like Docker or Kubernetes
- Experience in PySpark, Cloud Dataproc, Cloud Dataflow, Terraform, Hadoop, Hive, Apache Spark, Cloud Spanner, Cloud SQL, and Data Fusion
About Golden Technology
Golden Technology was founded in 1997 with the goal of developing people and driving innovation. In other words, our aim is to pair world-class technologists like you with amazing companies that are doing impactful work.
After an initially slow start, and way too many late nights playing Final Fantasy 7, Golden Technology built a unique recruiting engine that would quickly prove itself by delivering top-tier talent to Fortune 500 clients across the US, time and time again.
Golden Technology has built a culture around family and helping the people we touch succeed in both their work and personal lives. Oh, everyone says that? Try us, you’ll see it.
We’re helping people find their calling and their dream jobs, and through our Golden Community initiatives we are actively working to improve the communities in which we work, live, and play.