Hi,
Greetings from Smart IT Frame! Hope you are doing well.
Role: Data Engineer
Location: Bellevue, WA (Hybrid)
Duration: Contract
Job description:
- Experience in Azure Synapse with PySpark
- Knowledge of Big Data pipelines and Data Engineering
- Working knowledge of the MSBI stack on Azure
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
- Hands-on experience with visualization tools such as Power BI
- Implement end-to-end data pipelines using Cosmos and Azure Data Factory
- Good analytical thinking and problem-solving skills
- Good communication and coordination skills
- Able to work as an individual contributor.
Requirement Analysis:
- Create, maintain, and enhance Big Data pipelines
- Daily status reporting and interaction with leads
- Version control (ADO/Git) and CI/CD
- Marketing campaign experience
- Data platform and product telemetry; analytical thinking
- Data validation of new streams
- Data quality checks of new streams
- Monitoring of data pipelines created in Azure Data Factory
- Updating the tech spec and wiki page for each pipeline implementation
- Updating ADO on a daily basis.
Skills:
Mandatory Skills: Java, Python, Scala, Snowflake, Azure Blob Storage, Azure Data Factory, Azure Functions, Azure SQL, Azure Synapse Analytics, Azure Data Lake, ANSI SQL, Databricks, HDInsight
Good to Have Skills: Python, Azure Blob Storage, Azure Data Factory, Azure Data Lake