Role Overview
We are seeking a hands-on Data Engineer with deep expertise in data onboarding (streaming and historical), ETL pipeline development, and dashboard development and visualization. This role is critical to scaling our analytics infrastructure and supporting high-impact projects across the Microsoft Fabric, Azure Data Explorer (ADX/Kusto), and broader Azure ecosystems.
Typical Day in the Role
- Purpose of the Team: The operations team, specifically its analytics subgroup, focuses on migrating standalone data estates into Fabric and integrating data from new vendors to support analytics and dashboard creation.
- Key Projects: Migrating data to Fabric, onboarding data from a new vendor, rebuilding dashboards and analytics workflows, and integrating telemetry for unified operational views.
- Typical Task Breakdown & Operating Rhythm:
  - Onboarding and ingesting data from diverse sources.
  - Building scalable, automated ETL pipelines using Azure Data Factory, Logic Apps, and Kusto Query Language.
  - Designing and maintaining real-time dashboards with Azure Data Explorer and Power BI.
  - Implementing data modeling and governance standards.
  - Collaborating with cross-functional teams on infrastructure integration and telemetry.
Key Responsibilities
- Data Onboarding & Parsing: Ingest and normalize structured and unstructured data, both live and historical, from diverse sources (e.g., Event Hubs, SQL, Vector).
- ETL Pipeline Development: Build scalable, automated pipelines using Azure Data Factory (ADF), Logic Apps, and Kusto Query Language (KQL).
- Dashboard Development: Design and maintain real-time dashboards using Fabric (Power BI), Azure Data Explorer (ADX), and Azure Data Studio.
- Data Modeling & Governance: Implement robust data models and enforce governance standards across analytics workflows.
- Infrastructure Integration: Collaborate with cross-functional teams to integrate telemetry, license usage, and operational metrics into unified views.
- Performance Optimization: Tune queries and pipelines for speed, reliability, and cost-efficiency.
Compelling Story & Candidate Value Proposition
- What Makes This Role Interesting?
  - Opportunity to work on a cutting-edge migration to Fabric.
  - Involvement in building new analytics infrastructure and dashboards from the ground up.
  - Exposure to modern data engineering tools and real-time telemetry integration.
  - Flexibility to work remotely from anywhere in the US.
Candidate Requirements
- Years of Experience Required: Minimum 5 years, ideally 5-7 years for a Level 3 candidate.
- Degrees or Certifications Required: Not strictly required; experience is prioritized. Certifications such as Microsoft Certified: Azure Data Engineer Associate, Microsoft Certified: Power BI Data Analyst Associate, and Splunk Core Certified Power User are nice to have.
- Best vs. Average: Top candidates will have hands-on experience with Fabric, Azure Data Factory, Kusto, and Splunk, and can demonstrate end-to-end project delivery. Average candidates may lack depth in these tools or lack practical, end-to-end delivery experience.
- Performance Indicators: Within the first month, the candidate should be able to onboard data, transform it, and deliver a simple dashboard built on the ingested data.
Required Skills
- Proficiency in Fabric, Azure Data Explorer (ADX/Kusto), SQL, KQL, Azure Data Studio, Event Hubs, Logic Apps, Azure Data Factory (ADF), Vector, and Splunk.
- Strong scripting skills in Python or Bash.
- Experience with data visualization tools (ADX, Power BI, Grafana).
- Familiarity with license operations, SLI/SLO tracking, and real-time telemetry.
- Understanding of data governance, security, and compliance in cloud environments.
Preferred Certifications
- Microsoft Certified: Azure Data Engineer Associate
- Microsoft Certified: Power BI Data Analyst Associate
- Splunk Core Certified Power User (not required but a plus)
Top 3 Hard Skills Required & Years of Experience
- Data Onboarding & Ingestion - Minimum 5 years of hands-on experience.
- ETL Pipeline Development (Azure Data Factory, Logic Apps, Kusto) - Minimum 5 years of hands-on experience.
- Dashboard Development (Azure Data Explorer, Power BI) - Minimum 5 years of hands-on experience.