JOB TITLE: Senior Data Engineer
REPORTS TO: Director of Technology
DEPARTMENT: Business Intelligence
FLSA STATUS: Exempt
About LimeIQ
LimeIQ is an innovative FinTech/InsurTech platform transforming the way the global insurance marketplace ingests, validates, aggregates, and analyzes its data. LimeIQ empowers managing agents, brokers, underwriters, and delegated claims authorities with real-time insights—delivered without disrupting existing workflows or requiring costly system overhauls.
At the core of our platform is LincolnSmart™, a proprietary ingestion and transformation engine that streamlines bordereaux (BDX) processing at scale. With a seamless, API-free experience, LimeIQ eliminates the need for manual reconciliation and standardizes fragmented data into a centralized, structured format.
Through our four key pillars—Ingestion, Validation, Aggregation, and Intelligence—we help insurance clients unlock:
✔ Real-time visibility into portfolio and claims data
✔ Automated data validation and FCP reconciliation
✔ Custom reporting for reserve, incurred, and catastrophe analysis
✔ KPI, cost, and portfolio performance insights for underwriting and finance teams
Our mission is to build the leading platform for insurance data intelligence by powering intelligent applications and fostering a data-driven culture across the global insurance ecosystem.
Position Summary
We’re looking for an experienced Senior Data Engineer to join our growing Data Insights & Analytics team. You’ll play a critical role in designing, building, and scaling the data infrastructure that powers our core products and client-facing insights.
In this role, you’ll architect data solutions using modern Azure technologies, including Microsoft Fabric, Synapse, Azure SQL, and Data Factory. You'll develop robust pipelines to process, transform, and model complex insurance data into structured, reliable datasets that fuel analytics, dashboards, and data science products.
Beyond your technical responsibilities, your daily routine will include participating in standup meetings, managing work items based on your capacity, collaborating with our growing technology team to define new projects and initiatives, and engaging in hands-on development. You will also interact directly with the teams developing the tools we use, enabling you to provide direct product feedback and see your input drive changes in those products over time.
We empower our team to achieve their full potential. We come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our founder’s values, we are committed to cultivating an inclusive work environment where all employees can positively impact our culture every day.
The position is based at LimeIQ’s Dallas, TX headquarters.
Key Responsibilities
·      Design and build robust, scalable ETL/ELT pipelines using Azure SQL, Azure Data Factory, Synapse, and Microsoft Fabric
·      Model and transform raw insurance data into structured datasets for reporting and analytics use cases
·      Collaborate with analysts, engineers, and business stakeholders to align data solutions with company goals
·      Implement orchestration logic, metadata-driven processing, and Spark-based transformations (PySpark, Spark SQL)
·      Optimize performance of data workflows and pipelines to support real-time and batch processing scenarios
·      Drive best practices in data governance, documentation, code quality, and DevOps automation
·      Monitor production workloads, troubleshoot pipeline failures, and support live environments
·      Evaluate new Azure data services and tools for potential adoption
Key Skills & Expertise
·      Data Engineering: Advanced ETL/ELT experience with large, complex data sets
·      Azure Stack: Strong knowledge of Azure SQL, Azure Data Factory, Synapse Analytics, and Microsoft Fabric
·      Spark Ecosystem: Experience with Spark development using PySpark and Spark SQL (especially in Fabric notebooks)
·      Data Modeling: Expertise in dimensional modeling, normalization, and schema design
·      Coding Proficiency: Fluent in Python, SQL, Spark SQL, and scripting for orchestration and automation
·      Performance Tuning: Familiarity with optimizing query performance, parallel processing, and cloud-based workloads
Qualifications
·      Bachelor’s degree in Computer Science, Data Engineering, or related field (Master’s a plus)
·      5+ years of experience in data engineering or analytics engineering roles
·      3+ years working with Azure data services and cloud-native platforms
·      Experience with Microsoft Fabric is highly desirable
·      Proven ability to transform business requirements into scalable, maintainable data workflows
·      Experience working with Lloyd’s of London bordereaux data is a strong plus, particularly in contexts involving ingestion, validation, and transformation of complex insurance data sets.
What We Offer
·      A high-impact role within a fast-growing FinTech/InsurTech company
·      A collaborative environment that values curiosity, innovation, and continuous learning
·      Career growth opportunities and exposure to modern Azure analytics tools and practices
·      A mission-driven workplace built on integrity, inclusion, and mutual respect