Role: Senior Data Engineer
Location: Hybrid schedule of 2-3 days onsite in Richmond, VA
Duration: 6-12 months
MOI: 2 video interviews. The first is a resume deep dive; the second is highly technical and includes a live coding assessment in Java/Spark.
Must haves:
- 5+ years of Java programming
- 5+ years of experience with Spark
- Current experience working in an AWS environment
- 5+ years of experience as a Data Engineer
The Senior Data Engineer will design and deliver large-scale data pipelines and cloud-native platforms, using Java, Apache Spark, and AWS to support mission-critical product initiatives. This is a hands-on, high-visibility role that requires strong live coding skills and deep technical expertise across Spark and AWS.
What You’ll Do
- Architect and implement data pipelines using Java and Apache Spark (batch and streaming).
- Build and optimize cloud-native infrastructure in AWS (EMR, S3, Glue, Redshift, Lambda, etc.).
- Write, refactor, and optimize code for efficiency; live coding and scripting are part of both the interview and the daily work.
- Partner with product, data science, and platform teams to deliver secure, scalable solutions.
- Drive engineering standards, performance improvements, and technical innovation across the team.
Required Experience
- 7–8 years of professional data engineering experience.
- Strong Java coding ability (must be able to write Spark jobs in Java under interview conditions).
- Deep Spark expertise, including joins, accumulators, broadcast variables, and performance tuning.
- Hands-on AWS pipeline experience (not just through platform/infrastructure engineers).
- Strong SQL skills with experience in data reconciliation and validation.
- Excellent communication skills to clearly explain technical decisions in real time.