Spartan Capital provides world-leading, data-driven finance solutions for small and medium-sized businesses (SMBs). We are a dynamic, fast-growing FinTech company revolutionizing the financial services industry with cutting-edge technology and innovative solutions.
Job Summary
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and architectures that facilitate the collection, storage, and analysis of large datasets. You will work closely with data scientists and analysts to ensure that data is accessible and usable for business intelligence and analytics purposes.
Duties
- Design and implement data processing systems using AWS, Azure Data Lake, Hadoop, and Spark.
- Develop ETL processes to extract, transform, and load data from various sources into data warehouses.
- Collaborate with cross-functional teams to define data requirements and ensure alignment with business objectives.
- Utilize SQL, Oracle, Microsoft SQL Server, and other database technologies for data storage and retrieval.
- Create and maintain documentation related to database design, ETL processes, and data architecture.
- Implement analytics solutions using tools such as Looker for reporting and visualization.
- Conduct model training and support machine learning initiatives by providing clean datasets.
- Optimize existing data pipelines for performance improvements.
- Engage in Agile methodologies to enhance project delivery efficiency.
Skills
- Proficiency in programming languages such as Java, Python, and SQL.
- Experience with big data technologies including Hadoop, Apache Hive, Spark, and Informatica.
- Familiarity with cloud platforms like AWS or Azure Data Lake.
- Strong understanding of database design principles and experience with relational databases (Oracle, Microsoft SQL Server).
- Knowledge of ETL tools such as Talend or similar frameworks.
- Ability to work with linked data concepts and RESTful APIs for data integration.
- Competence in shell scripting (Bash/Unix shell) for automation tasks.
- Excellent analytical skills with the ability to interpret complex datasets effectively.
- Experience in Agile project management methodologies is a plus.
- Familiarity with VBA for automation tasks can be advantageous.

Join us in leveraging your expertise in data engineering to drive impactful business decisions through effective data management.
Required Experience:
- 5+ years of experience building complex data pipelines and working with both technical and business stakeholders
- Experience in at least one primary language (e.g., Java, Scala, Python) and SQL (any variant)
- Experience with technologies like BigQuery, Spark, and AWS Redshift, and with streaming via Kafka or Kinesis
- Experience creating and maintaining ETL processes
- Experience designing, building, and operating a data lake or data warehouse
- Experience with DBMS and SQL tuning
- Strong fundamentals in big data and machine learning
Responsibilities
- Spartan Capital is looking for a Data Engineer to join the Credit Modeling team and conceptualize, design, and implement improvements to ETL processes and data, communicating independently with our data-hungry stakeholders
- Design, build, and maintain distributed batch and real-time data pipelines and data models
- Enable real-life, actionable use cases for our data with a user- and product-oriented mindset
- Be curious and eager to work across a variety of engineering specialties (Data Science and Machine Learning, to name a few)
- Enforce privacy and security standards by design
Preferred Skills
Methodologies: SDLC, Agile, Waterfall
Programming Languages: Python, SQL, R
Libraries: TensorFlow, PyTorch, Scikit-learn, Keras, Pandas, NumPy, SciPy
Big Data and ETL Tools: PySpark, Apache Kafka, Hadoop (HDFS, Hive), Apache Airflow, dbt
Databases & Warehousing: MySQL, PostgreSQL, SQL Server, Snowflake, MongoDB
Data Visualization: Tableau, Power BI, Advanced Excel, Matplotlib
Cloud Platforms: AWS (S3, Glue, Redshift), Azure (Data Factory, Databricks, Blob Storage)
Version Control and CI/CD: Git, GitHub, Bitbucket, Azure DevOps, Jenkins
Infrastructure as Code and Containerization Tools: Terraform, Docker, Kubernetes
Soft Skills: Problem-Solving, Effective Communication, Cross-Functional Collaboration, Attention to Detail, Stakeholder Management
Job Type: Full-time
Pay: $100,900.24 - $155,234.70 per year
Benefits:
- 401(k)
- Dental insurance
- Health insurance
- Life insurance
- Paid time off
- Professional development assistance
- Relocation assistance
- Vision insurance
Experience:
- ETL: 5 years (Required)
- Big data: 5 years (Required)
Ability to Commute:
- Hazlet, NJ 07730 (Required)
Ability to Relocate:
- Hazlet, NJ 07730: Relocate before starting work (Required)
Work Location: In person