Title: Data Engineer
Location: Greenville (hybrid: 3 days in office, 2 days remote)
Responsibilities
- Communicate with non-technical stakeholders to determine technical solutions to business problems.
- Maintain an accurate and comprehensive inventory of data, data systems, and data storage
- Integrate external systems with internal systems to ensure proper data flow
- Process file-based data extracts using data retrieval and management tools to provide timely loading of critical business data
- Implement infrastructure required to optimize ETL and ELT operations across a variety of data sources
- Model and assemble data sets that meet functional and technical business requirements
- Provide expertise in data warehousing and data delivery across all departments of the organization
- Design, build, and maintain data delivery solutions in accordance with governing data architecture patterns
- Assist business and technology stakeholders in delivering secure data solutions and projects
Requirements
- Ability to problem-solve, work independently, and provide creative solutions for the business using data
- Strong communication skills with both internal team members and external business stakeholders
- Expertise in Python and Python scripting
- Skilled in working with cloud-based data warehousing systems (e.g., Azure Data Factory, Snowflake) for both internal and customer-facing applications
- Skilled in working with large-scale data sets
- Microsoft SQL Server, Integration Services (SSIS), Analysis Services (SSAS), and Reporting Services (SSRS) required
- Advanced SQL skills, with expertise in creating and maintaining data structures in SQL
- 3+ years of hands-on technical experience in data engineering
- Bachelor's degree in Computer Science, Information Technology, or a related field
Preferred Qualifications
- Previous experience working in the financial services/consumer finance industry is a plus
- Experience with Microsoft Power BI, or a comparable business intelligence and reporting platform, is preferred.
- Proficiency in Electronic Data Interchange (EDI) is a plus.
Duties
- Design, develop, and implement ETL processes to extract, transform, and load data from various sources into data warehouses.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Build and maintain scalable data pipelines using technologies such as Apache Hive, Spark, Hadoop, and Azure Data Lake.
- Ensure high-quality data by implementing validation checks and monitoring systems for data integrity.
- Utilize SQL for database querying and management across platforms including Microsoft SQL Server, Oracle, and other relational databases.
- Develop analytics solutions using Looker for visualizing data insights.
- Write efficient code in Python or Java for data processing tasks.
- Implement RESTful APIs for seamless integration between applications and databases.
- Participate in Agile development processes to ensure timely delivery of projects.
- Conduct model training and analysis to improve machine learning model performance.
Experience
- Proven experience as a Data Engineer or in a similar role with a strong understanding of database design principles.
- Proficiency in big data technologies such as Hadoop, Spark, and Informatica is highly desirable.
- Familiarity with linked data concepts and analytics methodologies.
- Experience with shell scripting (Bash/Unix) for automation tasks is a plus.
- Knowledge of Talend for ETL processes is an advantage.
- Strong analytical skills with the ability to troubleshoot complex issues effectively.
- Experience working with Agile methodologies to enhance project delivery.

If you are passionate about working with large datasets and have the technical skills required to drive impactful data solutions, we encourage you to apply for this exciting opportunity as a Data Engineer.