Title: Data Engineer
Location: Dallas, Philadelphia, Cleveland (Hybrid)
Job Type: Contract/Full-time
Experience: 9+ years
Required Skills:
Strong programming skills in Python and PySpark.
Hands-on experience with the Hadoop ecosystem: HDFS, Hive, HBase, Sqoop, Oozie, etc.
Experience with Spark SQL and the DataFrame API.
Solid understanding of data warehouse and ETL concepts.
Experience with workflow orchestration tools (e.g., Airflow, Oozie, Luigi).
Familiarity with cloud platforms (AWS, Azure, or GCP) and services like EMR, Dataproc, or HDInsight.
Strong understanding of SQL and performance optimization.
Proficiency in version control (Git) and CI/CD processes.