Description:
sg360° partners with Fortune 1000 brands to pursue unmatched direct marketing performance. We leave no stone unturned in our efforts to drive smarter targeting, stronger messaging, and improved ROI. Everything we do - audience analytics, strategic planning, creative development, production, and distribution - we do in the pursuit of performance.
When you join us, you gain access to a comprehensive benefits package, including paid time off, holiday pay, health, dental, and vision insurance, life insurance, an education assistance program, short- and long-term disability, wellness resources, identity theft protection, and a 401k with employer match. Be part of a legacy of excellence and growth with sg360°!
We are a technology-driven company leveraging AWS cloud infrastructure to build and host high-performance online applications. Our platforms are developed using modern languages and frameworks such as Java/J2EE, Python, Spring Boot, Angular, and SQL. We are looking for an experienced AWS Data Engineer with deep hands-on expertise in Python (PySpark) and SQL. The ideal candidate will design, build, and optimize scalable data pipelines and lakehouse architectures on AWS. This role will involve exploratory data analysis, transforming and modeling large datasets, implementing Change Data Capture (CDC) patterns, and enabling analytics and reporting through Redshift, Athena, and QuickSight.
SG360° does not offer employment-based visa sponsorship now or in the future. Candidates must be legally authorized to work in the United States without the need for current or future visa sponsorship. This policy applies to all applicants, including those whose employment authorization may expire in the future and who would require sponsorship to remain employed.
PRIMARY RESPONSIBILITIES
Ingesting & Storing Data
- Build and manage data lakes on Amazon S3.
- Configure and maintain AWS Glue Crawlers and the Glue Data Catalog (a minimal sketch follows this list).
- Implement streaming and CDC ingestion using AWS DMS and Amazon Kinesis.
- Apply proper IAM configurations to ensure secure data access.
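For illustration only, a minimal sketch of the cataloging side of this work: registering a raw S3 prefix with the Glue Data Catalog via boto3. The bucket, IAM role, database, and crawler names are hypothetical placeholders, not sg360° systems.

    # Minimal sketch: catalog a raw S3 data-lake prefix with a Glue Crawler.
    # Bucket, role, database, and crawler names are hypothetical.
    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    glue.create_crawler(
        Name="raw-orders-crawler",
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # needs S3 read + Glue permissions
        DatabaseName="raw_lake",
        Targets={"S3Targets": [{"Path": "s3://example-data-lake/raw/orders/"}]},
        SchemaChangePolicy={
            "UpdateBehavior": "UPDATE_IN_DATABASE",  # evolve schemas in place
            "DeleteBehavior": "LOG",                 # log, rather than drop, removed columns
        },
    )
    glue.start_crawler(Name="raw-orders-crawler")

In practice a crawler like this would typically run on a schedule or be triggered after each ingestion batch, for example from a Step Functions workflow.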
Transforming & Querying Data
- Develop robust ETL pipelines using AWS Glue (PySpark and Python Shell jobs); a minimal sketch follows this list.
- Leverage AWS Lambda for lightweight, event-driven transformations.
- Query and analyze datasets using Amazon Athena over S3.
- Design and optimize data warehouses using Amazon Redshift (including stored procedures and complex SQL).
- Support business insights through Amazon QuickSight dashboards and visualizations.
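For illustration only, a minimal sketch of a Glue PySpark ETL job along these lines: it reads a cataloged table, deduplicates it, and writes partitioned Parquet back to S3 where Athena can query it. The database, table, column, and bucket names are hypothetical placeholders.

    # Minimal sketch of a Glue PySpark ETL job: read a cataloged table,
    # transform it, and write partitioned Parquet to S3 for Athena.
    # Database, table, column, and bucket names are hypothetical.
    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from pyspark.sql import functions as F

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw data registered in the Glue Data Catalog (e.g., by a crawler).
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="raw_lake", table_name="orders"
    ).toDF()

    # Deduplicate on the business key and derive a partition column.
    curated = orders.dropDuplicates(["order_id"]).withColumn(
        "order_date", F.to_date("order_ts")
    )

    # Partitioned Parquet on S3 is directly queryable from Amazon Athena.
    curated.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-data-lake/curated/orders/"
    )

    job.commit()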
Building Scalable Data Platforms
- Design and implement lakehouse architectures (e.g., Iceberg on S3 with time travel; a minimal sketch follows this list).
- Apply fine-grained security controls via AWS Lake Formation.
- Orchestrate data workflows using AWS Step Functions.
- Integrate DynamoDB and SQS for event-driven and metadata-driven processing.
- Tune performance using compaction, distribution keys, sort keys, and parallelism strategies.
- Manage costs via intelligent use of S3 storage tiers, Redshift RA3 nodes, and Glue job optimization.
- Implement and monitor data quality and observability mechanisms.
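For illustration only, a minimal sketch of Iceberg time travel with Spark SQL. It assumes Spark 3.3+ with the Iceberg runtime and an Iceberg catalog named "lakehouse" already configured; the table names are hypothetical placeholders.

    # Minimal sketch of Iceberg time travel on S3 via Spark SQL.
    # Assumes Spark 3.3+ with the Iceberg runtime and an Iceberg catalog
    # named "lakehouse"; table names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("iceberg-time-travel").getOrCreate()

    # Current state of the table.
    spark.sql("SELECT count(*) FROM lakehouse.curated.orders").show()

    # Query the table as it existed at a point in time.
    spark.sql(
        "SELECT count(*) FROM lakehouse.curated.orders "
        "TIMESTAMP AS OF '2024-01-01 00:00:00'"
    ).show()

    # Inspect snapshot history (snapshot_id, committed_at, operation, ...)
    # to pick an id for VERSION AS OF queries.
    spark.sql("SELECT * FROM lakehouse.curated.orders.snapshots").show()

The same snapshot history also drives maintenance work such as compaction and snapshot expiration, which ties into the performance tuning and cost management responsibilities above.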
Requirements:
MINIMUM REQUIRED EDUCATION & EXPERIENCE
- Bachelor's degree in Computer Science, Information Systems, or equivalent experience.
- 5+ years of Python development experience (must include data-focused work).
- 3+ years of PySpark experience developing large-scale ETL jobs.
- Advanced SQL skills, including query optimization and analytic functions.
- Strong experience with the AWS data ecosystem: Glue, Redshift, Athena, DMS, S3, IAM, Lambda, and QuickSight.
- Working knowledge of Change Data Capture (CDC) techniques and tools (e.g., DMS, Debezium).
- At least 1 year of data visualization experience (QuickSight, Power BI, or Tableau).
- Solid understanding of data modeling, data partitioning, schema design, and metadata management.
- Familiarity with data quality frameworks and observability tools.
ADDITIONAL ELIGIBILITY QUALIFICATIONS
- Experience managing EMR clusters for high-volume Spark workloads.
- Exposure to data governance, lineage, and cataloging at scale.
- Experience with DevOps for data (CI/CD pipelines, IaC with Terraform/CloudFormation).
ON-SITE ROLE
This is an on-site role. Candidates must be within a commutable distance or willing to relocate independently.
SG360° is an Equal Opportunity Employer. We make employment decisions based on merit, qualifications, and business needs. SG360° does not discriminate on the basis of race, color, religion, sex, national origin, age, disability, veteran status, or any other status protected by applicable law.
SG360° will provide reasonable accommodations to individuals with disabilities in the hiring process, in accordance with applicable laws. If you require an accommodation to complete your application, please contact the location to which you are applying and ask to speak with the Human Resources representative.