Data Engineer (Hybrid – Ridgefield, CT) - 1760
Location: Hybrid – on-site 2–3 days per week in Ridgefield, CT
Relocation Assistance: Considered
Employment Type: Full-time – no consulting or corp-to-corp arrangements
Salary Range: $140K - $185K + bonus
Work Authorization: US citizens or Green Card holders only. This role is not eligible for H-1B, TN, F-1, or OPT sponsorship.
Overview
We are looking for a hands-on Data Engineer to design, build, and maintain scalable data platforms and pipelines in a modern cloud environment. You will play a key role in shaping our data architecture, optimizing data flow, and ensuring data quality and availability across the organization.
This role offers the opportunity to contribute directly to meaningful work that supports the development and delivery of life-changing products. You will collaborate with global teams and be part of a culture that values impact, growth, balance, and well-being.
What You’ll Do
- Design, build, and optimize data pipelines and ETL/ELT workflows to support analytics and reporting.
- Partner with architects and engineering teams to define and evolve our cloud-based data architecture, including data lakes, data warehouses, and streaming data platforms.
- Work closely with data scientists, analysts, and business partners to understand requirements and deliver reliable, reusable data solutions.
- Develop and maintain scalable data storage solutions (e.g., AWS S3, Redshift, Snowflake) with a focus on performance, reliability, and security.
- Implement data quality checks, validation processes, and metadata documentation.
- Monitor, troubleshoot, and improve pipeline performance and workflow efficiency.
- Stay current on industry trends and recommend new technologies and approaches.
Qualifications
Data Engineer (Mid-Level)
- Strong understanding of data integration, data modeling, and the software development life cycle (SDLC).
- Experience working on project teams and delivering within Agile environments.
- Hands-on experience with AWS data services (e.g., Glue, Lambda, Athena, Step Functions, Lake Formation).
- Education and experience (one of the following): Associate's degree plus 8 years of relevant experience, Bachelor's plus 4 years, or Master's plus 2 years; alternatively, Associate's plus 4 years, Bachelor's plus 2 years, or Master's plus 1 year.
- Expert-level proficiency in at least one major cloud platform (AWS preferred).
- Advanced SQL and strong understanding of data warehousing and data modeling (Kimball/star schema).
- Experience with big data processing (e.g., Spark, Hadoop, Flink) is a plus.
- Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Familiarity with CI/CD pipelines and DevOps principles.
- Proficiency in Python and SQL (required).
Desired Skills
- Experience with ETL/ELT tools (e.g., Airflow, dbt, AWS Glue, ADF).
- Understanding of data governance and metadata management.
- Experience with Snowflake.
- AWS certification is a plus.
- Strong problem-solving skills and ability to troubleshoot pipeline performance issues.