
Sr Data Engineer
- Rosemont, IL
- $165,000-$175,000 per year
- Permanent
- Full-time
- Design and implement comprehensive AWS data pipelines leveraging S3 data lake architecture, Glue ETL workflows, and serverless data processing with Lambda and Step Functions for event-driven architectures.
- Establish robust security, monitoring, and automation by implementing IAM roles and policies for secure access control and by configuring CloudWatch monitoring and alerting for pipeline health and performance metrics.
- Build and optimize Snowflake data pipelines utilizing Streams and Tasks for incremental processing, Snowpipe for continuous data loading, and stored procedures for complex transformations.
- Configure and maintain Snowflake security features, including role-based access control, Dynamic Data Masking, and Row Access Policies, and apply query optimization to manage compute costs effectively.
- Develop and maintain IDMC Cloud Data Integration mappings and workflows for complex data transformation requirements, utilizing appropriate transformation types (Router, Joiner, Aggregator, Lookup) and implementing parameterization for reusable and flexible pipeline components.
- Develop automated data validation frameworks between Informatica IDMC and Snowflake to ensure data integrity.
- Apply a solid understanding of delivery methodology (SDLC) and lead teams in implementing the solution according to the design/architecture.
- Implement Informatica PowerCenter to Informatica IDMC migration strategies for legacy data integration workflows.
- Lead cross-functional engineering teams through agile methodologies, sprint planning, and delivery management.
- Participate as an active agile team member to help drive feature refinement, user story completion, code review, etc.
- Collaborate with the data governance team and implement data governance frameworks ensuring compliance with banking regulations (GDPR, CCPA, PCI-DSS).
- Establish CI/CD practices for data engineering assets using tools such as GitHub Actions, Jenkins, and AWS CodePipeline.
- Optimize data platform performance through query tuning, compute resource management, and architectural enhancements.
- Develop data quality monitoring solutions that provide visibility into data completeness, accuracy, and timeliness.
- Create and maintain technical documentation for engineering processes, architecture decisions, and system configurations.
- Monitor and troubleshoot production data pipelines to ensure reliability and minimize business disruption.
- Establish testing practices, including unit, integration, and performance testing, for data engineering solutions.
- Evaluate emerging technologies and recommend strategic adoption to enhance data engineering capabilities.
- Define and track key performance metrics for data engineering operations, identifying areas for improvement.
- Lead knowledge transfer sessions and technical training to upskill junior engineers and consultants.
- Collaborate with enterprise architecture teams to align data engineering solutions with broader technology strategies.
- Represent the data engineering pillar in cross-divisional planning sessions and strategic initiatives, advocating for necessary resources and priorities.
- A minimum of ten (10) years of experience implementing large-scale Data & Analytics platforms in AWS, Azure, or Google Cloud, as well as on-premises and hybrid environments.
- A minimum of seven (7) years of hands-on experience in data warehousing and data integration (ELT/ETL).
- A minimum of five (5) years of robust experience in data engineering, data analytics, or a similar leadership role.
- Demonstrated experience designing and implementing scalable data pipelines using AWS services (Glue, Lambda, Step Functions, S3, EMR).
- Strong background and problem-solving skills in enterprise data warehousing, ETL/ELT development, database replication, metadata management, and data quality.
- Experience delivering technical solutions in an iterative, agile environment (Scrum/Kanban).
- Experience loading disparate data sources into data lakes: structured and semi-structured data (flat files, XML, JSON, Parquet) as well as unstructured data.
- Strong foundational skills in Python and/or Apache Spark for processing, analyzing, and innovating in data engineering.
- Demonstrated experience migrating on-premises ETL workloads to cloud-native solutions.
- Experience migrating traditional PowerCenter workflows to Informatica IDMC Cloud Data Integration.
- Strategic leadership experience with a variety of databases, including Oracle, SQL Server, and MySQL, paired with advanced knowledge of SQL and PL/SQL for sophisticated data querying, manipulation, and database management.
- Deep understanding and hands-on experience with big data platforms such as Snowflake, Redshift, BigQuery, Spark, or similar technologies.
- Strong data warehouse application knowledge, preferably in the financial/insurance domain, is required.
- Knowledge of real-time data integration patterns using technologies such as Kafka, Kinesis, or Snowpipe.
- Track record of successful large-scale data migration projects from legacy systems to modern cloud platforms.
- Exceptional ability to interpret complex data and communicate strategic insights to both technical and non-technical stakeholders in a leadership capacity.
- Exceptional critical thinking, problem-solving skills, and a strong vision for the analytics domain's future.
- A strong commitment to continuous learning, development, and staying ahead of industry trends in the data analytics domain.
- Stellar written and verbal communication skills coupled with a meticulous attention to detail and the ability to convey strategic visions and plans.
- Demonstrated leadership abilities in a collaborative, dynamic, and fast-paced environment, managing multiple priorities with strategic acumen.
- Demonstrated proficiency in agile-based development methodologies, championing methodologies like Scrum and Kanban for collaborative, innovative, and efficient data product development.
- Visionary approach to big data, with proven hands-on leadership in managing and processing voluminous, complex datasets.
- Foundational leadership in managing diverse data feeds, including batch, near real-time, and real-time, ensuring strategic, efficient data management and processing.