
Principal Data Engineer
- Greenwood Village, CO
- $10,000 per year
- Permanent
- Full-time
- Lead the design and implementation of highly scalable data pipelines and ETL processes.
- Architect and optimize data storage and retrieval mechanisms, leveraging cloud-native solutions.
- Drive data governance, quality, and metadata management initiatives.
- Mentor junior engineers and promote innovation within the team.
- Communicate effectively with stakeholders at all levels to ensure alignment on data initiatives.
- Bachelor's Degree in Computer Science, Information Technology, or a related field (required).
- The Bachelor's degree may be substituted with four years of related experience (in addition to the experience minimally required for the role) or an equivalent combination of education and related experience.
- Master's Degree in Computer Science, Information Technology, or a related field (preferred).
- Eight years of relevant experience in data engineering, with a proven track record of designing and implementing scalable data solutions (required).
- Prior experience in a leadership or senior technical role within a data engineering team (required).
- Prior experience with cloud platforms, particularly AWS, and proficiency in leveraging AWS-native services for data processing, storage, and analytics (required).
- Prior experience with other cloud platforms, such as Google Cloud Platform (GCP) or Microsoft Azure (preferred).
- Prior experience with on-premises development environments (preferred).
- Prior experience building self-service tooling and platforms (preferred).
- Prior experience designing and building Data Mesh architecture platforms (preferred).
- Expertise in designing and implementing complex data pipelines and ETL processes, preferably using cloud-native technologies.
- In-depth knowledge of data architecture principles, including data modeling, schema design, and optimization techniques.
- Proficiency in programming languages commonly used in data engineering, such as Python, Scala, or Java.
- Strong understanding of distributed computing frameworks and big data technologies, such as Apache Spark, Hadoop, or Flink.
- Excellent problem-solving and analytical skills, with the ability to troubleshoot complex data engineering challenges.
- Effective communication and collaboration skills, with the ability to work cross-functionally and influence decision-making.
- Core Competencies:
  - Expertise with the AWS development environment and services.
  - Strategic thinking and planning.
  - Continuous learning and innovation.
  - Collaboration and teamwork.
  - Results-driven mindset.
- Certifications in cloud computing or data engineering, particularly AWS certifications (preferred).
- Familiarity with containerization and orchestration technologies, such as Docker and Kubernetes (preferred).
- Knowledge of data governance frameworks and regulatory requirements, such as GDPR or CCPA (preferred).
- A passion for building and running continuous integration pipelines (preferred).
- Contributions to open-source projects (e.g., operators in Airflow) (preferred).