
Sr Principal Software Engineer
- Chicago, IL
- $169,400-$242,000 per year
- Permanent
- Full-time
- Lead the design and implementation of next-generation map-making automation pipelines within Foundation Engineering, leveraging big data, distributed processing, and ML-driven automation frameworks.
- Architect scalable systems to process, fuse, and validate heterogeneous content (probe data, aerial imagery, sensor data, etc.) to ensure high-quality map content.
- Partner with product managers, data scientists, and engineers to develop end-to-end solutions that integrate automation, AI/ML models, and rule-based systems.
- Provide technical leadership and mentorship, setting engineering standards and best practices for automation, data engineering, and content processing.
- Evaluate emerging technologies, tools, and architectures to continually improve automation efficiency, content accuracy, and processing scalability.
- Collaborate with cross-functional teams to drive innovation, accelerate deployment cycles, and meet content quality KPIs.
- Act as a subject matter expert on big data platforms, content pipelines, and automated validation frameworks, evangelizing their adoption across teams.
- Challenging, impactful problems to solve at global scale
- Opportunities to learn and implement innovative technologies
- Work that influences the evolution of automated map-making and location services
- Freedom to design and drive solutions with high ownership
- Exposure to a variety of cutting-edge projects in big data, ML, and automation
- Constructive feedback and professional growth opportunities
- Collaborative, supportive colleagues across engineering and product teams
- Master's or Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related discipline.
- 12+ years of professional experience in large-scale software engineering, data engineering, or automation for geospatial or content-driven systems.
- Proven expertise in big data technologies such as Hadoop, Spark, Flink, Kafka, or equivalent.
- Strong hands-on experience in distributed data processing, ETL pipeline development, and automation of data-driven workflows.
- Solid programming skills in Python, Java, or Scala, along with SQL and shell scripting.
- Experience working with NoSQL databases (e.g., DynamoDB, MongoDB) and GIS datasets.
- Familiarity with containerization and deployment using Docker and orchestration tools (Kubernetes preferred).
- Knowledge of data mining, machine learning, and/or automation frameworks for large-scale content processing.
- Demonstrated ability to lead complex projects, influence stakeholders, and drive technical vision at an organizational level.
- Excellent communication skills to interact with technical and non-technical teams, including external partners.
- Experience with automation in geospatial data processing or digital map production pipelines.
- Background in software development life cycle (SDLC), including CI/CD, testing, and production deployment.
- Experience in agile development environments.
- Track record of innovation in building automation frameworks and delivering measurable improvements in system performance and content quality.