
Machine Learning Hardware Architect, Accelerator
- Mountain View, CA
- Permanent
- Full-time
Minimum qualifications:
- Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
- 8 years of experience in silicon core architectural domains, including computer architecture, TPU or parallel processor (VPU/DSP) architecture, micro-architecture and silicon design.
Preferred qualifications:
- Master's degree or PhD in Electrical Engineering, Computer Engineering or Computer Science, with an emphasis on computer architecture.
- Experience in architecting and designing machine learning hardware IP in SoCs for machine learning networks.
- Experience collaborating cross-functionally with product management, SoC architecture, IP design and verification, ML algorithm and software development teams.
- Experience in algorithms for machine learning accelerators and compute cores.
- Experience in micro-architecture, power and performance optimization.
- Experience in interconnect/fabric, caching and security architectures.
Responsibilities:
- Develop TPU (Tensor Processing Unit) architecture for next-generation tensor SoCs to improve performance, power efficiency and area based on machine learning workload analysis.
- Define the product roadmap for machine learning accelerator capabilities on Systems on a Chip (SoCs) for various Google devices by collaborating with Google Research and silicon product management teams.
- Drive hardware Intellectual Property (IP) architecture specifications into design implementation for SoCs by partnering with core IP design teams across global sites.
- Align with SoC architects and system or experience architects to address dynamic power, performance and area requirements at the SoC level for multimedia and artificial intelligence (AI) use cases and experiences.
- Define and deliver hardware IP architecture specifications that meet power, performance, area and image quality goals, while owning the process through tape-out and product launch.