
Data Engineer / Sr Data Engineer (GCP, BigQuery)
- Snowflake, AZ
- $60,000-$160,000 per year
- Permanent
- Full-time
- Design, build, and maintain scalable ETL pipelines using cloud-native GCP services (BigQuery, Pub/Sub, etc.)
- Automate deployment and configuration using Terraform, Helm, and Kubernetes
- Develop dashboards and other tooling around Looker performance metrics
- Ensure Looker datasets are backed by performant and reliable queries
- Diagnose and address performance issues across dashboards, LookML codebases, Explores, and derived tables
- Build and maintain CI/CD pipelines for microservices and data workflows
- Support and maintain custom tools for data processing and orchestration
- Monitor and troubleshoot data lake operations and cloud resources
- Design efficient and logical data models that align with business reporting needs
- Write complex SQL queries for data extraction, transformation, and reporting
- Collaborate with data engineers and analysts to deliver reliable, high quality data products
- Assess the opportunities and risks of candidate solutions, providing the insight and input needed for technical decisions as we continuously build for scalability and security while maintaining high velocity
- Enhance data lake operations and optimize complex SQL queries for data extraction, transformation, and reporting
- Support continuous improvement of internal processes and documentation as a champion of principles-based approaches to design, implementation, and testing
- Share advanced knowledge to support the team as we continue building our data infrastructure over time
- Can work remotely or from an Applied office
- 3+ years of experience with Google BigQuery for data warehousing
- Experience with GCP (Pub/Sub, Dataflow, Cloud Functions, IAM)
- Experience in Looker environments, including performance metrics and monitoring
- Experience with CI/CD pipelines and version control systems (e.g., Git)
- Proficiency in SQL, including stored procedures, window functions, and query optimization
- Proficiency in building and maintaining ETL pipelines to process large datasets
- Knowledge of data modeling best practices (star/snowflake schemas, normalization)
- Ability to work cross-functionally in an agile, fast-paced environment
- Experience with scripting languages (Python, Bash, etc.); Go experience a plus
- Proficiency with Kubernetes, Terraform, CI/CD tools (GitHub Actions, Jenkins, etc.)
- Familiarity with CDC tools (Debezium, Kafka, etc.) and data lake architecture
- 5+ years of additional experience with Google BigQuery for data warehousing
- Proven impact managing and enhancing Looker environments, including performance metrics and monitoring
- Advanced knowledge of SQL and Python with the proven ability to optimize queries
- Demonstrated ability to address complex problems by proposing solutions based on advanced knowledge of data lake architectures and technical considerations
- We proudly support and encourage people with military experience, as well as military spouses, to apply
- Medical, Dental, and Vision Coverage
- Holiday and Vacation Time
- Health & Wellness Days
- A Bonus Day for Your Birthday
Our candidates’ personal information and online safety are top of mind for us. At Applied, we proactively protect your personal information and only communicate with candidates via a secure @appliedsystems.com email or through our official careers portal. Recruiters will never request payments, ask for financial account information, or ask for sensitive information such as Social Security numbers.

EEO Statement
Applied Systems is proud to be an Equal Employment Opportunity Employer. Diversity and inclusion are a business imperative and part of building our brand and reputation. At Applied, we don’t discriminate, and we are committed to recruiting, developing, retaining, and promoting regardless of race, religion, color, national origin, sexual orientation, gender identity, disability, age, veteran status, and other protected status as required by applicable law.

#LI-Remote