
Delivery Consultant - Machine Learning Engineer, AWS Professional Services
- Boston, MA
- Permanent
- Full-time
Key job responsibilities
As an experienced technology professional, you will be responsible for:
1. Implementing end-to-end AI/ML and GenAI projects, from understanding business needs to data preparation, model development, deployment and monitoring.
2. Designing and implementing machine learning pipelines that support high-performance, reliable, scalable, and secure ML workloads.
3. Designing scalable ML solutions and operations (MLOps) using AWS services and leveraging GenAI solutions when applicable.
4. Collaborating with cross-functional teams (Applied Science, DevOps, Data Engineering, Cloud Infrastructure, Applications) to prepare, analyze, and operationalize data and AI/ML models.
5. Serving as a trusted advisor to customers on AI/ML and GenAI solutions and cloud architectures.
6. Sharing knowledge and best practices within the organization through mentoring, training, publication, and creating reusable artifacts.
7. Ensuring solutions meet industry standards and supporting customers in advancing their AI/ML, GenAI, and cloud adoption strategies.

This is a customer-facing role with potential travel to customer sites as needed.

About the team
About AWS:
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description below, we encourage you to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never require sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

BASIC QUALIFICATIONS
- 3+ years of experience in cloud architecture and implementation
- Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience
- 5+ years of experience in data, software, or ML engineering, with a strong understanding of distributed computing (e.g., data pipelines, training and inference, ML infrastructure design)
- 3+ years of experience in predictive modeling, natural language processing, and deep learning, with a proven track record of building and deploying ML models in the cloud (e.g., Amazon SageMaker or similar)
- 3+ years of experience developing with SQL, Python, and at least one additional programming language (e.g., Java, Scala, JavaScript, TypeScript), with proficiency in leading ML libraries and frameworks (e.g., TensorFlow, PyTorch)

PREFERRED QUALIFICATIONS
- AWS experience preferred, with proficiency in a range of AWS services (e.g., SageMaker, Bedrock, EC2, ECS, EKS, OpenSearch, Step Functions, VPC, CloudFormation)
- AWS Professional certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional)
- Experience with automation (e.g., Terraform, Python), Infrastructure as Code (e.g., CloudFormation, CDK), and containers and CI/CD pipelines
- Knowledge of common security and compliance standards (e.g., HIPAA, GDPR)
- Strong communication skills with ability to explain complex concepts to technical and non-technical audiences
- Experience building ML pipelines with MLOps best practices, including: data preprocessing, model hosting, feature selection, hyperparameter tuning, distributed & GPU training, deployment, monitoring, and retraining
- Experience with MLOps tools (e.g., MLflow, Kubeflow) and orchestration tools (e.g., Airflow, AWS Step Functions)
- Experience building applications using GenAI technologies (LLMs, vector stores, LangChain, prompt engineering)

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,200/year in our lowest geographic market up to $204,300/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit . This position will remain posted until filled. Applicants should apply via our internal or external career site.