
Principal AI Security Architect
- Bellevue, WA
- Permanent
- Full-time
- Architect security controls, guardrails, and policy enforcement layers for LLM-driven agents and workflows.
- Define mechanisms for real-time prompt filtering, output moderation, and tool access restrictions to prevent abuse or unsafe behavior.
- Design secure multi-tenant agent runtime environments (sandboxing, isolation, permissions) for enterprise deployments.
- Implement dynamic policy enforcement for agent tool usage and sensitive data handling.
- Establish a Responsible AI framework for fairness, bias detection, hallucination control, and ethical AI usage in agentic workflows.
- Define and enforce AI model governance policies, including model versioning, explainability, and approval workflows.
- Build auditability pipelines to track model prompts, outputs, and decision-making chains (critical for compliance and forensics).
- Collaborate with legal, compliance, and risk teams to align with AI regulatory standards (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Architect privacy-preserving AI systems, including data minimization, PII redaction, encryption (at rest/in transit), and secure embedding storage.
- Ensure regional data residency and cross-border compliance (GDPR, HIPAA, CCPA).
- Design mechanisms for secure API integrations with enterprise systems (OAuth2, JWT, zero-trust patterns).
- Implement audit trails and tamper-proof logging for sensitive agent activity.
- Lead threat modeling for AI agents, including prompt injection, data exfiltration, adversarial inputs, and model poisoning attacks.
- Design AI-specific intrusion detection and anomaly detection pipelines for agent workflows.
- Define risk scoring frameworks for agents, tools, and knowledge sources used within the platform.
- Build explainability frameworks to trace agent decisions (reasoning chains, tool invocation logs).
- Enable trust dashboards for customers to audit model performance, decisions, and compliance adherence.
- Incorporate AI transparency reporting (e.g., usage logs, fairness audits) as part of platform deliverables.
- Partner with platform architects, backend engineers, and ML teams to embed security and governance into every layer of the AI stack.
- Provide technical leadership and mentorship to engineers on AI security patterns and best practices.
- Serve as the subject matter expert for internal and external security/compliance reviews, audits, and certifications.
- 10+ years in security architecture, including SaaS and AI/ML security
- Proven expertise in AI security, responsible AI frameworks, and model governance
- Strong knowledge of LLM security threats (prompt injection, data leakage, adversarial attacks) and mitigation strategies
- Experience designing policy enforcement layers, guardrails, and AI moderation pipelines
- Familiarity with NIST AI Risk Management Framework, EU AI Act, and ISO/IEC AI governance standards
- Hands-on experience with cloud security (AWS, GCP, Azure), Kubernetes security, and zero-trust principles
- Proficiency with privacy-preserving AI techniques (encryption, differential privacy, data masking)
- Understanding of auditing and forensic analysis for AI-driven systems
- Programming expertise in Java and Python with a focus on integrating AI security controls
- Prior experience securing agentic AI platforms, conversational AI systems, or autonomous agents
- Knowledge of AI explainability techniques (SHAP, LIME, model introspection) in LLM contexts
- Familiarity with secure prompt and response pipelines (LangChain, Guardrails, NeMo Guardrails, etc.)
- Contributions to open-source AI security/governance tools
- Experience in AI policy advocacy, compliance certifications (SOC2, ISO27001), or security leadership in regulated industries