
Data Engineer (Fabric, PySpark, Salesforce) - 100% Remote
- Norfolk, VA
- Permanent
- Full-time
- Engineering in Microsoft Fabric
  - Develop, optimize, and maintain Fabric Data Pipelines for ingestion from on-prem and cloud sources.
  - Build PySpark notebooks to implement scalable transformations, merges/upserts, and medallion-lakehouse patterns.
  - Ensure reliability and performance in Lakehouse and Delta Lake design.
- Architecture & Design
  - Contribute to the design and evolution of our Fabric-based platform.
  - Define standards and frameworks for schema management, versioning, governance, and data quality.
  - Collaborate with peers to evaluate trade-offs and guide enterprise-scale architecture.
- DevOps & CI/CD
  - Build and maintain deployment pipelines for Fabric artifacts (notebooks, pipelines, lakehouses).
  - Establish environment-aware configuration and promotion workflows across Dev/QA/Prod.
  - Drive automation to reduce manual effort and improve reliability.
- Mentorship
  - Work as a peer leader with other senior engineers and architects to shape platform strategy.
  - Mentor other engineers and help build a strong engineering culture.
- 7+ years in data engineering, with proven impact in enterprise environments.
- Strong hands-on expertise in Microsoft Fabric (Pipelines, Lakehouse, Notebooks, OneLake).
- Advanced PySpark skills for data processing at scale.
- Expertise in Delta Lake, medallion architecture, schema evolution, and data modeling.
- Experience with CI/CD for data engineering, including Fabric asset deployments.
- Strong SQL and experience with SQL Server/Azure SQL.
- Experience helping launch or scale Microsoft Fabric adoption.
- Familiarity with data governance, lineage, and compliance frameworks.
- Knowledge of real-time/streaming data patterns.
- Exposure to Salesforce, CRM, or DMS integrations.
- Excellent communication and collaboration skills for working with peer-level experts.
- In-depth knowledge of the Azure ecosystem (API Management, Azure Functions, etc.).
#LI-REMOTE