
Data Scientist
- Pittsburgh, PA
- Permanent
- Full-time
- Design experiments, test hypotheses, and build models and complex algorithms for advanced data analysis
- Apply advanced statistical and predictive modeling techniques to build, maintain, and improve multiple real-time decision systems
- Make strategic recommendations on data collection, integration, and retention requirements, incorporating business requirements and knowledge of data industry best practices
- Model and frame business scenarios that are meaningful and which impact critical processes and decisions; transform, standardize, and integrate datasets for client use cases
- Convert custom, complex, and manual client data analysis tasks into repeatable, configurable processes for consistent and scalable use within the Govini SaaS platform
- Optimize processes for maximum speed, performance, and accuracy; craft clean, testable, and maintainable code
- Partner with internal Govini business analysts and external client teams to identify the best solutions for data-driven problem solving
- Participate in end-to-end software development on an agile scrum team, collaborating closely with fellow software, machine learning, data, and QA engineers
- US citizenship is required
- Minimum 3 years of hands-on data science experience
- Minimum 3 years of experience deriving key insights and KPIs for internal and external customers
- Regular development experience in Python
- Prior hands-on experience working with data-driven analytics
- Proven ability to develop solutions to loosely defined business problems by leveraging pattern detection over large datasets
- Proficiency in statistical analysis, quantitative analytics, forecasting/predictive analytics, multivariate testing, and optimization algorithms
- Experience using machine learning algorithms (e.g., gradient-boosted machines, neural networks)
- Ability to work independently with little supervision
- Strong communication and interpersonal skills
- A burning desire to work in a challenging, fast-paced environment
- Experience in or exposure to the nuances of a startup or other entrepreneurial environment
- Experience working on agile/scrum teams
- Experience building datasets from common database tools using flavors of SQL
- Expertise with automation and streaming data
- Experience with major NLP frameworks (e.g., spaCy, fastText, BERT)
- Familiarity with big data frameworks (e.g., HDFS, Spark) and AWS
- Familiarity with Git source control management
- Experience working in a product organization
- Experience analyzing financial, supply chain/logistics, or intellectual property data