Minimum qualifications:
Bachelor's degree or equivalent practical experience.
3 years of experience coding in one or more programming languages, such as SQL or Python.
3 years of experience working with data infrastructure and data models, including writing exploratory queries and scripts.
3 years of experience designing data pipelines and dimensional data models for synchronous and asynchronous system integration and implementation, using internal (e.g., Flume) and external stacks (e.g., Dataflow, Spark).
Preferred qualifications:
3 years of experience partnering with and managing stakeholders (e.g., users, partners, customers).
3 years of experience developing project plans and delivering projects on time, within budget, and in scope.
Experience with machine learning in production workflows.
Responsibilities:
Acquire an understanding of business processes, tools, and customer expectations, and design and develop scalable data infrastructure to support reporting needs.
Design, implement, test, optimize and troubleshoot analytics and reporting solutions to solve business performance management challenges as well as to meet regulatory reporting requirements.
Collaborate with and influence business and engineering stakeholders to ensure our data infrastructure and products meet constantly evolving requirements.
Work closely with analysts to productionize analytics and reporting prototypes, as well as various statistical and machine learning models.
Design, implement, and own production-level data pipelines, along with their documentation and check-in processes. Write and review technical documents, including design, requirements, and process documentation.