Lead the development of the Data Infrastructure
Work with cutting-edge technologies
Attractive salary & benefits
Our client is a venture capital firm that invests in startups with a significant impact on society. They are committed to helping these startups reach the next level, drawing on a unique perspective gained from investing in companies across 9 countries. With a wide range of startups in their portfolio, they are seeking talented individuals for roles in their portfolio companies.
You will be responsible for:
Designing, developing and supporting data pipelines, warehouses and reporting systems to solve business operations, user and product problems.
Creating extract, transform, load (ETL) and reporting systems for new data using a variety of traditional as well as large-scale distributed data systems.
Collaborating with and influencing user and product stakeholders and support engineers to ensure the data infrastructure meets constantly evolving requirements.
Working closely with analysts to produce various statistical and machine learning models using data processing pipelines.
Continuously learning about emerging technologies and evaluating their potential adoption in order to continuously improve the technology stack we use.
You have a Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
You have experience with at least one general-purpose programming language (e.g., Java, C/C++, Python).
You possess experience in data processing using traditional and distributed systems (e.g., Hadoop, Spark, Dataflow, Airflow).
You have experience designing data models and data warehouses and using SQL and NoSQL database management systems.
You possess strong analytical skills and are comfortable dealing with numerical data.
You pay strong attention to detail and deliver work that is of a high standard.
You are a strong team player who can manage multiple stakeholders.