- As a Big Data Engineer, you will develop data pipelines from multiple sources (BigQuery, Sprinklr, AWS S3, Kafka, databases, and others) to our Hadoop infrastructure. You will also work in teams with Data Scientists to build Machine Learning prototypes.
- Additionally, you will improve the automation and the Continuous Integration / Continuous Delivery (CI/CD) of our platform, as well as perform code reviews for your peers.
- We work with Apache Spark in Scala and Python, Git, Jenkins, JFrog Artifactory, Linux, Docker, and the Hadoop ecosystem. We are looking for people with formal training in software engineering who are experienced in object-oriented programming and passionate about writing clean, testable, and maintainable code.
- We require proven experience with an object-oriented language, unit testing, and versioning with Git. Experience with SQL queries is also required.
Your Duties:
- Research, evaluate, and develop Enterprise Data Platform capabilities to solve new data problems and challenges.
- Handle production support issues as they arise.
- Work in one or more cross-functional teams to develop prototypes and proofs of concept, and implement big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large data sets leveraging the Enterprise Data Platform.
- Support the internal Data Scientist team during use-case delivery.
- Extend and improve our internal frameworks, develop guidelines and best practices.
- Perform, in collaboration with the Enterprise Architecture team, technology and product research to better define requirements, resolve important issues, and improve the overall capability of the Enterprise Data Platform.
- Communicate with various teams, keeping everyone up-to-date on deployments, outages, issues, and solutions.
Your Experience and Skills:
- Bachelor's degree in Computer Science or a similar field
- Knowledge of / proven experience working with:
  - SQL queries
  - Unit testing
  - Software engineering best practices
- Additional knowledge of Python will be an asset