Project Overview:

We are smava – the online credit comparison. We make loans transparent, fair, and affordable! Smava is one of the biggest fintech employers and has received several awards, including the Top Employer 2020 prize and recognition as one of the 50 most promising start-ups in Europe. Become part of the smava story now and let us grow together!

We are part of the client's team. Together we work closely to drive further market innovation and help smava remain the market leader.
 
We are looking for a talented, open-minded Data Engineer who will bring their experience, intelligence, and inspiration to support our Data Infrastructure Team.

The smava Data Infrastructure Team is responsible for building smava's new Data Platform. The team acts as a bridge between the production teams, the Data Scientists, and Analytics.

The client's team builds data pipelines for a variety of use cases and considers data engineering to be one of the most interesting areas where software engineering can be combined with challenging algorithmic problems.

In this role, you will develop distributed services that process data in batch and in real time, with a focus on scalability, data quality, and business requirements, while laying the groundwork for the governance team. You will have the opportunity to work on challenging data-related problems and to build a self-serve data platform.

Your responsibilities:

  • Develop, test, and maintain data infrastructure based on our tech stack, combining proprietary and third-party solutions: Java, Python, Terraform, Jenkins, Kubernetes, and the AWS ecosystem, especially Lake Formation, Glue, and MSK;
  • Build reusable technology that enables teams to capture, process, store, and serve their data products with ease;
  • Build and maintain complex and scalable ETL pipelines between internal & external sources;
  • Collaborate closely with business stakeholders such as Marketing, Product, Analytics, etc.;
  • Drive data quality and governance checks (e.g. validation, consolidation, deduplication, anonymization, GDPR compliance); see the sketch after this list;
  • Identify and evaluate current developments and new technologies;
  • Teach, guide, and lead less experienced colleagues from a content perspective.
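
To give a flavor of the data quality and governance checks above, here is a minimal Python sketch of a batch step that validates, deduplicates, and anonymizes records before loading. The schema, field names, and salt are hypothetical illustrations, not smava's actual pipeline.

    import hashlib
    from dataclasses import dataclass

    # Hypothetical salt for pseudonymization -- illustrative only, not smava's real setup.
    SALT = "example-salt"

    @dataclass(frozen=True)
    class LoanApplication:
        application_id: str
        email: str
        amount_eur: float

    def validate(record: LoanApplication) -> bool:
        """Basic sanity checks before a record enters the data lake."""
        return bool(record.application_id) and "@" in record.email and record.amount_eur > 0

    def anonymize(record: LoanApplication) -> LoanApplication:
        """Replace the email with a salted hash so downstream consumers never see raw PII."""
        hashed = hashlib.sha256((SALT + record.email).encode()).hexdigest()
        return LoanApplication(record.application_id, hashed, record.amount_eur)

    def run_batch(records: list[LoanApplication]) -> list[LoanApplication]:
        """Validate, deduplicate on application_id (first occurrence wins), then anonymize."""
        seen: set[str] = set()
        clean: list[LoanApplication] = []
        for rec in records:
            if not validate(rec) or rec.application_id in seen:
                continue
            seen.add(rec.application_id)
            clean.append(anonymize(rec))
        return clean

    if __name__ == "__main__":
        batch = [
            LoanApplication("A1", "alice@example.com", 12000.0),
            LoanApplication("A1", "alice@example.com", 12000.0),  # duplicate -> dropped
            LoanApplication("A2", "not-an-email", 5000.0),        # fails validation -> dropped
            LoanApplication("A3", "bob@example.com", 8000.0),
        ]
        for rec in run_batch(batch):
            print(rec)

In a real pipeline, a step like this would typically run as a Spark or Glue job over the raw data layer rather than over in-memory lists.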

Requirements:

  • Development experience in any of the following languages: Scala, Java, Kotlin, or Python;
  • Experience in building:
    • ETL / ELT pipelines;
    • Data lakes & raw data layers;
    • Cloud data warehousing (ideally with AWS MSK, AWS Lake Formation, Spark, EMR, Glue, Snowflake).
  • Familiarity with the Spring Framework;
  • Experience in designing and implementing state-of-the-art, modular data infrastructures, always driven by use cases and end-user value;
  • Strong solution focus, constantly trying to reduce the “time to insight” (analytics data) and “time to product enhancement” (production data);
  • Good written and spoken English (Upper Intermediate);
  • Quick on the uptake, team player, thrives in an international environment.