
Senior or Strong Middle Data Engineer

Project: CRX Markets
Lviv, Kyiv · SQL · Intermediate Strong · 14607

We work on a platform that solves the problem of supply chain financing. Our customer's clients include Lufthansa, Nestle and Vattenfall. CRX Markets has won a number of awards, among them the Fintech Startup competition and the title of Germany's best financial startup.


Project Overview:

Our customer's company is the winner of the Best Fintech Startup and Best Financial Product contests in Germany. The product they are creating is an innovative electronic trading marketplace for Asset-Based Financing solutions that connects Buyers, Suppliers, Banks and Institutional Investors. The platform will dramatically change SCF (Supply Chain Financing) processes, providing completely new financing possibilities in the B2B market. Within six years, our client managed to create a product that meets the high standards of the international financial industry and to acquire and integrate several international corporate clients such as Lufthansa, Nestle, Vattenfall and Daimler.

One of the project cornerstones is the effective use of data. Use cases range across a wide spectrum including business intelligence, process automation, recommendation engines and risk scoring. Help us build the data infrastructure today that we need to deliver the data products of tomorrow. You will be working closely with different domains to figure out ways to acquire new customers, scale existing programs and optimize user experience, all through data. As the data expert, you will build strong relationships with internal and external stakeholders, and you are required to deeply understand our current and future challenges. As part of a fast-paced tech start-up, you are expected to gain hands-on experience with advanced open-source data technologies and to actively drive the data culture.

Your role:

  • Collaborate with other teams in the business domain to design data models and data processing logic that translate operational data into valuable business information;
  • Build and maintain a scalable, low-latency data warehouse using batch ETL and stream-processing technologies (a short sketch of such a batch job follows this list);
  • Collaborate in the development of applications for reporting, business intelligence and data analytics products.
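
To illustrate the kind of work described above, here is a minimal batch-ETL sketch: it reads operational rows, aggregates them into a business-level fact and loads the result into a warehouse table. It assumes PostgreSQL on both ends and uses pandas with SQLAlchemy; the connection strings, table and column names are hypothetical placeholders, not part of the actual stack.

    # Hedged sketch: hypothetical connections, tables and columns.
    import pandas as pd
    from sqlalchemy import create_engine

    source = create_engine("postgresql+psycopg2://app:secret@source-db/operational")
    warehouse = create_engine("postgresql+psycopg2://etl:secret@warehouse-db/analytics")

    # Extract: yesterday's raw trades from the operational store.
    trades = pd.read_sql(
        "SELECT trade_id, buyer_id, supplier_id, amount_eur, traded_at "
        "FROM trades WHERE traded_at::date = CURRENT_DATE - 1",
        source,
    )

    # Transform: aggregate into a daily, business-level fact.
    daily_volume = (
        trades.groupby(["buyer_id", "supplier_id"], as_index=False)
              .agg(total_amount_eur=("amount_eur", "sum"),
                   trade_count=("trade_id", "count"))
    )

    # Load: append the result to a reporting table in the warehouse.
    daily_volume.to_sql("fact_daily_trade_volume", warehouse,
                        if_exists="append", index=False)

In a production pipeline a step like this would normally be made idempotent (for example, delete-and-insert per day or an upsert) and triggered by an orchestrator rather than run by hand.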

Requirements:

  • Strong SQL experience (ad hoc queries, optimization techniques, preferably PostgreSQL);
  • Python;
  • ETL/microservices;
  • Data processing & visualization (pandas; matplotlib / hvPlot / plotly; jupyter / jupyterhub);
  • Basic web development (Django / Flask / Falcon; SQLAlchemy);
  • ETL experience / Data Warehouse concepts;
  • Schema Architecture (e.g. Flat-file / Star / Snowflake);
  • Processing paradigms (batch / mini-batch / streaming / change data capture / lambda / kappa);
  • Orchestration: Airflow / Prefect / Luigi / Jenkins (an illustrative DAG sketch follows this list);
  • Storage infrastructure for scalable OLAP processing: Postgres / Amazon Redshift;
  • Experience with any batch processing technology: SQL / Talend / Elastic / Spark / Custom Built;
  • Experience with any streaming processing technology: Kafka / Spark Streaming / Storm / Flink;
  • Basic experience with a Business Intelligence visualization tool (e.g. Power BI, Tableau);
  • Good conceptual knowledge of stream processing (eventual consistency, duplicate handling, data latency, watermarks, stateless streams);
  • Good knowledge of Docker;
  • English upper-intermediate or advanced.
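
On the orchestration point, as a hedged illustration only, a daily batch job like the sketch above could be scheduled with a small Airflow DAG. This assumes Apache Airflow 2.x; the DAG id, schedule and task callables are hypothetical placeholders meant only to show the shape of such a pipeline.

    # Hedged sketch: minimal Airflow 2.x DAG with placeholder tasks.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_trades():
        # Placeholder: pull yesterday's rows from the operational database.
        pass


    def load_to_warehouse():
        # Placeholder: write the transformed rows into the reporting schema.
        pass


    with DAG(
        dag_id="daily_trade_etl",        # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_trades)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

        extract >> load                  # load runs only after a successful extract

Prefect or Luigi would express the same dependency graph with their own primitives; the point is explicit task ordering, retries and scheduling handled by the orchestrator rather than by ad-hoc cron scripts.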

Nice to have:

  • AWS infrastructure;
  • Kubernetes;
  • CI/CD knowledge;
  • Basic Java.

Higher Education: Bachelor’s Degree.
