Our client is a leading private equity firm specializing in investments in software, e-commerce, and technology-enabled businesses. With a rich history dating back to 2003, the company has established itself as a prominent player in the industry.

As an investor, the company takes a long-term approach to value creation, partnering with ambitious and growth-oriented businesses to unlock their full potential. With a team of experienced professionals and a strong track record, the company has successfully guided numerous companies to achieve exceptional growth and success.

At the company, we believe in the power of innovation and the transformative impact of technology. One of our core initiatives involves collecting market research, conducting interviews, and aggregating data from multiple sources to understand global economic and market trends. This project plays a crucial role in informing our investment decisions and driving strategic initiatives.

Project Overview:

As a data engineer at the company, you will play a vital role in supporting this project. You will be responsible for designing, building, and maintaining the data infrastructure and pipelines that enable the collection, processing, and analysis of diverse data sets. By ensuring the availability, reliability, and security of our data assets, you will empower our team to derive meaningful insights and make data-driven decisions.

Working closely with cross-functional teams, you will contribute to the development of advanced analytics capabilities, leveraging cutting-edge technologies and methodologies. Your expertise in data engineering will help us streamline data workflows, optimize data storage and retrieval, and facilitate seamless integration with analytics and visualization tools.

Recruiter: Tetiana Rudchenko
Responsibilities:

  • Design, build, and maintain the data infrastructure and pipelines that enable the collection, processing, and analysis of diverse data sets;
  • Ensure the availability, reliability, and security of our data assets so the team can derive meaningful insights and make data-driven decisions;
  • Work closely with cross-functional teams to develop advanced analytics capabilities, leveraging cutting-edge technologies and methodologies;
  • Streamline data workflows, optimize data storage and retrieval, and facilitate seamless integration with analytics and visualization tools.

We are looking for candidates who are passionate about data engineering, have a solid understanding of modern data technologies and tools, and possess strong problem-solving and analytical skills. The ideal candidate should have experience in Azure data services (preferably Synapse, Databricks, ADF, Azure Storage, and Purview) and a proven track record of delivering high-quality data solutions in a fast-paced environment.

Requirements:
  • 4+ years of experience in data engineering;
  • Most of the work will be in SQL, so strong SQL skills are required;
  • Some work will likely be in Python, so at least one team member should be skilled enough to write custom pipelines when necessary;
  • Good understanding of data modeling, normalization, fact vs. dimension tables, and related concepts;
  • Experience with LLMs, OpenAI, AutoML, dbt, Azure Databricks, PySpark, Azure Prompt Flow, and/or similar technologies is important;
  • Works well in a setting where principles and requirements must be met, but where speed is valued over extreme scalability;
  • Actively uses an AI pair programmer, such as GitHub Copilot.

Nice to have:

  • Familiar with DataOps/DevOps principles (versioning, CI/CD, automated tests, etc.);
  • Familiar with the Microsoft data stack (the capabilities of Data Factory, Purview, anomaly detection, Key Vault, etc.);
  • Experience with dbt (https://www.getdbt.com/) is an advantage, but it is mostly SQL, so it can easily be learned by anyone who knows SQL and data modeling;
  • We may want to do some work in Spark, so knowing it is an advantage, but not a requirement;
  • Is a driving force for good data governance.
What it’s like to work at Intellias:

Join the project and become part of a dynamic and collaborative team that values innovation, integrity, and excellence. Together, we can make a significant impact on understanding the global economy and market trends, shaping our investment strategies, and driving the success of our portfolio companies.

#LI-TR1
