Project Overview:

A global professional services company that provides organizations with tools for cloud enablement and transformation. Through a unique combination of expertise and agility, the company accelerates cloud innovation and helps organizations fully unlock the value of cloud technology.
Backed by a robust ecosystem of technology partners, proven methodologies, and well-documented best practices, the company helps customers achieve operational excellence in the cloud, within a secure environment, at every milestone of the journey to becoming cloud-first.
With over 12 years of experience and a portfolio of thousands of successful cloud deployments, the company serves clients across the globe, with offices in Israel, Europe, and North America.

Recruiter: Ангеліна Харчук
Responsibilities:
  • Keep our customers’ data separated and secure to meet compliance and regulatory requirements;
  • Design, build, and operate the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, cloud migration tools, and ‘big data’ technologies;
  • Optimize various RDBMS engines in the cloud and solve customers' security, performance, and operational problems;
  • Design, build, and operate large, complex data lakes that meet functional and non-functional business requirements;
  • Optimize the ingestion, storage, processing, and retrieval of varied data types, from near real-time events and IoT streams to unstructured data such as images, audio, video, and documents;
  • Work with customer and internal stakeholders, including the Executive, Product, Data, Software Development, and Design teams, to assist with data-related technical issues and support their data infrastructure and business needs.
Requirements:
  • 5+ years of experience in a Data Engineer role in a cloud-native ecosystem;
  • Bachelor’s degree (graduate degree preferred) in Computer Science, Mathematics, Informatics, Information Systems, or another quantitative field;
  • Working experience with the following technologies/tools: big data tools (Spark, Elasticsearch, Hadoop, Kafka, Kinesis, etc.);
  • Relational SQL and NoSQL databases, such as MySQL or Postgres and DynamoDB or Cassandra;
  • Cloud data services (AWS preferred: Kinesis, EMR, Redshift, etc.);
  • Functional and scripting languages (Python, Java, Scala, etc.) and advanced SQL;
  • Experience building and optimizing ‘big data’ pipelines, architectures, and data sets;
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores;
  • Experience supporting and working with external customers in a dynamic environment;
  • Articulate, with strong communication and presentation skills (German language is a plus);
  • Team player who can train others as well as learn from them;
  • Advantage: Experience with various ML models for classification, scoring, and more;
  • Advantage: Experience with deep learning neural networks (convolutional networks, NLP, etc.).
