Our fast-growing, professional team is looking for a Data Engineer to join the development of a SaaS-based Cloud Voice Platform application. Our client is one of the leading telecommunications companies in the United States. Together with our teams in Kyiv and Kharkiv, the client uses Agile approaches and modern technologies such as AWS, Cassandra, Hadoop, Python, Scala, and microservices. By joining our team, you will gain exceptional technical and multinational experience working with highly scalable, resilient solutions and innovative products that are already in production.
The Senior Data Engineer will play a vital role on this cross-functional Agile team, creating and enhancing data ingestion pipelines and addressing big data challenges. The Senior Data Engineer will work closely with the Chief Architect, systems engineers, software engineers, and data scientists on the following key tasks:
- Apply Extract, Transform, Load (ETL) experience coupled with enterprise search capabilities to solve big data challenges;
- Design and implement high-volume data ingestion and streaming pipelines using open-source frameworks such as Apache Spark, Flink, and Kafka on the AWS cloud;
- Leverage strategic and analytical skills to understand and solve customer- and business-centric questions;
- Create prototypes and proofs of concept for iterative development;
- Learn new technologies and apply the knowledge in production systems;
- Monitor and troubleshoot performance issues on the enterprise data pipelines and the data lake;
- Partner with various teams to define and execute data acquisition, transformation, and processing, and make data actionable for operational and analytics initiatives.

Requirements:
- 2 years of experience with big data tools such as Hadoop, Spark, and Kafka;
- 1 year of experience with object-oriented programming in Java;
- 3 years of experience managing data across relational SQL and NoSQL stores such as MySQL, Postgres, Cassandra, HDFS, Redis, and Elasticsearch;
- 2 years of experience working in a Linux environment;
- 2 years of experience working with and designing REST APIs;
- Experience in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks;
- Experience developing data ingest workflows with Kafka Streams;
- Experience transforming data in various formats, including JSON, XML, CSV, and compressed (zipped) files;
- Experience with performance tuning of ETL jobs;
- Strong interpersonal and communication skills necessary to work effectively with customers and other team members;
- Experience with software configuration management tools such as Git/GitLab.

Nice to have:
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, Athena and/or Glue;
- Experience with Microservices architecture components, including Docker and ECS;
- Experience developing microservices to fit data cleansing, transformation and enrichment needs;
- Experience developing flexible data ingest and enrichment pipelines that easily accommodate new and existing data sources;
- Experience with continuous integration and deployment (CI/CD) pipelines;
- Experience with Jira and Confluence, and extensive experience with Agile methodologies;
- Detail-oriented and self-motivated, with the ability to learn and deploy new technology quickly;
- A technical degree from a reputable university, or equivalent years of experience.