
Senior Data Engineer

Intellias
Department: Data Engineer
Type: Remote
Region: EU
Location: Poland
Experience: Mid-Senior level
Estimated Salary: PLN 90,000 - PLN 130,000
Skills:
Python, Kafka, Apache Flink, Apache Spark, Apache Hudi, Apache Iceberg, RDBMS, PostgreSQL, MySQL, S3, GCS, AWS, GCP, Azure, ETL, ELT, Data Modeling, Data Quality

Job Description

Posted on: October 29, 2025

We are building a large real-time data pipeline that moves data from an RDBMS to several destinations, including a data lake and a search engine (e.g., OpenSearch). The scale is 300M+ records and about 10 TB of data, covering both real-time and historical loads, with transformations and aggregations performed on the stream.

Tech Stack:

  • Python
  • Kafka
  • Apache Flink / Apache Spark (Streaming)
  • Apache Hudi / Apache Iceberg
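
For orientation, here is a minimal sketch of the kind of streaming job this stack implies: PyFlink's DataStream API reading change events from Kafka and applying a simple transformation on the stream. The broker address, topic name, and event fields are illustrative placeholders (not part of this posting), and a real pipeline would replace the console sink with Hudi/Iceberg or OpenSearch sinks.

# Illustrative sketch only: assumes PyFlink with the Kafka connector JAR on the
# classpath; broker, topic, and event fields are hypothetical placeholders.
import json

from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import KafkaOffsetsInitializer, KafkaSource

env = StreamExecutionEnvironment.get_execution_environment()
env.enable_checkpointing(60_000)  # periodic checkpoints for fault-tolerant streaming

source = (
    KafkaSource.builder()
    .set_bootstrap_servers("kafka:9092")               # placeholder broker address
    .set_topics("rdbms.cdc.events")                    # placeholder CDC topic
    .set_group_id("lake-loader")
    .set_starting_offsets(KafkaOffsetsInitializer.earliest())
    .set_value_only_deserializer(SimpleStringSchema())
    .build()
)

stream = env.from_source(source, WatermarkStrategy.no_watermarks(), "rdbms-cdc")

# Transformation on the stream: parse JSON change events and keep inserts/updates.
upserts = (
    stream.map(lambda raw: json.loads(raw))
    .filter(lambda event: event.get("op") in ("c", "u"))
)

upserts.print()  # swap for Hudi/Iceberg or OpenSearch sinks in a real deployment
env.execute("rdbms-to-data-lake-demo")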

Requirements:

  • 5+ years of experience as a Data Engineer or similar role, with hands-on expertise in large-scale, production-grade data pipelines.
  • Proven experience designing and running real-time data streaming systems (Kafka + Flink / Spark Streaming).
  • Strong proficiency in Python for data engineering (data processing, orchestration, automation).
  • Solid understanding of distributed systems, data partitioning, checkpointing, and fault-tolerant stream processing.
  • Practical experience with Apache Hudi or Apache Iceberg for incremental data storage and schema evolution.
  • Experience with RDBMS sources (PostgreSQL, MySQL, etc.) and data lakes / object storage (S3, GCS, etc.).
  • Deep understanding of ETL / ELT design patterns, data modeling, and data quality principles.
  • Experience deploying and maintaining data infrastructure in cloud environments (AWS / GCP / Azure).
  • Excellent analytical and problem-solving skills, with the ability to design robust, scalable, and efficient architectures.
  • Strong communication skills and ability to collaborate with cross-functional teams.

Responsibilities:

  • Collaborate with business stakeholders and technical teams to understand and analyze data requirements
  • Lead the design and implementation of data models and database structures that meet business needs
  • Profile, refactor, and tune database performance
  • Design and implement complex ETL processes to extract, transform, and load data from various source systems into the data warehouse
  • Ensure data integrity, consistency, and accuracy through robust data quality assurance measures
  • Review and support team members, providing guidance and mentorship
  • Supervise and contribute to the data-driven strategy for the project, aligning it with business objectives
Originally posted on LinkedIn
