
Associate Data Engineer
Job Description
Posted on: July 11, 2025
Lensa is a career site that helps job seekers find great jobs in the US. We are not a staffing firm or agency. Lensa does not hire directly for these jobs, but promotes jobs on LinkedIn on behalf of its direct clients, recruitment ad agencies, and marketing partners. Lensa partners with DirectEmployers to promote this job for Confluent.

Location: Remote, United Kingdom
Employment Type: Full-Time
Location Type: Remote
Department: Finance & Operations, Business Technology, Data, & Operations

Overview
We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team: people who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.

About The Role
We are seeking a Data Engineer to join our team and contribute to building, optimising, and maintaining our real-time data streaming infrastructure. This role is perfect for someone passionate about distributed systems, stream processing, and data engineering best practices. You will work alongside experienced engineers to design, develop, and deploy scalable data pipelines that power mission-critical applications.

What You Will Do
- Design, develop, and maintain real-time data pipelines using Apache Kafka, Flink, and other streaming technologies.
- Work with structured and unstructured data to enable analytics, monitoring, and event-driven applications.
- Optimise data ingestion, transformation, and storage for performance, reliability, and scalability.
- Collaborate with software engineers, data scientists, and DevOps teams to integrate data solutions into production environments.
- Implement data quality, observability, and governance best practices.
- Troubleshoot and resolve performance issues in streaming applications and distributed systems.
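The pipeline work described above centres on stream transformations such as windowed aggregation. As an illustrative sketch only (plain Python rather than the Kafka/Flink stack named in the posting; the function and event names here are hypothetical), a tumbling-window count over a click stream might look like:

```python
from collections import defaultdict

def windowed_counts(events, window_size):
    """Group a stream of (timestamp, key) events into fixed tumbling
    time windows and count occurrences of each key per window.

    Mimics, in plain Python, the kind of tumbling-window aggregation
    a Kafka Streams or Flink job would perform on a real stream.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for timestamp, key in events:
        # Align each event to the start of its window.
        window_start = (timestamp // window_size) * window_size
        windows[window_start][key] += 1
    # Return plain dicts, ordered by window start time.
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Simulated click-stream events: (epoch seconds, page viewed)
events = [
    (0, "home"), (3, "home"), (7, "pricing"),
    (12, "home"), (14, "docs"), (21, "docs"),
]

# Aggregate into 10-second tumbling windows.
result = windowed_counts(events, window_size=10)
```

In a production pipeline the same logic would run continuously against an unbounded stream, with the framework handling event-time semantics, late data, and state management.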
What You Will Bring
- A degree in Computer Science, Data Engineering, Software Engineering, or a related field.
- Strong programming skills in Python, Java, or Scala.
- Familiarity with SQL and NoSQL databases.
- Basic understanding of Kafka, Flink, Spark, or other stream processing technologies (hands-on experience is a plus).
- Exposure to cloud computing platforms (AWS, GCP, or Azure) and infrastructure as code.
- Interest in distributed systems, data architectures, and real-time analytics.
- Strong problem-solving skills and a willingness to learn in a fast-paced environment.
What Gives You An Edge
- Experience with Apache Kafka.
- Knowledge of batch and streaming data architectures.
- A product mindset: the ability to understand business needs and propose scalable engineering solutions.
Ready to build what's next? Let’s get in motion.

Come As You Are
Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible.

We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.

If you have questions about this posting, please contact support@lensa.com.