
Data Engineer
Job Description
Posted on: July 5, 2025
We’re hiring on behalf of a high-frequency trading firm that has operated exclusively in the digital asset markets for nearly a decade. Our client is a compact, highly effective trading house of technologists and traders, focused on precision, speed, and resilient infrastructure.
Role Overview
We are looking for a Data Engineer to play a key role in scaling and maintaining high-performance data infrastructure. You’ll own the development and optimisation of data workflows, enhance observability and governance across key systems, and collaborate with both technical and non-technical teams to keep data aligned with business needs. You’ll have the opportunity to influence architecture decisions, contribute to scalable solutions, and work across the full data lifecycle, from ingestion through to reporting and machine-learning readiness.
Key Responsibilities
- Design and maintain scalable data pipelines for both real-time and batch processing.
- Manage and optimise performance in a high-throughput columnar data warehouse environment.
- Define and implement standards for data quality, observability, and governance.
- Collaborate closely with engineering and trading teams to align data systems with broader objectives.
- Develop and maintain tools for monitoring technical and business metrics (e.g., dashboards, alerts).
- Lead orchestration of workflows using tools like Argo or Airflow and contribute to CI/CD processes.
Requirements
- 5+ years of experience in data engineering or backend infrastructure.
- Strong proficiency in Python, including object-oriented programming and testing practices.
- Advanced SQL skills with experience in optimisation and complex query design (e.g., window functions, joins).
- Hands-on experience with columnar databases such as ClickHouse (ideally the MergeTree engine family).
- Experience with workflow schedulers like Argo Workflows, Airflow, or similar.
- Understanding of Kafka architecture, including topics, partitions, and message flow.
- Familiarity with CI/CD pipelines (e.g., GitHub Actions, GitLab CI, ArgoCD).
- Ability to build and maintain monitoring dashboards with tools like Grafana.
Bonus Skills
- Experience with cloud infrastructure (especially AWS: S3, EKS, RDS).
- Familiarity with Kubernetes and Helm for deployment and scaling.
- Exposure to data observability frameworks and tooling.
- Background supporting ML workflows or infrastructure (e.g., feature pipelines, training datasets).
Apply now