
Data Engineer
Job Description
Posted on: October 14, 2025
Title: Data Engineer
Location: Remote (PST Preferred)
Job Type: Contract (12+ months)

About The Company
A leading entertainment company at the forefront of data-driven decision-making. This team focuses on optimizing retention strategies across streaming and television products, leveraging advanced data engineering techniques and cloud technologies to drive business impact.

Job Description
The Senior Data Engineer – Retention Analytics will play a critical role in building and optimizing data pipelines that support retention strategy and campaign performance analytics. This individual will work closely with data analysts and business stakeholders to develop, transform, and maintain high-quality datasets that drive insights into customer churn, financial performance, call volume, and viewership trends.

Key Responsibilities
- Design, develop, and optimize ETL pipelines using Databricks and Snowflake.
- Extract, transform, and load (ETL) data into the warehouse and reporting environments.
- Optimize data performance and troubleshoot inefficiencies within Databricks.
- Work with large, unstructured datasets, managing complex joins and nested queries.
- Automate data dependencies and tasks, ensuring seamless data flow and pipeline reliability.
- Conduct daily monitoring of data flows to proactively resolve any errors or job failures.
- Build dashboards and visualizations in Tableau or Power BI (not a requirement but a plus).
- Support retention analytics by enabling data-driven decision-making across customer segmentation models.
- Implement data partitioning and performance-tuning techniques (e.g., salting, repartitioning) to optimize workloads; see the sketch after this list.
- Work independently within an agile environment, collaborating with data analysts, business teams, and other engineering partners.
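A minimal PySpark sketch of the salting-plus-repartitioning approach named above, assuming a join between a large events table skewed on an `account_id` key and a small accounts dimension; the DataFrames, column names, paths, and salt factor are illustrative placeholders, not details from the posting.

```python
# Illustrative sketch only: table paths, column names, and the salt factor are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("salted-join-sketch").getOrCreate()

SALT_BUCKETS = 16  # tune to the observed skew on the hot keys

events = spark.read.parquet("s3://bucket/events/")      # large fact table, skewed on account_id
accounts = spark.read.parquet("s3://bucket/accounts/")  # small dimension table

# Salt the skewed side: spread each hot account_id across SALT_BUCKETS buckets.
events_salted = events.withColumn("salt", (F.rand() * SALT_BUCKETS).cast("int"))

# Replicate the small side once per salt value so every salted key still finds a match.
salts = spark.range(SALT_BUCKETS).select(F.col("id").cast("int").alias("salt"))
accounts_salted = accounts.crossJoin(salts)

# Join on the composite (account_id, salt) key, then drop the helper column.
joined = events_salted.join(accounts_salted, ["account_id", "salt"]).drop("salt")

# Repartition by the downstream grouping key to balance the shuffle before writing.
joined.repartition(200, "account_id").write.mode("overwrite").parquet(
    "s3://bucket/retention/joined/"
)
```

Salting trades a larger replicated dimension and extra shuffle volume for evenly sized join tasks; the final repartition keeps the write balanced on the key used downstream.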
Required Qualifications
- 7+ years of experience in data engineering.
- Proficiency in Python, SQL, Spark, and Databricks.
- Experience working with Snowflake and cloud-based data platforms (AWS, Azure, or GCP).
- Strong understanding of ETL processes, data pipelines, and workflow automation.
- Experience working with churn, financial, subscription, and viewership data.
- Ability to trace complex joins and manage nested code structures in large datasets.
- Hands-on experience with data partitioning strategies for performance optimization.
- Preferred time zone: PST (open to Central but not East Coast).
Nice-to-Have Qualifications
- Experience developing Tableau or Power BI dashboards.
- Familiarity with machine learning or predictive analytics in a data engineering context.
- Exposure to big data streaming technologies like Kafka or Spark Streaming.
Interview Process & Timeline
- Open to reviewing candidates immediately – resumes should be submitted on a rolling basis (not in batches).
- Manager prefers go-getters with strong time management skills – independent workers who thrive in a structured, fast-paced environment.
Apply now
Please let the company know that you found this position on our job board. This is a great way to support us, so we can keep posting cool jobs every day!