
Staff Data Engineer
Job Description
Posted on: March 27, 2026
Role: Staff Data Engineer
Reports To: Senior Director, Business Operations
Location: Remote
Job Summary:
We are looking for an exceptional Staff Data Engineer to build the high-performance foundation that powers our company’s internal business analytics. In this pivotal role, you will partner with a newly hired Staff Analytics Engineer to design and build the end-to-end delivery of our data ecosystem, ensuring that our leadership team has the timely, actionable insights it needs to make informed decisions. You will be responsible for building “the plumbing” of this ecosystem: ingesting data from diverse sources into our Azure data lake, transforming it in Databricks, and delivering gold-layer data to visualization tools. You will enable the seamless flow of financial and operational data from source systems to decision-makers, eliminating the technical bottlenecks that delay critical business insights.
Key Responsibilities:
• Infrastructure Architecture: Design and implement the core architecture for the company’s business-analytics data ecosystem, covering the end-to-end path from data in source systems to delivery in visualizations.
• High-Scale Ingestion: Build robust Azure Data Factory pipelines to pull data from disparate source systems (Salesforce, ServiceNow, D365) into the Azure data lake.
• Standards & Governance: Set the technical standards for the Business Operations engineering team. You will define how the team handles CI/CD, version control, and data quality testing at the ingestion level.
• System Reliability: Ensure the raw and bronze data layers are available and up to date, minimizing downtime.
Required Qualifications:
• Experience: 7+ years of progressive experience in Data Engineering, with a specific focus on designing and building cloud infrastructure and high-volume data movement.
• Cloud Infrastructure Architecture: Deep expertise in architecting the Azure Data Stack (Azure Data Factory, Azure Data Lake Storage, Databricks).
• High-Scale Data Ingestion: Proven ability to build robust, scalable ELT/ETL pipelines using Azure Data Factory and Databricks.
• Advanced Python & Spark: Expert-level proficiency in Python and Apache Spark for distributed data processing.
• Governance & Security: Experience implementing enterprise-grade data governance and data lineage.
• DevOps & CI/CD: Strong experience implementing CI/CD pipelines (Azure DevOps or GitHub Actions) for data infrastructure.
• LLM Application: Experience leveraging LLMs and AI-assisted development tools to accelerate data engineering workflows, improve code quality, and automate repetitive technical tasks.
Education: Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a closely related technical discipline.
Preferred Qualifications:
• Master’s or equivalent in Computer Science, Engineering, or Cloud/Data Systems
• Prior experience working in a technology company or SaaS environment
Apply now
RemoteITJobs.app