The Shift From Data Pipelines to Data Products

Simon Späti
14 min read · Jun 20, 2022

Data consumers, such as data analysts and business users, care mostly about the production of data assets. Data engineers, on the other hand, have historically focused on modeling the dependencies between tasks (rather than data assets) with an orchestration tool. How can we reconcile both worlds?

This article reviews open-source data orchestration tools (Airflow, Prefect, Dagster) and discusses how they introduce data assets as first-class objects. We also cover why a declarative approach with higher-level abstractions leads to faster development cycles, greater stability, and a better understanding of what’s going on before runtime. We explore five different abstractions (jobs, tasks, resources, triggers, and data products) and ask whether they help us build a Data Mesh.
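
To make the asset-centric idea concrete, here is a minimal sketch using Dagster’s software-defined assets. The asset names and the CSV source are illustrative assumptions, not taken from the article:

```python
# Minimal sketch of asset-centric orchestration with Dagster's @asset API.
# The asset names and the CSV source below are illustrative assumptions.
import pandas as pd
from dagster import asset


@asset
def raw_orders() -> pd.DataFrame:
    # In a real pipeline this would read from a source system.
    return pd.read_csv("orders.csv")


@asset
def daily_revenue(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Dagster infers the dependency on raw_orders from the parameter name,
    # so the full asset graph is known declaratively, before anything runs.
    return raw_orders.groupby("order_date")["amount"].sum().reset_index()
```

Because the graph is derived from the asset definitions themselves, the orchestrator knows what each run will produce before it runs, which is exactly the pre-runtime understanding mentioned above.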

What Is a Data Orchestrator?

A data orchestrator models dependencies between different tasks in heterogeneous environments end-to-end. It handles integrations with legacy systems, cloud-based tools, data lakes, and data warehouses. It invokes computation, such as running your business logic in SQL or Python and applying ML models, at the right time, triggered either on a schedule or by custom-defined logic.
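
For contrast with the asset-centric sketch above, here is what the traditional task-centric style looks like in Airflow; the DAG and task names are illustrative assumptions:

```python
# A minimal, task-centric Airflow DAG: dependencies are declared between
# tasks, and a time-based trigger (the schedule) decides when they run.
# DAG and task names here are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def transform():
    print("apply business logic in Python or SQL")


with DAG(
    dag_id="orders_pipeline",
    start_date=datetime(2022, 6, 1),
    schedule_interval="@daily",  # time-based trigger
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # task dependency, not an asset dependency
```

Note that the orchestrator only knows that extract runs before transform on a daily schedule; which data assets these tasks actually produce stays opaque to it.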

What makes an orchestrator stand out is that it lets you see when things are happening…
