Why Managing Data Projects Requires Engineering Judgement
Delivery decisions in data programmes need more than process; they require insight into how systems behave.
Over the years, I’ve seen many data projects that were well run on paper.
They had clear plans, regular reporting, engaged stakeholders, and capable teams. Yet despite all of that, they struggled to deliver data that people genuinely trusted or used.
When this happens, it’s rarely about a lack of effort or discipline. More often, delivery decisions are being made without enough engineering judgement about how data systems really behave.
Data work behaves differently
One of the first things you learn when working closely with data platforms is that data projects don’t behave like most traditional IT projects.
In application or infrastructure delivery, there’s often an assumption that requirements can be stabilised early and that progress will be broadly linear. In data engineering and analytics, that assumption rarely holds.
The hardest problems tend to emerge after something is delivered:
- Source data behaves inconsistently
- Business logic evolves once stakeholders see real outputs
- Edge cases appear at scale
- Performance and cost trade-offs become visible
None of this is unusual. It’s simply the nature of working with data.
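To make this concrete, here is a minimal sketch of what those post-delivery surprises look like in code. The field names and rules are hypothetical (real checks would typically run in the warehouse, for example as dbt tests); the point is that these issues are flagged for judgement rather than silently dropped.

```python
def check_orders(rows):
    """Return a list of issue descriptions for a batch of order rows.

    Hypothetical schema: each row is a dict with 'order_id' and 'amount'.
    """
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Source data behaving inconsistently: required keys missing.
        if row.get("order_id") is None:
            issues.append(f"row {i}: missing order_id")
            continue
        # Edge cases that appear at scale: duplicate identifiers.
        if row["order_id"] in seen_ids:
            issues.append(f"row {i}: duplicate order_id {row['order_id']}")
        seen_ids.add(row["order_id"])
        # Business logic that evolves: a negative amount may be a refund,
        # not an error -- a judgement call, so flag rather than reject.
        if row.get("amount", 0) < 0:
            issues.append(f"row {i}: negative amount {row['amount']}")
    return issues

batch = [
    {"order_id": 1, "amount": 20.0},
    {"order_id": 1, "amount": 5.0},   # duplicate id
    {"order_id": 2, "amount": -5.0},  # refund or error?
    {"amount": 10.0},                 # missing id
]
print(check_orders(batch))
```

None of these three flags is self-evidently a bug or a feature; deciding which is which is exactly the kind of engineering judgement the next section describes.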
Why engineering judgement matters
Engineering judgement, in this context, isn’t about writing code or choosing tools. It’s about understanding where uncertainty genuinely sits and how decisions made today shape outcomes later on.
It means recognising:
- Which decisions are reversible and which are not
- Where short-term shortcuts create long-term cost
- How data modelling choices constrain future analysis
- When an apparent delivery issue is really a data quality or ownership problem
Without this perspective, delivery conversations can become overly focused on surface indicators (plans, milestones, tasks completed) while deeper risks remain hidden.
That’s often why data projects appear healthy right up until stakeholders begin to lose confidence in the outputs.
The role of the project manager in data delivery
Effective data delivery requires project managers to go beyond coordination and reporting. It involves being willing to:
- Challenge assumptions about data readiness
- Ask why certain transformations exist, not just when they’ll be delivered
- Understand the implications of tooling choices (such as dbt or Snowflake) without treating them as silver bullets
- Recognise when progress is being blocked by modelling, ownership, or governance issues rather than delivery effort
This doesn’t mean every project manager needs to be a data engineer. But purely procedural approaches often fall short in data-led programmes.
Treat data platforms as long-lived products
Treating data platforms as long-lived products rather than finite projects encourages:
- Clear ownership over time
- More honest conversations about trade-offs
- Investment in models that can evolve as understanding improves
- Governance that supports delivery instead of constraining it
It also creates space to acknowledge uncertainty openly, rather than trying to plan it away.
Closing thought
Good project management remains essential. But in data programmes, it rarely works in isolation.
When delivery is informed by engineering judgement, and when uncertainty is accepted as part of the work, data projects are far more likely to produce outputs that are trusted, useful, and sustainable.
How have you seen this play out in your own data programmes?