Convergent Business Technologies
Case study · Non-Profit

30 teams on one data estate

Designed and delivered an automated, governed data pipeline for a global development and humanitarian organization — consolidating fragmented Excel-based workflows across multiple functional sections and regions into a centralized Microsoft Fabric warehouse with row-level security and full data lineage.

The outcome
30 teams on one data estate
The brief

Client: Global development and humanitarian organization
Industry: Non-Profit
Service: Data Foundations
Engagement: Feb 2025 — Nov 2025 (technology partner)

What wasn’t working.

A data value-chain assessment inside a large global organization surfaced a familiar problem at scale: multiple functional sections, operating across several regions, had each built their own data workflows over time. The result was a landscape of disconnected Excel-based silos, inconsistent data quality, and limited visibility across teams.

Staff had adapted to the limitations — but the organization recognized it wasn't unlocking the analytical value its data should be producing. Cross-sectional analysis was nearly impossible. Reporting was manual and slow. Without a governed central repository, trust in the numbers varied depending on which team produced them.

Three requirements shaped the brief:

  • Automation over manual effort: reliable scheduled pipelines replacing Excel-heavy ingestion.
  • Governance and access control: row-level security and lineage tracking were non-negotiable given data sensitivity across regions and functions.
  • Harmonization: shared dimensions across datasets so cross-functional analysis actually works.

How we shipped it.

We built a centralized, governed data warehouse on Microsoft Fabric — designed from the outset for automation, data quality, and controlled access.

Rather than retrofit governance after the fact, we embedded it into the foundation: standardized quality checks at ingestion, Purview-based lineage and cataloguing, and row-level security from day one. Harmonized dimensions were defined up front so that cross-sectional analysis became a first-class capability, not an afterthought.
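To make the idea of "quality checks at ingestion" concrete, here is a minimal Python sketch of the kind of validation a pipeline can apply to rows parsed from an Excel source before they land in the warehouse. The function name, column names, and rules below are illustrative assumptions, not the production implementation.

```python
from datetime import datetime

# Illustrative schema: these column names are assumptions, not the real source layout.
REQUIRED_COLUMNS = {"record_id", "region", "section", "report_date", "amount"}

def validate_rows(rows):
    """Run ingestion-time quality checks on rows parsed from an Excel export.

    Returns (clean_rows, issues): rows that pass every check, plus a list of
    human-readable issue strings for anything quarantined for review.
    """
    clean, issues, seen_ids = [], [], set()
    for i, row in enumerate(rows):
        # Completeness: every required column must be present and non-empty.
        missing = REQUIRED_COLUMNS - {k for k, v in row.items() if v not in (None, "")}
        if missing:
            issues.append(f"row {i}: missing {sorted(missing)}")
            continue
        # Uniqueness: reject duplicate business keys.
        if row["record_id"] in seen_ids:
            issues.append(f"row {i}: duplicate record_id {row['record_id']}")
            continue
        # Type/format: dates and amounts must parse.
        try:
            datetime.strptime(row["report_date"], "%Y-%m-%d")
            float(row["amount"])
        except ValueError:
            issues.append(f"row {i}: unparseable date or amount")
            continue
        seen_ids.add(row["record_id"])
        clean.append(row)
    return clean, issues
```

Running checks like these at the point of ingestion, rather than at report time, is what reduces the manual reconciliation effort described above: bad rows are quarantined with a reason instead of silently flowing into dashboards.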

The engagement also included hands-on enablement — field trainings delivered to users across regions so the platform would be adopted, not just deployed.

In the box at go-live.

  • Automated ingestion pipelines pulling Excel-based source files into a centralized data lake on Microsoft Fabric OneLake.
  • Standardized data quality checks applied at ingestion to catch inconsistencies early and reduce manual reconciliation effort.
  • Harmonized dimensional model enabling cross-sectional analysis across previously siloed functional areas.
  • Central analytical repository with Power BI dashboards, access control, and row-level security aligned to organizational roles and regions.
  • Fabric-native governance via Microsoft Purview — full data lineage, cataloguing, and access auditability.
  • Four field training sessions delivered to end-user teams across regions to drive adoption.
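The row-level security in the list above is enforced natively by Power BI and Fabric role definitions, not application code. As an illustration of the filtering rule only, here is a minimal Python sketch assuming a user-to-allowed-regions assignment table; the mapping, user names, and column name are hypothetical.

```python
# Hypothetical user -> allowed-regions mapping. In the real platform this
# lives in Power BI row-level security roles, not in application code.
USER_REGIONS = {
    "analyst.emea@example.org": {"EMEA"},
    "global.lead@example.org": {"EMEA", "APAC", "AMER"},
}

def visible_rows(rows, user):
    """Return only the rows whose region the given user is entitled to see.

    Unknown users get an empty set of regions, i.e. deny by default.
    """
    allowed = USER_REGIONS.get(user, set())
    return [row for row in rows if row["region"] in allowed]
```

The deny-by-default behavior for unmapped users mirrors the governance posture described in the brief: access is granted by explicit role and region assignment, never implied.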

The number.

4 regional trainings delivered
1 governed data estate

A single governed analytical platform now serves 30 teams across the organization, replacing fragmented Excel workflows with a unified, multi-region repository. Cross-sectional analysis — previously impossible without manual consolidation across teams — is now a first-class capability. The implementation established a reusable architectural pattern for future data initiatives, delivered with row-level security and full lineage appropriate for a sector where data sensitivity and accountability are high.

Technology stack
Microsoft Fabric · OneLake · Data Factory · Power BI · Microsoft Purview · Python · Excel

What’s the number you want to move?

Thirty minutes with a senior consultant. No pitch deck.

Book a Discovery Call →