Microsoft Fabric • Governance

Governance Framework
for Microsoft Fabric

A visual guide to the 7 governance domains — from tenant settings to deployment pipelines.

Tenant Settings & Administration

The foundation of your Fabric environment. Tenant settings control who can do what across the entire organization — from workspace creation to external sharing to AI features.

✅ What Good Looks Like

All tenant settings are documented with an owner, rationale, and review date
Workspace creation is restricted to approved groups — no sprawl
Feature toggles are scoped to security groups, not enabled tenant-wide
Audit logs are enabled with retention beyond 90 days
AI and Copilot features are governed with clear usage policies
A monthly review cadence catches drift before it becomes a problem

⚠️ Common Pitfalls

Leaving defaults on

Fabric ships with permissive defaults. If you don't review them, every user can create workspaces, share externally, and use AI features without guardrails.

No admin role governance

Too many Fabric Admins means too many people can change tenant-wide settings. Limit admin roles and use PIM for just-in-time access.

Audit log blind spot

The default 90-day retention means you lose compliance evidence. Export to Log Analytics or a storage account for long-term retention.

Feature toggle whack-a-mole

Microsoft releases new features monthly. Without a review process, new toggles get enabled without evaluation — creating shadow governance gaps.

💡 Recommendations

1

Document Before You Change

Export a baseline of all tenant settings before making changes.
This becomes your audit trail and rollback reference.
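The export-then-diff idea can be sketched in a few lines of Python. This assumes each snapshot is a list of records with `settingName` and `enabled` fields, which is roughly the shape the Fabric Admin REST API's tenant-settings list returns — verify the field names against your actual export before relying on it:

```python
# Sketch: diff two tenant-settings snapshots to spot drift between reviews.
# Field names (settingName, enabled) are assumed from the Admin REST API's
# tenant-settings export; adapt them to your actual snapshot format.

def diff_tenant_settings(baseline, current):
    """Return {settingName: (old_enabled, new_enabled)} for changed settings."""
    old = {s["settingName"]: s["enabled"] for s in baseline}
    new = {s["settingName"]: s["enabled"] for s in current}
    return {
        name: (old.get(name), new.get(name))
        for name in old.keys() | new.keys()
        if old.get(name) != new.get(name)
    }

baseline = [
    {"settingName": "TenantSettingsExport", "enabled": True},
    {"settingName": "ExternalSharing", "enabled": False},
]
current = [
    {"settingName": "TenantSettingsExport", "enabled": True},
    {"settingName": "ExternalSharing", "enabled": True},  # drift since baseline
]

print(diff_tenant_settings(baseline, current))
# {'ExternalSharing': (False, True)}
```

Run the diff at each monthly review; a non-empty result is exactly the "drift" the cadence above is meant to catch.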

2

Scope, Don't Blanket

Use security groups to scope tenant settings.
Instead of enabling a feature for the entire org, enable it for a pilot group first.

3

Treat Settings as Code

At maturity, manage tenant settings via Infrastructure-as-Code.
Changes go through pull requests with approval, not manual portal clicks.

4

AI Features Need a Policy

Copilot and AI features consume capacity units and can expose data.
Define who can use them, in which workspaces, and monitor CU impact.

📖 Learn More on Microsoft Learn →

Workspace Governance

Workspaces are the organizational unit of Fabric. How you structure them determines your security boundaries, collaboration patterns, and ability to scale.

✅ What Good Looks Like

Every workspace has a clear owner, purpose, and documented naming convention
Workspace structure follows a deliberate pattern (per data product, per domain, or per layer)
Medallion architecture (Bronze → Silver → Gold) is implemented with clear layer boundaries
Stale workspaces are identified and decommissioned through a lifecycle process
Semantic models are endorsed or certified — users know which data to trust
Workspace provisioning is automated with templates, not ad-hoc

⚠️ Common Pitfalls

Workspace sprawl

Without governance, teams create workspaces ad-hoc. Within months you have hundreds of 'Test', 'Copy of', and abandoned workspaces nobody owns.

No naming convention

Without a naming standard, you can't tell what a workspace contains, who owns it, or what environment it belongs to.

Flat structure

Putting everything in one workspace means everyone has access to everything. No separation between raw data, curated models, and consumer reports.

Orphaned semantic models

Without endorsement governance, users don't know which models are trusted. They build reports on draft or experimental data.

💡 Recommendations

1

Choose a Workspace Pattern

Pattern A (Recommended): One workspace per data product — great for data mesh.
Pattern B: Per medallion layer — for strict regulatory needs.
Pattern C: Consolidated — only for small pilots.

2

Naming Convention

Format: {Domain} - {Layer/Purpose} - {Description} [{Env}]
Example: Finance - Gold - Revenue Reporting [Prod]
Enforce via automation, not email reminders.
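"Enforce via automation" can be as simple as a validation function in your provisioning script. A minimal sketch — the allowed layers and environment codes below are illustrative, so substitute your own lists:

```python
import re

# Sketch: validate workspace names against the {Domain} - {Layer/Purpose} -
# {Description} [{Env}] convention. Allowed layers/envs here are examples only.
NAME_PATTERN = re.compile(
    r"^(?P<domain>[A-Za-z]+) - (?P<layer>Bronze|Silver|Gold|Reporting) - "
    r"(?P<desc>[\w ]+) \[(?P<env>Dev|Test|Prod)\]$"
)

def is_valid_workspace_name(name: str) -> bool:
    return NAME_PATTERN.match(name) is not None

print(is_valid_workspace_name("Finance - Gold - Revenue Reporting [Prod]"))  # True
print(is_valid_workspace_name("Copy of Test workspace"))                     # False
```

Wire the check into your automated provisioning flow so a non-conforming name is rejected before the workspace is ever created.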

3

Medallion Layers

Bronze = raw ingestion (land as-is).
Silver = cleansed and conformed.
Gold = curated business models ready for consumption.
Each layer = its own workspace for access control.

4

Lifecycle Management

Review workspaces quarterly.
Archive unused workspaces after 90 days of inactivity.
Delete after 180 days.
Automate notifications to owners.
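The 90/180-day policy above reduces to a small classification function that an automation job can run against each workspace's last-activity date. A sketch, with the thresholds taken straight from the policy:

```python
from datetime import date

# Sketch: classify a workspace by days since last activity, mirroring the
# archive-after-90 / delete-after-180 policy described above.
def lifecycle_action(last_activity: date, today: date) -> str:
    idle_days = (today - last_activity).days
    if idle_days >= 180:
        return "delete"
    if idle_days >= 90:
        return "archive"
    return "keep"

today = date(2025, 6, 1)
print(lifecycle_action(date(2025, 5, 20), today))  # keep
print(lifecycle_action(date(2025, 2, 1), today))   # archive
print(lifecycle_action(date(2024, 10, 1), today))  # delete
```

The same function can drive the owner notifications: "archive" triggers a warning email, "delete" triggers a final notice before removal.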

📖 Learn More on Microsoft Learn →

Domains & Data Mesh

Domains enable federated governance — letting business units own their data while the central team sets guardrails. This is how Fabric scales beyond one team.

✅ What Good Looks Like

3–7 top-level domains aligned to business units or value streams
Every workspace is assigned to a domain — no orphaned workspaces
Domain admins handle day-to-day governance; central team sets policies
Tenant settings are delegated to domains where appropriate
Data products are endorsed and certified at the domain level
Subdomains provide granularity without over-complicating the structure

⚠️ Common Pitfalls

Too many domains

Creating a domain per team or project leads to fragmentation. Start with 3–7 aligned to major business functions.

Domains without owners

A domain without an admin is just a label. Every domain needs a named administrator who is accountable.

Central team bottleneck

If the central team must approve every workspace, every access request, and every deployment, you've created a bottleneck.

Ignoring certification

Without endorsement and certification, consumers can't distinguish authoritative data from experimental datasets.

💡 Recommendations

1

Align to Business, Not Org Chart

Domains should map to business functions (Finance, Sales, Operations) or value streams.
Not to reporting lines — org charts change; business functions are stable.

2

Three-Tier Delegation

Fabric Admin → sets tenant-level guardrails.
Domain Admin → manages workspaces, access, and settings within their domain.
Domain Contributor → self-service within boundaries.

3

Certification Framework

Two levels:
Promoted — owner recommends it.
Certified — formally reviewed against quality criteria.
Certification should require documented lineage, refresh reliability, and owner accountability.

4

Default Domain

Configure a default domain so new workspaces aren't orphaned.
Route them to a 'Sandbox' or 'Unassigned' domain for triage.

📖 Learn More on Microsoft Learn →

Roles & Access Control

Who can see what, who can change what, and how you enforce least privilege. This domain covers workspace roles, row-level security, sensitivity labels, and identity governance.

✅ What Good Looks Like

Workspace roles use security groups — never individual accounts
Least privilege is the default: most users are Viewers, not Members
Sensitivity labels (Public → Highly Confidential) are applied to all data assets
Row-Level Security filters data by user context — no 'everyone sees everything'
Access reviews happen quarterly with automated reminders via Entra ID
Admin access uses PIM — just-in-time elevation, not permanent assignment

⚠️ Common Pitfalls

Everyone is a Member

Making everyone a 'Member' role gives them edit access to all content. Most users should be Viewers.

Individual account assignments

Assigning access to john@company.com instead of 'Finance-Analysts' security group means no one knows who has access.

No sensitivity labels

Without labels, a confidential financial model looks the same as a public marketing dashboard.

RLS afterthought

Adding Row-Level Security after go-live means users have already seen data they shouldn't.

💡 Recommendations

1

Four Workspace Roles

Admin — full control, including managing access and deleting the workspace.
Member — create, edit, and share content; can add users with lower roles.
Contributor — create and edit content, but cannot share or manage access.
Viewer — read-only.
Map each to a security group.

2

Sensitivity Labels

Four tiers: Public, Internal, Confidential, Highly Confidential.
Enable label inheritance — when a Lakehouse is labeled 'Confidential', downstream reports inherit it.

3

Just-in-Time Access

Use Privileged Identity Management (PIM) for admin roles.
Admins activate their role for a time-limited window, with approval and audit trail.

4

RLS Design Patterns

Static RLS: map security groups to data filters (e.g., region = 'EMEA').
Dynamic RLS: use USERPRINCIPALNAME() for per-user filtering.
Always test with 'View as Role'.
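RLS itself is defined with DAX inside the semantic model, but the logic of the two patterns is easy to make concrete in a few lines of Python. Everything here — group names, the user-to-region table, the sample rows — is hypothetical:

```python
# Simulation only: real RLS lives in the semantic model as DAX role filters.
# Static RLS = one fixed filter per security group; dynamic RLS = a lookup
# keyed by the signed-in user (USERPRINCIPALNAME() in DAX).

STATIC_ROLE_FILTERS = {                 # hypothetical groups and filters
    "Sales-EMEA": {"region": "EMEA"},
    "Sales-APAC": {"region": "APAC"},
}

USER_REGION = {                         # hypothetical user-to-region mapping
    "ana@contoso.com": "EMEA",
    "ben@contoso.com": "APAC",
}

rows = [
    {"region": "EMEA", "revenue": 120},
    {"region": "APAC", "revenue": 90},
]

def static_filter(group: str, rows: list) -> list:
    wanted = STATIC_ROLE_FILTERS[group]
    return [r for r in rows if all(r[k] == v for k, v in wanted.items())]

def dynamic_filter(upn: str, rows: list) -> list:
    return [r for r in rows if r["region"] == USER_REGION.get(upn)]

print(static_filter("Sales-EMEA", rows))        # [{'region': 'EMEA', 'revenue': 120}]
print(dynamic_filter("ben@contoso.com", rows))  # [{'region': 'APAC', 'revenue': 90}]
```

Note the dynamic variant: a user missing from the mapping table sees nothing, which is the safe default — the same behavior you should verify in Fabric with 'View as Role'.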

📖 Learn More on Microsoft Learn →

Capacity & Cost Governance

Fabric uses a capacity-based model where compute is shared across workloads. Understanding CU consumption, throttling, and cost attribution is essential to avoid surprise bills.

✅ What Good Looks Like

Each domain or business unit has a defined capacity budget
The Capacity Metrics app is deployed and reviewed weekly
Throttling and smoothing policies are documented and understood
Dev/Test workloads run on separate, smaller capacities
Cost attribution uses showback or chargeback to drive accountability
Capacity scaling has governance guardrails — not a free-for-all

⚠️ Common Pitfalls

One shared capacity for everything

A single capacity means one runaway notebook can throttle everyone's reports. Separate at least by environment.

Ignoring the smoothing window

Fabric smooths CU consumption over 24 hours. Teams misread this as 'unlimited' until they hit rejection.

No cost visibility

Without showback, no one knows their workloads cost $8K/month.

Over-provisioning 'just in case'

Starting with F64 when F8 would suffice wastes thousands per month. Start small, monitor, right-size.

💡 Recommendations

1

Capacity Topology

Pattern 1: Single shared (NOT for prod).
Pattern 2 (Recommended): Per domain — enables cost attribution.
Pattern 3: Per workload type (interactive vs background).
Pattern 4: Per environment + domain (enterprise).

2

Monitor Before You Scale

Deploy the Capacity Metrics app on day one.
Review weekly. Scale based on data, not guesses.
F2 handles more than you'd expect for BI-only workloads.

3

Smoothing vs Throttling

Fabric's 24h smoothing window absorbs burst usage.
But if your average exceeds capacity:
1. Interactive queries get delayed (overage beyond ~10 min)
2. Interactive queries get rejected (beyond ~60 min)
3. Background jobs get rejected too (beyond ~24 h)
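The key intuition — a short burst is absorbed, a sustained average is not — can be shown with a toy calculation. The numbers below are illustrative (an F64 with one 10x burst hour), not real thresholds:

```python
# Sketch: how 24h smoothing turns a burst into an average. Capacity size and
# usage numbers are illustrative; the real throttling thresholds are expressed
# as minutes of future smoothed consumption.

CAPACITY_CU = 64            # e.g. an F64 capacity
WINDOW_HOURS = 24

def smoothed_utilization(hourly_cu_usage: list[float]) -> float:
    """Average CU draw over the smoothing window, as a fraction of capacity."""
    window = hourly_cu_usage[-WINDOW_HOURS:]
    return sum(window) / (len(window) * CAPACITY_CU)

# 23 quiet hours plus one 4x burst hour: the burst alone is 400% of capacity,
# but smoothed over the day it stays well under 100%.
usage = [16.0] * 23 + [256.0]
print(f"{smoothed_utilization(usage):.0%}")   # 41%
```

This is why teams misread smoothing as "unlimited": bursts vanish into the average — right up until the average itself exceeds capacity and the throttling stages kick in.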

4

Showback First

Start with showback: give each domain visibility into their CU consumption.
After 6–12 months of stable patterns, transition to chargeback for true cost accountability.
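A showback report starts as a simple rollup of consumption per domain. A sketch — the record field names are hypothetical, so map them from whatever your Capacity Metrics export actually contains:

```python
from collections import defaultdict

# Sketch: showback rollup — total CU-seconds per domain from per-item usage
# records. Field names here are hypothetical placeholders for the columns in
# your actual Capacity Metrics export.
def showback_by_domain(records):
    totals = defaultdict(float)
    for r in records:
        totals[r["domain"]] += r["cu_seconds"]
    return dict(totals)

records = [
    {"domain": "Finance", "item": "Revenue model refresh", "cu_seconds": 5400.0},
    {"domain": "Finance", "item": "AP dashboard queries",  "cu_seconds": 1200.0},
    {"domain": "Sales",   "item": "Pipeline notebook",     "cu_seconds": 8600.0},
]
print(showback_by_domain(records))
# {'Finance': 6600.0, 'Sales': 8600.0}
```

Publish the rollup to each domain monthly; once the numbers are stable and trusted, the same aggregation becomes the basis for chargeback.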

📖 Learn More on Microsoft Learn →

Data Security & Compliance

Protecting data at rest, in transit, and in use. This covers DLP policies, data classification, lineage, Purview integration, network security, and regulatory compliance.

✅ What Good Looks Like

All data assets are classified by sensitivity tier
DLP policies are active in enforcement mode with user-facing policy tips
Data lineage is tracked end-to-end from source to report
Microsoft Purview provides unified governance across Fabric and other platforms
Private endpoints protect sensitive workloads from public internet access
OneLake shortcuts and gateways are inventoried with credential rotation schedules

⚠️ Common Pitfalls

DLP in monitor-only forever

Many orgs enable DLP in 'monitor mode' and never transition to enforcement. Set a deadline (4–6 weeks) to move to blocking.

No data classification

Without classification, you can't apply the right protection. A 'Highly Confidential' dataset and a public reference table get the same (lack of) protection.

Lineage gaps

If data flows through external tools (ADF, Databricks) before landing in Fabric, built-in lineage breaks.

OneLake shortcut sprawl

Shortcuts let any workspace reference data from another. Without governance, you create invisible data dependencies.

💡 Recommendations

1

Classify at the Source

Apply sensitivity labels when data enters Fabric (Bronze layer).
Labels flow downstream through lineage.
Don't wait until the Gold layer — by then, unclassified data has already been exposed.

2

DLP Rollout Strategy

Week 1–2: Test mode (no user impact).
Week 3–4: Monitor mode (log violations).
Week 5+: Enforce with policy tips (block + educate).
Review false positives weekly.

3

Purview Integration

Fabric has built-in lineage, but Purview extends governance across Azure SQL, Databricks, Power BI, and external sources.
Register Fabric as a Purview source for a unified catalog.

4

Network Security Tiers

Tier 1: Public endpoints (default — fine for non-sensitive data).
Tier 2: Conditional Access policies (restrict by location/device).
Tier 3: Private Link + Managed VNet (for regulated industries).

📖 Learn More on Microsoft Learn →

Monitoring & Deployment

Operational excellence: how you deploy changes, monitor health, and ensure reliability. Covers Git integration, CI/CD, the Capacity Metrics app, and change management.

✅ What Good Looks Like

Fabric artifacts are in Git with a branching strategy (main/develop/feature)
Deployments go through a pipeline: Dev → Test → Prod with approval gates
The Capacity Metrics app tracks CU usage, refresh success, and query performance
Change management requires documented rationale and peer review
Automated testing validates data quality before production deployment
Monitoring dashboards provide proactive alerting, not reactive firefighting

⚠️ Common Pitfalls

No version control

Editing directly in the Fabric portal means no history, no rollback, no code review.

Manual deployments

Copy-pasting between Dev and Prod introduces human error. Missed configurations, wrong connection strings.

Metrics app not deployed

The Capacity Metrics app is free and gives you CU consumption, throttling events, and refresh success rates — yet many tenants never install it and fly blind.

No testing for data

Code gets unit tests; data gets nothing. Without data validation, bad data silently flows to production.

💡 Recommendations

1

Start with Deployment Pipelines

Fabric's built-in Deployment Pipelines are the easiest way to promote content.
Dev → Test → Prod with no code required.
Great for BI teams getting started.

2

Graduate to fabric-cicd

For engineering teams, the fabric-cicd Python library enables code-first deployments.
Use parameter.yml for environment configs.
Use lakehouse_id.yml for target mapping.
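For orientation, a parameter.yml typically contains find-and-replace rules keyed by target environment — roughly the shape below, per the fabric-cicd documentation at the time of writing. The GUIDs are placeholders and the exact schema may have evolved, so verify against the library's current docs:

```yaml
# parameter.yml — illustrative sketch only; check the fabric-cicd docs
# for the exact current schema. GUIDs below are placeholders.
find_replace:
  - find_value: "00000000-0000-0000-0000-000000000001"   # dev lakehouse GUID
    replace_value:
      TEST: "00000000-0000-0000-0000-000000000002"       # test lakehouse GUID
      PROD: "00000000-0000-0000-0000-000000000003"       # prod lakehouse GUID
```

On deployment, the library substitutes the environment-specific value, so the same committed source promotes cleanly from Dev to Test to Prod.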

3

Git Integration

Connect workspaces to Azure DevOps or GitHub.
Use feature branches for development, PRs for review, and main branch for production.
Commit from the workspace and update from Git to keep the two in sync.

4

Monitor Proactively

Deploy Capacity Metrics app. Set alerts for:
CU utilization > 80%
Refresh failure rate > 5%
Query duration > SLA threshold
Review weekly — don't wait for user complaints.
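The three alert rules above boil down to threshold checks over a metrics snapshot. A sketch — the field names and the SLA value are hypothetical, to be wired up to whatever your Capacity Metrics export or alerting tool provides:

```python
# Sketch: evaluate the alert rules above against a metrics snapshot.
# Field names and the SLA figure are hypothetical placeholders.
ALERT_RULES = {
    "cu_utilization":   lambda m: m["cu_utilization"] > 0.80,
    "refresh_failures": lambda m: m["refresh_failure_rate"] > 0.05,
    "slow_queries":     lambda m: m["p95_query_seconds"] > m["sla_seconds"],
}

def fired_alerts(metrics: dict) -> list[str]:
    return [name for name, rule in ALERT_RULES.items() if rule(metrics)]

snapshot = {
    "cu_utilization": 0.87,        # over the 80% threshold
    "refresh_failure_rate": 0.02,  # within tolerance
    "p95_query_seconds": 9.5,
    "sla_seconds": 10.0,
}
print(fired_alerts(snapshot))   # ['cu_utilization']
```

Run it on a schedule and route non-empty results to the owning team's channel — that is the difference between proactive alerting and waiting for user complaints.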

📖 Learn More on Microsoft Learn →