Dashboard Bloat: A Practical Audit & Archive Framework

If your BI workspace feels like a thrift store—duplicates, abandoned experiments, and "final_v7" tabs—adoption will stall. Here’s the pruning framework I use each quarter to restore clarity without political fallout.

Symptoms of Bloat

Duplicate dashboards answering the same question with slightly different filters, abandoned experiments nobody has opened in months, refresh failures that linger because no one owns the asset, and users retreating to shadow spreadsheets because they can't tell which dashboard is canonical.

The SCORE Model

I triage each candidate dashboard with a simple label. Goal: process 50–100 assets in a day, not start an archaeology expedition.

S = Standard: core operational or executive dashboard with recurring use.
C = Candidate: useful, but overlapping with another asset or in need of consolidation.
O = Obsolete: built on outdated metrics or a deprecated pipeline.
R = Reference: historical snapshot; keep, but frozen.
E = Experiment: temporary artifact from an analysis sprint.
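The labels can be encoded for a scripted first pass; a minimal sketch in Python, where the age thresholds (90 and 180 days) are illustrative assumptions, not part of the framework, and a human still makes the final call:

```python
from datetime import date, timedelta
from enum import Enum

class Score(Enum):
    STANDARD = "S"    # core operational or exec recurring use
    CANDIDATE = "C"   # useful, but overlapping; consolidate
    OBSOLETE = "O"    # outdated metrics or deprecated pipeline
    REFERENCE = "R"   # historical snapshot; keep frozen
    EXPERIMENT = "E"  # temporary, tied to an analysis sprint

def suggest_score(last_viewed: date, refresh_ok: bool, today: date) -> Score:
    """First-pass suggestion only; thresholds are placeholders to tune."""
    age = today - last_viewed
    if not refresh_ok and age > timedelta(days=180):
        return Score.OBSOLETE
    if age > timedelta(days=90):
        return Score.CANDIDATE
    return Score.STANDARD
```

Pre-tagging this way means the human pass is mostly confirming or overriding suggestions rather than starting cold.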

1. Inventory Pull

Export metadata (owner, last viewed, refresh status). In Power BI / Tableau / Looker this is scriptable. Sort by last_viewed ascending—biggest wins live there.
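Once the export lands as a CSV, the sort is trivial; a minimal sketch, where the column names (title, last_viewed) are assumptions to map onto whatever your BI tool actually exports:

```python
import csv

def load_inventory(path: str) -> list[dict]:
    """Read an exported metadata CSV into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def stale_first(rows: list[dict], limit: int = 200) -> list[dict]:
    """Least-recently-viewed dashboards first: the biggest archiving wins
    live at the top. ISO-format dates sort correctly as strings."""
    return sorted(rows, key=lambda r: r["last_viewed"])[:limit]
```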

2. Rapid Classification

I skim structure, filters, and refresh cadence. I don't validate numbers yet; just tag.

3. Overlap Pass

Group Candidate dashboards covering similar KPIs. Pick a survivor, archive the rest, and link to the survivor in each archived dashboard's description so dependency anxiety fades.
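One rough way to find these groups is Jaccard similarity on each dashboard's KPI set; the 0.5 threshold and the greedy clustering below are illustrative assumptions, not a tuned algorithm:

```python
def kpi_overlap(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two dashboards' KPI sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def overlap_groups(dashboards: dict[str, set[str]], threshold: float = 0.5):
    """Greedily cluster dashboards whose KPI sets overlap enough.
    Each dashboard joins the first group whose seed it resembles."""
    groups: list[list[str]] = []
    for name, kpis in dashboards.items():
        for group in groups:
            if kpi_overlap(kpis, dashboards[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Each multi-member group is a consolidation conversation; singletons pass through untouched.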

4. Standardize Survivors

Each surviving dashboard gets a named owner, a verified refresh schedule, and a description stating what question it answers, so the next audit starts from clean metadata.

5. Communicate Archive

I post a short changelog in Slack/Teams: what was archived, what replaced it, and how to request restoration (rare). Transparency prevents the “you deleted my thing” drama.
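The changelog post itself can be templated so it stays consistent across cycles; a minimal sketch, where the support channel name is a placeholder:

```python
def archive_changelog(archived: list[tuple[str, str]]) -> str:
    """Format a short Slack/Teams post: what was archived, what replaced it,
    and how to request restoration. Input is (old, survivor) pairs."""
    lines = ["Quarterly dashboard cleanup:"]
    for old, survivor in archived:
        lines.append(f"- Archived '{old}' -> use '{survivor}' instead")
    # "#bi-support" is a placeholder; use your team's real channel.
    lines.append("To request a restore (rare), ping #bi-support.")
    return "\n".join(lines)
```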

Sample Metadata Query (Looker / Postgres)

-- Table and column names vary by deployment; adapt to your metadata schema.
SELECT title, last_viewed_at, user_id, space_id
FROM looker_content
WHERE type = 'dashboard'
ORDER BY last_viewed_at ASC NULLS FIRST  -- never-viewed dashboards surface first
LIMIT 200;

Guardrails

Dashboards age like produce, not wine. Pretending otherwise creates shadow spreadsheets. Archive rather than hard-delete, keep Reference snapshots frozen, and honor restore requests quickly so trust in the process holds.

ROI Tracking

After a cleanup cycle I log: dashboards archived, duplicates merged, refresh failures reduced, average load time delta. That tangible win buys the next cycle.
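Those before/after deltas are mechanical to compute; a sketch with hypothetical field names matching the metrics above:

```python
from dataclasses import dataclass

@dataclass
class CycleStats:
    archived_total: int       # cumulative dashboards archived
    duplicates_merged: int    # cumulative duplicates consolidated
    refresh_failures: int     # currently failing refreshes
    avg_load_seconds: float   # mean dashboard load time

def roi_summary(before: CycleStats, after: CycleStats) -> dict[str, float]:
    """Deltas to log after each cleanup cycle."""
    return {
        "dashboards_archived": after.archived_total - before.archived_total,
        "duplicates_merged": after.duplicates_merged - before.duplicates_merged,
        "refresh_failures_reduced": before.refresh_failures - after.refresh_failures,
        "load_time_improvement_s": before.avg_load_seconds - after.avg_load_seconds,
    }
```

Positive numbers across the board make the case for the next cycle without a slide deck.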

Takeaways

Run this twice a year and your BI layer stops feeling like a landfill.