# Dashboard Metrics Ideation
## Status

Non-canonical exploration document.
This file captures product thinking and preserved conversation outcomes around how the Account Dashboard could evolve from a readiness surface into a more outcome-aware operational cockpit.
If parts of this document become approved behavior, promote them into:
- Account – Dashboard
- `docs/domains/**`
- `docs/architecture/**`
- `docs/workflows/**`
## Why this exploration exists

The current dashboard is strongest at showing:
- setup readiness
- coverage gaps
- next actions
That is useful, but it does not yet express the platform’s bigger value proposition:
- understanding customer context from comments, reviews, and conversations
- triaging those inputs well
- responding with AI grounded in the right knowledge base
- improving user and brand outcomes over time
This exploration exists to preserve thinking about how the dashboard could later represent that value in a meaningful way.
## Product framing

The dashboard should eventually help operators answer two linked layers of questions:
### 1) Readiness and control

- Is the system configured correctly?
- Where are the current setup or coverage blockers?
- What should be fixed next?
### 2) Customer signals and outcomes

- What are customers expressing right now?
- Where is dissatisfaction rising?
- Which issues are being resolved successfully?
- Where is the system failing to triage, ground, or escalate correctly?
The dashboard should remain an actionable operational surface, not turn into a generic BI report.
## Metric framework

### Signal metrics

What customers are expressing across reviews, comments, and conversations.
- inbound volume
- negative sentiment rate
- severe negative review/comment count
- dissatisfaction rate
- praise or success-signal rate
- recurring complaint themes
- unresolved follow-up rate
### System metrics

How well the platform understands, routes, and responds.
- triage confidence
- escalation rate
- false-positive escalation risk
- auto-handled vs human-escalated share
- response time
- reply acceptance / edit rate
- knowledge base usage rate
- blocked reply rate due to missing knowledge
### Outcome metrics

Whether the platform actually improved the result.
- issue resolution rate
- sentiment improvement
- review recovery rate
- negative-to-neutral conversion
- negative-to-positive conversion
- reopened issue rate
- successful human handoff rate
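The three metric families above could be captured in a shared taxonomy so that any future UI and API code agree on which family a metric belongs to. A minimal TypeScript sketch; all identifiers here are illustrative assumptions, not names from the current codebase:

```typescript
// Illustrative taxonomy only — these names are hypothetical, not existing app code.
type MetricFamily = "signal" | "system" | "outcome";

interface DashboardMetric {
  id: string;            // e.g. "negative_sentiment_rate"
  family: MetricFamily;  // which of the three families it belongs to
  label: string;         // operator-facing name
  value: number;
  unit: "count" | "rate";
}

// One example metric from each family, mirroring the lists above.
const examples: DashboardMetric[] = [
  { id: "negative_sentiment_rate", family: "signal", label: "Negative sentiment rate", value: 0.12, unit: "rate" },
  { id: "escalation_rate", family: "system", label: "Escalation rate", value: 0.04, unit: "rate" },
  { id: "issue_resolution_rate", family: "outcome", label: "Issue resolution rate", value: 0.81, unit: "rate" },
];

// Grouping by family becomes a trivial layout pass for the dashboard.
const byFamily = (family: MetricFamily): DashboardMetric[] =>
  examples.filter((m) => m.family === family);
```

Tagging each metric with its family at the type level keeps the signal/system/outcome distinction from eroding as metrics are added later.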
## Recommended future dashboard concepts

### Concept A — Customer signals strip

A compact row of high-signal cards such as:

- Dissatisfaction detected
- Resolved or recovered
- Escalated for human review
- Knowledge gaps affecting replies
This should be clearly secondary to the core readiness strip unless and until real analytics exist.
### Concept B — Friction and opportunity panel

A ranked module showing:
- top negative themes
- rising complaint clusters
- unresolved dissatisfaction pockets
- top praise or success clusters
This is likely more useful than a decorative chart because it remains operator-oriented.
### Concept C — Outcome trend preview

A lightweight trend area for:
- negative volume trend
- resolution trend
- review recovery trend
- escalation trend
This should only become canonical when backed by real history or aggregate APIs.
## Use cases this could support

- A restaurant brand wants to see how many low-rating reviews arrived this week and how many were recovered after response.
- A hotel group wants to see where dissatisfaction is rising and which properties have the slowest response or poorest recovery.
- A retail operator wants to identify recurring customer friction topics that the KB does not yet cover.
- A CX leader wants to evaluate whether AI routing and response are improving outcomes over time rather than merely increasing automation.
## Current implementation reality

Important limitation:
The current app implementation does not yet provide real sentiment, response-outcome, or time-series dashboard analytics from the existing dashboard data pipeline.
Current dashboard truth is mostly:
- workspace readiness
- channel linkage and activation
- KB completeness
- agent activation and coverage
- lightweight pseudo-activity derived from agent updates
That means any preview of customer-signal or outcome metrics in the current UI must be clearly marked as:
- concept preview
- demo data
- future metrics model
and must not be mistaken for real production analytics.
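One way to make the concept-preview versus real-data distinction hard to miss is to carry provenance on every metric value, so the UI must decide how to render it. A hedged TypeScript sketch; the field names are assumptions, not current app contracts:

```typescript
// Provenance tag forces the rendering layer to treat non-live values explicitly.
type MetricProvenance = "live" | "demo" | "concept-preview";

interface TaggedMetric {
  id: string;
  value: number;
  provenance: MetricProvenance;
}

// Render guard: anything that is not live data must show a preview badge.
function needsPreviewBadge(metric: TaggedMetric): boolean {
  return metric.provenance !== "live";
}

// Example: a demo-data card for the customer-signals strip.
const demoCard: TaggedMetric = {
  id: "dissatisfaction_detected",
  value: 7,
  provenance: "demo",
};
```

With this shape, demo numbers cannot silently blend into real setup metrics: forgetting to set `provenance` is a type error, not a labeling oversight.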
## Promotion candidates

The following ideas are good candidates for later promotion:
- formal dashboard metric taxonomy: signal vs system vs outcome metrics
- route-level dashboard requirement for a secondary customer-signals region
- future aggregate API payload groups for dashboard metrics
- workflow and analytics instrumentation requirements to make these metrics real
## Promotion decision

This section records the current decision about what should graduate into future platform requirements versus what should remain exploratory for now.

### Promote toward canonical requirements

These are strong enough to guide future implementation planning:
- the dashboard should keep the distinction between:
  - readiness metrics
  - customer-signal metrics
  - outcome metrics
- the dashboard may include a future customer-signals layer, but it must remain secondary to operational readiness and next-action surfaces
- the future dashboard aggregate should support at least:
  - signal summary
  - system/triage summary
  - outcome summary
  - top friction themes
  - KB gap signals
- instrumentation must eventually support:
  - dissatisfaction detected
  - escalation triggered
  - KB gap detected
  - response sent / auto-handled / human-assisted
  - outcome resolved / recovered / reopened
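The aggregate payload groups and instrumentation events above can be expressed as one contract sketch. This is a hypothetical shape under the assumption of a single dashboard-aggregate response; none of these names exist in the app today:

```typescript
// Hypothetical future aggregate shape — not an existing API contract.
interface DashboardAggregate {
  signalSummary: { inboundVolume: number; negativeSentimentRate: number };
  systemSummary: { escalationRate: number; autoHandledShare: number };
  outcomeSummary: { resolutionRate: number; reopenedRate: number };
  topFrictionThemes: string[];
  kbGapSignals: string[];
}

// The enabling analytics events, as a discriminated union.
type AnalyticsEvent =
  | { type: "dissatisfaction_detected"; channelId: string }
  | { type: "escalation_triggered"; reason: string }
  | { type: "kb_gap_detected"; topic: string }
  | { type: "response_sent"; mode: "auto" | "human-assisted" }
  | { type: "outcome_recorded"; outcome: "resolved" | "recovered" | "reopened" };

// Example derivation: the auto vs assisted handling split from raw events.
function autoHandledShare(events: AnalyticsEvent[]): number {
  let auto = 0;
  let total = 0;
  for (const e of events) {
    if (e.type === "response_sent") {
      total += 1;
      if (e.mode === "auto") auto += 1;
    }
  }
  return total === 0 ? 0 : auto / total;
}
```

Deriving each summary field from a named event type is what makes the Phase 1 "minimum enabling analytics events" requirement concrete: a metric with no producing event cannot ship.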
### Keep exploratory for now

These are promising, but still need more product validation before becoming requirements:
- exact top-card KPI set for the customer-signals strip
- the final naming of concepts like *resolved*, *recovered*, or *human saves*
- whether trend mini-charts belong on the dashboard or a drill-in page
- whether praise/success signals deserve primary placement or secondary placement
- how much of this belongs on Dashboard versus a dedicated quality / analytics view
### Defer until stronger data and policy contracts exist

These should not become dashboard requirements yet:
- false-positive vs false-negative triage quality scoring
- sentiment-lift or recovery formulas that depend on historical attribution logic
- brand or revenue impact claims
- cross-channel benchmarking between business units without governance and normalization rules
- ROI metrics such as time saved or churn value recovered
## Recommended future release path

### Phase 1

Make the following canonical and implementation-ready:
- metric taxonomy
- minimum dashboard payload groups
- minimum enabling analytics events
- explicit route-level support for a future customer-signals layer
### Phase 2

Implement real signal and system metrics first:
- dissatisfaction detected
- high-risk escalations
- KB gaps affecting replies
- response handling split: auto vs assisted vs escalated
- top friction themes
### Phase 3

Add outcome metrics only when the underlying attribution model is trustworthy:
- resolved / recovered
- sentiment improvement
- review recovery
- reopened issue rate
- successful human handoff
## Guardrails

- Do not add decorative graphs without a clear operator job.
- Do not mix fake outcome numbers into real setup metrics without explicit labeling.
- Do not let the dashboard drift into a generic analytics page.
- Keep the dashboard focused on operational clarity, prioritization, and next action.