Dashboard Metrics Ideation

Non-canonical exploration document.

This file captures product thinking and preserved conversation outcomes around how the Account Dashboard could evolve from a readiness surface into a more outcome-aware operational cockpit.

If parts of this document become approved behavior, promote them into the appropriate canonical requirement documents.

The current dashboard is strongest at showing:

  • setup readiness
  • coverage gaps
  • next actions

That is useful, but it does not yet express the platform’s bigger value proposition:

  • understanding customer context from comments, reviews, and conversations
  • triaging those inputs well
  • responding with AI grounded in the right knowledge base
  • improving user and brand outcomes over time

This exploration exists to preserve thinking about how the dashboard could later represent that value in a meaningful way.

The dashboard should eventually help operators answer two linked layers of questions.

Operational readiness:

  • Is the system configured correctly?
  • Where are the current setup or coverage blockers?
  • What should be fixed next?

Customer signals and outcomes:

  • What are customers expressing right now?
  • Where is dissatisfaction rising?
  • Which issues are being resolved successfully?
  • Where is the system failing to triage, ground, or escalate correctly?

The dashboard should remain an actionable operational surface, not turn into a generic BI report.

Signal metrics: what customers are expressing across reviews, comments, and conversations.

  • inbound volume
  • negative sentiment rate
  • severe negative review/comment count
  • dissatisfaction rate
  • praise or success-signal rate
  • recurring complaint themes
  • unresolved follow-up rate

System metrics: how well the platform understands, routes, and responds.

  • triage confidence
  • escalation rate
  • false-positive escalation risk
  • auto-handled vs human-escalated share
  • response time
  • reply acceptance / edit rate
  • knowledge base usage rate
  • blocked reply rate due to missing knowledge

Outcome metrics: whether the platform actually improved the result.

  • issue resolution rate
  • sentiment improvement
  • review recovery rate
  • negative-to-neutral conversion
  • negative-to-positive conversion
  • reopened issue rate
  • successful human handoff rate
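
The three metric families above could be sketched as a typed taxonomy. This is a planning sketch only: every interface and field name below is an assumption, not an approved schema, and the demo values are placeholders.

```typescript
// Hypothetical metric taxonomy mirroring the three families above.
// All names and the demo numbers are illustrative assumptions.

interface SignalMetrics {
  inboundVolume: number;
  negativeSentimentRate: number;   // 0..1
  severeNegativeCount: number;
  dissatisfactionRate: number;     // 0..1
  praiseRate: number;              // 0..1
  recurringComplaintThemes: string[];
  unresolvedFollowUpRate: number;  // 0..1
}

interface SystemMetrics {
  triageConfidence: number;        // 0..1
  escalationRate: number;          // 0..1
  falsePositiveEscalationRisk: number;
  autoHandledShare: number;        // auto-handled vs human-escalated
  medianResponseTimeMs: number;
  replyEditRate: number;           // reply acceptance / edit rate
  kbUsageRate: number;             // knowledge base usage rate
  blockedReplyRate: number;        // blocked due to missing knowledge
}

interface OutcomeMetrics {
  issueResolutionRate: number;     // 0..1
  sentimentImprovement: number;    // delta, may be negative
  reviewRecoveryRate: number;      // 0..1
  negativeToNeutralRate: number;
  negativeToPositiveRate: number;
  reopenedIssueRate: number;
  humanHandoffSuccessRate: number;
}

interface DashboardMetrics {
  signal: SignalMetrics;
  system: SystemMetrics;
  outcome: OutcomeMetrics;
}

// Minimal demo payload (placeholder numbers, clearly not production data):
const demoMetrics: DashboardMetrics = {
  signal: { inboundVolume: 120, negativeSentimentRate: 0.18, severeNegativeCount: 4,
            dissatisfactionRate: 0.12, praiseRate: 0.22,
            recurringComplaintThemes: ["shipping delay"], unresolvedFollowUpRate: 0.05 },
  system: { triageConfidence: 0.84, escalationRate: 0.1, falsePositiveEscalationRisk: 0.03,
            autoHandledShare: 0.7, medianResponseTimeMs: 45000, replyEditRate: 0.15,
            kbUsageRate: 0.9, blockedReplyRate: 0.04 },
  outcome: { issueResolutionRate: 0.76, sentimentImprovement: 0.08, reviewRecoveryRate: 0.3,
             negativeToNeutralRate: 0.2, negativeToPositiveRate: 0.1,
             reopenedIssueRate: 0.05, humanHandoffSuccessRate: 0.88 },
};
```

Keeping the three families as separate interfaces makes it harder to accidentally blend readiness, signal, and outcome numbers in one undifferentiated payload.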

Concept A — Customer-signals strip

A compact row of high-signal cards such as:

  • Dissatisfaction detected
  • Resolved or recovered
  • Escalated for human review
  • Knowledge gaps affecting replies

This should be clearly secondary to the core readiness strip unless and until real analytics exist.

Concept B — Friction and opportunity panel

A ranked module showing:

  • top negative themes
  • rising complaint clusters
  • unresolved dissatisfaction pockets
  • top praise or success clusters

This is likely more useful than a decorative chart because it remains operator-oriented.
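
One way to keep the panel operator-oriented is to rank themes by a blend of current volume and week-over-week rise, so a fast-rising cluster can outrank a large but flat one. The shape and the scoring rule below are assumptions for illustration, not the product's actual logic.

```typescript
// Illustrative ranking for the friction/opportunity panel.
// ThemeStats and the scoring weights are assumptions.
interface ThemeStats {
  theme: string;
  countThisWeek: number;
  countLastWeek: number;
  unresolvedCount: number;
}

// Score = current volume plus a bonus for week-over-week growth,
// so "rising complaint clusters" surface above stable ones.
function rankThemes(themes: ThemeStats[], topN = 5): ThemeStats[] {
  const score = (t: ThemeStats) =>
    t.countThisWeek + 2 * Math.max(0, t.countThisWeek - t.countLastWeek);
  return [...themes].sort((a, b) => score(b) - score(a)).slice(0, topN);
}
```

With this rule, a theme at 9 mentions that doubled since last week outranks a theme flat at 10, which matches the "rising clusters" intent above.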

Concept C — Trend mini-charts

A lightweight trend area for:

  • negative volume trend
  • resolution trend
  • review recovery trend
  • escalation trend

This should only become canonical when backed by real history or aggregate APIs.
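
If a real history API does arrive, the trend lines reduce to bucketing timestamped events by week. The event shape below is an assumption about what such an API might return, not an existing endpoint.

```typescript
// Illustrative weekly bucketing for trend mini-charts.
// TrendEvent is a hypothetical payload shape, not a real API.
interface TrendEvent {
  timestamp: number; // epoch ms
  kind: "negative" | "resolved" | "recovered" | "escalated";
}

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Count events of one kind per week, oldest week first.
function weeklyCounts(
  events: TrendEvent[], kind: TrendEvent["kind"], weeks: number, now: number,
): number[] {
  const buckets = new Array(weeks).fill(0);
  for (const e of events) {
    const weeksAgo = Math.floor((now - e.timestamp) / WEEK_MS);
    if (e.kind === kind && weeksAgo >= 0 && weeksAgo < weeks) {
      buckets[weeks - 1 - weeksAgo]++;
    }
  }
  return buckets;
}
```

Each of the four trends above is then just `weeklyCounts` over a different event kind.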

Representative operator scenarios:

  • A restaurant brand wants to see how many low-rating reviews arrived this week and how many were recovered after response.
  • A hotel group wants to see where dissatisfaction is rising and which properties have the slowest response or poorest recovery.
  • A retail operator wants to identify recurring customer friction topics that the KB does not yet cover.
  • A CX leader wants to evaluate whether AI routing and response are improving outcomes over time rather than merely increasing automation.

Important limitation:

The current app implementation does not yet provide real sentiment, response-outcome, or time-series analytics from the existing dashboard data pipeline.

Current dashboard truth is mostly:

  • workspace readiness
  • channel linkage and activation
  • KB completeness
  • agent activation and coverage
  • lightweight pseudo-activity derived from agent updates

That means any preview of customer-signal or outcome metrics in the current UI must be clearly marked as:

  • concept preview
  • demo data
  • future metrics model

and must not be mistaken for real production analytics.
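
One way to enforce this labeling requirement is to make every displayed metric carry a provenance tag, so a preview number cannot render without its disclaimer. The type and function names here are a sketch, not existing app code.

```typescript
// Hypothetical provenance labeling for dashboard metrics.
// Names are assumptions; the labels mirror the list above.
type MetricProvenance =
  | "concept-preview"
  | "demo-data"
  | "future-model"
  | "production";

interface LabeledMetric {
  name: string;
  value: number;
  provenance: MetricProvenance;
}

// Only production-provenance values may render without a disclaimer badge.
function needsDisclaimer(m: LabeledMetric): boolean {
  return m.provenance !== "production";
}
```

Because the provenance field is required, the compiler rejects any metric constructed without an explicit label, which is the point of the requirement above.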

The following ideas are good candidates for later promotion:

  • formal dashboard metric taxonomy: signal vs system vs outcome metrics
  • route-level dashboard requirement for a secondary customer-signals region
  • future aggregate API payload groups for dashboard metrics
  • workflow and analytics instrumentation requirements to make these metrics real

This section records the current decision about what should graduate into future platform requirements versus what should remain exploratory for now.

These are strong enough to guide future implementation planning:

  • the dashboard should keep the distinction between:
    • readiness metrics
    • customer-signal metrics
    • outcome metrics
  • the dashboard may include a future customer-signals layer, but it must remain secondary to operational readiness and next-action surfaces
  • the future dashboard aggregate should support at least:
    • signal summary
    • system/triage summary
    • outcome summary
    • top friction themes
    • KB gap signals
  • instrumentation must eventually support:
    • dissatisfaction detected
    • escalation triggered
    • KB gap detected
    • response sent / auto-handled / human-assisted
    • outcome resolved / recovered / reopened
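
The aggregate payload groups and instrumentation events above could be sketched as follows. Every name is an assumption for planning purposes, not a committed API contract.

```typescript
// Hypothetical minimum aggregate payload for the future dashboard.
// Group names mirror the list above; field types are placeholders.
interface DashboardAggregate {
  signalSummary: Record<string, number>;
  systemSummary: Record<string, number>;
  outcomeSummary: Record<string, number>;
  topFrictionThemes: string[];
  kbGapSignals: string[];
}

// Hypothetical instrumentation events, one variant per requirement above.
type AnalyticsEvent =
  | { type: "dissatisfaction_detected"; conversationId: string }
  | { type: "escalation_triggered"; conversationId: string }
  | { type: "kb_gap_detected"; topic: string }
  | { type: "response_sent"; mode: "auto" | "human_assisted" }
  | { type: "outcome"; state: "resolved" | "recovered" | "reopened" };

// Discriminated-union narrowing lets downstream aggregation switch
// safely on event type without runtime schema checks.
function isOutcomeEvent(e: AnalyticsEvent): boolean {
  return e.type === "outcome";
}
```

Modeling the events as a discriminated union keeps the aggregate computable from a single event stream, which is what "instrumentation must eventually support" implies.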

These are promising, but still need more product validation before becoming requirements:

  • exact top-card KPI set for the customer-signals strip
  • the final naming of concepts like resolved, recovered, or human saves
  • whether trend mini-charts belong on the dashboard or a drill-in page
  • whether praise/success signals deserve primary placement or secondary placement
  • how much of this belongs on Dashboard versus a dedicated quality / analytics view

Defer until stronger data and policy contracts exist

These should not become dashboard requirements yet:

  • false-positive vs false-negative triage quality scoring
  • sentiment-lift or recovery formulas that depend on historical attribution logic
  • brand or revenue impact claims
  • cross-channel benchmarking between business units without governance and normalization rules
  • ROI metrics such as time saved or churn value recovered

Make the following canonical and implementation-ready:

  • metric taxonomy
  • minimum dashboard payload groups
  • minimum enabling analytics events
  • explicit route-level support for a future customer-signals layer

Implement real signal and system metrics first:

  • dissatisfaction detected
  • high-risk escalations
  • KB gaps affecting replies
  • response handling split: auto vs assisted vs escalated
  • top friction themes
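
The response handling split is the most mechanical of these: given response events tagged with how they were handled, it is a simple proportion. The event shape below is an assumption.

```typescript
// Illustrative auto / assisted / escalated split computation.
// ResponseEvent is a hypothetical shape, not an existing type.
interface ResponseEvent {
  handling: "auto" | "assisted" | "escalated";
}

function handlingSplit(
  events: ResponseEvent[],
): Record<"auto" | "assisted" | "escalated", number> {
  const counts = { auto: 0, assisted: 0, escalated: 0 };
  for (const e of events) counts[e.handling]++;
  const total = events.length || 1; // avoid divide-by-zero on empty input
  return {
    auto: counts.auto / total,
    assisted: counts.assisted / total,
    escalated: counts.escalated / total,
  };
}
```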

Add outcome metrics only when the underlying attribution model is trustworthy:

  • resolved / recovered
  • sentiment improvement
  • review recovery
  • reopened issue rate
  • successful human handoff

General guardrails:

  • Do not add decorative graphs without a clear operator job.
  • Do not mix fake outcome numbers into real setup metrics without explicit labeling.
  • Do not let the dashboard drift into a generic analytics page.
  • Keep the dashboard focused on operational clarity, prioritization, and next action.