Dashboard Metrics Pipeline

This document defines the planned cross-system workflow that turns raw reviews, comments, and conversations into dashboard-ready signal, system, and outcome metrics.

This workflow exists because the Dashboard should not derive analytics directly from route-local UI state. It needs a canonical pipeline for:

  • signal detection
  • triage and routing summaries
  • KB gap identification
  • outcome recording
  • aggregate materialization

An inbound item arrives from a supported surface such as:

  • social comments
  • review platforms
  • web chat
  • messaging channels
  • future supported inboxes

Each intake record must capture:

  • workspace scope
  • channel type
  • app surface
  • timestamp
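A minimal sketch of an intake record carrying the four required fields, plus the raw text. Field names and channel values are illustrative only, not a committed schema:

```typescript
// Hypothetical intake record; field and value names are illustrative.
interface InteractionIntake {
  workspaceId: string;   // workspace scope
  channelType: "social_comment" | "review" | "web_chat" | "messaging"; // channel type
  appSurface: string;    // app surface that received the item
  receivedAt: string;    // ISO-8601 timestamp
  body: string;          // raw review/comment/message text
}

// Reject inbound items that are missing any required field.
function isValidIntake(item: Partial<InteractionIntake>): item is InteractionIntake {
  return Boolean(
    item.workspaceId &&
    item.channelType &&
    item.appSurface &&
    item.receivedAt &&
    typeof item.body === "string"
  );
}
```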

The platform classifies the item into dashboard-relevant signal buckets.

Examples:

  • dissatisfaction detected
  • severe negative signal
  • success / praise signal
  • neutral informational request

Required event family:

  • customer_signal.*
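The classification step can be sketched as a function from raw text to a bucket plus a `customer_signal.*` event name. The keyword rules below are a placeholder for illustration; production classification would presumably use a model, and the bucket names simply mirror the examples above:

```typescript
// Signal buckets mirroring the spec's examples; matching rules are a
// placeholder sketch, not the real classifier.
type SignalBucket =
  | "dissatisfaction"
  | "severe_negative"
  | "success_praise"
  | "neutral_request";

function classifySignal(text: string): { event: string; bucket: SignalBucket } {
  const t = text.toLowerCase();
  let bucket: SignalBucket = "neutral_request";
  if (/refund|lawyer|scam|never again/.test(t)) bucket = "severe_negative";
  else if (/disappointed|slow|broken|unhappy/.test(t)) bucket = "dissatisfaction";
  else if (/love|great|thank/.test(t)) bucket = "success_praise";
  // Emit into the required customer_signal.* event family.
  return { event: `customer_signal.${bucket}`, bucket };
}
```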

The system groups related signals into reusable friction or opportunity themes so the dashboard can show ranked topic clusters instead of only raw counts.

Examples:

  • refund expectations
  • scheduling friction
  • missing compatibility guidance

Required event family:

  • customer_signal.theme_clustered
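Theme assignment could be sketched as below. A real system would likely cluster embeddings; here a keyword map stands in, reusing the example theme names:

```typescript
// Naive theme-assignment sketch. Production clustering would operate on
// embeddings; the keyword map and theme names are illustrative only.
const THEME_KEYWORDS: Record<string, RegExp> = {
  "refund expectations": /refund|money back/,
  "scheduling friction": /reschedule|appointment|booking/,
  "missing compatibility guidance": /compatible|works with|fit/,
};

function clusterTheme(
  text: string
): { event: "customer_signal.theme_clustered"; theme: string } | null {
  const t = text.toLowerCase();
  for (const [theme, pattern] of Object.entries(THEME_KEYWORDS)) {
    if (pattern.test(t)) return { event: "customer_signal.theme_clustered", theme };
  }
  return null; // no theme matched; the signal still counts in raw totals
}
```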

The platform decides what to do next with the interaction.

Possible paths:

  • auto-handle
  • assist human responder
  • escalate to human review
  • block or defer due to policy, quality, or missing knowledge

Required event family:

  • ai_triage.*
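The routing decision over the four paths can be sketched as a pure function of the signal bucket, model confidence, and whether a KB gap was detected. The inputs and thresholds are assumptions for illustration, not approved policy:

```typescript
type TriagePath = "auto_handle" | "assist_human" | "escalate_human" | "block_or_defer";

// Illustrative routing rules; the 0.9 threshold and the inputs are
// assumptions, not approved policy.
function triage(
  bucket: string,
  confidence: number,
  kbGap: boolean
): { event: string; path: TriagePath } {
  let path: TriagePath;
  if (kbGap) path = "block_or_defer";               // missing knowledge: defer
  else if (bucket === "severe_negative") path = "escalate_human";
  else if (confidence >= 0.9) path = "auto_handle";
  else path = "assist_human";                       // draft for human review
  return { event: `ai_triage.${path}`, path };
}
```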

If the system generates or assists a reply, it should record:

  • whether a reply was generated
  • whether a reply was sent
  • whether KB weakness or missing guidance reduced quality/confidence

Required event family:

  • ai_reply.*
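An `ai_reply.*` record could carry exactly the three required facts. The record shape and event name are illustrative; the consistency guard reflects the obvious invariant that a sent reply must first have been generated:

```typescript
// Sketch of an ai_reply.* record; field and event names are illustrative.
interface ReplyRecord {
  event: "ai_reply.recorded";
  generated: boolean;            // was a reply generated?
  sent: boolean;                 // was the reply actually sent?
  kbGapReducedQuality: boolean;  // did KB weakness reduce quality/confidence?
}

function recordReply(
  generated: boolean,
  sent: boolean,
  kbGapReducedQuality: boolean
): ReplyRecord {
  // A sent reply implies one was generated; reject inconsistent input.
  if (sent && !generated) throw new Error("cannot send a reply that was never generated");
  return { event: "ai_reply.recorded", generated, sent, kbGapReducedQuality };
}
```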

Once enough downstream evidence exists, the system may record outcome-level signals such as:

  • resolved
  • recovered
  • reopened
  • successful human handoff

Required event family:

  • interaction.outcome_*

Outcome capture must remain conservative until attribution rules are approved.
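One way to express that conservatism is to make outcome emission conditional on an evidence threshold, emitting nothing otherwise. The threshold value below is a placeholder, standing in for the attribution rules that are not yet approved:

```typescript
type Outcome = "resolved" | "recovered" | "reopened" | "human_handoff_success";

// Placeholder threshold; real attribution rules are pending approval.
const MIN_EVIDENCE_EVENTS = 2;

// Conservative outcome capture: emit an interaction.outcome_* event only
// when enough downstream evidence exists, otherwise record nothing.
function captureOutcome(outcome: Outcome, evidenceCount: number): { event: string } | null {
  if (evidenceCount < MIN_EVIDENCE_EVENTS) return null; // stay conservative
  return { event: `interaction.outcome_${outcome}` };
}
```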

The analytics layer aggregates the recorded events into dashboard payload groups:

  • readinessSummary
  • customerSignalSummary
  • systemSummary
  • outcomeSummary
  • topFrictionThemes
  • recentActivity
  • recommendedActions

Materialization may be:

  • near-real-time
  • batched
  • hybrid

The route contract must not assume a specific storage engine.
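Materialization could be sketched as a fold over recorded events into a subset of the payload groups, here as a batch pass (the near-real-time and hybrid variants would incrementally update the same shape). Event shapes are assumed, and only four of the seven groups are shown:

```typescript
// Sketch of materializing dashboard payload groups from recorded events.
// Group names mirror the spec; event shapes are illustrative assumptions.
interface DashboardAggregate {
  customerSignalSummary: Record<string, number>; // counts per signal bucket
  systemSummary: Record<string, number>;         // counts per triage path
  outcomeSummary: Record<string, number>;        // counts per outcome
  topFrictionThemes: string[];                   // themes ranked by frequency
}

function materialize(events: { event: string; theme?: string }[]): DashboardAggregate {
  const agg: DashboardAggregate = {
    customerSignalSummary: {},
    systemSummary: {},
    outcomeSummary: {},
    topFrictionThemes: [],
  };
  const themeCounts: Record<string, number> = {};
  for (const e of events) {
    if (e.event === "customer_signal.theme_clustered" && e.theme) {
      themeCounts[e.theme] = (themeCounts[e.theme] ?? 0) + 1;
    } else if (e.event.startsWith("customer_signal.")) {
      const k = e.event.slice("customer_signal.".length);
      agg.customerSignalSummary[k] = (agg.customerSignalSummary[k] ?? 0) + 1;
    } else if (e.event.startsWith("ai_triage.")) {
      const k = e.event.slice("ai_triage.".length);
      agg.systemSummary[k] = (agg.systemSummary[k] ?? 0) + 1;
    } else if (e.event.startsWith("interaction.outcome_")) {
      const k = e.event.slice("interaction.outcome_".length);
      agg.outcomeSummary[k] = (agg.outcomeSummary[k] ?? 0) + 1;
    }
  }
  // Rank friction themes by frequency, most frequent first.
  agg.topFrictionThemes = Object.entries(themeCounts)
    .sort((a, b) => b[1] - a[1])
    .map(([theme]) => theme);
  return agg;
}
```

Because the fold only consumes recorded events, it stays agnostic to the storage engine, consistent with the route-contract constraint above.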

```mermaid
flowchart LR
intake[InteractionIntake] --> signal[SignalClassification]
signal --> theme[ThemeClustering]
signal --> triage[TriageAndRouting]
triage --> reply[ReplyAndGrounding]
triage --> human[HumanEscalation]
reply --> outcome[OutcomeCapture]
human --> outcome
signal --> aggregate[DashboardAggregate]
theme --> aggregate
triage --> aggregate
reply --> aggregate
outcome --> aggregate
```

Enable dashboard-ready signal and system metrics:

  • dissatisfaction detected
  • severe negative count
  • theme clustering
  • escalations triggered
  • KB gap detected
  • auto vs assisted vs escalated handling split
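The handling split metric in particular can be derived directly from `ai_triage.*` counts. The input shape below is assumed, not a committed contract:

```typescript
// Auto vs assisted vs escalated handling split as fractions of total
// triaged interactions; the counts shape is an illustrative assumption.
function handlingSplit(counts: {
  auto_handle: number;
  assist_human: number;
  escalate_human: number;
}): { auto: number; assisted: number; escalated: number } {
  const total = counts.auto_handle + counts.assist_human + counts.escalate_human;
  if (total === 0) return { auto: 0, assisted: 0, escalated: 0 }; // avoid divide-by-zero
  return {
    auto: counts.auto_handle / total,
    assisted: counts.assist_human / total,
    escalated: counts.escalate_human / total,
  };
}
```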

Enable dashboard-ready outcome metrics:

  • resolved
  • recovered
  • reopened
  • successful handoff

Enable more advanced trend and quality layers only after attribution and governance mature.