Runtime Orchestration Architecture
Purpose
This document defines the long-term runtime architecture for a voice-driven, AI-orchestrated ConversionIQ platform.
It does not describe the current implementation. It defines the target runtime model that should guide future product, platform, and documentation decisions.
Use this document when making decisions about:
- multimodal input handling
- runtime agent orchestration
- policy-governed tool execution
- documentation-driven behavior
- learning and recommendation loops
Related docs:
- platform-overview.md
- layers.md
- intent-model.md
- documentation-hub-runtime-role.md
- learning-loop.md
- ../workflows/agent-orchestrated-execution.md
- ../prompts/runtime-governance.md
Position in the architecture
The entity stack in layers.md describes what the platform is made of:
- Workspaces
- Knowledge Bases
- Agents
- Channels
- Apps
This document describes how the platform should behave at runtime.
These are different views:
- layers.md = business and system entities
- runtime-orchestration.md = execution model
They must complement each other and must not be conflated.
Core principle
ConversionIQ should evolve toward a policy-governed, documentation-driven runtime orchestration system where:
- voice, text, and UI interactions are normalized into the same intent model
- runtime behavior is constrained by canonical documentation contracts
- agents coordinate planning, execution, and validation
- learning systems improve the platform through governed proposals, not silent mutation of truth
Important clarification:
- "Voice = Text = Prompt" is useful at the interaction abstraction level
- it is not true at the infrastructure level, because voice introduces real-time, privacy, latency, and device-specific concerns that text does not
Runtime layers
```mermaid
flowchart TD
  inputLayer[InputLayer]
  intentLayer[IntentLayer]
  policyLayer[PolicyAndGovernanceLayer]
  orchestrationLayer[AgentOrchestrationLayer]
  executionLayer[ExecutionLayer]
  stateLayer[StateAndMemoryLayer]
  docsLayer[DocumentationHubAndKnowledgeLayer]
  learningLayer[LearningAndOptimizationLayer]
  trustLayer[TrustSecurityAndAuditLayer]

  inputLayer --> intentLayer
  intentLayer --> policyLayer
  policyLayer --> orchestrationLayer
  orchestrationLayer --> executionLayer
  executionLayer --> stateLayer
  stateLayer --> learningLayer
  docsLayer --> policyLayer
  docsLayer --> orchestrationLayer
  learningLayer --> docsLayer
  trustLayer --> policyLayer
  trustLayer --> orchestrationLayer
  trustLayer --> executionLayer
```
1) Input Layer
Normalizes all inbound signals into a common interaction envelope.
Handles:
- voice input
- text input
- UI interactions
- channel metadata
- actor/session/workspace context
Primary output:
InteractionEvent
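A minimal sketch of what the interaction envelope could look like; the field names and modality values here are illustrative assumptions, not a committed contract:

```python
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical shape for the common interaction envelope produced by
# the Input Layer. Field names are illustrative only.
@dataclass(frozen=True)
class InteractionEvent:
    modality: Literal["voice", "text", "ui"]  # normalized input channel
    payload: str                              # transcript, message, or UI action id
    channel_metadata: dict = field(default_factory=dict)
    actor_id: str = ""
    session_id: str = ""
    workspace_id: str = ""

# All three modalities collapse into the same envelope shape.
event = InteractionEvent(modality="text", payload="pause campaign X",
                         actor_id="u1", session_id="s1", workspace_id="w1")
```

The key property is that downstream layers only ever see this one shape, regardless of whether the signal arrived as voice, text, or a UI click.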
2) Intent Layer
Transforms interaction events into structured, actionable intent.
Handles:
- intent classification
- entity extraction
- ambiguity detection
- confidence scoring
- escalation hints
Primary output:
StructuredIntent
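A sketch of a StructuredIntent record and a simple escalation rule, assuming the real contract would live alongside the intent model docs; the threshold and field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative StructuredIntent record emitted by the Intent Layer.
@dataclass(frozen=True)
class StructuredIntent:
    intent: str                         # classified intent, e.g. "campaign.pause"
    entities: dict                      # extracted entities
    confidence: float                   # classifier confidence in [0, 1]
    ambiguous: bool = False             # true when clarification is needed
    escalation_hint: Optional[str] = None

def needs_clarification(si: StructuredIntent, threshold: float = 0.7) -> bool:
    """Low confidence or detected ambiguity escalates back to the user."""
    return si.ambiguous or si.confidence < threshold

si = StructuredIntent(intent="campaign.pause",
                      entities={"campaign": "X"}, confidence=0.55)
```

Confidence scoring and ambiguity detection feed the escalation hints, so the orchestration layer never has to guess whether an intent is safe to act on.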
3) Policy and Governance Layer
Determines whether the requested action is allowed, safe, and in scope before orchestration proceeds.
Handles:
- workspace and tenant isolation
- RBAC and permission checks
- compliance rules
- tool eligibility
- prompt and workflow selection constraints
- approval requirements
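The checks above can be sketched as a single gate evaluated before orchestration; the action classes, role names, and approval rule here are assumptions for illustration only:

```python
# Hypothetical pre-orchestration policy gate. Action-class names and
# the RBAC rule are illustrative, not a real ConversionIQ contract.
APPROVAL_REQUIRED = {"config.write", "workflow.delete"}

def policy_check(action_class: str, actor_roles: set,
                 workspace_id: str, resource_workspace_id: str) -> dict:
    # Tenant isolation is checked first and is non-negotiable.
    if workspace_id != resource_workspace_id:
        return {"allowed": False, "reason": "workspace_isolation"}
    # RBAC: in this sketch, writes require an editor-or-above role.
    if action_class.endswith(".write") and "editor" not in actor_roles:
        return {"allowed": False, "reason": "rbac"}
    # Sensitive actions proceed only behind an explicit approval gate.
    return {"allowed": True,
            "requires_approval": action_class in APPROVAL_REQUIRED}
```

Ordering matters: isolation and permission checks fail fast, while approval requirements travel forward with the allowed decision so the orchestration layer can insert a human-in-the-loop step.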
4) Agent Orchestration Layer
Coordinates the runtime chain of specialized agents required to satisfy a request.
Handles:
- agent selection
- decomposition of tasks
- context passing
- handoff sequencing
- fallback and stop conditions
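The handoff sequencing, context passing, and stop conditions above can be sketched as a simple ordered chain; real agent selection would be policy- and documentation-driven rather than hard-coded:

```python
# Toy orchestration loop: each agent step receives the accumulated
# context and returns (ok, new_context). Step functions are stand-ins.
def orchestrate(steps, max_steps=10):
    context = {}
    for i, step in enumerate(steps):
        if i >= max_steps:            # stop condition: runaway chains
            return {"status": "stopped", "context": context}
        ok, context = step(context)
        if not ok:                    # fallback: surface partial context
            return {"status": "failed", "context": context}
    return {"status": "done", "context": context}

def plan(ctx):    return True, {**ctx, "plan": ["validate", "execute"]}
def execute(ctx): return True, {**ctx, "result": "executed"}

outcome = orchestrate([plan, execute])
```

The explicit step budget and failure path keep the chain observable and debuggable, which the architecture rules below treat as non-negotiable.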
5) Execution Layer
Carries out the approved action path.
Handles:
- response generation
- tool use
- API calls
- configuration changes
- workflow progression
- state mutation
6) State and Memory Layer
Maintains the context required for coherent operation and future optimization.
Handles:
- session state
- user and workspace context
- conversation/task memory
- execution history
- outcome records
7) Documentation Hub and Knowledge Layer
Supplies canonical policy, behavior, and structural knowledge to runtime systems.
Handles:
- architecture contracts
- workflow definitions
- prompt contracts
- domain rules
- action and approval classes
- product behavior references
8) Learning and Optimization Layer
Observes system usage and turns outcomes into recommendations for improvement.
Handles:
- pattern detection
- recommendation generation
- failure analysis
- optimization proposals
- prompt/workflow/doc improvement suggestions
9) Trust, Security, and Audit Layer
Cross-cutting control layer over the entire runtime.
Handles:
- audit logging
- safety and compliance checks
- prompt injection defenses
- sensitive action controls
- policy breach detection
- human-in-the-loop gating
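As one concrete example of the audit-logging responsibility, an append-only audit record could look roughly like this; the field names and decision vocabulary are assumptions:

```python
import json
import time

# Illustrative append-only audit record emitted by the trust layer.
def audit_record(actor_id: str, action_class: str,
                 decision: str, workspace_id: str) -> str:
    return json.dumps({
        "ts": time.time(),       # when the decision was made
        "actor": actor_id,
        "action": action_class,
        "decision": decision,    # e.g. "allowed" | "blocked" | "escalated"
        "workspace": workspace_id,
    })

rec = json.loads(audit_record("u1", "config.write", "blocked", "w1"))
```

Because every policy decision produces a record like this, breach detection and human review can replay exactly what the runtime decided and why.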
Runtime agent roles
The future runtime system should use agent roles based on function, not the current delivery workflow roles used by the engineering team.
Recommended runtime agent classes:
Intent Agent
- interprets what the user is trying to achieve
- identifies missing information
- determines ambiguity and confidence
Planner Agent
- determines whether the goal is feasible
- chooses workflow and execution path
- decomposes work into orchestrated steps
Policy Agent
- validates access, compliance, scope, and action class
- blocks invalid or unsafe execution paths early
Execution Agent
- performs the actual task
- invokes tools and services
- produces user-visible results or mutations
Validation Agent
- checks whether execution satisfied the original goal
- verifies correctness, safety, and policy alignment
Learning Agent
- captures outcome quality
- identifies repeated friction or opportunity
- generates improvement proposals
Recommendation Agent
- proactively surfaces next-best actions
- suggests optimizations based on context and prior behavior
Boundary with current delivery agents
The current delivery-agent model in ../ai-governance/decision-flow.md governs how the team builds the platform:
- Neo
- Father
- Coder
- Helper
- QA
These are not the same as runtime agents.
Separation rule:
- docs/ai-governance/** governs engineering and delivery
- runtime agents belong to product/runtime architecture and should be governed by:
  - docs/architecture/**
  - docs/workflows/**
  - docs/prompts/**
  - future runtime-specific API and tool contracts
This separation is required to avoid mixing:
- engineering permissions with customer-facing runtime permissions
- implementation workflows with product behavior
- local Cursor tooling with runtime orchestration logic
Key architecture rules
- The Documentation Hub is a governed contract system, not a passive wiki.
- Runtime systems may read canonical docs and policies directly.
- Learning systems should usually produce proposals for documentation or prompt updates, not silently rewrite truth.
- Sensitive actions must pass through policy, approval, and audit boundaries.
- Workspace and tenant isolation remain non-negotiable runtime constraints.
- Runtime orchestration must remain observable, debuggable, and reviewable.
- Prefer a small number of reliable runtime agent classes over a large opaque swarm.
Relationship to current platform scope
This vision is aligned with current documented platform principles:
- workspace isolation from platform-overview.md
- entity layering from layers.md
- trust and audit controls from security-compliance.md
- Chatti Live assistant surface from app-shell.md
It extends the current architecture toward:
- multimodal interaction handling
- runtime orchestration
- documentation-driven behavior selection
- governed learning loops
It does not imply immediate implementation of:
- full voice infrastructure
- autonomous self-editing documentation
- unrestricted tool execution
- large fully autonomous agent swarms
Phased evolution path
Section titled “Phased evolution path”Phase 1: MVP-compatible foundations
Focus on contracts and orchestration readiness without requiring full multimodal runtime behavior.
Build now:
- canonical runtime architecture docs
- intent contracts for text and UI interactions
- prompt governance and workflow contracts
- policy, approval, and action classifications
- structured telemetry for intent, execution, validation, and outcome
- proposal-based learning rather than self-editing documentation
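Proposal-based learning, as listed above, means the learning system emits reviewable records against canonical docs instead of editing them; a possible shape, with illustrative field names and values:

```python
from dataclasses import dataclass

# Hypothetical improvement-proposal record. Learning systems emit
# these for human review rather than mutating canonical docs directly.
@dataclass(frozen=True)
class ImprovementProposal:
    target_doc: str          # canonical doc the proposal applies to
    change_summary: str      # human-readable description of the change
    evidence: list           # telemetry references backing the proposal
    status: str = "pending"  # pending -> approved / rejected by a human

p = ImprovementProposal(
    target_doc="../prompts/runtime-governance.md",
    change_summary="tighten escalation threshold",
    evidence=["telemetry-ref-1"],  # illustrative reference id
)
```

The `status` field is the governance hook: nothing becomes canonical truth until a human (or an explicitly authorized process) moves the proposal out of `pending`.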
Phase 2: Early orchestration runtime
Introduce a constrained runtime loop for a small number of high-value tasks.
Build next:
- planner, executor, and validator pattern
- recommendation generation
- runtime policy resolution from approved docs
- stronger observability and audit traces
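The planner, executor, and validator pattern in the list above can be sketched as a minimal loop with toy stand-ins for the three runtime agent roles:

```python
# Toy Phase 2 loop: planner decomposes, executor performs each step,
# validator checks the outcome against the original goal. All three
# are stubs standing in for real runtime agents.
def planner(goal: str) -> list:
    return [f"do:{goal}"]                 # decompose goal into step ids

def executor(step: str) -> dict:
    return {"step": step, "ok": True}     # perform the step (stubbed)

def validator(goal: str, results: list) -> bool:
    # Validation: every step succeeded and the work matches the goal.
    return all(r["ok"] for r in results) and results[-1]["step"].endswith(goal)

def run(goal: str) -> dict:
    results = [executor(s) for s in planner(goal)]
    return {"validated": validator(goal, results), "results": results}

outcome = run("pause-campaign")
```

Keeping validation as a separate step, rather than trusting the executor's own success signal, is what makes the loop reviewable and gives the audit trail something to check against.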
Phase 3: Governed adaptive behavior
Use learning signals to improve the platform through controlled proposals and selective automation.
Build later:
- workflow improvement proposals
- prompt revision proposals
- recommendation tuning
- governed adaptation within explicit policy boundaries
Phase 4: Multimodal and proactive intelligence
Extend the orchestration model to support richer inputs and anticipatory system behavior.
Build much later:
- voice input and output
- streaming session management
- proactive recommendation and optimization flows
- richer memory and context propagation
- more advanced specialized runtime agents
What belongs now vs later
Section titled “What belongs now vs later”Appropriate now
- documentation-driven system contracts
- intent normalization for text and UI
- approval-aware orchestration design
- recommendation and learning telemetry
- strict separation between canonical truth and runtime proposals
Future evolution
- direct runtime consumption of fully compiled documentation contracts
- autonomous multi-step system reconfiguration
- full voice-first interaction design
- self-optimizing orchestration loops with limited human intervention