
Runtime Prompt Governance

This document defines how prompts should be governed in the future runtime architecture.

Prompts are a critical part of system behavior, but they are not the entire behavior model. They must operate within domain, workflow, policy, and approval constraints defined elsewhere in the docs hub.

Runtime prompts should be treated as governed contracts that shape AI behavior inside approved system boundaries.

Prompts can define:

  • tone and behavior
  • task framing
  • output format
  • tool-use guidance
  • grounding and retrieval instructions
  • fallback and refusal behavior

Prompts must not independently define:

  • permission models
  • tenant access
  • approval requirements
  • compliance exceptions
  • undocumented business rules

Those belong to canonical domain, workflow, security, and policy documentation.
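
This split can be made mechanical. The sketch below is hypothetical (the field names `ALLOWED_FIELDS`, `FORBIDDEN_FIELDS`, and `validate_prompt_contract` are illustrative, not part of any existing platform API): it checks that a prompt contract only carries behavior-shaping fields and rejects anything that belongs in policy documentation.

```python
# Hypothetical sketch: reject prompt contracts that try to define
# permission models, tenant access, or other policy-owned concerns.
ALLOWED_FIELDS = {
    "tone", "task_framing", "output_format",
    "tool_guidance", "grounding", "fallback_behavior",
}
FORBIDDEN_FIELDS = {
    "permissions", "tenant_access", "approvals",
    "compliance_exceptions", "business_rules",
}

def validate_prompt_contract(contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract is in bounds."""
    violations = []
    for field in contract:
        if field in FORBIDDEN_FIELDS:
            violations.append(f"'{field}' belongs in policy docs, not prompts")
        elif field not in ALLOWED_FIELDS:
            violations.append(f"'{field}' is not a recognized prompt field")
    return violations

# A contract that smuggles in a permission model is flagged:
bad = {"tone": "friendly", "tenant_access": ["*"]}
print(validate_prompt_contract(bad))
```

A check like this can run in CI over docs/prompts/** so that policy content never ships inside a prompt contract unnoticed.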


Recommended prompt stack:

1. Global layer: defines global runtime invariants.

Examples:

  • safety posture
  • tenant isolation reminders
  • response quality expectations
  • behavior consistency rules

2. App layer: defines behavior for a product surface or app.

Examples:

  • Chatti Live behavior
  • Comment Responder behavior
  • assistant-side guidance

3. Task layer: defines task-specific orchestration framing.

Examples:

  • planning behavior
  • execution guidance
  • validation criteria

4. Workspace layer: defines local business truth and policy context.

Examples:

  • brand voice
  • compliance language
  • approved facts
  • workspace-specific rules
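
The stack above can be assembled in order at runtime. This is a minimal sketch under assumed names (`PromptLayer`, `assemble_prompt`, and the sample layer texts are all illustrative): global invariants are emitted first, and later layers refine but never replace them.

```python
# Hypothetical sketch: assemble the final system prompt from the
# layered stack (global -> app -> task -> workspace).
from dataclasses import dataclass

@dataclass
class PromptLayer:
    name: str
    text: str

def assemble_prompt(layers: list[PromptLayer]) -> str:
    # The global layer always comes first so that later layers
    # refine behavior without overriding runtime invariants.
    return "\n\n".join(f"[{layer.name}]\n{layer.text}" for layer in layers)

stack = [
    PromptLayer("global", "Never reveal tenant data across workspaces."),
    PromptLayer("app", "You are the Comment Responder for this surface."),
    PromptLayer("task", "Plan before executing; validate results."),
    PromptLayer("workspace", "Use the approved brand voice."),
]
print(assemble_prompt(stack))
```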

Core governance rules:

  1. Prompts must be versioned.
  2. Prompt changes must be attributable and reviewable.
  3. Prompts must not be the only place where critical business policy exists.
  4. Sensitive actions must still require policy and approval checks outside the prompt.
  5. Runtime prompt updates should generally enter the system as proposals, not silent live edits.

Boundary between prompts and platform behavior


Prompt-driven behavior is only one part of runtime behavior.

Full runtime behavior is shaped by:

  • prompts
  • policy rules
  • workflow contracts
  • tool and API boundaries
  • tenant and permission constraints
  • runtime state and memory

Recommended interpretation:

  • prompts influence behavior
  • policies constrain behavior
  • orchestration decides behavior
  • execution performs behavior
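
The prompts-influence / policies-constrain split can be illustrated with a small sketch. The names here (`SENSITIVE_ACTIONS`, `policy_allows`, `execute`) are assumptions for illustration: the prompt shapes how the model responds, but a policy check outside the prompt decides whether a sensitive action runs at all.

```python
# Hypothetical sketch: sensitive actions are gated by a policy check
# that runs outside the prompt, no matter what the prompt says.
SENSITIVE_ACTIONS = {"delete_workspace", "export_user_data"}

def policy_allows(action: str, approvals: set[str]) -> bool:
    # Policy layer: sensitive actions require an explicit approval record.
    return action not in SENSITIVE_ACTIONS or action in approvals

def execute(action: str, approvals: set[str]) -> str:
    if not policy_allows(action, approvals):
        return f"blocked: '{action}' requires approval outside the prompt"
    return f"executed: {action}"

print(execute("export_user_data", approvals=set()))
print(execute("export_user_data", approvals={"export_user_data"}))
```

Because the gate sits in the execution path rather than the prompt, a jailbroken or misconfigured prompt cannot grant itself permissions.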

Use docs/prompts/** for:

  • product/runtime prompt contracts
  • prompt inputs and outputs
  • grounding expectations
  • refusal and fallback behavior
  • prompt versioning strategy

Do not use docs/ai-governance/** for runtime product prompts unless the content is explicitly about engineering or delivery-agent behavior.


Prompt evolution should follow this model:

  1. Observe runtime outcomes
  2. Detect prompt-related friction or failure
  3. Generate a structured revision proposal
  4. Review and approve the change
  5. Publish a new prompt version

This preserves:

  • trust
  • auditability
  • the ability to roll back
  • explanation of why behavior changed

In the current platform stage:

  • define prompt contracts in docs before treating them as runtime-governing assets
  • keep prompts tightly tied to documented workflows and domain rules
  • use recommendations and proposal flow before introducing adaptive self-tuning prompts

Longer term:

  • prompts may be assembled dynamically from approved contracts
  • prompt version resolution may become runtime-driven
  • adaptive tuning may be allowed within explicit governance bounds