FortiVault · AI Governance Architecture

AI Customer Support Governance

Most enterprise AI customer support deployments are missing a governance layer — a mechanism that measures accuracy, controls when automation is allowed, and makes every AI decision traceable.

FortiVault implements governance through three integrated capabilities: the AI Trust Score, Automation Gating, and a Full Audit Trail. Together they form the control layer that sits above FortiAgent and determines when — and whether — AI is safe to resolve a customer query automatically.

Why resolution rate is not enough

AI customer support platforms compete on resolution rate. But resolution rate tells you how many queries an AI answered — not how accurately it answered them, not which categories it should and shouldn't be automating, and not what happened when it got something wrong.

An AI agent that resolves 70% of tickets sounds impressive. An AI agent that resolves 70% of tickets with 60% accuracy in billing — and no mechanism to catch the errors before they reach customers — is a liability.

Governance answers the question resolution rate does not: is this AI accurate enough in this category to resolve tickets without human oversight?

AI Trust Score

A single, continuously updated measure of AI readiness

The AI Trust Score is FortiVault's primary governance signal. It aggregates response accuracy, human override rate, and connector call outcomes across every support category — updated continuously as FortiAgent handles real conversations.

It answers the question every enterprise needs to answer before expanding automation: is FortiAgent accurate enough in this category to resolve tickets without human review?

Without a Trust Score, that question gets answered informally — by gut feel, by incident, or not at all. With it, automation policy becomes data-driven: FortiAgent earns the right to automate each category as accuracy improves.

What FortiVault measures

Response accuracy

How often FortiAgent's answers are correct, by category

Human override rate

How often agents correct or reject an AI response

Connector reliability

How consistently live API calls return accurate data

Escalation rate

How often FortiAgent correctly identifies queries it cannot resolve

Automation Gating

Category-level control over when AI is allowed to resolve automatically

Automation Gating is FortiVault's mechanism for controlling AI autonomy. Each support category — billing, returns, login, technical support, account changes — has its own AI Trust Score threshold. When FortiAgent's accuracy in a category crosses the threshold, automation is enabled. When it drops below, every response in that category requires human review before sending.

This is the structural difference between governed and ungoverned AI support. Without gating, automation expands by default — and errors in billing queries, refund decisions, and account changes reach customers before anyone on the support team sees them.

With gating, automation is earned category by category, based on measured accuracy. Teams can see exactly which categories are automated, which are in review, and which are disabled — and change thresholds at any time.

What FortiVault controls

Per-category thresholds

Independent Trust Score gates for billing, returns, login, and more

Three automation states

Enabled, human review required, or disabled — per category

Reversible at any time

Raise or lower thresholds without redeploying FortiAgent

Billing and refund protection

High-stakes categories require higher accuracy before automating
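Put together, the gating policy reduces to a small decision function: look up the category's threshold, compare the current Trust Score, and return one of the three automation states. The policy table and state names below are hypothetical, chosen only to illustrate the mechanism.

```python
from enum import Enum

class AutomationState(Enum):
    ENABLED = "auto-send"
    HUMAN_REVIEW = "human review required"
    DISABLED = "disabled"

# Hypothetical per-category policy: (Trust Score threshold, hard-disabled flag).
# High-stakes categories like billing carry higher thresholds.
POLICY = {
    "billing": (95.0, False),
    "returns": (90.0, False),
    "login": (85.0, False),
    "account_changes": (0.0, True),  # disabled outright, regardless of score
}

def gate(category: str, score: float) -> AutomationState:
    """Decide whether FortiAgent may auto-resolve in this category."""
    threshold, disabled = POLICY[category]
    if disabled:
        return AutomationState.DISABLED
    if score >= threshold:
        return AutomationState.ENABLED
    return AutomationState.HUMAN_REVIEW

print(gate("billing", 96.2))        # above threshold: auto-send
print(gate("billing", 93.0))        # below threshold: held for review
print(gate("account_changes", 99))  # hard-disabled, score irrelevant
```

Because the policy is a data table rather than deployed code, raising or lowering a threshold is a configuration change — which is what makes gating reversible without redeploying FortiAgent.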

Full Audit Trail

Every AI decision traceable to its inputs, rules, and outcome

FortiVault logs every FortiAgent decision — the knowledge sources retrieved, the connector API called, the guidance rule applied, whether automation ran or human review was triggered, and the final outcome. Every response is traceable.

This matters for two reasons. Operationally, it lets support teams identify exactly why an AI response was wrong — which knowledge chunk was used, which connector returned bad data, which rule was misconfigured. Without a trace, you know something went wrong but not what to fix.

For compliance, it means every AI decision is auditable. In regulated environments — financial services, healthcare, retail with consumer protection obligations — the ability to show what the AI decided, why, and what happened as a result is not optional.

What FortiVault records

Knowledge trace

Which FAQ, policy, or product guide was retrieved for each response

Connector call log

Which API was called, what parameters were sent, what was returned

Rule application log

Which guidance rules shaped tone, escalation, or compliance constraints

Outcome record

Auto-resolved, sent to human review, or escalated — with timestamp
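The four record types above can be pictured as one structured entry per decision. The sketch below shows a plausible shape for such an entry; every field name and value here is an illustrative assumption, not FortiVault's actual schema.

```python
import json
from datetime import datetime, timezone

def audit_record(conversation_id, category, knowledge_sources,
                 connector_call, rules_applied, outcome):
    """Assemble one decision record (hypothetical field names)."""
    return {
        "conversation_id": conversation_id,
        "category": category,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # outcome record
        "knowledge_trace": knowledge_sources,  # FAQ/policy chunks retrieved
        "connector_call": connector_call,      # API, params sent, data returned
        "rules_applied": rules_applied,        # tone/escalation/compliance rules
        "outcome": outcome,  # auto-resolved | human_review | escalated
    }

record = audit_record(
    conversation_id="c-1042",
    category="billing",
    knowledge_sources=["refund-policy-v3#chunk-7"],
    connector_call={"api": "billing.get_invoice",
                    "params": {"invoice_id": "INV-88"}, "status": "ok"},
    rules_applied=["no-refund-promises", "formal-tone"],
    outcome="human_review",
)
print(json.dumps(record, indent=2))
```

A record like this is what turns "something went wrong" into "chunk-7 of the refund policy was stale" — the operational payoff described above.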

Governance as the control layer above execution

FortiVault sits above FortiAgent in the stack. FortiAgent handles AI execution — knowledge retrieval, connector calls, response generation. FortiVault evaluates every response before it reaches the customer: does the Trust Score support automation? Does the category policy allow it? If not, the response enters human review.

01

FortiAgent executes

Retrieves knowledge, calls connectors, drafts a grounded response based on your configuration.

02

FortiVault evaluates

Computes Trust Score for this category. Checks automation gating policy. Decides: auto-send or human review.

03

Audit trail logged

Every input, decision, and outcome is recorded — traceable, reviewable, and available for compliance reporting.
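The three steps can be sketched as a single control-flow function: FortiAgent drafts, FortiVault gates, and the audit trail captures both. Everything here — function names, the stubbed draft, the log shape — is an illustrative assumption about how such a layer could be wired, not FortiVault's implementation.

```python
def handle_query(query: str, category: str,
                 trust_score: float, threshold: float):
    """Execute -> evaluate -> log, as one pass (all names hypothetical)."""
    # 01 - FortiAgent executes: retrieval and generation stubbed as a draft.
    draft = f"[grounded draft answer for: {query}]"

    # 02 - FortiVault evaluates: gate the draft on the category's Trust Score.
    decision = "auto-send" if trust_score >= threshold else "human-review"

    # 03 - Audit trail logged: inputs, decision, and outcome in one entry.
    log_entry = {
        "query": query,
        "category": category,
        "trust_score": trust_score,
        "threshold": threshold,
        "decision": decision,
    }
    return draft, decision, log_entry

draft, decision, entry = handle_query(
    "Why was I charged twice?", "billing",
    trust_score=91.0, threshold=95.0,
)
print(decision)  # -> human-review (score below the billing gate)
```

The key design point the sketch makes concrete: the draft is produced unconditionally, but it only reaches the customer if the evaluation step allows it — execution and governance stay separate layers.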

AI governance questions answered

Is AI Trust Score a real-time metric or a batch calculation?

The AI Trust Score is updated continuously as FortiAgent handles conversations. It reflects the rolling accuracy across recent interactions in each support category — so teams see changes in real time as FortiAgent is corrected or validated by human reviewers.

What happens when a category's Trust Score drops below its automation gate?

FortiVault immediately requires human review for all FortiAgent responses in that category. Responses are drafted by FortiAgent but held in a review queue — a support agent approves or corrects before the customer sees anything. Automation resumes automatically once accuracy recovers.

Who sets the automation gating thresholds?

Support operations teams configure thresholds in the FortiVault admin console. Different categories can have different thresholds — billing and refund queries typically require higher accuracy than informational FAQ responses. Thresholds can be changed at any time without redeploying FortiAgent.

How does this differ from standard AI chatbot compliance features?

Standard compliance features — data residency, encryption, privacy policies — address how data is handled. Governance addresses whether the AI's decisions are accurate and appropriate in real time. An AI can be GDPR-compliant and still give customers wrong billing information. FortiVault's governance layer catches the latter.

See AI Governance Working in a Live Demo

AI Trust Score, Automation Gating, and full Audit Trail — working together in a real FortiAgent deployment.