
Designing an AI Co-Pilot for Customer Support Agents

Domain: Customer service · Enterprise

Tools: Figma, Figma Make, FigJam


Executive Summary

Role: Lead UX Designer
Scope: AI-assisted support workspace, agent workflows, response suggestion system, and AI confidence guardrails
Team: Self-initiated concept based on research into enterprise support workflows and AI-assisted tooling
Outcome: Concept design exploring how AI can reduce handle time, surface context, and support agent decision-making


How I designed an AI-assisted agent tool that reduces handle time, surfaces contextual suggestions, and keeps humans in control of every interaction.

The problem

Support agents in large enterprise environments handle hundreds of tickets every day. Each ticket requires them to read long conversation histories, search knowledge bases, and type responses from scratch. The average handle time (AHT) was over 9 minutes per ticket. Agents reported fatigue, inconsistent responses, and frustration with the tools that were supposed to help them.

AI was already being used in the background, but it was not surfaced to agents in a meaningful or trustworthy way. The intelligence existed, but the experience did not.

"The AI knows the answer. The agent doesn't know the AI knows. The customer waits." 

A recurring pattern I wanted to break.

Research & Discovery

I studied how support agents actually work through shadowing sessions, workflow mapping, and a review of published research on agent tooling at scale. Three pain points emerged consistently:

Context Overload

Agents re-read 20–40 messages per ticket to get up to speed, even for handoffs they didn't initiate.


Blank-Page Fatigue

Crafting responses from scratch for every ticket was mentally taxing and led to tone inconsistencies.


AI Distrust

Agents had seen AI suggestions before but did not trust them: there were no confidence indicators, no explanations, and no clear controls for the agent.

Design Principles

Before wireframing, I defined three non-negotiable principles for this AI co-pilot:

AI Suggests, Human Decides

Every AI output is a starting point, not a final action. Agents always have a one-click override.


Show Confidence, Not Just Output

Each suggestion carries a confidence indicator so agents can calibrate how much to trust it.


No Hidden Automation

Nothing happens without agent awareness. Fallback states are explicit, not invisible.


The solution

The co-pilot panel sits next to the agent workspace. It provides two AI capabilities: a conversation summary and ranked response suggestions, both visible only to the agent.

Screen 1 - Agent workspace with AI co-pilot panel

The AI co-pilot sits alongside the agent’s existing workspace, supporting the workflow without interrupting it. The summary lets agents get up to speed without re-reading the full thread, and the ranked suggestions give each response a starting point instead of a blank page.

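To make that concrete, here is a minimal sketch, in TypeScript, of the data such a panel might consume. All names here (ConversationSummary, CopilotSuggestion, the rationale field) are illustrative assumptions rather than a prescribed API; the point is that confidence and rationale travel with every suggestion instead of being bolted on later.

```typescript
// Hypothetical data contract for the co-pilot panel. All names are
// illustrative assumptions; the concept does not prescribe an API.

interface ConversationSummary {
  ticketId: string;
  summary: string;                                 // short recap of the thread so far
  sentiment: "positive" | "neutral" | "negative";  // surfaced so mid-conversation shifts stay visible
}

interface CopilotSuggestion {
  id: string;
  draft: string;      // suggested reply text, always editable by the agent
  confidence: number; // 0–1, rendered as the confidence indicator
  rationale: string;  // why this suggestion was made, shown alongside it
}

interface CopilotPanelState {
  summary: ConversationSummary;
  suggestions: CopilotSuggestion[]; // ranked, highest confidence first
}
```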

Screen 2 - Low confidence fallback state

When the AI is uncertain, the interface clearly communicates the low confidence level. Instead of automatically suggesting a response, the system encourages the agent to review the conversation and respond manually.


Screen 3 - Agent editing an AI suggestion

AI suggestions act as a starting point rather than a final answer. Agents can review, edit, and personalize the response before sending it, ensuring they remain in full control of the interaction.


Screen 4 - AI analysing / loading state

While the AI processes the conversation, the interface communicates that suggestions are being generated. Clear loading feedback prevents confusion and reassures agents that the system is actively working in the background.

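Taken together, the four screens correspond to distinct panel states. As a minimal sketch (reusing CopilotSuggestion from the earlier sketch, and again only an assumption about implementation), modelling them as an explicit union keeps every fallback visible in code, matching the "no hidden automation" principle:

```typescript
// Hypothetical panel states, one per screen above. A discriminated union
// means no state, including fallbacks, can exist implicitly.
type CopilotPanelStatus =
  | { kind: "analysing" }                               // Screen 4: suggestions being generated
  | { kind: "ready"; suggestions: CopilotSuggestion[] } // Screen 1: ranked suggestions shown
  | { kind: "editing"; draft: string }                  // Screen 3: agent owns the text
  | { kind: "lowConfidence"; reason: string };          // Screen 2: manual response encouraged
```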

User Flow - AI Guardrails

I mapped every point where AI could fail and designed explicit fallback states for each:


When AI confidence drops below 60%, the suggestion panel shows a low-confidence warning and encourages the agent to write the response manually. This helps prevent low-quality AI responses from reaching customers.
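A sketch of that guardrail, using the hypothetical types above (the 60% threshold comes from the flow; the function itself is an illustrative assumption):

```typescript
// Hypothetical gating step between the model and the panel.
const CONFIDENCE_THRESHOLD = 0.6; // the 60% cut-off from the guardrail flow

function gateSuggestions(suggestions: CopilotSuggestion[]): CopilotPanelStatus {
  // Suggestions arrive ranked, so the first carries the highest confidence.
  const best = suggestions[0];
  if (best === undefined || best.confidence < CONFIDENCE_THRESHOLD) {
    // Below the threshold: show the warning and hand control to the agent.
    return { kind: "lowConfidence", reason: "AI confidence below 60%" };
  }
  // At or above the threshold: surface the list; the agent still decides.
  return { kind: "ready", suggestions };
}
```

Note that nothing in this path can send a reply on its own: even the "ready" state only surfaces drafts for the agent to accept, edit, or discard.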

Projected Outcomes

↓ 38% reduction in average handle time
↑ 22% increase in agent confidence in AI suggestions
↓ 60% less time spent reviewing conversation history
0 AI responses sent without agent review

What I learned

Designing for AI is not about making it more visible. It is about making it visible in the right way. The confidence score turned out to be the most important design decision. Agents trusted the tool much more once they understood why a suggestion was made, not just what the suggestion was.


The biggest UX challenge was not the AI itself but the fallback states. What happens when the AI is wrong? What happens when confidence is low? What happens when the customer’s sentiment changes mid-conversation? These edge cases are where trust is either built or lost.
