Important note: this is live, private organisation data, so ticket titles, expert names, company names, and specific products and terms have been masked.
Korca Triage Agent
What it does
The Korca Triage Agent answers one question that support teams ask dozens of times a day: "Who is the best person to handle this ticket?"
It does this by reasoning over a knowledge graph built from real production support history - ~500 tickets, 12 experts and ~100 clients. Given a new ticket, the agent traverses the graph to find which experts have historically resolved the most similar tickets, then layers in client relationship context before making its recommendation.
The platform uses deterministic graph routing as a baseline: vector similarity to find related tickets, then a composite score to rank experts. That approach reaches 72% top-1 accuracy on confirmed tickets. Solid, but it misses cases where the ticket wording differs from historical patterns or where client context overrides topic similarity. The Aura agent is the reasoning layer on top: it follows the same graph relationships but uses multi-hop tool calls and LLM reasoning to lift top-1 accuracy to 93.4%.
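The production composite formula isn't shown here, but the baseline idea can be sketched in a few lines. This is an illustrative sketch only: the signal names and the 0.6/0.3/0.1 weights are assumptions for the example, not the real scoring constants.

```python
# Illustrative sketch of a fixed composite routing score.
# The weights below are assumptions, not the production values.
def composite_score(topic_similarity: float,
                    client_history: float,
                    skill_coverage: float,
                    w_topic: float = 0.6,
                    w_client: float = 0.3,
                    w_skill: float = 0.1) -> float:
    """Weighted blend of the three routing signals, each in [0, 1]."""
    return (w_topic * topic_similarity
            + w_client * client_history
            + w_skill * skill_coverage)

def rank_experts(signals: dict[str, tuple[float, float, float]]) -> list[str]:
    """Rank expert names by composite score, highest first."""
    return sorted(signals,
                  key=lambda name: composite_score(*signals[name]),
                  reverse=True)
```

A fixed formula like this always applies the same weights; the point of the agent layer is that it can decide per ticket which signal dominates instead.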
The agent is integrated into the live UI. Every ticket drawer has an "Ask Aura" button that queries the agent in real time alongside the graph routing score, surfacing both signals to the support manager making the final call.
Tickets are received via webhook, and the platform can either assign the expert automatically through the Teamwork Desk API or simply post the suggestion as a message on the ticket.
Dataset and why a graph fits
Dataset: Real support tickets from a live SaaS platform. Imported via the Teamwork Desk API, enriched with expert resolutions and client relationships.
Why a graph is the right structure:
A table can tell you Expert A resolved a ticket about job board integrations. A graph can tell you *why* Expert A is the right person for *this* ticket about *this* client.
That two-signal convergence - Expert A resolves similar tickets AND has history with this client - is not a join you can write cleanly in SQL. In the graph it's a two-hop traversal followed by a frequency count. The agent follows this chain automatically.
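In memory, that chain really is just a frequency count. A minimal sketch (the edge lists are toy stand-ins for the graph's `RESOLVED` and `FROM` relationships, not real data):

```python
from collections import Counter

# Toy stand-ins for graph edges: (expert, ticket) RESOLVED pairs
# and ticket -> client FROM pairs. In production these live in Neo4j.
resolved = [("Expert A", "T1"), ("Expert A", "T2"), ("Expert B", "T3")]
ticket_client = {"T1": "Acme", "T2": "Acme", "T3": "Globex"}

def experts_for_client(client: str) -> Counter:
    """Hop 1: client -> its tickets; hop 2: tickets -> resolving experts;
    then count how often each expert appears."""
    return Counter(expert for expert, ticket in resolved
                   if ticket_client.get(ticket) == client)
```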
The routing problem is inherently relational. Support expertise isn't a flat attribute on a person - it emerges from the intersection of three dimensions that all live naturally as graph relationships:
- Topic familiarity: which tickets has this expert resolved? (`RESOLVED`, `CONFIRMED_FOR`)
- Client affinity: which clients does this expert have history with? (`FROM`, `WORKS_FOR` hierarchy). A ticket from a client an expert has repeatedly handled is a strong prior even if the topic is unfamiliar. `WORKS_FOR` captures sub-contractor/parent relationships so that a ticket from a sub-contractor rolls up to the parent client's history.
- Skill coverage: which product areas does this expert own? (`HAS_SKILL`) Used as a fallback when ticket history is thin for a given topic.
None of these dimensions work well in isolation. A flat "expert skills" table misses client history. A simple ticket-count misses the fact that the same expert handles different topics for different clients. The graph holds all three signals simultaneously and the agent can reason across them in a single traversal.
Graph schema:
```
(:Ticket {id, subject, content, gemini_embedding})
(:User {name, email, skills[]})
(:Client {name, domain})
(:Skill {name})
(:User)-[:RESOLVED]->(:Ticket)
(:User)-[:CONFIRMED_FOR]->(:Ticket)
(:Ticket)-[:FROM]->(:Client)
(:Client)-[:WORKS_FOR]->(:Client)
(:User)-[:HAS_SKILL]->(:Skill)
```
The `CONFIRMED_FOR` edge is the strongest signal - a human explicitly confirmed this expert handled this ticket correctly. `RESOLVED` is weighted slightly lower (self-reported). The agent's Expert Resolver tool aggregates both.
Scale:
- 485 Ticket nodes with 3072-dim Gemini embeddings
- 12 User (expert) nodes
- ~100 Client nodes
- 938 total nodes · 3,521 relationships
- Vector index: `ticket_embedding_gemini` (cosine, Vertex AI / gemini-embedding-001)
Agent tools
| Tool | Type | Role |
| --- | --- | --- |
| Semantic Ticket Finder | Similarity Search | Entry point - embeds the incoming ticket and finds the 20 most similar historical tickets |
| Expert Resolver | Cypher Template | Core routing - traverses `RESOLVED`/`CONFIRMED_FOR` edges from similar tickets to rank experts by frequency |
| Client History Lookup | Cypher Template | Client affinity - finds recent tickets from the same client and who handled them |
| Skill Match | Cypher Template | Fallback - matches keywords from the ticket against expert skill nodes when ticket history is thin |
Tool 1 - Semantic Ticket Finder (Similarity Search)
The entry point for every routing request. The incoming ticket text (`subject + body`) is embedded using `gemini-embedding-001` (3072-dim, Vertex AI) at query time and matched against the `ticket_embedding_gemini` vector index. Returns the top-20 most semantically similar historical tickets with their IDs, subjects, and pre-computed routing suggestions. Using the same model for both stored vectors and query embeddings ensures scores are meaningful - a cosine similarity of 0.91 actually means the tickets describe the same problem.
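Cosine similarity over the 3072-dim vectors is what the index computes under the hood; a minimal reference implementation for intuition (not the index code itself):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors:
    1.0 means identical direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```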
Tool 2 - Expert Resolver (Cypher Template)
The core routing logic. Takes the similar ticket IDs from Tool 1 and traverses `RESOLVED` and `CONFIRMED_FOR` edges to count how many times each expert has handled those tickets. `CONFIRMED_FOR` edges carry the strongest signal - they represent explicit human confirmation that this expert was the right person. The result is a ranked list of experts with matched ticket subjects and client names, giving the agent concrete evidence to explain its recommendation.
```cypher
WITH split($ticket_ids_csv, ',') AS id_strings
MATCH (u:User)-[rel:RESOLVED|CONFIRMED_FOR]->(t:Ticket)
WHERE toString(t.id) IN id_strings
OPTIONAL MATCH (t)-[:FROM]->(c:Client)<-[:FROM]-(other:Ticket)
              <-[:RESOLVED|CONFIRMED_FOR]-(u)
RETURN u.name AS name, u.email AS email,
       count(DISTINCT t) AS similar_tickets_matched,
       count(DISTINCT other) AS same_client_tickets,
       collect(DISTINCT c.name)[0..3] AS clients
ORDER BY similar_tickets_matched DESC, same_client_tickets DESC
LIMIT 5
```
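The query above counts both edge types equally; the stronger weighting of `CONFIRMED_FOR` happens when the agent reasons over the results. A hedged sketch of how that downstream weighting might look (the 2:1 ratio is an illustrative assumption, not the production value):

```python
# Hypothetical aggregation over (expert, edge_type) rows returned by
# the Expert Resolver. The 2.0/1.0 weights are illustrative assumptions.
WEIGHTS = {"CONFIRMED_FOR": 2.0, "RESOLVED": 1.0}

def weighted_expert_scores(rows: list[tuple[str, str]]) -> dict[str, float]:
    """rows: (expert_name, edge_type) pairs. Returns expert -> weighted
    count, with human-confirmed edges counting double."""
    scores: dict[str, float] = {}
    for expert, edge_type in rows:
        scores[expert] = scores.get(expert, 0.0) + WEIGHTS[edge_type]
    return scores
```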
Tool 3 - Client History Lookup (Cypher Template)
The client affinity layer. When the agent detects a company name in the ticket, it calls this tool to surface recent tickets from that client and which expert handled each one. The graph's `WORKS_FOR` relationship captures client hierarchy: a sub-contractor or white-label client points to its parent company, so tickets from either entity contribute to the same client history. A client who has worked with one expert across a dozen tickets is a strong signal, and it's encoded structurally in the graph, no pre-computed score needed. The agent uses this to break ties or override topic-based routing when client familiarity is the dominant signal.
```cypher
MATCH (t:Ticket)-[:FROM]->(c:Client)
WHERE toLower(c.name) CONTAINS toLower($client_name)
   OR toLower(c.domain) CONTAINS toLower($client_name)
OPTIONAL MATCH (u:User)-[:CONFIRMED_FOR|RESOLVED]->(t)
RETURN t.subject AS subject, t.id AS id,
       u.name AS expert, t.ingest_status AS status
ORDER BY t.id DESC
LIMIT 10
```
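The `WORKS_FOR` roll-up described above can be sketched in plain Python (the parent map is toy data standing in for the graph's `WORKS_FOR` edges):

```python
# Toy parent map standing in for WORKS_FOR edges: sub-contractor -> parent.
parent_of = {"SubCo": "ParentCorp"}

def root_client(client: str) -> str:
    """Follow WORKS_FOR edges up to the top-level client, so tickets
    from a sub-contractor roll up to the parent's history."""
    seen = set()
    while client in parent_of and client not in seen:
        seen.add(client)  # guard against accidental cycles in the map
        client = parent_of[client]
    return client
```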
Tool 4 - Skill Match (Cypher Template)
The fallback layer. When Expert Resolver returns weak signal - fewer than 3 matched tickets for the top candidate - the agent extracts keywords from the ticket subject and calls this tool to find experts whose HAS_SKILL nodes match. This mirrors the local graph routing fallback: when ticket history is thin for a topic (a new product area, an unusual request type), skill coverage becomes the deciding signal. The agent passes keywords as a comma-separated string and the tool returns experts ranked by how many skills match.
```cypher
MATCH (u:User)-[:HAS_SKILL]->(s:Skill)
WHERE any(keyword IN split($keywords, ',')
          WHERE toLower(s.name) CONTAINS toLower(trim(keyword)))
RETURN u.name AS name, u.email AS email,
       collect(s.name) AS matching_skills
ORDER BY size(matching_skills) DESC
LIMIT 5
```
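The same matching logic, sketched in plain Python over a toy in-memory skill table (the expert names and skills are illustrative, not real data):

```python
def skill_match(keywords_csv: str,
                expert_skills: dict[str, list[str]]) -> list[tuple[str, list[str]]]:
    """Mirror of the Skill Match query: case-insensitive substring match
    of each keyword against skill names, ranked by number of matches."""
    keywords = [k.strip().lower() for k in keywords_csv.split(",") if k.strip()]
    results = []
    for expert, skills in expert_skills.items():
        matched = [s for s in skills
                   if any(k in s.lower() for k in keywords)]
        if matched:
            results.append((expert, matched))
    results.sort(key=lambda r: len(r[1]), reverse=True)
    return results
```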
Example interaction
Incoming ticket:
"Hi, we have a few team changes to make on our website. One of our consultants has left and needs to be removed, and we're adding two new team members β details attached. Could you update the team page for us please?"
Agent reasoning (4-tool traversal):
1. Semantic_Ticket_Finder embeds the query and retrieves 20 similar historical tickets - surfacing past team page updates, staff change requests, and website content edits
2. Expert_Resolver counts resolutions across those 20 tickets:
- Expert L: 5 matched tickets - team page edits, consultant profile updates, website content changes
- Expert R: 2 matched tickets - website configuration and technical setup
3. Client_History_Lookup confirms: the last 3 tickets from this client were all handled by Expert L
4. Expert_Resolver returned strong signal (5 matches) so Skill_Match is not needed
5. Agent concludes: Expert L owns website content changes for this client. Routes to Expert L.
Why this is a meaningful result: the graph routing surfaced Expert R as the top candidate based on vector similarity to "website" tickets. The Aura agent called the client lookup tool, saw the existing relationship between this client and Expert L, and used client affinity as the deciding signal β arriving at the correct expert where topic similarity alone would have failed.
Accuracy
Measured against 485 tickets with confirmed expert assignments.
Current graph routing (vector similarity + composite score)
- Top-1: 72%
- Top-2: 93.8% (correct expert was in the top 2 choices)
- Top-3: 99.2% (correct expert was in the top 3 choices)
Aura agent (top-1): 93.4%
The top-2 and top-3 numbers show the graph already contains the right answer in almost every case. The hard problem is ranking it first. The Aura agent's multi-hop reasoning addresses exactly that: it lifts top-1 accuracy from 72% to 93.4% by weighing credential patterns, client history, and request type together rather than applying a fixed scoring formula. It correctly routes tickets where vector similarity misleads - cases where the topic matches one expert but the actual request pattern points to another.
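Top-k accuracy as reported above is simple to compute from ranked candidate lists and confirmed assignments; a minimal sketch:

```python
def top_k_accuracy(predictions: list[list[str]],
                   truths: list[str],
                   k: int) -> float:
    """Fraction of tickets whose confirmed expert appears within
    the top-k ranked candidates."""
    hits = sum(1 for ranked, truth in zip(predictions, truths)
               if truth in ranked[:k])
    return hits / len(truths)
```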
What makes this different
1. Real production data, real stakes.
Every ticket, expert, and client relationship in the graph comes from a live support platform. The accuracy numbers are measured against actual expert confirmations - not synthetic benchmarks. The embedding format (`subject\n\nbody`) mirrors the routing query exactly, so similarity scores reflect the real retrieval task.
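Keeping stored and query-time embedding text identical is the whole trick; the format is trivially reproduced:

```python
def embedding_text(subject: str, body: str) -> str:
    """Build the exact text that was embedded for stored tickets
    (subject, blank line, body), so query-time embeddings match
    the vectors in the index."""
    return f"{subject}\n\n{body}"
```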
2. The graph stores the reasoning, not just the data.
`CONFIRMED_FOR` edges represent human judgement - a support manager explicitly confirming the right expert. `RESOLVED` edges represent outcome data. The agent traverses these edges to find experts with demonstrated competence on similar problems, not just experts who exist in the system. A new expert has no `RESOLVED` edges and won't be over-recommended just because they're listed.
3. Live UI integration.
The Aura agent isn't a standalone demo - it's a button in the production ticket drawer. The platform supports three routing modes that teams can choose between:
Manual - a support manager opens a ticket, clicks "Ask Aura", and sees the recommendation alongside the graph routing score. They make the final call.
Semi-automatic - every new ticket arriving via webhook is automatically routed by Korca, which posts the expert suggestion as a note directly in Teamwork Desk. The manager reviews and confirms.
Fully automatic - Korca assigns the expert in Teamwork Desk without human review, based on the routing score confidence.
This is a working production integration across all three modes - not a console prototype.
4. Three routing signals, one graph.
Client affinity (FROM, WORKS_FOR hierarchy), skill coverage (HAS_SKILL), and ticket history (RESOLVED, CONFIRMED_FOR) are all first-class graph relationships. The agent can traverse any combination: "find experts who resolved similar tickets AND whose past tickets share this client AND have the relevant skill listed." A fixed scoring formula applies these as weighted multipliers. The agent applies them as reasoning steps - it can decide which signal matters most for a given ticket type rather than always applying the same weights.
5. The Aura agent improves on graph routing, not just replaces it.
The graph routing already reaches 99.2% top-3 accuracy - the knowledge is in the graph. The Aura agent's job is to reason over that knowledge more effectively than a fixed scoring formula can. It gets the right answer on tickets where rigid vector similarity misleads: cases where the ticket topic matches one expert but the credential/account pattern, client history, or request type points to another.
Tech Stack
Graph database: Neo4j Aura
Agent integration: Neo4j Aura REST API
Embeddings: Gemini gemini-embedding-001 (3072-dim, Vertex AI)
Backend: Python / FastAPI
Frontend: React / TypeScript
Data source: Teamwork Desk API
Infrastructure:
Korca runs on a bare-metal Kubernetes cluster in a homelab environment. The full stack is self-hosted:
- ArgoCD for GitOps deployments
- Infisical for secrets management (Aura credentials, API keys)
- Cilium as the CNI
- Longhorn for persistent storage
- Loki + Grafana + Prometheus for logs and metrics
- Zot as a private OCI container registry
- cert-manager + Cloudflare for TLS
LinkedIn: https://www.linkedin.com/in/jevlachov/



