🏆 Build a Tiny Aura Agent Community Challenge


It's been a while since we had a contest, so let's kick one off: submissions are open!

Neo4j Aura Agent is now generally available :tada:
To celebrate, we want to see what the community builds with it.

Aura Agent lets you build and deploy graph-grounded AI agents in minutes, directly on top of your AuraDB knowledge graphs. No custom infrastructure required.

You can use Aura Agent to generate agents from your graph schema, reason across relationships, inspect how answers were produced, and deploy through REST or MCP endpoints. Aura Free is enough to get started and is available throughout February.

:bullseye: Community challenge

Build a tiny Aura Agent and share what it does.

Tiny really means tiny. This challenge is about clarity and graph thinking, not scale. Small graphs work great.

As guidance, your agent can:

  • stay under 10k nodes
  • focus on a single, well defined schema
  • answer one question really well

Any one of those is fine.


:white_check_mark: How to participate

Reply to this topic with:

  1. Agent name
  2. What your agent does in one sentence
  3. What data or graph it is grounded in
  4. One or two example questions it can answer

Optional but encouraged:

  • Screenshot of the agent UI or reasoning tab
  • Short Loom or GIF

No forms. No pressure. Just build and share.

:bellhop_bell: If you run into challenges, have feedback, or spot a :beetle: (positive vibes welcome too!), add a topic for our Aura Agent team here!

:trophy: What’s next?

We'll announce :sports_medal: prizes next week and we’ll highlight standout examples across:

  • Creative use cases
  • Strong graph reasoning
  • Clear, explainable answers
  • Community favorites

If this goes well, we will run another contest with different criteria for the second half of the month! So now’s the perfect time to experiment.

Here are all your assets to begin!

Launch blog by @zach_blumenfeld

Developer Guide tutorial

Demo video

New Aura Agent Product webpage

:backhand_index_pointing_right: Read the full product announcement here:

1 Like

Ari's Sample submission

Agent name
Star Wars Swapi Mini Agent

What the agent does
Answers questions about Star Wars characters, movies, and planets using a small knowledge graph and explains how answers are derived from graph relationships.

What data or graph it’s grounded in
A tiny Star Wars knowledge graph built from SWAPI data. The graph includes movies, characters, and planets and stays well under 10k nodes. It focuses on relationships such as appears in and homeworld to support multi hop reasoning.

Example question
Which characters are connected to Naboo across multiple movies?

(Screenshot: agent overview and question input)

Example response and reasoning
The agent identifies characters that have a relationship to Naboo and appear in more than one movie. It follows character to planet and character to movie relationships to determine overlap.

The resulting characters are:

  • R2-D2
  • Palpatine
  • PadmĂ© Amidala
  • Jar Jar Binks
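The multi-hop reasoning described above maps onto a short Cypher pattern. Here is a sketch; the labels and relationship types (Character, Movie, Planet, APPEARS_IN, HOMEWORLD) are assumptions based on the write-up, not necessarily the agent's actual schema:

```cypher
// Characters whose homeworld is Naboo and who appear in more than one movie.
// All labels and relationship types are assumed from the description above.
MATCH (c:Character)-[:HOMEWORLD]->(:Planet {name: "Naboo"})
MATCH (c)-[:APPEARS_IN]->(m:Movie)
WITH c, count(DISTINCT m) AS movieCount
WHERE movieCount > 1
RETURN c.name AS character, movieCount
ORDER BY movieCount DESC
```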

(Screenshot: agent response and reasoning view)

Optional notes
This agent was built entirely on Aura Free. The goal was to keep the graph small and focused while still demonstrating multi hop reasoning and explainability in the agent responses.

2 Likes

Agent Name:

Find Attractions in a City / neighborhood Mini Agent

What it does:

Identifies places of interest of a given type (or types) in a target location.

What data or graph it’s grounded in:

A knowledge graph created from a collection of travel blog data, with restaurants, museums, hotels, etc. extracted via a GenAI agent. The graph also contains the hierarchical structure of the blogs/documents, plus the relevant Country -> City -> Neighborhood information.

Example Question:

What attractions / restaurants / shops / museums are in Mexico City / Roma Norte?
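To make the traversal concrete, a hypothetical Cypher sketch follows; every label and relationship type below is an assumption, since the submission does not list its schema:

```cypher
// Points of interest of selected types in a given city/neighborhood.
// Labels and relationship types here are illustrative assumptions only.
MATCH (:City {name: "Mexico City"})<-[:IN_CITY]-(hood:Neighborhood {name: "Roma Norte"})
MATCH (poi:PointOfInterest)-[:LOCATED_IN]->(hood)
WHERE poi.type IN ["restaurant", "museum", "shop"]
RETURN poi.name AS name, poi.type AS type
```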

Optional Notes:

Built on Aura Free.

1 Like

Awesome @ims585. First and foremost, welcome back; it's been a minute (or more than 4 minutes, to be exact) since we have seen you.

Thanks so much for jumping in and sharing this :raising_hands:
This is a great example of a clear, focused Aura Agent and exactly the kind of submission that helps others see what “tiny but effective” looks like.

As a small surprise for being the first submission, we’re sending you a Neo4j beanie. (see your DMs).

Thanks for helping kick off the challenge and setting the tone.

Really looking forward to seeing what others build from here.

Hey folks - I really like this competition and am looking forward to seeing more!

Here’s a small, concrete example to give another idea of how Aura Agents can be used for the challenge. My example is from the financial domain. Thanks to the GDC (formerly known as LDBC) for publishing this cool temporal graph dataset!

Setup (very lightweight)

Agent

Using Aura’s AI features, I created a first agent called “Financial Transfer Analyzer”.

It uses three tools:

1. FetchCompanies (Similarity Search)

  • For each Company node, I generated an OpenAI embedding from its description, using Neo4j's built-in GenAI plugin.

  • Stored the embedding as a node property and created a vector index.

  • Configured a similarity search tool on that index.

  • This allows natural queries like:

    “Which company produces smart energy devices and is based in Leipzig, Germany?”
    (Note: the dataset is synthetic; descriptions are artificial.)

2. Text2Cypher

  • Uses the known graph schema to translate natural language into Cypher.

  • Example:

    “How many companies does a person named Kevin G own between 2023 and 2024?”

3. Find Intermediary Accounts in Transfer Cycle (Cypher Template)

  • FinBench defines several temporal risk/fraud queries, as you can read in their full documentation.

  • I implemented TCR-4 as a Cypher Template Tool.

TCR-4 (conceptually):

Given two accounts (via id) and a time window, check whether money was transferred from src to dst.
If so, find intermediary accounts that received money from dst and transferred money back to src within the same window, and aggregate counts and amounts.

This tool runs a predefined Cypher query and only needs parameters (srcAccountId, dstAccountId, startTime, endTime).

Here is the Cypher query:

CYPHER 25
MATCH (src:Account {accountId: toInteger($srcAccountId)})
MATCH (dst:Account {accountId: toInteger($dstAccountId)})
MATCH (src)-[e1:TRANSFER]->(dst)
WHERE e1.createTime >= datetime($startTime) AND e1.createTime <= datetime($endTime)
WITH DISTINCT src, dst
MATCH (dst)-[e3:TRANSFER]->(other:Account)
WHERE e3.createTime >= datetime($startTime) AND e3.createTime <= datetime($endTime)
// Aggregate the dst->other leg before matching other->src, so the two legs
// are not multiplied together in the counts and sums
WITH src, other, count(e3) AS fwdEdges, sum(e3.amount) AS fwdAmount, max(e3.amount) AS fwdMax
MATCH (other)-[e2:TRANSFER]->(src)
WHERE e2.createTime >= datetime($startTime) AND e2.createTime <= datetime($endTime)
RETURN
  other.accountId AS otherAccountId,
  fwdEdges + count(e2) AS totalTransferEdges,
  fwdAmount + sum(e2.amount) AS totalTransferAmount,
  // coll.max does not exist; take the larger of the two per-leg maxima instead
  CASE WHEN fwdMax >= max(e2.amount) THEN fwdMax ELSE max(e2.amount) END AS maxTransferAmount
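For anyone wanting to reproduce the FetchCompanies similarity-search setup from tool 1 above, the vector-index side might look roughly like this. The index name, embedding dimension, and property names are my assumptions, and `genai.vector.encode` requires the GenAI plugin plus an OpenAI API key:

```cypher
// 1. Generate and store an embedding per Company description (GenAI plugin required).
MATCH (c:Company) WHERE c.description IS NOT NULL
WITH c, genai.vector.encode(c.description, "OpenAI", {token: $openAiApiKey}) AS emb
CALL db.create.setNodeVectorProperty(c, "embedding", emb);

// 2. Create the vector index the similarity-search tool is configured on.
CREATE VECTOR INDEX company_embeddings IF NOT EXISTS
FOR (c:Company) ON (c.embedding)
OPTIONS {indexConfig: {`vector.dimensions`: 1536, `vector.similarity_function`: "cosine"}};

// 3. Query it (conceptually what the tool does under the hood).
CALL db.index.vector.queryNodes("company_embeddings", 5, $questionEmbedding)
YIELD node, score
RETURN node.name, score;
```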

Example interaction

  1. Ask for the Accounts to check
    The agent first uses Text2Cypher to resolve the nicknames to the correct account IDs.

  2. The business question

    With this question, the agent invokes the Find Intermediary Accounts in Transfer Cycle tool with the resolved IDs and time window. It returns a rendered table listing intermediary accounts, number of transfers, total amounts, and the maximum transfer amount.

Why this is interesting

  • The agent combines semantic search, schema-aware query generation, and domain-specific Cypher templates.

  • It shows how Aura Agents can translate a real-world risk question into:

    1. Graph lookup

    2. Fraud pattern detection

    3. Structured, analyst-friendly output

Hopefully this gives another concrete angle for how to approach the challenge with Aura Agents :rocket:

1 Like

hey, so excited for another challenge.

here’s my submission:

Agent name:
ConceptCompass

What it does:

Evaluates whether a startup idea solves real user problems and highlights where it is crowded versus uniquely differentiated using explicit graph relationships.

What data or graph it is grounded in:
A small Neo4j graph connecting Ideas, Problems, UserTypes, and ExistingSolutions via SOLVES, AFFECTS, and ADDRESSES relationships.

Example questions it can answer:

  • Does this idea solve a problem that already has strong competitors?
  • Which problems does this idea address that remain unmet?

I intentionally kept the graph and agent small to focus on explainable, relationship-driven reasoning rather than scale.

anyways, was fun. enjoyed playing around

1 Like

Great to see some momentum!

Here is the first prize, to be awarded when the winner is announced on Feb 16:


https://www.lego.com/en-in/product/soccer-ball-43019

:wrapped_gift: First-Place Prize: LEGO® Editions Soccer Ball (Set 43019)

We’re giving away a brand-new LEGO® Editions Soccer Ball building set as the first prize! It’s a detailed collectible build perfect for fans of sports, design, and creativity.

It won't be released until March 1st, so this is hot off the presses as the world prepares for the World Cup!

Hi @Ari_Neo4j & team, here’s my submission :slight_smile:
PagerDruid — Blameless On-Call Graph Agent for Microservices (Neo4j Aura Agent)

PagerDruid is a sarcastic-but-blameless on-call assistant that diagnoses incidents using graph reasoning over a microservices dependency graph and evidence retrieval from runbooks + incident history.

:white_check_mark: Data note: The schema is purpose-built and the dataset is synthetic, created to safely demonstrate realistic on-call triage patterns (services, dependencies, ownership, runbooks, incidents).

It answers questions like:

  • “Checkout is timing out with 502s — what should I check first and who should I page?”
  • “If redis-cache is unhealthy, what’s the blast radius within 2 hops?”
  • “What’s the shortest dependency path from checkout to core-db?”

:brain: What it does

PagerDruid combines:

  1. Keyword retrieval (RAG without embeddings)
    • Full-text search over runbooks, incidents, and services
  2. Graph reasoning
    • Dependency paths (DEPENDS_ON)
    • Blast radius within N hops (upstream + downstream)
    • Ownership routing (OWNED_BY)
  3. Actionability first
    • “First checks” from runbooks
    • “Who to page” with on-call contacts
    • Similar historical incidents + what resolved them
    • Evidence list (service names, runbook IDs, incident IDs)

:spider_web: Graph schema (core model)

Nodes

  • Service(name, tier, description)
  • Team(name, timezone, contact)
  • Runbook(id, title, tags, steps, serviceName)
  • Incident(id, time, severity, summary, resolution)

Relationships

  • (Service)-[:DEPENDS_ON]->(Service)
  • (Service)-[:OWNED_BY]->(Team)
  • (Runbook)-[:FOR_SERVICE]->(Service)
  • (Incident)-[:AFFECTED]->(Service)
  • (Incident)-[:RESOLVED_BY]->(Runbook)

:camera: Screenshot 1 — Schema / model diagram

Description: a screenshot of the schema from the Data Importer model view, showing labels and relationships.


:wrench: Agent tools (deterministic + safe)

PagerDruid uses Cypher Template tools (no hallucinated facts) to retrieve and reason deterministically:

Retrieval (full-text)

  • SearchServices → searches service_ft
  • SearchRunbooks → searches runbook_ft
  • SearchIncidents → searches incident_ft

Graph reasoning

  • ImpactRadius → upstream + downstream within 2 hops (+ owners)
  • ShortestDependencyPath → path between services (+ owners per node)
  • OwnersToPage → maps service(s) to owning team + contact
  • RecentIncidentsForService → last incidents for a service

Detail extraction

  • RunbookSteps → returns runbook steps for top 1–2 matches
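As an illustration of how a tool like ImpactRadius could be expressed as a Cypher template, here is my sketch based on the schema above (not necessarily the author's exact query):

```cypher
// Blast radius: services within 2 hops upstream or downstream of $serviceName,
// plus the team that owns each affected service.
MATCH (s:Service {name: $serviceName})
MATCH (s)-[:DEPENDS_ON*1..2]-(affected:Service)
OPTIONAL MATCH (affected)-[:OWNED_BY]->(team:Team)
RETURN DISTINCT affected.name AS service,
       team.name AS owner,
       team.contact AS contact
```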

:camera: Screenshot 2 — Tools configured in Aura Agent

Description: a screenshot from the Aura Agent builder showing the configured tool list (names visible).


:magnifying_glass_tilted_left: Retrieval layer (no embeddings)

Embeddings were intentionally skipped to reduce setup friction and keep the solution highly reproducible.

Instead, PagerDruid uses Neo4j full-text indexes:

  • runbook_ft over Runbook.title, Runbook.tags, Runbook.steps
  • incident_ft over Incident.summary, Incident.resolution
  • service_ft over Service.name, Service.description
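For anyone recreating this setup, the index definitions are plain Cypher. A sketch consistent with the names above (exact property lists are taken from the bullets, the query at the end is illustrative):

```cypher
// Full-text indexes backing the three retrieval tools.
CREATE FULLTEXT INDEX runbook_ft IF NOT EXISTS
FOR (r:Runbook) ON EACH [r.title, r.tags, r.steps];

CREATE FULLTEXT INDEX incident_ft IF NOT EXISTS
FOR (i:Incident) ON EACH [i.summary, i.resolution];

CREATE FULLTEXT INDEX service_ft IF NOT EXISTS
FOR (s:Service) ON EACH [s.name, s.description];

// A retrieval tool then queries, for example:
CALL db.index.fulltext.queryNodes("runbook_ft", $searchTerms)
YIELD node, score
RETURN node.id, node.title, score
ORDER BY score DESC LIMIT 5;
```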

:camera: Screenshot 3 — Indexes / constraints proof

Description: output of SHOW INDEXES showing runbook_ft, incident_ft, service_ft (and any uniqueness constraints).

:receipt: Graph proof (visual reasoning)

PagerDruid’s “graph brain” is visible and testable through direct graph queries:

1) Shortest path (visual)

Example shortest path query (rendered as a graph):

MATCH (a:Service {name:"checkout"})
MATCH (b:Service {name:"payment"})
MATCH p = shortestPath((a)-[:DEPENDS_ON*..10]->(b))
RETURN p;

:camera: Screenshot 4 — Shortest dependency path (Graph view)

Description: screenshot of the Graph view showing the path nodes and DEPENDS_ON arrows.

:clapper_board: Demo

Prompt

Checkout is timing out with 502s — what should I check first and who should I page?

What PagerDruid returns (always)

  • Most likely culprit services (with reasons)

  • Blast radius (upstream + downstream within 2 hops)

  • Dependency chain (shortest path to top suspect)

  • First 5 checks (from runbooks; each ends with [RB-xxx])

  • Who to page (service → team → contact)

  • Similar historical incidents + what fixed them

  • Evidence used (IDs + service names)

:camera: Screenshot 6 — Agent response (with evidence)

Description: screenshot of the agent response showing: culprits + blast radius + runbook checks + paging + incidents + evidence list.

Toolchain used in this demo:
SearchRunbooks + SearchIncidents + ImpactRadius + OwnersToPage + RunbookSteps (+ ShortestDependencyPath when needed)


Why this is useful (and fun)

PagerDruid turns a dependency graph into a decision engine:

  • Faster triage: “Which team do I page?”

  • Safer diagnosis: answers are evidence-backed (runbooks + incidents)

  • Better communication: blast radius + dependency path makes impact obvious

And yes — it stays blameless. The systems misbehaved. Humans are innocent. :grinning_face_with_smiling_eyes:

2 Likes

Things are starting to :fire: up! Welcome back @joslat - I think it's been about 7 minutes :slight_smile: since your last post.

1 Like

Agent name
Melbourne Property Arbitrage Detector

What the agent does
This agent helps families and investors identify "Education-Value" hotspots in the Melbourne property market. By navigating the relationships between school performance (ATAR scores) and real-world property prices, it identifies "Efficiency Gaps"—suburbs where public school outcomes rival elite private schools but property entry prices remain accessible. It can answer complex queries about house sizes (rooms), budget constraints, and school types, while explicitly showing the "Chain of Thought" logic it uses to traverse the graph.

What data or graph it’s grounded in

A high-fidelity knowledge graph of the Melbourne property and education landscape, modeled using a Hub-and-Spoke architecture.

  • Nodes:

    • :Property: Includes price, number of rooms/area, and address.
    • :School: Includes ATAR scores, sector (Public/Private/Selective), and annual fees.
    • :Suburb: Acts as the central hub connecting properties to their respective school zones.
  • Relationships:

    • (Property)-[:LOCATED_IN]->(Suburb)
    • (School)-[:ZONED_FOR]->(Suburb)

Example question
I have a budget of $2M and need at least 4 bedrooms. Which suburbs have top-tier public schools (ATAR > 75) where I can avoid paying private school fees?

Example response and reasoning

  1. Filter Properties: Identifies :Property nodes where price <= 2000000 and rooms >= 4.
  2. Traverse to Suburb: Follows the [:LOCATED_IN] relationship to identify the candidate :Suburb hubs.
  3. Cross-Reference Schools: Follows the [:ZONED_FOR] relationship from those suburbs to find :School nodes where sector = "Public" and atar > 75.
  4. Calculate Arbitrage: Compares the median property price in that zone against the cost of private school fees stored in neighboring nodes to determine the "Efficiency Score."
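That reasoning chain maps naturally onto a single Cypher pattern. A sketch, where the property names (price, rooms, sector, atar) are assumptions taken from the description and thresholds match the example question:

```cypher
// Suburbs with affordable 4-bedroom properties zoned for strong public schools.
// Property/School attribute names are assumed from the write-up above.
MATCH (p:Property)-[:LOCATED_IN]->(sub:Suburb)<-[:ZONED_FOR]-(sch:School)
WHERE p.price <= 2000000
  AND p.rooms >= 4
  AND sch.sector = "Public"
  AND sch.atar > 75
RETURN sub.name AS suburb,
       count(DISTINCT p) AS matchingProperties,
       collect(DISTINCT sch.name) AS publicSchools
ORDER BY matchingProperties DESC
```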
1 Like


Agent Name: Prairie Water Research Intelligence Agent

What it does: Connects Saskatchewan water research publications to hydrometric monitoring stations and water quality measurements, letting users explore which researchers and institutions are studying which rivers and whether water quality objectives are being met.

Data/Graph: 103 nodes across 6 labels — Works (35 research publications), Authors (25 Saskatchewan water researchers), Institutions (University of Regina, University of Saskatchewan), Topics (15 water-related subfields), Stations (8 USGS/ECCC hydrometric monitoring sites on the Souris, Qu'Appelle, and Saskatchewan rivers), and Measurements (18 water quality readings for phosphorus, TDS, sodium, sulfate, nitrogen, and chloride). 187 relationships connecting authorship, citations, institutional affiliations, topic assignments, station-to-measurement links, and research-to-station associations.

Example questions it can answer:

  • "How many works has each institution published?" → U of S: 31, U of R: 17

  • "List all measurement nodes and their properties" → Returns all 18 water quality measurements with parameter, value, status, and year
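For the first example question, a Text2Cypher tool would generate something along these lines; the relationship types below (AFFILIATED_WITH, AUTHORED) are my assumptions, not the submission's actual schema:

```cypher
// Count of works per institution; relationship type names are assumed.
MATCH (inst:Institution)<-[:AFFILIATED_WITH]-(a:Author)-[:AUTHORED]->(w:Work)
RETURN inst.name AS institution, count(DISTINCT w) AS works
ORDER BY works DESC
```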

Notes: Station and measurement data uses fabricated values modeled on real USGS and ECCC monitoring sites across southern Saskatchewan. In production, the graph would be populated via the USGS Water Services API and Environment Canada's Hydrometric API, both free and returning JSON that maps directly to the existing schema. The agent uses a Text2Cypher tool grounded in the graph schema. Built by a blind developer using screen readers (VoiceOver/JAWS), which made navigating the Aura console an adventure in itself.

In the spirit of fairness…

Since I need a little extension on the decision timeline (hello US holiday weekend :sweat_smile:), it’s only right that we extend the submission window too.

So we’re giving you one extra day to submit your Tiny Aura Agent.

:spiral_calendar: New deadline: Midnight, February 15th (EST)

If you were polishing, tweaking, or just getting started — consider this your bonus round.

Thanks for the incredible creativity so far. I can’t wait to see what else comes in!

Winner will be announced by 5pm EST on Tuesday February 17.

Ari

  1. Agent name

    Hedd audio product genius
  2. What your agent does in one sentence

    Get recommendations and technical support about any product in the catalogue.
  3. What data or graph it is grounded in

    A Digital Product Passport (compliance data) product graph from a Shopify website, plus support documents and internal databases. All answers are grounded in the official data used for DPP reporting (specs, materials, country of assembly, etc.).
  4. One or two example questions it can answer

    Comparisons between products, recommendations, compatibility questions, technical requirements of the equipment, etc.

We additionally have semantic search over graph-embedded documents, but encountered a bug in the Aura Agent's retrieval from embedding indexes. Nevertheless, we have a different agent published that uses Neo4j for retrieval, serving both customers and internal copilots as a context engine for any product AI workflow/agent.

(GIF: screen recording of the agent demo)

1 Like

:trophy: Tiny Aura Agent Community Challenge Winners Announced

What an incredible round! Thank you to everyone who participated in the Build a Tiny Aura Agent Community Challenge. The quality, creativity, and clarity of submissions truly exceeded expectations.

We saw real graph thinking in action. Focused designs. Explainable reasoning. Practical real world use cases. Exactly what Aura Agent was built to support.

This community delivered.


:star: Special Thanks to Our Celebrity Judge

A huge thank you to our Neo4j celebrity judge, @ed.sandoval, Product Manager for Aura Agent.

Ed personally reviewed submissions and shared his perspective on what stood out from both a product and graph reasoning standpoint.

Here is what he had to say about this round overall:

"This was an incredible round of submissions. Building something from scratch on a brand new tool is no small feat, and seeing the creative ways you applied Aura Agent to real world problems, from tracking environmental water quality in Saskatchewan to diagnosing complex server crashes, is truly inspiring.

You have all demonstrated that graph thinking is not just about storing data. It is about uncovering the relationships that make data meaningful. Whether you were mapping property arbitrage or building a sarcastic but brilliant on call assistant, you pushed the boundaries of what is possible with this technology.

I would love to keep the conversation going."

Thank you, Ed.


:3rd_place_medal: Third Place

ConceptCompass

by @bhattacharya.ananya0

:link: Submission

ConceptCompass earned third place with a thoughtful and highly original idea. Using graph relationships to evaluate startup ideas against real problems and competitive landscapes demonstrated strong conceptual creativity.

Ed shared:

"This submission shows how graph structure can clarify complex strategic thinking. It highlights relationships between ideas, problems, and competitors in a way that makes differentiation visible and actionable."

Excellent work.


:2nd_place_medal: Second Place

Melbourne Property Arbitrage Detector

by @paras.sitoula

:link: Submission

This agent delivered impressive real world value by mapping properties, schools, suburbs, and education performance to uncover meaningful housing opportunity insights.

Ed commented:

"This is a strong example of real world graph reasoning. It connects multiple domains through relationships and surfaces patterns that would be difficult to uncover without a graph."

A very close second place finish.


:1st_place_medal: First Place

PagerDruid — Blameless On-Call Graph Agent for Microservices

by @joslat

:link: Submission

Our first place winner demonstrated exceptional graph reasoning and technical depth.

PagerDruid tackled a complex problem, diagnosing server crashes in microservices environments, and used relationship-driven reasoning to surface clear, explainable insights.

Ed’s thoughts:

"PagerDruid demonstrates exactly what graph powered agents can do. It uses relationships intentionally to diagnose complex systems and turns connected data into clear, explainable operational insight."

:trophy: First Prize
LEGO Editions Soccer Ball Set 43019

Congratulations on earning first place. We will coordinate shipping directly.


:raising_hands: To All Participants

Every submission showed initiative, experimentation, and real effort.

From environmental research to property analytics to startup evaluation and system diagnostics, this community demonstrated that graph powered agents are practical, inspectable, and ready for real world application.

This will not be the last challenge.

Stay tuned.


:rocket: Want to Shape What Comes Next?

If you enjoyed this challenge and want to influence future product direction, features, or community initiatives, we want your input. Our UX team is always looking for Neo4j Community users. If you want to add yourself to the product engineering team's database,
:backhand_index_pointing_right: join the conversation here: 💵 Dev input powers Neo4j. Rewards included!

Your feedback directly powers what we build next.

Ari
Neo4j Community Manager

3 Likes

@bhattacharya.ananya0 @paras.sitoula @joslat a MASSIVE CONGRATULATIONS from the entire Neo4j team on your wins and a huge congratulations to everyone who submitted! I took a deep dive into these submissions this morning and all of the submissions are very creative! You all need to give a talk about what you built. If anyone is ever interested in doing a quick 5-10min video explanation of what you built, let me know! And if you post online about it, tag me or Neo4j so we can boost it!

Extremely great work to the winners and everyone who submitted. Let's do these challenges more often, Ari :) Hear, hear!

3 Likes

Whoa! - Big wow!! — I’m genuinely thrilled :raising_hands:

Huge thanks @Ari_Neo4j & @ed.sandoval and the Neo4j team for running this, and for the thoughtful judging notes.

Building PagerDruid on Aura Agent was ridiculously fun and turned into a multi-hour obsession: a tiny graph, but real on-call energy, and lots of “show me the evidence” multi-hop reasoning. The first version came together really fast, so the experience was great!! But then I had to polish the queries, as the result was not exactly what I wanted. That took some trial and error, and I ended up in bed waaaay later than expected given how fun this was.

Also +1 to the 5–10 min walkthrough idea — I’m happy to record a quick video / share the repo + dataset - if I can - unsure if https://console.neo4j.io/ supports this “sharing” . Does it, @Ari_Neo4j?

Thanks again! :rocket:
I learned a lot from this motivating challenge!! The initial video showed how easy it was and I… just had to try it!!


  • If anyone wants the schema + Cypher tools, I’ll happily post them, either in a GH repo or LinkedIn Article.

  • Next round when? :grinning_face_with_smiling_eyes:

  • My inner SRE is very satisfied right now.

Thanks all!!
José

3 Likes

super cool! Big congratz on winning the challenge!
If you record a video about it and publish it somewhere (maybe together with that GH repo), please do let me know! I am more than happy to feature this in an upcoming twin4j episode!! :)

2 Likes

yayyyyy!!! congrats everyone

2 Likes