πŸ† Start Here: Register & Get Aura Credits: Aura Agent Hackathon

Aura Agents & Context Building Hackathon

Turn your data into something that actually thinks.

Build an AI agent on top of a knowledge graph that understands context, not just answers. Learn fast, build something real, and put your skills to work.

All qualified submissions get our brand-new "Agent" T-Shirt! *
*One T-Shirt per mailing address


:bullseye: How It Works

:one: Learn

:backhand_index_pointing_right: Take the Building Agents in Neo4j Aura Course

  • Takes about 2 hours
  • Complete the course by May 15
  • The first 500 participants who finish the course and register below become eligible for $100 Aura credits

:two: Register for your $100 in Aura Credits

  • Use the same email or account as your GraphAcademy login
  • This step makes you eligible for credits if you have completed the course
  • Fill in the details below and we'll verify your completion and send your credits within a few business days.

:backhand_index_pointing_right: Aura Credits Form


:three: Build (the hackathon is open for submissions from :spiral_calendar: April 15 to June 15)

Your mission: Take any dataset and turn it into a context-rich graph your agent can reason over.

:light_bulb: Put your $100 Aura credits to work. Load your data, connect it, and make your agent smarter.

  • Use Neo4j Aura Free
  • Bring your own dataset or grab one from Kaggle
  • Add at least one tool:
    • Cypher Template
    • Text2Cypher
    • Similarity Search
  • Optional: publish your agent

:four: Submit

Reply to this thread in this category with your qualified submission, using your Agent Name as the title.

Qualified submissions include:

  • Agent Name
  • What it does
  • Dataset and why a graph fits
  • Screenshot of your agent in the Aura console
  • Screenshot or short demo of your agent in action
  • Optional: link to your agent if available

:trophy: What You Can Win

Course Reward

  • First 500 participants receive $100 in Aura credits

Top Hackathon Builds

  • 1st Place: Bose QuietComfort Ultra Headphones
  • 2nd Place: Raspberry Pi 5 (16GB)
  • 3rd Place: 250 Aura Credits

Everyone who submits a qualified Aura Agent on this thread will receive a newly designed Neo4j "Agent" T-Shirt.


:balance_scale: What the Judges Are Looking For

This is where good projects separate from great ones.

Make your graph matter
Use relationships to drive insight, not just store data.

Build an agent people would actually use
Clear role, useful tools, and responses that feel smart.

Show your thinking
The best agents explain why something is true, not just return results.

Bring something interesting
Unique datasets, real problems, or ideas that make us say, β€œthat’s cool.”

Present it like it matters
Clean description, strong screenshots or demo, and an easy-to-follow submission.

If you want to win, don’t just build something that works. Build something that makes people stop and look twice.

My Aura Hackathon Submission: Apple HealthGraph Agent

What it does

HealthGraph Agent turns your Apple Health data into a knowledge graph that reasons about your body. It answers questions no health app can: "Why was my recovery terrible last Thursday?" "What pattern precedes my best sleep?" "How does strength training vs. running affect my HRV differently?"

Apple Health collects 50+ metric types daily, but stores them as disconnected time series. You can see what happened, but never why. HealthGraph Agent connects the dots by building a graph where workouts, sleep sessions, heart rate variability, resting heart rate, blood oxygen, and activity rings are linked through temporal and causal relationships. The agent then uses multi-hop graph reasoning to surface patterns that are invisible in flat dashboards.

Dataset and why a graph fits

Dataset: Apple Health XML export (or synthetic data for people without an iPhone/Apple Watch). The ETL pipeline parses the raw export.xml using streaming XML processing, transforms ~48,000 records per year into graph-ready structures, and batch-loads into AuraDB.
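The streaming-parse step could look like the following minimal sketch using Python's `xml.etree.ElementTree.iterparse` (the `<Record>` attribute names follow Apple Health's export format; the generator itself is illustrative, not the submission's actual code):

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def stream_records(xml_file):
    """Yield one Apple Health <Record> dict at a time without
    loading the whole export.xml into memory."""
    for _event, elem in ET.iterparse(xml_file, events=("end",)):
        if elem.tag == "Record":
            yield {
                "type": elem.get("type"),
                "value": elem.get("value"),
                "start": elem.get("startDate"),
                "end": elem.get("endDate"),
            }
            elem.clear()  # free the parsed element so memory stays flat

# Tiny inline sample standing in for a real export.xml
sample = b"""<HealthData>
  <Record type="HKQuantityTypeIdentifierHeartRateVariabilitySDNN"
          value="28" startDate="2024-05-02" endDate="2024-05-02"/>
</HealthData>"""

records = list(stream_records(BytesIO(sample)))
print(records[0]["type"], records[0]["value"])
```

Each yielded dict can then be accumulated into batches for a parameterized `UNWIND ... MERGE` load into AuraDB.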

Why a graph fits — this is the key insight:

Health data is inherently relational. A table can tell you your HRV was 28ms on Thursday. A graph can tell you why:

(Workout:HIIT {duration:45min, energy:540kcal})
  -[:ON_DAY]-> (Day:Thursday)
  -[:NEXT_DAY]-> (Day:Friday)
  -[:HAS_SUMMARY]-> (DailySummary {hrv_mean:28, resting_hr:72})

That HIIT session on Thursday β†’ poor recovery on Friday isn't visible in any time-series chart. But in the graph, it's a two-hop traversal. The agent follows these relationship chains to explain causality, not just correlation.

Graph schema:

(:Person)-[:USES]->(:Device)-[:RECORDED]->(:Workout)
(:Workout)-[:ON_DAY]->(:Day)-[:HAS_SUMMARY]->(:DailySummary)
(:Workout)-[:FOLLOWED_BY {hours_between}]->(:SleepSession)
(:SleepSession)-[:ON_DAY]->(:Day)
(:Day)-[:NEXT_DAY]->(:Day)
(:Day)-[:PART_OF]->(:Week)

The FOLLOWED_BY relationship between workouts and sleep sessions is where the graph gets powerful β€” it captures temporal causation with the hours_between property, allowing the agent to reason about recovery windows.
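The hours_between property can be derived at load time from the workout's end and the sleep session's start; a minimal sketch (the timestamp format and function name are illustrative):

```python
from datetime import datetime

def hours_between(workout_end: str, sleep_start: str) -> float:
    """Gap between a workout ending and the next sleep session starting,
    used as the hours_between property on FOLLOWED_BY."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(sleep_start, fmt) - datetime.strptime(workout_end, fmt)
    return round(delta.total_seconds() / 3600, 1)

gap = hours_between("2024-05-02 18:45", "2024-05-02 23:15")
print(gap)  # 4.5
```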

Scale: ~1,400 nodes and ~1,200 relationships for 12 months of data. 366 Day nodes, 366 DailySummary nodes, ~220 Workout nodes, 366 SleepSession nodes, 53 Week nodes, plus Device and MetricType nodes. Small graph, rich reasoning.

Agent tools

| Tool | Type | What it enables |
| --- | --- | --- |
| Weekly overview | Cypher Template | "How was my health last week?" — returns daily metrics across 8 dimensions |
| Workout → sleep impact | Cypher Template | "How does running affect my sleep?" — multi-hop traversal from workout through day to next sleep session and next-day HRV |
| Free-form health queries | Text2Cypher | Any natural language question about health patterns — the agent generates Cypher from the graph schema |
| Pattern matching | Similarity Search | "Find days like my best recovery day" — vector similarity on DailySummary text embeddings |
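The Similarity Search tool ultimately reduces to cosine similarity between embedding vectors; a dependency-free sketch, with toy 3-dimensional vectors standing in for real text embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy DailySummary embeddings (real ones come from a text-embedding model)
days = {
    "best_recovery": [0.9, 0.1, 0.2],
    "hard_hiit_day": [0.1, 0.9, 0.3],
    "easy_yoga_day": [0.8, 0.2, 0.1],
}

query = days["best_recovery"]
ranked = sorted(days, key=lambda d: cosine(query, days[d]), reverse=True)
print(ranked)  # most similar day first
```

In Aura this ranking is done by a vector index over the embedding property rather than in application code.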

Example conversation

User: "How does strength training affect my sleep compared to running?"

Agent reasoning (multi-hop):

  1. Matches all Workouts where activity_type = 'TraditionalStrengthTraining'
  2. Follows FOLLOWED_BY β†’ SleepSession to get post-workout sleep duration
  3. Follows ON_DAY β†’ Day β†’ NEXT_DAY β†’ Day β†’ HAS_SUMMARY β†’ DailySummary for next-day HRV
  4. Repeats for activity_type = 'Running'
  5. Compares the two distributions

Agent response: After strength training sessions (avg 50 min), you sleep 7.4 hours on average with next-day HRV of 38ms. After running sessions (avg 35 min), sleep drops to 6.9 hours but next-day HRV recovers to 44ms. Your body recovers faster from cardio than resistance training β€” consider scheduling strength sessions earlier in the week when you can afford a slower recovery.

This answer requires 4 hops through the graph. No flat database can produce it.
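The five reasoning steps above amount to grouping workouts by activity type and averaging over two relationship chains; an in-memory sketch, with invented numbers chosen to mirror the response:

```python
# Toy join of each workout to its post-workout sleep and next-day HRV,
# i.e. the result of the FOLLOWED_BY and NEXT_DAY traversals.
followed_by = {
    "w1": {"type": "TraditionalStrengthTraining", "sleep": 7.5, "next_hrv": 37},
    "w2": {"type": "TraditionalStrengthTraining", "sleep": 7.3, "next_hrv": 39},
    "w3": {"type": "Running", "sleep": 6.8, "next_hrv": 45},
    "w4": {"type": "Running", "sleep": 7.0, "next_hrv": 43},
}

def profile(activity):
    """Average post-workout sleep hours and next-day HRV for one activity type."""
    rows = [w for w in followed_by.values() if w["type"] == activity]
    n = len(rows)
    return (round(sum(r["sleep"] for r in rows) / n, 1),
            round(sum(r["next_hrv"] for r in rows) / n))

print(profile("TraditionalStrengthTraining"))  # (7.4, 38)
print(profile("Running"))                      # (6.9, 44)
```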

What makes this different

1. The graph drives the insight, not just stores data. Every answer traces a relationship path. "Your HRV dropped because..." always cites the specific workout β†’ sleep β†’ recovery chain that explains it.

2. Anyone can use it. Don't have an Apple Watch? The repo includes a synthetic data generator with 4 persona profiles (athlete, biohacker, sedentary, default) that produce 12 months of realistic, correlated health data. One command: GENERATE=1 PERSONA=biohacker bash scripts/run_pipeline.sh

3. The agent explains its reasoning. When the agent says "your best recovery days follow yoga sessions," it shows the Cypher path it traversed and the specific nodes it visited. The reasoning tab makes the graph traversal transparent.

4. It's a real tool for a real community. The quantified self and biohacking community (millions of Apple Watch users tracking their health) currently has no way to query cross-metric correlations. This agent fills that gap.

Get started here: https://github.com/ma3u/healthgraph-agent

DepGraph Agent β€” Hackathon Submission

:rocket: Live Demo: sai6522-depgraph-agent-uiapp-wfcewa.streamlit.app
:package: GitHub: github.com/Sai6522/DepGraph-agent

Screenshot

Agent in action β€” Streamlit UI

Knowledge graph in Neo4j Aura Explore (830 nodes)

Vulnerability timeline in Aura Dashboard

Graph schema in Aura Query Tool β€” 1,066 nodes Β· 1,829 relationships


What it does

DepGraph Agent turns your PyPI dependency tree into a knowledge graph that reasons about supply chain security. It answers questions no health app β€” sorry, no security tool β€” can: "What's the blast radius of werkzeug?" "Why is my app exposed to this CVE?" "How does a vulnerability in a low-level library propagate up through 4 hops to my top-level packages?"

PyPI packages have hundreds of transitive dependencies, but most tools only show direct CVEs. You can see what is vulnerable β€” never why your app is affected. DepGraph connects the dots by building a graph where packages, dependency relationships, and known OSV vulnerabilities are linked through explicit edges. The agent uses multi-hop graph reasoning to trace propagation paths that are invisible in flat dashboards.


Dataset and why a graph fits

Dataset: Two free public APIs β€” no Kaggle, no signup:

  • PyPI JSON API (pypi.org/pypi/<pkg>/json) β€” live package metadata and dependency lists
  • OSV API (api.osv.dev/v1/query) β€” real CVE/GHSA vulnerability data for PyPI packages

25 seed packages (flask, django, requests, cryptography, pillow, urllib3, celery, etc.) + 1 hop of their dependencies β†’ 269 packages, 801 real vulnerabilities.

Why a graph fits β€” this is the key insight:

A table can tell you werkzeug has CVEs. Only a graph can tell you why your app is affected:

(Package:flask)-[:DEPENDS_ON]->
  (Package:werkzeug)-[:HAS_VULNERABILITY]->
    (Vulnerability {id: "CVE-2023-25577", severity: "HIGH"})

That flask β†’ werkzeug β†’ CVE chain isn't visible in any dependency scanner's flat output. But in the graph, it's a two-hop traversal. The agent follows these relationship chains to explain causality, not just correlation.

When Log4Shell dropped in 2021, teams needed to know their blast radius across 3–4 hops of transitive dependencies. That's a graph traversal problem. Recursive CTEs in SQL break at scale and can't explain the path. A graph returns the full chain in milliseconds and makes the reasoning transparent.
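The blast-radius traversal is reverse reachability over DEPENDS_ON edges; a minimal breadth-first sketch over a hand-made dependency map (the package names are real PyPI projects, but the edges are illustrative):

```python
from collections import deque

# upstream package -> its direct dependencies (illustrative subset)
depends_on = {
    "your-app": ["flask", "starlette"],
    "flask": ["werkzeug", "click"],
    "starlette": ["anyio"],
    "anyio": ["werkzeug"],  # made-up edge, just to show a 2-hop path
}

def blast_radius(target, max_hops=4):
    """Every package that transitively depends on `target`,
    with the hop count of the shortest dependency chain."""
    # Invert the edges: dependency -> packages that depend on it
    rdeps = {}
    for pkg, deps in depends_on.items():
        for d in deps:
            rdeps.setdefault(d, []).append(pkg)
    seen, queue = {target: 0}, deque([target])
    while queue:
        node = queue.popleft()
        if seen[node] >= max_hops:
            continue
        for up in rdeps.get(node, []):
            if up not in seen:
                seen[up] = seen[node] + 1
                queue.append(up)
    return {pkg: hops for pkg, hops in seen.items() if pkg != target}

print(blast_radius("werkzeug"))
```

In the graph itself the same question is a single variable-length pattern match, with the path available for free as the explanation.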

Graph schema:

(:Package {name, version, summary})
(:Vulnerability {id, summary, severity, published, embedding})

(:Package)-[:DEPENDS_ON]->(:Package)
(:Package)-[:HAS_VULNERABILITY]->(:Vulnerability)

Scale: 269 Package nodes, 801 Vulnerability nodes, 500+ DEPENDS_ON edges, 400+ HAS_VULNERABILITY edges. Small graph, rich reasoning.


Agent tools

| Tool | Type | What it enables |
| --- | --- | --- |
| direct_vulns | Cypher Template | "Is flask vulnerable?" — direct HAS_VULNERABILITY lookup |
| blast_radius | Cypher Template | "What's the blast radius of werkzeug?" — DEPENDS_ON*1..4 traversal finding every upstream package that transitively depends on it and carries CVEs |
| dep_path | Cypher Template | "Path from flask to markupsafe?" — shortestPath across dependency edges |
| top_vulnerable | Cypher Template | "Most vulnerable packages?" — aggregation over HAS_VULNERABILITY edges |
| text2cypher | Text2Cypher | Any natural language question — the LLM generates Cypher from the graph schema |
| similarity_search | Similarity Search | "Find CVEs like remote code execution via deserialization" — vector search on OSV vuln embeddings |

Example conversation

User: "How does a vulnerability in werkzeug affect packages that depend on flask?"

Agent reasoning (multi-hop):

  1. Matches (werkzeug)-[:HAS_VULNERABILITY]->(v) β€” finds 20 CVEs
  2. Follows (upstream)-[:DEPENDS_ON*1..4]->(werkzeug) β€” finds flask, starlette, and others
  3. Returns affected packages with hop count, vuln count, and sample CVEs
  4. LLM synthesizes the chain into a human explanation

Agent response:

Werkzeug 3.1.3 carries 20 known vulnerabilities including CVE-2023-25577 (HIGH β€” path injection) and CVE-2023-46136 (HIGH β€” DoS via multipart parsing). Flask directly depends on werkzeug (1 hop), meaning any application using flask is transitively exposed. The dependency chain is: your-app β†’ flask β†’ werkzeug β†’ [CVE]. Starlette also has werkzeug in its dependency tree (2 hops) with 7 additional vulnerabilities in the chain.

This answer requires 3 hops through the graph. No flat database produces it.


What makes this different

1. The graph drives the insight, not just stores data.
Every answer traces a relationship path. "Your app is exposed because..." always cites the specific package β†’ dependency β†’ CVE chain. The reasoning is transparent β€” the agent shows which tool it used and what graph data it retrieved.

2. Real data, live APIs.
No synthetic datasets. Every vulnerability is pulled live from the OSV database. Every dependency edge is pulled live from PyPI. The graph reflects the actual state of the ecosystem today.

3. Semantic vulnerability search.
Vulnerability embeddings (3072-dim via Gemini) enable queries like "find CVEs similar to SQL injection in ORM layer" β€” matching by attack pattern, not CVE ID. This surfaces related vulnerabilities across different packages that keyword search misses entirely.

4. It solves a real problem for a real community.
Every Python developer has transitive dependencies they don't fully understand. DepGraph makes the hidden exposure visible and explainable β€” not just a list of CVEs, but the exact path through which each one reaches your code.


Tech stack

  • Neo4j Aura Free β€” managed graph database
  • neo4j-graphrag β€” Text2Cypher + VectorRetriever
  • Groq (llama-3.3-70b-versatile) β€” LLM, free tier (14,400 req/day)
  • Google Gemini (gemini-embedding-001, 3072-dim) β€” vulnerability embeddings
  • OSV API + PyPI API β€” live data, no Kaggle required
  • Streamlit β€” chat UI

:tada: Aura Credits Sent! Check the email you used when you completed the GraphAcademy course and signed up on the form.

Huge shoutout to everyone below — your Aura Credits have been sent :rocket:. If you signed up and your name is not listed below, please DM me, as your registration was missing some information we need in order to send your credits.

The next batch of credits will be sent out later this week!

We can’t wait to see what you build with Aura. Go create something awesome and share it with the community :light_bulb: We're excited to get your projects on the ticker!

@dripto.215
@serah.elrashidi
@guiller
@mabu.mate
@mayank
@lucifer
@abel.alvarez
@numericOverflow
@mgl.next
@charleno
@aldrinm
@telagam.dinakar
@flinford
@shantanunitw01
@vlad1
@ravikumartekkali2006
@isaac.john
@mahanteshimath
@pereira_rock
@astridparamita
@venkatasaiprasadp
@fadeusi
@jeremykalilin
@deanfoulds
@agung_sidharta_gungz
@jonas.koenner
@dhiraj.patra
@patraom49
@srikrishnakesavulu
@lisa1

I have completed the course and received the credit. I've started my analysis for the hackathon so that I can develop real-life solutions with Aura, graphs, RAG, and agents.
Thanks for this opportunity.

Update! Thanks to your reports, I was able to prioritize this with our support team. The team resolved the issue and the updated fix is live. I tested it and have redeemed my credits :).

Please give me a thumbs up here once you are able to claim them.

Thanks again for your patience.

Let's build some great Agent Solutions!

It worked, and the credit was claimed. Thank you, Neo4j Aura team.

Hello Neo4j Community. Earlier today I completed the course. I would like my $100 Aura credits posted to my account. Thank you for your help.

Thank you. Your credits will be sent within 2-3 business days by email with redemption instructions. If you do not receive them by Friday morning, please DM me.

Title: RegulatoryRisk GraphAgent β€” Financial Compliance Navigator


Agent Name

RegulatoryRisk GraphAgent β€” Financial Compliance Navigator


What it does

RegulatoryRisk GraphAgent helps compliance officers answer questions no flat database can:

  • "Which Basel IV controls is this bank missing and why?"

  • "What is the full counterparty contagion chain if this bank fails?"

  • "Which regulatory rule was violated in this risk event?"

  • "Find past incidents semantically similar to this new breach"

  • "What controls would have prevented this risk event?"

  • "Which Basel IV rules supersede older Basel III rules?"

The agent traverses a knowledge graph of financial institutions, regulatory obligations, control implementations, counterparty exposures, and historical risk events to explain WHY something is compliant or risky β€” not just whether it is. Every answer cites the exact graph relationship path that produced it.

What makes this unique: the agent is also published as a live MCP server endpoint and connected directly to Claude.ai via MCP connector β€” judges can query it live from Claude right now.


Dataset and why a graph fits

The graph models the global banking regulatory landscape with fictional institution names:

| Node Type | Count | Description |
| --- | --- | --- |
| Entity | 8 | Banks across US, UK, Germany, Switzerland, India |
| Regulation | 5 | Basel IV, Basel III, DORA, Dodd-Frank, SREP |
| Rule | 10 | Regulatory rules with supersession chains |
| Control | 10 | Compliance controls linked to specific rules |
| RiskEvent | 7 | Historical incidents with severity and financial impact |
| Regulator | 6 | Enforcement bodies with jurisdiction |
| Total | 46 nodes | 96 relationships, 10 relationship types |

Why only a graph solves this:

A compliance officer asking "Is Horizon Bank compliant with Basel IV?" needs to traverse:

(Entity)-[:SUBJECT_TO]->(Regulation)-[:CONTAINS]->(Rule)
  -[:REQUIRES_CONTROL]->(Control)

Then check:

(Entity)-[:IMPLEMENTS]->(Control)

That is a 4-hop traversal. The missing controls are the difference between those two sets. No SQL query or flat table can express this cleanly β€” and it gets worse when you add counterparty contagion:

(Bank)-[:COUNTERPARTY_OF*1..3]->(Bank)-[:INVOLVED_IN]
  ->(RiskEvent)-[:TRIGGERED_BY]->(Rule)

This is a multi-hop path traversal with relationship property aggregation β€” total USD exposure accumulated across each hop. The graph makes the reasoning transparent and auditable, which is exactly what regulators demand.
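Exposure accumulation along contagion paths can be sketched as a depth-limited walk; the bank names and USD amounts below are invented:

```python
# counterparty edges with USD exposure per relationship (illustrative)
counterparty_of = {
    "HorizonBank": [("AlpineBank", 120_000_000), ("MeridianBank", 80_000_000)],
    "AlpineBank": [("MeridianBank", 50_000_000)],
    "MeridianBank": [],
}

def contagion_paths(start, max_hops=3):
    """Enumerate COUNTERPARTY_OF paths up to max_hops,
    accumulating USD exposure across each hop."""
    paths = []
    def walk(node, path, total):
        if len(path) - 1 >= max_hops:
            return
        for nxt, usd in counterparty_of.get(node, []):
            if nxt in path:  # skip circular loops in this simple sketch
                continue
            paths.append((path + [nxt], total + usd))
            walk(nxt, path + [nxt], total + usd)
    walk(start, [start], 0)
    return paths

for path, usd in contagion_paths("HorizonBank"):
    print(" -> ".join(path), f"${usd:,}")
```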

Additional graph-native features:

  • SUPERSEDES chain β€” Basel IV rules supersede Basel III rules, enabling regulatory evolution tracking

  • TRIGGERED_BY β€” links risk events directly to the rule that was breached

  • AFFECTED β€” tracks which entities were impacted by contagion from a risk event

  • Vector embeddings (3072-dim Gemini) on RiskEvent nodes for semantic similarity search


Agent Tools

| Tool | Type | What it does |
| --- | --- | --- |
| Missing Regulatory Controls for Bank | Cypher Template | 4-hop traversal finding controls required by regulation but not implemented by the bank |
| Counterparty Contagion Paths | Cypher Template | Multi-hop COUNTERPARTY_OF traversal up to 3 hops with cumulative USD exposure |
| Rules Violated by Risk Event | Cypher Template | Links RiskEvent → Rule → Regulation → Entity for full violation context |
| Bank Regulations and Enforcement | Cypher Template | Entity → Regulation → Regulator with jurisdiction and authority level |
| Superseding Basel Rules | Cypher Template | SUPERSEDES chain showing Basel IV replacing Basel III rules |
| Preventive Controls for Risk Event | Cypher Template | RiskEvent → Rule → Control with implementation status and prevention assessment |
| Search Similar Risk Events | Similarity Search | 3072-dim Gemini embeddings on risk event descriptions via vector cosine index |
| Natural Language to Cypher | Text2Cypher | Last-resort fallback for ad-hoc aggregation queries |

Example interactions

1. Compliance gap analysis (Cypher Template)

Query: "What regulatory controls is Horizon Bank missing?"

Response: Identifies 3 missing controls β€” Leverage Ratio Monitor (Basel IV, daily), Stress Testing Framework (Basel III + SREP, quarterly), NSFR Monitor (Basel III + Basel IV, monthly) β€” with regulation, frequency, purpose and risk exposure summary.

2. Counterparty contagion (Cypher Template)

Query: "Show me counterparty contagion paths from Horizon Bank"

Response: Returns 10 multi-hop paths with full chain traversal, cumulative USD exposure per path, hop count, and circular risk loops where contagion returns to the originating bank β€” total systemic exposure quantified.

3. Semantic incident matching (Similarity Search)

Query: "Find past risk events similar to a ransomware attack on banking infrastructure"

Response: Matches EVT-007 (score 0.837, ransomware on third-party vendor) and EVT-003 (score 0.861, IT outage disrupting operations) with semantic reasoning explaining why each event matches.

4. Preventive control analysis (Cypher Template)

Query: "What controls would have prevented risk event EVT-001?"

Response: Traces EVT-001 β†’ TRIGGERED_BY β†’ BASEL4-LIQ-1 β†’ REQUIRES_CONTROL β†’ Liquidity Coverage Ratio Monitor, confirms control was NOT IMPLEMENTED at Alpine Investment Bank, explains the causal chain.


What makes this different from other submissions

1. Graph drives every single answer. No response is a simple lookup. Every answer traces a multi-hop relationship path and explains the compliance reasoning. The agent never returns a fact without the graph path that produced it.

2. Live MCP integration with Claude.ai. The agent is published as an MCP server and connected directly to Claude.ai. This is not just a demo inside Aura console β€” it works as a real tool inside Claude conversations right now.

3. Real-world enterprise domain. Financial regulatory compliance is a $50B/year industry problem. This agent addresses the exact compliance gap analysis and systemic risk assessment work that banks do manually today.

4. All 3 tool types used strategically. Cypher Templates for precise multi-hop traversals (6 templates), Similarity Search for semantic incident matching with 3072-dim Gemini embeddings, Text2Cypher as true last-resort fallback β€” exactly per Neo4j best practice.

5. Regulatory evolution tracking. The SUPERSEDES relationship between Basel IV and Basel III rules is unique to this submission β€” it models how regulations evolve over time, enabling queries like "which of our controls were designed for Basel III rules that Basel IV has now superseded?"

6. Explainability built in. Every answer states which rule requires which control, which regulator enforces which regulation, and which relationship path produced the answer β€” making the reasoning auditable and transparent.

Agent link

MCP Server: provided in the GitHub repo

To connect: Claude.ai β†’ Settings β†’ Connectors β†’ Add custom connector β†’ paste MCP URL


Built by

Dhiraj Patra
AI, Agentic Workflows Lead & Architect | 10+ Years Driving Enterprise AI Innovation
Senior Engineering Lead | AI/ML Architect | Technical Architect EY β€” embedded within BNY Mellon Enterprise Risk Technology
28+ years experience | Patent holder in EDGE AI

GitHub Link

Your entry is added to the banner ticker :slight_smile:

Agent Name CineGraph AI

What it does: CineGraph AI is a cinematic intelligence agent that transforms 100,000 movies into a reasoning engine. It performs multi-hop graph traversals to explain why certain creators and genres dominate the industry, moving beyond simple keyword search to true relational analysis.

Dataset and why a graph fits: I used the Global Movies Dataset (1950–2026), which includes 100,000 movies and over 400,000 relationships. Movies are inherently connected by Directors, Actors, Genres, and Eras. A graph is the perfect fit because it captures the causality chainsβ€”the ripple effect of a successful collaboration or a genre trendβ€”that a flat table simply cannot represent.

  • Scale: 115,000+ nodes, 400,000+ relationships.

Technology Stack

  • Database: Neo4j Aura (Knowledge Graph)
  • Reasoning Engine: Neo4j Aura Agent (Powered by LLM with multi-hop graph traversal)
  • Data Pipeline: Python with neo4j driver and pandas
  • Visualization: Streamlit + Plotly (Live dashboard)
  • Query Language: Cypher

Agent Tools

The agent uses a combination of precise Cypher templates and natural language discovery:

  • director_career: Analyzes entire filmographies, revenue tracking, and ratings.
  • actor_collaborations: Evaluates the success of frequent actor-director pairings.
  • genre_decade_trend: Tracks genre popularity and quality shifts from the 1950s to today.
  • director_genre_country: Identifies a director's stylistic and geographic focus.
  • blockbuster_formula: Pattern recognition for high-ROI movies.
  • text2cypher: Fallback for ad-hoc natural language questions.

Example Conversation

User: "How does Christopher Nolan's ROI in Sci-Fi compare to his work in other genres?"

Agent Reasoning (Multi-Hop):

  1. Matches (Director {name: 'Christopher Nolan'})-[:DIRECTED]->(m:Movie) β€” finds movies.
  2. Follows (m)-[:IN_GENRE]->(g:Genre) β€” separates filmography into Sci-Fi vs other genres.
  3. Calculates avg(m.roi_pct) for both groups β€” discovers Sci-Fi ROI is ~175%.
  4. Synthesizes the answer, noting that his Sci-Fi work significantly outperforms his Dramas in profitability.
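Step 3's aggregation is a plain group-and-average; a sketch over invented ROI rows standing in for the MATCH results:

```python
# (genre, roi_pct) rows standing in for the matched movies (numbers invented)
movies = [
    ("Sci-Fi", 180), ("Sci-Fi", 170),
    ("Drama", 60), ("Drama", 80),
]

def avg_roi_by_genre(rows):
    """Group ROI percentages by genre and average each group,
    mirroring Cypher's avg() aggregation."""
    totals = {}
    for genre, roi in rows:
        totals.setdefault(genre, []).append(roi)
    return {g: sum(v) / len(v) for g, v in totals.items()}

print(avg_roi_by_genre(movies))  # {'Sci-Fi': 175.0, 'Drama': 70.0}
```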

What makes this different

  1. Massive Scale: Processing 100K movies makes this one of the most comprehensive cinematic graphs in the competition.
  2. Specialized Tooling: Uses 6 custom Cypher templates for precise, high-speed industry analysis rather than relying solely on Text2Cypher.
  3. Multi-Hop Depth: Capable of 4+ hop reasoning (Director -> Movie -> Genre -> Decade) to provide deep historical context for every answer.

Screenshot of my agent in action:

Link to my agent (GitHub): github.com/Kumar3421/CineGraph-AI

Awesome! You are on the ticker!


:rocket: Aura Credits Update. We’re at 50!

We’re officially up to 50 community members who have received Aura Credits.

Huge shoutout to everyone below. Your credits have been sent and we can’t wait to see what you build next.

If you signed up and don’t see your name listed, please DM me so we can sort it out.

The next batch of credits will be sent later this week.

Now it’s time to put those credits to work. Build your Aura Agent, share it with the community, and get your project featured on the ticker.

:tada: New Hackathon Submissions Added to the Ticker

Please welcome our latest builders:

@dhiraj.patra
RegulatoryRisk GraphAgent

@shantanunitw01
CineGraph AI

Both projects are now live on the submission banner ticker. Keep them coming.

@dripto.215
@serah.elrashidi
@guiller
@mabu.mate
@mayank
@lucifer
@abel.alvarez
@numericOverflow
@mgl.next
@charleno
@aldrinm
@telagam.dinakar
@flinford
@shantanunitw01
@vlad1
@ravikumartekkali2006
@isaac.john
@mahanteshimath
@pereira_rock
@astridparamita
@venkatasaiprasadp
@fadeusi
@jeremykalilin
@deanfoulds
@agung_sidharta_gungz
@jonas.koenner
@dhiraj.patra
@patraom49
@srikrishnakesavulu
@lisa1

I can only add one screenshot, as I am a new user. Is there a way to adjust it?

I adjusted it. Will you please try again?