Hello
I have been experimenting with combining Neo4j and LLMs to create more context-aware responses in chatbots and internal tools.
One challenge I ran into is getting LLMs (like GPT-4) to respond accurately when the underlying information is deeply interconnected, such as organizational hierarchies, product dependencies, or research citation graphs.
I am currently fetching relevant subgraphs from Neo4j and passing them as part of the prompt, but I am hitting token limits and occasionally getting generic or hallucinated answers.
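To work around the token limit, one thing I've been trying is pruning the fetched subgraph before it reaches the prompt: keep triples closest to the anchor entity and stop once a rough token budget is hit. This is a minimal sketch with made-up data and a crude character-based token estimate (a real version could use tiktoken and actual Neo4j results):

```python
# Sketch: prune a fetched subgraph to fit a prompt token budget.
# Assumes the subgraph has already been fetched as (hop_distance, source,
# relationship, target) tuples; all names here are illustrative.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; swap in a real tokenizer
    # (e.g. tiktoken) for accuracy.
    return max(1, len(text) // 4)

def prune_subgraph(triples, budget_tokens=1000):
    """Keep the closest triples until the serialized context hits the budget."""
    kept, used = [], 0
    # Sort by hop distance so near-anchor facts survive pruning first.
    for distance, src, rel, dst in sorted(triples):
        line = f"({src})-[:{rel}]->({dst})"
        cost = estimate_tokens(line)
        if used + cost > budget_tokens:
            break
        kept.append(line)
        used += cost
    return "\n".join(kept)

triples = [
    (1, "Alice", "MANAGES", "Bob"),
    (2, "Bob", "MANAGES", "Carol"),
    (1, "Alice", "WORKS_ON", "ProjectX"),
]
context = prune_subgraph(triples, budget_tokens=15)
```

With a tight budget the 2-hop fact gets dropped first, which at least fails gracefully instead of truncating the prompt mid-sentence.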
Has anyone tried building a workflow where the LLM uses Neo4j not just as a data source but as a reasoning guide, for example by traversing relationships on demand and narrowing the response scope based on proximity or path rules?
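What I have in mind by "proximity or path rules" is something like this: instead of dumping a large subgraph into the prompt up front, expand the neighborhood hop by hop and stop when depth or relationship-type rules are hit. This sketch uses an in-memory dict as a stand-in for Neo4j (the graph data and names are invented), where each expansion step would in practice be a small Cypher call:

```python
# Sketch: bounded, rule-driven expansion instead of up-front subgraph dumps.
# In practice each expansion would be a Cypher query such as:
#   MATCH (n {name: $name})-[r]->(m) RETURN type(r), m.name
from collections import deque

GRAPH = {  # illustrative stand-in for the database
    "Alice": [("MANAGES", "Bob")],
    "Bob": [("MANAGES", "Carol"), ("WORKS_ON", "ProjectX")],
    "Carol": [("WORKS_ON", "ProjectX")],
}

def expand(start, max_depth=2, allowed_rels=None):
    """Breadth-first expansion bounded by depth and relationship-type rules."""
    seen, facts = {start}, []
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # proximity rule: do not expand past max_depth hops
        for rel, neighbor in GRAPH.get(node, []):
            if allowed_rels and rel not in allowed_rels:
                continue  # path rule: skip relationship types the query doesn't need
            facts.append(f"({node})-[:{rel}]->({neighbor})")
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return facts
```

The `facts` list is what I would serialize into the prompt, and the LLM (via tool calling) could request further `expand` calls only when it needs them.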
I am thinking about integrating LangChain or LlamaIndex with Neo4j to dynamically adjust what context gets passed based on the user's query. I have already checked "Building RAG Applications With the Neo4j GenAI Stack: A Guide" for reference.
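For the "dynamically adjust context" part, the simplest version I can picture is a query-aware Cypher builder that could sit inside a LangChain/LlamaIndex retriever: map keywords in the question to relationship types, then scope the traversal to just those. The keyword table and relationship names below are made up for illustration:

```python
# Sketch: query-aware context selection. Relationship names and the
# keyword-to-relationship mapping are illustrative assumptions.

REL_KEYWORDS = {
    "report": "REPORTS_TO",
    "manage": "MANAGES",
    "depend": "DEPENDS_ON",
    "cite": "CITES",
}

def build_cypher(question: str, max_hops: int = 2) -> str:
    """Pick relationship types from the question and scope the traversal."""
    rels = sorted({rel for kw, rel in REL_KEYWORDS.items() if kw in question.lower()})
    rel_filter = "|".join(rels)
    # Fall back to any relationship type when no keyword matches.
    pattern = f"[:{rel_filter}*1..{max_hops}]" if rel_filter else f"[*1..{max_hops}]"
    return f"MATCH path = (n {{name: $entity}})-{pattern}-(m) RETURN path LIMIT 50"

query = build_cypher("Who does Bob report to?")
```

A question about reporting lines then only pulls `REPORTS_TO` paths within two hops, which keeps the context both smaller and more on-topic than a generic neighborhood dump.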
When someone on my team asked me what ChatGPT is, this project was my answer, but I would love to hear from others trying to make LLMs more grounded, especially with graph-powered context.
Any tips or architecture patterns around retrieval, embedding storage, or prompt management with Neo4j?
Thank you!