Anyone with experience with langchain-neo4j?

Greetings,

New to this channel, and I hope this is relevant. I am trying to use langchain-neo4j with a database that I converted into a graph: it was originally in MongoDB and I migrated it to Neo4j.

I tried using langchain-neo4j on that database with ChatOpenAI, and even for a small graph built from about 100 records I am hitting a context that is far too large for OpenAI. Here is the error message I am getting: "Request too large for gpt-3.5-turbo in organization *** on tokens per min (TPM): Limit 200000, Requested 364902."

I am using a tiny sample of the database, about 100 records out of roughly 50K, and even that already puts me over the token limit, so I wonder what is happening.

I am using code very similar to the example in the langchain-neo4j docs:

from langchain_openai import ChatOpenAI
from langchain_neo4j import GraphCypherQAChain, Neo4jGraph

llm = ChatOpenAI(
    temperature=0,
    api_key="sk-...",  # Replace with your OpenAI API key
)
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(llm=llm, graph=graph, allow_dangerous_requests=True)
chain.run("Who starred in Top Gun?")

When I point Neo4jGraph at the entire database with all the records, that call alone takes hours to complete.

Am I doing something wrong?

When I created the graph, I created one node per record, so in the small database I have 100 main nodes, each connected to many dependency nodes, and none of the record subgraphs are connected to each other. So essentially I have 100 disconnected graphs in the database.

Hopefully someone can point me in the right direction, or to the right forum.

Looking at the documentation, have you tried chain.invoke to see the query?
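For reference, something like this should print the generated Cypher and the context that gets passed to the final QA prompt - a sketch assuming the current GraphCypherQAChain API, where the verbose output may vary a bit by version:

# Same chain as before, but verbose=True prints the intermediate steps
chain = GraphCypherQAChain.from_llm(
    llm=llm,
    graph=graph,
    verbose=True,  # shows the generated Cypher and the retrieved context
    allow_dangerous_requests=True,
)
result = chain.invoke({"query": "Who starred in Top Gun?"})
print(result["result"])

If the request is already too large before any Cypher runs, the problem is in the prompt itself rather than in the query results.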

Thanks @joshcornjo - but chain.invoke returns the same error as chain.run. I wonder if langchain just puts the entire graph into the LLM context window?
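GraphCypherQAChain doesn't put the whole graph into the prompt, but it does inject the full database schema string into the Cypher-generation prompt, and the results of the generated Cypher go into the QA prompt, so either of those can blow past the limit on its own. You can check the schema size directly - a quick sketch, assuming the get_schema / refresh_schema API from the Neo4jGraph docs:

import tiktoken

# Inspect the schema string that the chain injects into its Cypher-generation prompt
graph.refresh_schema()
schema_text = graph.get_schema
print(f"schema is {len(schema_text)} characters")
print(schema_text[:2000])  # peek at the first part

# Rough token count using OpenAI's tokenizer
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(f"~{len(enc.encode(schema_text))} tokens")

If the schema is the culprit, GraphCypherQAChain.from_llm also accepts exclude_types / include_types to trim what gets sent. And the hours-long load is most likely the schema refresh scanning all 50K records: if I remember right, Neo4jGraph takes refresh_schema=False in its constructor, so you can skip that at start-up and call graph.refresh_schema() yourself later.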

Set up LangChain / LangSmith Tracing and see what's being passed to the LLM. Highly recommended as a general practice. LangSmith is free to use, and you will gain a lot of insight into latency and token usage.

See: https://docs.smith.langchain.com/observability for more details.
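Enabling it is just a few environment variables set before the chain runs. A sketch - the exact variable names depend on your LangChain version, and the project name below is just a placeholder:

import os

# Turn on LangSmith tracing before building the chain
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls__..."  # your LangSmith API key
os.environ["LANGCHAIN_PROJECT"] = "neo4j-qa-debugging"  # optional: groups runs under a project

Every run then shows up in the LangSmith UI with the full prompts, token counts and latency per step, so you can see exactly what is eating your 364K tokens.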