Error and inconsistency in module "Neo4j Retriever Tool" - Building Chatbot with Python

Hi community,
I'm studying the course Build a Neo4j-backed Chatbot with Python and have run into some confusion and a runtime error. Any help would be much appreciated.

ref : Neo4j Retriever Tool | GraphAcademy

  1. .from_llm or .from_chain_type?
    The material discusses .from_llm, but the code snippet shows .from_chain_type. Which one is it, exactly?
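To illustrate what I mean, both are alternative classmethod constructors. Here is a toy sketch of the pattern only (these are stand-in classes I wrote, not the real LangChain GraphCypherQAChain or RetrievalQA APIs):

```python
# Toy sketch of the "alternative classmethod constructor" pattern.
# QAChain is a stand-in, NOT the real LangChain class; it only shows
# that .from_llm and .from_chain_type both build the same kind of
# object from different starting points.

class QAChain:
    def __init__(self, llm, retriever, chain_type="stuff"):
        self.llm = llm
        self.retriever = retriever
        self.chain_type = chain_type

    @classmethod
    def from_llm(cls, llm, retriever):
        # Convenience constructor with sensible defaults.
        return cls(llm, retriever)

    @classmethod
    def from_chain_type(cls, llm, retriever, chain_type):
        # Constructor that also lets you pick the document-combining strategy.
        return cls(llm, retriever, chain_type=chain_type)

a = QAChain.from_llm("llm", "retriever")
b = QAChain.from_chain_type("llm", "retriever", chain_type="map_reduce")
assert a.chain_type == "stuff"
assert b.chain_type == "map_reduce"
```

So functionally either could produce a working chain; my question is which one the course actually intends.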

  2. Error invoking the second tool (vector search) at agent invoke.
    I followed the material strictly, right up to the part where I add the second tool, vector search, to the tool list. If I ask a general question, everything is fine. But if I ask specifically about the 'plot', it returns a validation error. However, if I send the query directly via kg_qa, it works.

To illustrate, I have also added some images here (the system won't allow me to attach files).

Here is the error output:

ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)

3 Likes

Hi again,

I think I solved it.

It was unclear to me that I had to revise agent.generate_response() as I progressed through the module. The def generate_response(prompt) is also provided in solutions/tools/cypher.py and solutions/tools/vector.py.

But this raises a new question: how can we manage and process the incoming responses? Is there a way to know which response comes from which 'Tool' in the chain?

For example,

  • if the response is from general chat, then -->response = agent_executor.invoke({"input": prompt})
  • if the response is from vector search, then -->response = kg_qa({"query": prompt})
  • if the response is from graph chain, then -->response = cypher_qa.run(prompt)
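One way I can imagine handling the cases above is to dispatch on the tool name and normalise each tool's output to a string before it reaches the UI. A minimal sketch, where general_chat, vector_search, and cypher_query are stand-ins for the real agent_executor.invoke, kg_qa, and cypher_qa.run calls:

```python
# Routing sketch. The three handlers are stand-ins for the real chains;
# each returns its answer in a different shape (dict with "output",
# dict with "result", or a bare string), so each route normalises it.

def general_chat(prompt):
    return {"output": f"chat: {prompt}"}      # like agent_executor.invoke

def vector_search(prompt):
    return {"result": f"vector: {prompt}"}    # like kg_qa({"query": ...})

def cypher_query(prompt):
    return f"cypher: {prompt}"                # like cypher_qa.run

ROUTES = {
    "general": lambda p: general_chat(p)["output"],
    "vector": lambda p: vector_search(p)["result"],
    "cypher": cypher_query,
}

def generate_response(tool_name, prompt):
    # Every route ends in a plain string, whatever shape the tool returned.
    return ROUTES[tool_name](prompt)

assert generate_response("vector", "plot of Toy Story") == "vector: plot of Toy Story"
```

This is just a sketch of the idea, not the course's actual solution.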

Regards,
Sira

1 Like

Hi @swatakit, thanks for sharing. I am struggling with the same question you've raised.

1 Like

I've managed to find a workaround - this is from Adding the Neo4j Vector Retriever | GraphAcademy.

Replace kg_qa with a function instead:
[image: code screenshot]

Same goes for cypher_qa
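Since the screenshot may not render for everyone, the shape of the workaround is to wrap the chain call in a plain function that unwraps the dict result into a string before the Tool sees it. A hedged sketch, where kg_qa below is a stand-in I wrote for the real chain:

```python
# Sketch of wrapping a chain call in a function so the Tool always
# receives a plain string. `kg_qa` here is a stand-in for the real
# chain, which returns a dict like {"query": ..., "result": ...}.

def kg_qa(inputs):  # placeholder for the real Neo4j vector chain
    return {"query": inputs["query"], "result": "Toy Story is about toys."}

def run_retriever(query):
    results = kg_qa({"query": query})
    # Return only the string answer, not the whole dict.
    return results["result"]

answer = run_retriever("What is Toy Story about?")
assert isinstance(answer, str)
```

You then pass run_retriever (rather than kg_qa itself) as the Tool's func.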

Hope this helps someone else too

Hi @afa2912, this works for me.

3 Likes

There's an even simpler solution that requires less fiddling.

Change return_direct from True to False. This stops the tool's output from being returned directly as the agent's final answer, which is what's throwing the format error, since that output isn't a string. Setting it to False introduces an intermediate step that formats the output correctly into a string.
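To see why the string requirement matters, here's a toy illustration. This AIMessage is a mock I wrote with the same shape of check as the real pydantic model, not the actual LangChain class:

```python
# Toy illustration of the validation failure: memory tries to save the
# agent's final answer as an AIMessage whose content must be a string.

class AIMessage:  # mock, NOT langchain_core's AIMessage
    def __init__(self, content):
        if not isinstance(content, str):
            raise TypeError("content: str type expected")
        self.content = content

tool_output = {"query": "plot?", "result": "Aliens attack Earth."}

# return_direct=True: the raw dict goes straight into the message -> error
try:
    AIMessage(content=tool_output)
    raised = False
except TypeError:
    raised = True
assert raised

# return_direct=False: the agent takes an extra step and emits a string
final_answer = tool_output["result"]
msg = AIMessage(content=final_answer)
assert msg.content == "Aliens attack Earth."
```

Same idea as the traceback below: the memory layer only accepts string (or list) content.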

3 Likes

Hi Amanda,

I think I've encountered the same issue as @swatakit, but the proposed fix (changing return_direct from True to False) makes the LLM loop and then stop ("Agent stopped due to iteration limit or time limit").

Here is the feedback when return_direct is set to False.

Here is the error message I got when return_direct is set to True.
//////////////
ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
Traceback:

File "E:\PythonProgram\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "E:\Projet IA\GraphLLMCours\Create A chat BOT\llm-chatbot-python-main\bot.py", line 40, in <module>
handle_submit(prompt)
File "E:\Projet IA\GraphLLMCours\Create A chat BOT\llm-chatbot-python-main\bot.py", line 24, in handle_submit
response = generate_response(message)
File "E:\Projet IA\GraphLLMCours\Create A chat BOT\llm-chatbot-python-main\agent.py", line 91, in generate_response
response = agent_executor.invoke({"input": prompt})
File "E:\PythonProgram\Lib\site-packages\langchain\chains\base.py", line 89, in invoke
return self(
File "E:\PythonProgram\Lib\site-packages\langchain\chains\base.py", line 314, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "E:\PythonProgram\Lib\site-packages\langchain\chains\base.py", line 410, in prep_outputs
self.memory.save_context(inputs, outputs)
File "E:\PythonProgram\Lib\site-packages\langchain\memory\chat_memory.py", line 39, in save_context
self.chat_memory.add_ai_message(output_str)
File "E:\PythonProgram\Lib\site-packages\langchain_core\chat_history.py", line 65, in add_ai_message
self.add_message(AIMessage(content=message))
File "E:\PythonProgram\Lib\site-packages\langchain_core\load\serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "E:\PythonProgram\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
//////////////////////

The solution doesn't fix the inconsistency.

Hi @d.denimal

I did try Amanda's solution, and it did work for me.

But the one I'm using in my code also works. It's a trick from the Neo4j LLM Fundamentals course. Give it a try?

Regards,
swatakit

2 Likes

Hi @swatakit,

Thanks for the input! I don't get any error message, but the bot doesn't provide an answer corresponding to the course...

It seems "Neo4jVector.from_existing_index(... retrieval_query=...)" doesn't work.

I will create another post, as apparently it is not the same issue.
Thanks for your help!

1 Like

@d.denimal

glad it works for you!

On your query: I think you have to submit a description of a plot, e.g. "movie about aliens attacking earth", because the plot description is converted to an embedding, a vector search is run against the stored plot embeddings, and the matching results are returned.
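That pipeline can be sketched with a toy "embedding". Real setups use an embedding model and Neo4j's vector index; everything here (the bag-of-words embed function, the sample plots) is a stand-in I made up to show the shape of the flow:

```python
import math

# Toy vector search: a bag-of-words "embedding" and cosine similarity,
# standing in for the real embedding model + Neo4j vector index.

def embed(text):
    # Word-count vector as a dict; a real embedding model returns dense floats.
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def cosine(a, b):
    dot = sum(a.get(w, 0) * b.get(w, 0) for w in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

plots = {  # made-up sample data
    "Independence Day": "aliens attack earth and humanity fights back",
    "Toy Story": "toys come to life when their owner is away",
}

# query text --> embedding --> similarity search --> best match
query_vec = embed("movie about aliens attacking earth")
best = max(plots, key=lambda title: cosine(query_vec, embed(plots[title])))
assert best == "Independence Day"
```

Which is why a descriptive query matches, while a question with no overlap with any plot text won't retrieve anything useful.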

Cheers,
swatakit

Thank you @swatakit !! Worked for me!

1 Like

Worked for me!! Thanks a lot @swatakit

1 Like