I had the same problem and struggled with it for a while, since I could not find a proper answer anywhere. A hint about providing an additional prompt eventually led me to the solution. Here is what I found out and what I did:
Within the GraphCypherQAChain there is a QA prompt template that tells the LLM what to do with the retrieved context. By default it is set to a standard QA prompt, which you can find in the langchain-ai/langchainjs repository on GitHub under langchain/src/chains/graph_qa/prompts.ts as const CYPHER_QA_TEMPLATE.
Of course, this default is kept very general so that it suits all kinds of answers and use cases. I suspect that in your case (as in mine) it simply did not give the LLM enough guidance on how to turn the returned context into a proper response, which is why it replied "I don't know".
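If you are working with the Python package (as in the code below), you can inspect those defaults directly. A minimal sketch, assuming the classic langchain layout where the default graph-QA prompts live in langchain.chains.graph_qa.prompts (the import path may differ in newer versions):
# Print the default prompts the chain uses out of the box
# (import path is an assumption and may vary with your LangChain version)
from langchain.chains.graph_qa.prompts import CYPHER_GENERATION_PROMPT, CYPHER_QA_PROMPT

print(CYPHER_QA_PROMPT.template)          # the generic QA template
print(CYPHER_GENERATION_PROMPT.template)  # the default Cypher generation template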
My solution was to copy that QA prompt into my script, modify it so that it describes what the returned context of my queries might look like, and then pass it to the GraphCypherQAChain. Here is a short extract:
# Import paths may vary with your LangChain version
from langchain.prompts import PromptTemplate
from langchain.chains import GraphCypherQAChain

CYPHER_GENERATION_TEMPLATE = """
Some cypher template text and at the end:
Schema:
{schema}
Question:
{question}
Cypher Query:
"""
QA_PROMPT = """
Some qa template ...
Question: Which customers are associated with Avery H. Jackson?
Context:[{{'Customer': 'Devon Q. Allen'}}, {{'Customer': 'Quinn L. Davis'}}]
Helpful Answer: The customers Devon Q. Allen and Quinn L. Davis are associated with Avery H. Jackson.
Follow this example when generating answers.
If the provided information is empty, say that you don't know the answer.
Information:
{context}
Question: {question}
Helpful Answer:
"""
cypher_prompt = PromptTemplate.from_template(CYPHER_GENERATION_TEMPLATE)
qa_prompt = PromptTemplate.from_template(QA_PROMPT)
# Pass both the Cypher generation prompt and the custom QA prompt to the chain
cypher_qa = GraphCypherQAChain.from_llm(
    llm,
    graph=graph,
    verbose=True,
    cypher_prompt=cypher_prompt,
    qa_prompt=qa_prompt,
)
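Once the chain is built, you can query it as usual. A minimal usage sketch (the question is just an example, and older LangChain versions use cypher_qa.run(...) instead of .invoke(...)):
# The chain generates the Cypher query, runs it against the graph,
# and then uses the custom QA prompt to phrase the final answer.
result = cypher_qa.invoke({"query": "Which customers are associated with Avery H. Jackson?"})
print(result["result"])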
Remark: Don't forget to use double curly brackets ({{ and }}) in the context example part of your prompt; PromptTemplate treats single curly brackets as input variables, so otherwise you will get the ValueError: Missing some input keys: {...} error.
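You can quickly verify that the braces are escaped correctly by checking which input variables the template actually exposes. A small sketch (the context string is just an example):
from langchain.prompts import PromptTemplate

# Double curly brackets become literal { and }; single ones become input variables.
tmpl = PromptTemplate.from_template(
    "Context: [{{'Customer': 'Devon Q. Allen'}}]\nQuestion: {question}\nHelpful Answer:"
)
print(tmpl.input_variables)  # ['question'] -- the example context is not treated as a variable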
For me this solved the problem, and the context is now correctly used to formulate the answer. I hope it solves your problem, too!