Caching of genai.vector.encode output

When implementing vector search functionality, it would be useful to be able to cache the vector embedding of the search string that a user enters.

Typically, a user enters something to search for, and the same query is likely to hit the database several times in a row (e.g. when the user drills down into the search results). Each time Neo4j encodes the user input (through genai.vector.encode), it fires a call to OpenAI (or another provider). This takes time and comes with a (monetary) cost.

It would be useful if Neo4j could cache the most recent encodings (e.g. the last 1,000), making repeated searches faster and cheaper.

Does Neo4j support this in any way?
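For what it's worth, while waiting for an answer on server-side support, the same effect can be approximated in the application layer by caching encodings before the query ever reaches Neo4j. A minimal sketch, assuming the provider call is wrapped in a plain function (`embed_uncached` below is a dummy stand-in for the real, billed embedding request, not an actual API):

```python
from functools import lru_cache

# Stand-in for the real embedding call (e.g. an OpenAI client request).
# In practice this is the slow, billed network round-trip.
def embed_uncached(text: str) -> tuple[float, ...]:
    embed_uncached.calls += 1  # count "provider" calls for illustration
    return tuple(float(ord(c)) for c in text)  # dummy embedding

embed_uncached.calls = 0

# Keep the 1,000 most recently used encodings, as suggested above.
@lru_cache(maxsize=1000)
def embed(text: str) -> tuple[float, ...]:
    return embed_uncached(text)

# Repeated searches for the same string hit the cache, not the provider.
embed("graph databases")
embed("graph databases")
embed("graph databases")
print(embed_uncached.calls)     # only one real call was made
print(embed.cache_info().hits)  # the other two were cache hits
```

The cached vector can then be passed to the database as a query parameter instead of calling genai.vector.encode again on the server.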

Actually, it might be useful to store the user's conversation as a pattern in the graph; we do that, for example, in Neoconverse and our Docs chatbot.

Then you can store the embedding directly on the user-question node.
And you can connect the answer node to the sources that were used to generate it.
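To make that pattern concrete, here is a small sketch of how the write could look from an application. The node labels (`Question`, `Answer`, `Source`), relationship types, and property names are illustrative assumptions, not an established schema; the query string would be run through the official Neo4j driver:

```python
# Sketch of the write query for the conversation pattern described above.
# Labels and relationship types are illustrative assumptions, not a fixed schema.
SAVE_CONVERSATION_QUERY = """
MERGE (q:Question {text: $question})
SET q.embedding = $embedding
MERGE (a:Answer {text: $answer})
MERGE (q)-[:ANSWERED_BY]->(a)
WITH a
UNWIND $source_ids AS sid
MATCH (s:Source {id: sid})
MERGE (a)-[:BASED_ON]->(s)
"""

def save_conversation_params(question, embedding, answer, source_ids):
    """Bundle the parameters the query above expects."""
    return {
        "question": question,
        "embedding": list(embedding),  # the cached vector, stored on the node
        "answer": answer,
        "source_ids": list(source_ids),
    }

params = save_conversation_params(
    "What is a vector index?", [0.1, 0.2], "An index over embeddings.", ["doc-1"]
)
```

With the embedding stored on the question node, later identical or similar questions can be matched in the graph instead of re-encoding them.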
Thanks for your thoughts. Your use case is slightly different from ours. Ideally, I'd like the database to take care of this for me.