When implementing vector search functionality, it would be useful to be able to cache the vector embedding of the search string that a user enters.
Typically, a user enters something to search for, and the same query is likely to hit the database several times in a row (e.g. when the user wants to drill down into the search results). Each time Neo4j encodes the user input (through genai.vector.encode), it fires a call to OpenAI (or another provider). This takes time and comes with a (monetary) cost.
It would be useful if Neo4j could cache the most recent N encodings (e.g. 1,000), which would make repeated searches faster and cheaper.
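For now I'm considering doing this client-side instead. A minimal sketch of the idea, assuming a hypothetical embed_search_string function standing in for the actual provider call (the real call would go to OpenAI or whichever provider genai.vector.encode is configured for), with the resulting vector then passed as a parameter to db.index.vector.queryNodes instead of calling genai.vector.encode inside the query:

```python
from functools import lru_cache

provider_calls = 0  # counts how often the "provider" is actually hit

@lru_cache(maxsize=1000)  # keep the 1,000 most recently used encodings
def embed_search_string(text: str) -> tuple:
    """Hypothetical stand-in for the embedding provider call.

    In a real application this would call the OpenAI embeddings API (or
    another provider); here it returns a fake vector so the sketch runs.
    Returns a tuple because lru_cache requires hashable values are fine
    as return types and the result may be reused across calls.
    """
    global provider_calls
    provider_calls += 1
    return tuple(float(ord(c)) for c in text)

# First search encodes the string; repeated searches hit the cache.
vector = embed_search_string("graph databases")
vector_again = embed_search_string("graph databases")
```

The cached vector would then be sent with the query (e.g. `CALL db.index.vector.queryNodes($index, $k, $vector)`), so Neo4j never has to re-encode the same search string. But it would be nicer if Neo4j handled this itself.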
Does Neo4j support this in any way?