The documentation says that full-text indexes 'support configuring custom analyzers, including analyzers that are not included with Lucene itself':
https://neo4j.com/docs/operations-manual/4.2/performance/index-configuration/#index-configuration-fulltext
However, it doesn't say how to configure my own tokenizer. Is there an example of how to do that?
I can see the built-in analyzers with:

CALL db.index.fulltext.listAvailableAnalyzers
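
As far as I can tell, an analyzer is only selected by name from that list when the index is created, for example (the index name, label, and property below are just placeholders I made up):

// Create a full-text index using one of the built-in analyzers.
// "DocumentText", Document, and text are placeholder names for this example.
CALL db.index.fulltext.createNodeIndex("DocumentText", ["Document"], ["text"], {analyzer: "english"})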
If I have my own tokenizer in Python, with an interface like this:
def get_tokens(text):
    ...  # my own tokenization logic
    return tokens
How can I configure the full-text index to use it for tokenization?
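
Ideally I would like to be able to write something like the following, but this is purely hypothetical and 'my-python-tokenizer' is an invented name, not an analyzer that actually exists:

// Hypothetical: reference my own tokenizer by name at index creation.
// "my-python-tokenizer" is a made-up placeholder; this does not work as written.
CALL db.index.fulltext.createNodeIndex("DocumentText", ["Document"], ["text"], {analyzer: "my-python-tokenizer"})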