Is there documentation on how to configure custom tokenizer in full-text indexing?

The documentation says that full-text indexes "support configuring custom analyzers, including analyzers that are not included with Lucene itself." However, the page in question:

https://neo4j.com/docs/operations-manual/4.2/performance/index-configuration/#index-configuration-fulltext

doesn't explain how to configure my own tokenizer. Is there an example of how to do that?

call db.index.fulltext.listAvailableAnalyzers
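For context, this is how I would pick an analyzer when creating the index — the index name, label, and property here are just placeholders, and `english` is one of the built-in analyzers that the procedure above lists:

```
CALL db.index.fulltext.createNodeIndex(
  "myIndex",
  ["Document"],
  ["text"],
  { analyzer: "english" }
)
```

What I can't find is how to make a name of my own appear in that `analyzer` option.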

If I have my own tokenizer interface in Python:

def get_tokens(text):
    ...
    return tokens

How can I configure it to be used by full-text indexing?

I think you have to write your tokenizer in Java, as a Lucene analyzer that Neo4j can load — the full-text index runs on Lucene inside the database, so it can't call out to Python.
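As a rough sketch of what that looks like: you package a class extending Neo4j's `AnalyzerProvider` into a jar in the `plugins` directory, and the name you register should then show up in `db.index.fulltext.listAvailableAnalyzers`. The class, annotation, and package names below are from Neo4j 4.x and Lucene 8.x as I understand them — verify them against the versions on your classpath before relying on this, and replace the tokenizer chain with logic equivalent to your Python `get_tokens()`:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.neo4j.annotations.service.ServiceProvider;
import org.neo4j.graphdb.schema.AnalyzerProvider;

// Registered via the service loader so Neo4j can discover it at startup.
@ServiceProvider
public class MyAnalyzerProvider extends AnalyzerProvider {

    public MyAnalyzerProvider() {
        // "my-analyzer" is the name you would pass in the index config.
        super("my-analyzer");
    }

    @Override
    public Analyzer createAnalyzer() {
        return new Analyzer() {
            @Override
            protected TokenStreamComponents createComponents(String fieldName) {
                // Placeholder tokenization: split on whitespace, then lowercase.
                // Swap in your own Tokenizer/TokenFilter chain here.
                WhitespaceTokenizer source = new WhitespaceTokenizer();
                TokenStream filtered = new LowerCaseFilter(source);
                return new TokenStreamComponents(source, filtered);
            }
        };
    }
}
```

This is only a sketch, not a tested plugin — in particular, check the exact `AnalyzerProvider` constructor signature and the service-annotation setup for your Neo4j version.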

Here's more info: