Is there documentation on how to configure custom tokenizer in full-text indexing?

The documentation says that Neo4j 'support[s] configuring custom analyzers, including analyzers that are not included with Lucene itself.' However, it doesn't say how to configure my own tokenizer. Is there an example of how to do this?

I can list the built-in analyzers with:

CALL db.index.fulltext.listAvailableAnalyzers()
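For what it's worth, picking one of the listed analyzers is done through the index's OPTIONS map at creation time. A minimal sketch of the Cypher involved, assuming a recent Neo4j server (the index name, label, and property here are hypothetical examples):

```python
# Cypher to list the analyzers the server ships with.
LIST_ANALYZERS = "CALL db.index.fulltext.listAvailableAnalyzers()"

# Cypher to create a full-text index using a specific analyzer.
# 'articleText', ':Article', and 'body' are made-up example names.
CREATE_INDEX = (
    "CREATE FULLTEXT INDEX articleText FOR (n:Article) ON EACH [n.body] "
    "OPTIONS {indexConfig: {`fulltext.analyzer`: 'english'}}"
)

# Running these needs a live server, e.g. via the official neo4j driver:
#   with driver.session() as session:
#       session.run(CREATE_INDEX)
print(LIST_ANALYZERS)
print(CREATE_INDEX)
```

This only selects among analyzers already on the server's classpath; it doesn't register new ones.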

If I have my own tokenizer function in Python:

def get_tokens(text):
  # split the text into a list of token strings
  tokens = text.split()
  return tokens

How do I configure it to be used by the full-text indexing?
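For context, what a function like the one above would typically do is the standard analyzer pipeline: tokenize, lowercase, and drop stop words. A toy sketch of that behaviour (the stop-word list is made up, and Neo4j cannot load a Python function like this directly):

```python
import re

# Toy analyzer pipeline: split on letters, lowercase, drop stop words.
# Illustrative only; a real Lucene analyzer does this inside the JVM.
STOP_WORDS = {"the", "a", "an", "of"}

def get_tokens(text):
    words = re.findall(r"[A-Za-z]+", text.lower())
    return [w for w in words if w not in STOP_WORDS]

print(get_tokens("The Tragedy of Hamlet"))  # -> ['tragedy', 'hamlet']
```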

I think you have to write your tokenizer in Java. Full-text indexes are backed by Apache Lucene, which runs inside the JVM, so as far as I know a custom analyzer has to be implemented as a Lucene Analyzer and deployed to the server as a plugin; a Python function can't be plugged in directly.

Here's more info: