Fastest method/tool to delete all nodes for a label (over 138M) when label has index?

Can anyone recommend the most efficient method to remove 138M+ nodes, with one label and one index?

So far MATCH (n:TheLabel) RETURN n LIMIT 1 is instant.
As soon as I try something like MATCH (n:TheLabel) WITH n LIMIT 1 DETACH DELETE n, it runs forever (it has yet to return).
I've also tried dropping the index. That is also taking forever.

Based on previous work with the dataset, I'm almost positive it's the index updates during deletion; that was also the bottleneck when writing to the graph initially. That's why I'm trying to drop the index first.

Anyway, if anyone has any tips that don't involve APOC, I'm all ears.

The graph version is 3.5.17 on a 3-server cluster.

Thanks,
Mike

DROP DATABASE yourDataBaseName

Just kidding, but it works and really quickly. (Neo4j 4.x only, sadly.)
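For what it's worth, in 4.x those database-management commands have to be run against the `system` database, and multi-database management is Enterprise-only. A sketch (`yourDataBaseName` is just a placeholder):

```cypher
// Switch to the system database first (:use system in Browser/cypher-shell)
DROP DATABASE yourDataBaseName IF EXISTS;
// Recreate it empty if you still need the database afterwards
CREATE DATABASE yourDataBaseName;
```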
I don't get why you are using the WITH clause.

But usually:
MATCH (n:TheLabel)
DETACH DELETE n

is THE way to go without APOC.
Label indexes, if I can call them that, are part of the engine; you cannot delete them. And they are the most accessible, and thus the quickest, thing in the engine.
You can only manage the property indexes you created yourself.

The WITH is to limit the transaction size.

MATCH (n:TheLabel)
DETACH DELETE n

When there are over a billion nodes plus attached rels, it will exceed the transaction size limit and crash the cluster trying to copy the transaction to the followers.
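Since the asker wants to avoid APOC on 3.5, the usual workaround for this is to batch the delete from the client side: run a limited delete over and over until it reports zero. A sketch (the 50000 batch size is an assumption; tune it for your heap):

```cypher
// Run this repeatedly, e.g. in a cypher-shell or driver loop,
// until `deleted` comes back as 0.
MATCH (n:TheLabel)
WITH n LIMIT 50000
DETACH DELETE n
RETURN count(*) AS deleted;
```

Each run commits as its own transaction, so the state that has to be replicated to the followers stays bounded instead of accumulating 138M deletes in one transaction.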

There is one actual index that can be dropped.
For example: DROP INDEX ON :TheLabel(property1, property2, property3)
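If you're unsure of the exact index definition to drop, 3.5 can list the existing ones first:

```cypher
// Lists each index with its label, properties, and state
CALL db.indexes();
```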

Transaction size limit?

Isn't there a configuration line for that in the configuration file?
Does it work if you increase:

dbms.memory.heap.max_size=1G
dbms.memory.pagecache.size=512m <- Not more than 50% of your machine's memory
dbms.tx_state.memory_allocation=ON_HEAP <- Not sure about this one, be careful

These are left unlimited (commented out) by default in my Neo4j instance:

#dbms.memory.transaction.global_max_size=256m
#dbms.memory.transaction.max_size=16m

If increasing these doesn't work, I don't know.
In my book, if you had enough memory to create these nodes, you should have enough to delete them.