Unable to create relationships on a large scale factor TPC-H dataset

Hello,

I am using Neo4j EE 5.10 and trying to import the TPC-H dataset at scale factor 100 (roughly 100 GB of data). I imported the nodes with neo4j-admin import without any issues, then started creating relationships between the nodes with queries of the following form:

LOAD CSV WITH HEADERS FROM "file:///LINEITEM_ORDERS.csv" AS inputRow
CALL {
  WITH inputRow
  MATCH (l:LINEITEM {l_orderkey: toInteger(inputRow.l_orderkey), l_linenumber: toInteger(inputRow.l_linenumber)})
  MATCH (o:ORDERS {o_orderkey: toInteger(inputRow.l_orderkey)})
  MERGE (l)-[:LINEITEM_ORDERS]->(o)
} IN TRANSACTIONS OF 5000 ROWS;

Everything went fine for almost all relationship types, except LINEITEM_PARTSUPP, where both endpoints have to be MATCHed on composite unique fields (the equivalent of composite primary keys in SQL). I tried the same approach:

LOAD CSV WITH HEADERS FROM "file:///LINEITEM_PARTSUPP.csv" AS inputRow
CALL {
  WITH inputRow
  MATCH (l:LINEITEM {l_orderkey: toInteger(inputRow.l_orderkey), l_linenumber: toInteger(inputRow.l_linenumber)})
  MATCH (ps:PARTSUPP {ps_partkey: toInteger(inputRow.l_partkey), ps_suppkey: toInteger(inputRow.l_suppkey)})
  MERGE (l)-[:LINEITEM_PARTSUPP]->(ps)
} IN TRANSACTIONS OF 5000 ROWS;

I even tried combining the two key fields of PARTSUPP into a single field, by setting a new property as a superkey:

MATCH (n:PARTSUPP)
CALL {
  WITH n
  SET n.superkey = n.ps_partkey * 100000000 + n.ps_suppkey
} IN TRANSACTIONS OF 10000 ROWS;
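(For what it's worth, the encoding itself should be sound: as long as every ps_suppkey is below 100000000, the superkey maps each (ps_partkey, ps_suppkey) pair to a distinct integer, and I believe TPC-H key ranges at SF100 stay well within that. A quick sanity check in Python, just as an illustration, not part of the import:)

```python
# Sanity check for the superkey encoding above:
#   superkey = ps_partkey * 100000000 + ps_suppkey
# This is collision-free whenever ps_suppkey < 100000000,
# because the pair can be recovered exactly with divmod.

MULT = 100_000_000

def encode(partkey: int, suppkey: int) -> int:
    assert 0 <= suppkey < MULT, "suppkey must be smaller than the multiplier"
    return partkey * MULT + suppkey

def decode(superkey: int) -> tuple[int, int]:
    return divmod(superkey, MULT)

# A few illustrative pairs (hypothetical values, chosen to span the key range):
pairs = [(1, 1), (19_999_999, 999_999), (123_456, 42)]
for pk, sk in pairs:
    assert decode(encode(pk, sk)) == (pk, sk)
print("encoding round-trips correctly")
```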

Then I modified the relationship-creation query accordingly:

LOAD CSV WITH HEADERS FROM "file:///LINEITEM_PARTSUPP.csv" AS inputRow
CALL {
  WITH inputRow
  MATCH (l:LINEITEM {l_orderkey: toInteger(inputRow.l_orderkey), l_linenumber: toInteger(inputRow.l_linenumber)})
  MATCH (ps:PARTSUPP {superkey: toInteger(inputRow.l_partkey) * 100000000 + toInteger(inputRow.l_suppkey)})
  MERGE (l)-[:LINEITEM_PARTSUPP]->(ps)
} IN TRANSACTIONS OF 10000 ROWS;

All of these attempts fail. The first query is far too slow: about 24 hours for roughly 2,500 relationships, while the SF100 dataset contains millions of them. The superkey variants insert no relationships at all: the committed-reads counter keeps growing, but no committed writes ever happen.

I cannot import the relationships with neo4j-admin import because the dataset structure, which must remain unchanged, has no field with unique values to use as a node ID.
Is there anything I'm missing? Is there another way to do this without running out of memory?