Hi @guinametal !
I guess you have a layer that orchestrates your DB calls. I haven't found a single query that does all of this, but there is a fairly standard sequence of steps you can follow; hopefully it is performant enough for your case.
Consider your use case modeled like this (with some extra relationships):
:begin
CREATE CONSTRAINT ON (node:`UNIQUE IMPORT LABEL`) ASSERT (node.`UNIQUE IMPORT ID`) IS UNIQUE;
:commit
CALL db.awaitIndexes(300);
:begin
UNWIND [{_id:29, properties:{name:"a"}}, {_id:30, properties:{name:"b"}}, {_id:31, properties:{name:"c"}}, {_id:32, properties:{name:"d"}}, {_id:33, properties:{name:"e"}}, {_id:34, properties:{name:"f"}}, {_id:35, properties:{name:"g"}}, {_id:36, properties:{name:"h"}}, {_id:37, properties:{name:"i"}}] AS row
CREATE (n:`UNIQUE IMPORT LABEL`{`UNIQUE IMPORT ID`: row._id}) SET n += row.properties SET n:NODE;
:commit
:begin
UNWIND [{start: {_id:29}, end: {_id:30}, properties:{w:9}}, {start: {_id:29}, end: {_id:31}, properties:{w:7}}, {start: {_id:29}, end: {_id:32}, properties:{w:11}}, {start: {_id:30}, end: {_id:31}, properties:{w:12}}, {start: {_id:30}, end: {_id:32}, properties:{w:7}}, {start: {_id:31}, end: {_id:32}, properties:{w:8}}, {start: {_id:32}, end: {_id:33}, properties:{w:4}}, {start: {_id:32}, end: {_id:34}, properties:{w:5}}, {start: {_id:33}, end: {_id:34}, properties:{w:7}}, {start: {_id:33}, end: {_id:35}, properties:{w:9}}, {start: {_id:33}, end: {_id:36}, properties:{w:7}}, {start: {_id:33}, end: {_id:37}, properties:{w:8}}, {start: {_id:34}, end: {_id:35}, properties:{w:9}}, {start: {_id:34}, end: {_id:36}, properties:{w:10}}, {start: {_id:34}, end: {_id:37}, properties:{w:8}}, {start: {_id:35}, end: {_id:36}, properties:{w:10}}, {start: {_id:35}, end: {_id:37}, properties:{w:7}}, {start: {_id:36}, end: {_id:37}, properties:{w:11}}, {start: {_id:30}, end: {_id:37}, properties:{w:0}}] AS row
MATCH (start:`UNIQUE IMPORT LABEL`{`UNIQUE IMPORT ID`: row.start._id})
MATCH (end:`UNIQUE IMPORT LABEL`{`UNIQUE IMPORT ID`: row.end._id})
CREATE (start)-[r:CON]->(end) SET r += row.properties;
:commit
:begin
MATCH (n:`UNIQUE IMPORT LABEL`) WITH n LIMIT 20000 REMOVE n:`UNIQUE IMPORT LABEL` REMOVE n.`UNIQUE IMPORT ID`;
:commit
:begin
DROP CONSTRAINT ON (node:`UNIQUE IMPORT LABEL`) ASSERT (node.`UNIQUE IMPORT ID`) IS UNIQUE;
:commit
First, we load the graph into memory:
CALL gds.graph.create.cypher(
'gr_complete',
'MATCH (n:NODE) RETURN id(n) AS id, labels(n) AS labels',
'MATCH (n:NODE)-[r:CON]->(m:NODE) RETURN id(n) AS source, id(m) AS target, r.w AS w'
)
Then we filter on the weight condition:
CALL gds.beta.graph.create.subgraph('gr_filtered', 'gr_complete', '*', 'r.w > 7')
YIELD graphName, fromGraphName, nodeCount, relationshipCount;
Now we can retrieve the isolated components:
CALL gds.wcc.stream('gr_filtered')
YIELD nodeId, componentId
WITH componentId, collect(nodeId) AS g
RETURN componentId, g
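If you consume this stream from your orchestration layer, you would typically group the `(nodeId, componentId)` rows into a component map first. A minimal sketch, assuming the rows arrive as plain dicts shaped like the driver records (the row shape here is an assumption, not the actual driver API):

```python
from collections import defaultdict

def group_components(rows):
    """Group gds.wcc.stream rows (nodeId, componentId) into a
    componentId -> set-of-node-ids map."""
    components = defaultdict(set)
    for row in rows:
        components[row['componentId']].add(row['nodeId'])
    return dict(components)

# Hypothetical rows mimicking the stream output
rows = [{'nodeId': 29, 'componentId': 0}, {'nodeId': 30, 'componentId': 0},
        {'nodeId': 33, 'componentId': 1}]
print(group_components(rows))  # → {0: {29, 30}, 1: {33}}
```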
Here comes the tricky part. In this example there are 2 clusters, but you may have more, so you need to execute the next step for every pair of clusters:
WITH [33, 34, 35, 36, 37] AS c1, [29, 30, 31, 32] AS c2
MATCH (n1:NODE)-[r:CON]-(n2:NODE)
WHERE id(n1) IN c1 AND id(n2) IN c2
RETURN r ORDER BY r.w DESC LIMIT 1
And this relationship is your bridge between the 2 clusters. Now you just have to add it, together with the bridges found for the other pairs, to the filtered relationships.
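If you drive the per-pair loop from application code instead of running one Cypher query per pair, the same "heaviest cross-cluster edge" selection can be sketched in plain Python (data shapes are assumptions; node ids and weights below are taken from the example data):

```python
def find_bridges(components, edges):
    """For every pair of components, pick the heaviest edge crossing them.

    components: dict mapping componentId -> set of node ids
    edges: list of (source, target, weight) tuples, treated as undirected
    """
    # Map each node to its component for O(1) lookups
    node_to_comp = {n: c for c, nodes in components.items() for n in nodes}
    bridges = {}
    for u, v, w in edges:
        cu, cv = node_to_comp[u], node_to_comp[v]
        if cu == cv:
            continue  # not a cross-component edge
        key = (min(cu, cv), max(cu, cv))
        if key not in bridges or w > bridges[key][2]:
            bridges[key] = (u, v, w)
    return bridges

# Two components after filtering w > 7, plus the edges the filter dropped
components = {0: {29, 30, 31, 32}, 1: {33, 34, 35, 36, 37}}
edges = [(32, 33, 4), (32, 34, 5), (30, 37, 0)]
print(find_bridges(components, edges))  # → {(0, 1): (32, 34, 5)}
```

This returns the same relationship the `ORDER BY r.w DESC LIMIT 1` query picks for that pair.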
EDIT:
I was thinking about other ways to do this.
You can compute the clusters with WCC by mutating the in-memory graph:
CALL gds.wcc.mutate('gr_filtered', { mutateProperty: 'componentId' })
YIELD nodePropertiesWritten, componentCount;
And then:
MATCH (n:NODE)-[r:CON]-(n2:NODE)
WHERE gds.util.nodeProperty('gr_filtered', id(n), 'componentId') < gds.util.nodeProperty('gr_filtered', id(n2), 'componentId')
WITH gds.util.nodeProperty('gr_filtered', id(n), 'componentId') AS c1,
     gds.util.nodeProperty('gr_filtered', id(n2), 'componentId') AS c2,
     collect(r) AS rels
UNWIND rels AS rel
WITH c1, c2, apoc.agg.maxItems(rel, rel.w) AS r
RETURN c1, c2, r
With r as the bridge for each pair of clusters (apoc.agg.maxItems returns a map with the max value and the matching relationships in `items`).
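Once you have one bridge per pair of clusters, a quick sanity check in the orchestration layer is to confirm that the filtered edges plus the bridges reconnect everything into a single component. A minimal union-find sketch, using the node ids and the w > 7 edges from the example above:

```python
def connected(nodes, edges):
    """Union-find check that `edges` join all of `nodes` into one component."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(n) for n in nodes}) == 1

# Edges that survive the w > 7 filter, taken from the example data
filtered = [(29, 30), (29, 32), (30, 31), (31, 32),            # cluster 1
            (33, 35), (33, 37), (34, 35), (34, 36), (34, 37),
            (35, 36), (36, 37)]                                # cluster 2
bridge = (32, 34)  # heaviest cross-cluster edge found above

print(connected(range(29, 38), filtered))             # → False (2 components)
print(connected(range(29, 38), filtered + [bridge]))  # → True
```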