Kernel panic, Desktop2 on Mac M4 not loading Nodes, Relationships, Keys on 2025.09.0

Hi all,

I was running an import using apoc.load.json via the Python 3 driver (with the Rust extension), loading 5,000 records at a time with this approach:

with GraphDatabase.driver(NEO4J_URI, auth=AUTH) as driver:

I’m importing into a large database (500M nodes) and had a similar crash before, so I’m presumably doing something wrong. I’m using the default neo4j database. There are 8 transaction.db files; the earliest is from 29 Nov and the last from yesterday, 30 Nov 18:53. Note that the Python error message below is from 22:08 onwards.
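In outline, the loop looks like this — a simplified sketch with placeholder query, file names, and credentials (my real code differs):

```python
def build_import_query() -> str:
    # Illustrative MERGE keyed on an id property; my real query differs.
    return (
        "CALL apoc.load.json($url) YIELD value "
        "MERGE (n:Record {id: value.id}) "
        "SET n += value"
    )

if __name__ == "__main__":
    from neo4j import GraphDatabase  # pip install neo4j

    NEO4J_URI = "bolt://localhost:7687"  # placeholder
    AUTH = ("neo4j", "password")         # placeholder
    batch_urls = [f"file:///batches/batch_{i:04d}.json" for i in range(8)]

    query = build_import_query()
    for url in batch_urls:
        # Note: this opens a fresh driver for every file -- the pattern
        # discussed (and later changed) further down this thread.
        with GraphDatabase.driver(NEO4J_URI, auth=AUTH) as driver:
            driver.execute_query(query, {"url": url})
```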

Desktop is up, but shows no databases in the local instance. I can re-run any load (with less data per load), so the priority for me is getting Neo4j to recognise the Nodes, Relationships, and Properties again.

Log at time of fail shows kernel panic:

2025-11-30 22:09:32.637+0000 ERROR [neo4j/f143053e] Panic detected [Reason: Kernel database panicked, Error: org.neo4j.internal.kernel.api.exceptions.TransactionApplyKernelException: Failed to apply transaction: Transaction #72063 {started 2025-11-30 22:08:09.580+0000, committed 2025-11-30 22:09:03.926+0000, with 184668 commands in this transaction, lease -1, latest committed transaction id when started was 72062, consensusIndex: -1}]
org.neo4j.internal.kernel.api.exceptions.TransactionApplyKernelException: Failed to apply transaction: Transaction #72063 {started 2025-11-30 22:08:09.580+0000, committed 2025-11-30 22:09:03.926+0000, with 184668 commands in this transaction, lease -1, latest committed transaction id when started was 72062, consensusIndex: -1}
at org.neo4j.internal.kernel.api.exceptions.TransactionApplyKernelException.internalError(TransactionApplyKernelException.java:39) ~[neo4j-kernel-api-2025.09.0.jar:2025.09.0]
at com.neo4j.internal.blockformat.BlockStorageEngine.apply(BlockStorageEngine.java:473) ~[neo4j-block-storage-engine-2025.09.0.jar:2025.09.0]

....

The latest neo4j log shows authentication failures (I had passwords stored locally in Desktop):

2025-12-01 10:32:00.181+0000 WARN  [bolt-35] The client is unauthorized due to authentication failure.
2025-12-01 10:32:00.182+0000 WARN  [bolt-36] The client is unauthorized due to authentication failure.
2025-12-01 10:32:00.182+0000 WARN  [bolt-37] The client is unauthorized due to authentication failure.  

Neo4j version: 2025.09.0

plugins: apoc, gen-ai, graph data science (only using apoc at the moment)

Desktop2 version: recent (ie last time it asked me to upgrade)

Mac: M4 running macOS Tahoe 26.1; 15 GB of memory allocated; database on an external SSD with >2 TB of space available.

python3 version: 3.13.9

neo4j-admin: not running (it isn’t finding the JVM, but I haven’t needed it prior to this)

Python error:


neo4j.exceptions.GqlError: {gql_status: 50N00} {gql_status_description: error: general processing exception - internal error. Internal exception raised BlockStorageEngine: Failed to apply transaction: %s} {message: 50N00: Internal exception raised BlockStorageEngine: Failed to apply transaction: %s} {diagnostic_record: {'OPERATION': '', 'OPERATION_CODE': '0', 'CURRENT_SCHEMA': '/'}} {raw_classification: None}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
    result = driver.execute_query(query, params)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/driver.py", line 946, in execute_query
    return session._run_transaction(
           ~~~~~~~~~~~~~~~~~~~~~~~~^
        access_mode,
        ^^^^^^^^^^^^
    ...<3 lines>...
        {},
        ^^^
    )
    ^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/work/session.py", line 555, in _run_transaction
    tx._commit()
    ~~~~~~~~~~^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/work/transaction.py", line 220, in _commit
    self._connection.fetch_all()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/io/_bolt.py", line 881, in fetch_all
    detail_delta, summary_delta = self.fetch_message()
                                  ~~~~~~~~~~~~~~~~~~^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/io/_bolt.py", line 866, in fetch_message
    res = self._process_message(tag, fields)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/io/_bolt5.py", line 1202, in _process_message
    response.on_failure(summary_metadata or {})
    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/neo4j/_sync/io/_common.py", line 262, in on_failure
    raise self._hydrate_error(metadata)

neo4j.exceptions.DatabaseError: {neo4j_code: Neo.DatabaseError.Transaction.TransactionCommitFailed} {message: Could not apply the transaction: Transaction #72063 {started 2025-11-30 22:08:09.580+0000, committed 2025-11-30 22:09:03.926+0000, with 184668 commands in this transaction, lease -1, latest committed transaction id when started was 72062, consensusIndex: -1} to the store after written to log.} {gql_status: 2DN05} {gql_status_description: error: invalid transaction termination - failed to apply transaction. There was an error on applying the transaction. See logs for more information.}

Thank you for reading this, and any help would be great!

Update: after several restarts and extended time away from the keyboard, Desktop 2 reported Nodes and Relationships again in the query tab.

But being Desktop, it has just frozen. It is not a happy bunny.

Also, I am still seeing Databases (0) when I look at the local instances tab, which means running a backup does not work from Desktop….

I can see the Neo4j Desktop 2 Helper (Renderer) is using a fair amount of CPU (127% per Activity Monitor), but the list of transaction files, neostore.transaction.db.820 through neostore.transaction.db.827, has not shifted at all.

Any advice would be welcome….

Here is how I would try to create a kernel panic:

  • Allocate more memory than I actually have on the system
  • Run MERGE queries that do not match a node key constraint
  • Run lots of un-parameterized queries
  • Use apoc.load.json with large JSON documents to throw off memory estimations
  • Change the ownership of some database files
  • Change limits like max files and CPU for the user running the process
  • Find an inconsistent database backup to start with
  • Create a new driver for every transaction
  • Avoid consuming results and leave sessions open

Still, you may have to combine some of these to successfully throw the kernel into a panic and not get any reasonable output in the logs.

Maybe this will help you identify some things to look into? It sounds like the first step would be to create a new “clean project” so you have an empty database. Make sure your heap and pagecache are set. Instead of writing to the graph, just run through the apoc.load.json and count something (to verify that all files can be parsed).
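As a sketch of that dry run (URI, credentials, and file list below are placeholders — adjust to your setup):

```python
def build_verification_query() -> str:
    # Parse-only pass: stream each file through apoc.load.json and
    # count the rows, without writing anything to the graph.
    return "CALL apoc.load.json($url) YIELD value RETURN count(value) AS rows"

if __name__ == "__main__":
    from neo4j import GraphDatabase  # pip install neo4j

    with GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password")) as driver:
        for url in ["file:///batches/batch_0001.json"]:  # placeholder list
            records, _, _ = driver.execute_query(
                build_verification_query(), {"url": url}
            )
            print(url, "->", records[0]["rows"], "rows parsed")
```

If every file parses and the counts look right, you've ruled the input data out and can move on to the write path.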

If you can, share what your apoc.load.json query looks like.

Thank you for your response, Hakan!

Not exceeding memory (that I can see) and using indices, parameters. Not changing max files or cpu for the user, or leaving sessions open.

Will definitely explore these 2 points you have raised & report back:

  • Find an inconsistent database backup to start with

    • not that I know of – will run neo4j-admin to check the db
  • Create a new driver for every transaction

    • Yes, I am creating a new driver for each 2.5k–5k-line JSON file, using the with GraphDatabase pattern from the documentation I found; the import loops through multiple files. From your comment, I should move the driver creation outside the for loop and just open and close sessions inside it, which I will do:

          with GraphDatabase.driver(NEO4J_URI, auth=AUTH) as driver:
      

Will report back with what I’ve found.

Thanks again, Roger

Hi Hakan,

Thanks to your suggestion, moving the driver creation outside the loop seems to have made the database stable.

For the record, the database was not inconsistent, so I’d say my per-transaction driver creation was the root cause….
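For anyone landing here later, the restructured shape is roughly this (file layout, query, and credentials are placeholders): one driver for the whole run, with each execute_query call running in its own managed transaction.

```python
from pathlib import Path

def json_batches(directory: str) -> list[str]:
    # Collect the per-batch JSON files in a stable order (hypothetical layout).
    return sorted(f"file:///{p.name}" for p in Path(directory).glob("*.json"))

def run_import(driver, urls, query: str) -> int:
    # One driver (and its connection pool) for the whole run;
    # execute_query handles a managed transaction per call.
    done = 0
    for url in urls:
        driver.execute_query(query, {"url": url})
        done += 1
    return done

if __name__ == "__main__":
    from neo4j import GraphDatabase  # pip install neo4j

    QUERY = (
        "CALL apoc.load.json($url) YIELD value "
        "MERGE (n:Record {id: value.id}) "
        "SET n += value"
    )
    with GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password")) as driver:
        count = run_import(driver, json_batches("./batches"), QUERY)
        print(f"imported {count} files")
```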

Roger