What does the following neo4j error mean?

I'm trying to run some queries on the data I've imported so far into neo4j (while the rest is still being imported), but I'm getting the following error:

Something went wrong: "Error: Reference.set failed: First argument contains a string greater than 10485760 utf8 bytes in property 'users.auth0|' ('{"0":"{","1":""","2":"0","3":""","4":":","5":""...')" and the application can't recover.

It doesn't happen all the time. When it does, I restart the PC, wait a bit, and then queries run again until they don't. Data is still being imported while I process the already imported data.

PC specs:

  • core i5
  • 16GB RAM
  • 1TB HDD

OS: Ubuntu 18.04.1 LTS

Any suggestions?



To be sure, we'd need to see your load code and how you're doing it. But from what I can see in the error message, it looks like (maybe) you're trying to process some file line by line, that file contains JSON in it, and maybe something about your input data is wonky.

Literally, what the error is saying is that you're trying to pass an argument that is a string value of about 10 MB (10485760 bytes).

Hopefully it's not your intent to put such large string values into the database, as this isn't good practice for indexing or performance anyway. Things like this sometimes happen to me if, say, the line breaks in my CSV are misread or malformatted.
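A quick way to check for this before importing is to scan the file and flag any field that exceeds the ~10 MB limit from the error message. A minimal sketch, assuming a simple comma-separated layout with newline-terminated rows (a misread line break can merge many rows into one giant field exactly like this):

```javascript
// Sketch: flag any CSV field whose UTF-8 size exceeds the ~10 MB limit
// reported in the error. Naive splitting on ',' and '\n' is an assumption;
// it ignores quoting, but is enough to spot a runaway merged field.
const MAX_BYTES = 10485760; // 10485760 bytes ~ 10 MB, from the error message

function findOversizedFields(text, maxBytes = MAX_BYTES) {
  const enc = new TextEncoder();
  const offenders = [];
  text.split('\n').forEach((line, row) => {
    line.split(',').forEach((field, col) => {
      const bytes = enc.encode(field).length;
      if (bytes > maxBytes) {
        offenders.push({ row: row + 1, col: col + 1, bytes });
      }
    });
  });
  return offenders;
}
```

If this returns anything, inspect those rows in the source file before blaming Neo4j.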

This is just a guess in hopes it guides you the right way. If it's a bad guess, post your import code. 🙂

Hi David,

Thank you for the reply!

I used the following tutorial to import the whole bitcoin blockchain into neo4j. Everything was running smoothly, along with the queries, until they didn't.


The script keeps running fine and the blockchain is being imported; however, queries won't run anymore, as I get the error above. Sometimes they will, but 8 times out of 10 the neo4j browser crashes.

The activity monitor shows that RAM is almost maxed out.

Could hardware be the issue?

Any ideas or suggestions are welcome. 🙂

I restarted the PC and a log file was generated:

"There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 83558400 bytes for committing reserved memory.
Possible reasons:
The system is out of physical RAM or swap space
In 32 bit mode, the process size limit was hit
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Use 64 bit Java on a 64 bit OS
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
This output file may be truncated or incomplete.

Out of Memory Error (os_linux.cpp:2743), pid=6378, tid=0x00007f1a85d83700

JRE version: (8.0_191-b12) (build )
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 compressed oops)
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again"

Well, that message is clear enough - you need to allocate more memory to Neo4j, which you can do by checking here:

The blockchain example dataset is quite large. I don't know offhand how much it needs, but something like a minimum of 16 GB or 32 GB may be required.
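For reference, heap and page cache are configured in neo4j.conf. An illustrative sketch for a machine with 16 GB of RAM; the exact values are assumptions to tune for your workload, not recommendations:

```
# neo4j.conf -- illustrative values only, tune for your machine
# Heap used for query processing:
dbms.memory.heap.initial_size=4g
dbms.memory.heap.max_size=4g
# Page cache used for the store files:
dbms.memory.pagecache.size=6g
```

Leaving some headroom for the OS and other processes matters; the OOM log above shows the JVM failing to mmap memory because the whole machine was exhausted, not just the heap.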

I re-did the whole procedure on a clean installation of Ubuntu 18.04.1. The graph is minimal (a few MB), but queries still won't work. The error persists; if it were a memory issue, it should have been resolved by now.

The only idea I am left with is to delete the local copy of the blockchain and download it again, in case it's corrupted (it's stored on an HDD).

Hello David

I have an r4.large machine running Neo4j Enterprise 4.0.3 in an EC2 instance on AWS. I executed the command :play movies to check the performance and connectivity of the server, and when I ran the command MATCH (p:Person) RETURN p, the following error occurred:

Something went wrong: "Error: Reference.set failed: First argument contains a string greater than 10485760 utf8 bytes in property '|' ('{"0":"{","1":"\"","2":"0","3":"\"","4":":","5":"\"...')" and the application can't recover.

The message is very similar to the one Andreas reported, and I really don't know what is causing it. I took a look at the log (journalctl -e -u neo4j) and there is nothing there related to this message:

May 12 15:43:56 ip-172-xx-xx-xx[1354]: Directories in use:
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   home:         /var/lib/neo4j
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   config:       /etc/neo4j
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   logs:         /var/log/neo4j
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   plugins:      /var/lib/neo4j/plugins
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   import:       /var/lib/neo4j/import
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   data:         /var/lib/neo4j/data
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   certificates: /var/lib/neo4j/certificates
May 12 15:43:56 ip-172-xx-xx-xx[1354]:   run:          /var/run/neo4j
May 12 15:43:56 ip-172-xx-xx-xx[1354]: Starting Neo4j.
May 12 15:43:57 ip-172-xx-xx-xx[1354]: 2020-05-12 18:43:57.275+0000 INFO  ======== Neo4j 4.0.3 ========
May 12 15:43:57 ip-172-xx-xx-xx[1354]: 2020-05-12 18:43:57.287+0000 INFO  Starting...
May 12 15:44:07 ip-172-xx-xx-xx[1354]: 2020-05-12 18:44:07.577+0000 INFO  Called db.clearQueryCaches(): Query cache already empty.
May 12 15:44:14 ip-172-xx-xx-xx[1354]: 2020-05-12 18:44:14.923+0000 INFO  Sending metrics to CSV file at /var/lib/neo4j/metrics
May 12 15:44:14 ip-172-xx-xx-xx[1354]: 2020-05-12 18:44:14.957+0000 INFO  Bolt enabled on yy.yy.yy.yy:7687.
May 12 15:44:14 ip-172-xx-xx-xx[1354]: 2020-05-12 18:44:14.957+0000 INFO  Started.
May 12 15:44:15 ip-172-xx-xx-xx[1354]: 2020-05-12 18:44:15.141+0000 INFO  Server thread metrics have been registered successfully
May 12 15:44:16 ip-172-xx-xx-xx[1354]: 2020-05-12 18:44:16.481+0000 INFO  Remote interface available at http://zz.zzz.zz.zzz:7474/

Best regards,

!Problem Solved!

The browser seems to be the issue. I don't really know why Chromium started throwing errors. Chrome and Mozilla Firefox work; Chromium doesn't.

FWIW, I've encountered the same error in Google Chrome when simply clicking on a node label button in neo4j browser. The browser seems to freeze: the label button remains highlighted, the "Google Chrome Helper (Renderer)" process starts using 100% of one of the 4 cores of my MacBook Pro, and I cannot interact with the browser for a long time.

Sometimes the error occurs; other times the rendering finishes and I am able to interact with the browser briefly, but then it freezes again.

When I re-open the tab and try again, the same thing happens. I have to restart Chrome to fix it.

MacBook Pro (Retina, 15-inch, Mid 2014)
Chrome Version 77.0.3865.120 (Official Build) (64-bit)

Just updated Google Chrome from 77 to 78 and the issue has disappeared.


@david.allen @charles1

I recently ran into the same issue. I was running Chrome 79 and have now updated to Chrome 80. The error message keeps popping up for me, and I am pretty certain it's an issue with Neo4j Browser Sync. The error does not occur if I sign out and clear the local storage; as soon as I log back in and try anything, the same error returns. I tried the same in Safari and got the same behaviour there. I'm not sure what made the application put 10 MB of data into the "grass" property in storage, or how to remove it, but obviously it's huge and apparently too big. I did check the scripts I had favorited, but they only add up to 6373 bytes.
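One way to find out which entry is oversized, without clearing everything, is to list the storage entries by UTF-8 size. A sketch to run in the devtools console on the Neo4j Browser tab, e.g. as entriesBySize(localStorage); whether "grass" is the culprit key is an assumption from the posts above, not confirmed:

```javascript
// Sketch: list a storage object's entries sorted by UTF-8 size, largest
// first, to locate the entry blowing past the ~10 MB limit. Works on
// window.localStorage or any plain key/value object.
function entriesBySize(storage) {
  const enc = new TextEncoder();
  return Object.keys(storage)
    .map((key) => ({ key, bytes: enc.encode(String(storage[key])).length }))
    .sort((a, b) => b.bytes - a.bytes);
}
```

Once you know the key, you could remove just that entry with localStorage.removeItem(key) instead of signing out and wiping all of local storage.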

Any ideas how to fix this?

Something went wrong: *"Error: Reference.set failed: First argument contains a string greater than 10485760 utf8 bytes in property 'users.auth0|' ('{"0":"{","1":"\"","2":"0","3":"\"","4":":","5":"\"...')"* and the application can't recover.

Neo4j Browser version: 3.2.15
Neo4j Server version: 3.4.11

It might be server memory related. I'm running WITH EXISTS where the property exists on only about 20 nodes out of 180K. However, the browser is behaving very strangely: every action takes about 2-4 seconds. It's not client related and not browser related (Chrome at 81); I just repeated it in Safari with exactly the same behaviour.

I'm using the standard Helm chart. I logged in to one of the pods, and top shows:

Mem: 3914700K used, 126932K free, 45760K shrd, 6360K buff, 243844K cached
CPU: 13% usr 9% sys 0% nic 52% idle 4% io 0% irq 0% sirq
Load average: 1.90 2.70 2.61 3/967 926
  PID  PPID USER  STAT  VSZ  %VSZ CPU %CPU COMMAND
    1     0 neo4j S    2364m  59%   1   1% /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -cp /plugins:/var/lib/neo4j/conf:/var/lib/neo4j/lib/*:/plugins/* -server -Xms512M -Xmx512M -XX:+UseG1GC -XX
  920     0 root  S     2312   0%   1   0% /bin/bash
  926   920 root  R     1532   0%   1   0% top

So I'm not too happy about the memory situation; still, that it manifests as this strange error is peculiar.

UPDATE: Because the error message mentions "users", I decided to do one last test and logged out of Browser Sync/Auth0: tada, in a split second all the results are showing. Unfortunately, this means it is a bug related to Browser Sync/Auth0. If you experience this, just open an Incognito/Private browser session, make sure you are not logged in, and run the query. It worked for me just now. Please confirm in this thread.

@renatospaka and @peter, this does indeed look like an issue with Neo4j Browser Sync, and it is unrelated to Google or cloud packaging. Logging out of Browser Sync is one option; you can also try :style reset (warning: this will cause you to lose your custom styles in Neo4j Browser). One hypothesis is that your styles are getting too large and, combined with Neo4j Browser Sync, causing errors, but I am not sure whether this will fix it.