
Loading just the fields in an object, rather than a fixed depth load?


In the older SDN it was possible to load an object, and have the SDN framework populate all the sub-objects. Is there any work to make this possible in the later OGM?

I know it's possible to do an unbounded depth load:

    session.load(MyClass, nodeId, -1)

but this pulls in the entire graph, whether it's referenced by MyClass or not (there are other objects that reference MyClass, so there are relationships in the graph).
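For reference, this is a minimal sketch of how the depth argument behaves on the OGM Session API (assuming org.neo4j.ogm is on the classpath; the generic helper method is made up for illustration):

```java
import org.neo4j.ogm.session.Session;

public final class LoadAtDepth {

    // Sketch: the depth argument decides how many relationship hops are
    // hydrated. 0 = scalar properties only, 1 = direct neighbours (the
    // default), -1 = unbounded, which is what pulls in the whole reachable
    // subgraph. Nothing here is derived from the domain model itself.
    public static <T> T load(Session session, Class<T> type, Long id, int depth) {
        return session.load(type, id, depth);
    }
}
```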

One of the benefits of using an object-mapper framework is that it does the heavy lifting for you. It's frustrating that every caller of load has to know to what depth the objects they are loading are required by their callers. This breaks a number of coding patterns :(.

I just want to load an object fully according to its domain model.

Is there any work in progress in this area? (I don't want to reinvent the wheel).



Hey Joe,

in short: You're right. Setting the depth to fetch to -1 brings in the entire graph.

I had to create a reproducer to see what kind of query gets generated.

Sadly, even with a domain class like this

public class BandEntity {
	private Long id;

	private String name;

	public Long getId() {
		return id;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}
}

and a repository like this

public interface BandRepository extends Neo4jRepository<BandEntity, Long> {
}

with no relationship whatsoever, a call like

Iterable<BandEntity> bands = this.bandRepository.findAll(-1);

issues this query: MATCH (n:Band) WITH n MATCH p=(n)-[*0..]-(m) RETURN p, returning everything connected to a band in this case.

I totally agree this is a no-go and we are tracking this here

Regarding the fact that one has to specify the fetch depth, I tend to agree with you. In the end, there is a schema (the class hierarchy as defined by the developer), and the OGM should take that into account while fetching, and that's about it.
However, we need some time to discuss those potential changes. We're going to fix the < 0 issue in one of the next releases, though.

Regarding the fact that one has to specify the fetch depth, it's funny that a save takes the object model into consideration, but the load doesn't. Very strange. The old Neo4j-SDN 3.x did work that way, and I thought that OGM was a redesign.

I don't know much about SDN 3.x.

As a matter of fact, the model is taken into account on both reads and writes, apart from when a fetch depth of -1 is requested.

But as promised, we already spiked an effort to improve that. You might want to give some feedback:

I'd recommend embracing the extra work of defining your Cypher queries manually, skipping the auto-generated ones, which are really just a sane starting point. Write your Cypher to pull exactly the pertinent subgraphs. Unfortunately this approach hits its limits in the fact that SDN queries have to be statically compilable, and in some cases you'll have to move the logic into a database extension.
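To illustrate that suggestion, a hand-written query on the repository might look like the sketch below (assuming spring-data-neo4j on the classpath; the HAS_MEMBER relationship type, the Member label, and the findWithMembers method are made up for this example):

```java
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.repository.query.Param;

public interface BandRepository extends Neo4jRepository<BandEntity, Long> {

    // Hand-written Cypher that fetches exactly the subgraph we care about:
    // the band plus its members, and nothing else. HAS_MEMBER and :Member
    // are hypothetical names for this sketch.
    @Query("MATCH (b:Band)-[r:HAS_MEMBER]->(m:Member) WHERE id(b) = $id RETURN b, r, m")
    BandEntity findWithMembers(@Param("id") Long id);
}
```

The trade-off the reply mentions is visible here: the query string is fixed at compile time, so the fetched shape cannot vary per caller without writing additional methods.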

@Jiropole A sane suggestion, but in lots of ways it undermines the benefits of using an OGM. Remember, OGM came from SDN, and SDN 3.x did this deep object loading by default. Now there's no way to load objects from the OGM (whether via load with a depth or via a Cypher query) with any kind of efficiency.

Given the OGM knows about the object hierarchy when saving, it ought to know about it when loading - otherwise users are pretty stuck between a rock and a hard place.

For example, our code uses this central method:

public <T> Set<T> getByProperty(Class<T> clazz, String property, def value) {
    def a = cyLabel(cyIdentifier("a"), clazz)
    def p = cyProperty(a, property)
    def q = cyQuery(cyMatch(cyPath(cyNode(a))), cyWhere(cyEqualTo(p, cyValue(value))), cyReturn(a.identifier))
    cypherService.cypher(q).objects as Set<T>
}

It worked well under SDN, when SDN did the deep loading efficiently.
What you're suggesting requires a complete refactoring of the code base, replacing the generic repository and making it much more like a standard database application, removing the benefits of polymorphism and NoSQL. :(

Hi @michael.simons, thanks for undertaking this spike. I've commented on the issue in github. Thanks again, Joe