Has anyone integrated a REST API with GRANDStack? Struggling to fetch from legacy REST API


I am building an MVP on top of the GRANDstack, and as long as we dealt with Neo4j / GraphQL, things were relatively easy: we already manage more than 100 relationships between several dozen node types. The central data schema gradually approaches 1K LoC and, so far, it runs surprisingly stable and hassle-free in development and testing.

There are just two REST API calls, both POST, and to integrate them, I did the following:

  1. Replicated the JSON specs as a GraphQL schema
  2. Wrote a RESTDataSource with the required authentication headers
  3. Wrote a custom resolver
  4. Added the custom resolver as a parameter to makeAugmentedSchema
  5. Added the RESTDataSource to Apollo Server
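To make step 2 concrete, here is a minimal sketch of such a data source. All names (PolicyAPI, createPolicy, the base URL, the token) are assumptions; in the real stack this would extend RESTDataSource from apollo-datasource-rest, but the fetch function is injected here so the shape is easy to inspect in isolation:

```javascript
// Hypothetical data source: every outgoing POST carries the required
// authentication headers. fetchFn is injected (e.g. node-fetch in production).
class PolicyAPI {
  constructor(fetchFn, baseURL, token) {
    this.fetchFn = fetchFn;
    this.baseURL = baseURL;
    this.token = token;
  }

  // The authentication headers attached to every request.
  headers() {
    return {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${this.token}`,
    };
  }

  async createPolicy(payload) {
    // POST the payload; the resolver decides how the response maps
    // onto the GraphQL type.
    return this.fetchFn(`${this.baseURL}/policy`, {
      method: 'POST',
      headers: this.headers(),
      body: JSON.stringify(payload),
    });
  }
}
```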

However, when I run the query, I get a proper JSON / REST response logged to the console, but the GraphQL return type remains null(!), meaning that for some strange reason there is no matching between the REST return type and the GraphQL type. In Playground, a non-nullable error is thrown, which is kinda expected when trying to return something from a null entity. However, I just cannot figure out how exactly to match the REST return to the GraphQL type.

I made a stripped-down sandbox for reproducing the issue. Any help or advice on how to integrate these two REST endpoints is most appreciated because I am totally stuck :-(

Also, is there a better way to do REST integration?

Thank you



Essentially, the REST return string needed to be parsed back as JSON.

return JSON.parse(policy);
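In context, the fix might look roughly like this (the resolver and data-source names are assumptions): the REST endpoint answers with a JSON *string*, so without parsing it back into an object, the keys never line up with the GraphQL type and every field resolves to null.

```javascript
// Sketch of the fixed resolver: the raw string from the REST call is parsed
// back into a plain object whose keys match the GraphQL Policy type.
const resolvers = {
  Query: {
    policy: async (_parent, args, { dataSources }) => {
      const policy = await dataSources.policyAPI.fetchPolicy(args.id);
      // If the response arrived as a string, parse it; otherwise pass through.
      return typeof policy === 'string' ? JSON.parse(policy) : policy;
    },
  },
};
```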

That said, integrating REST with the GRANDstack needs a lot of improvement.

I hope this doesn't come across wrong or anything, but the whole point of the GRANDstack is that you don't use REST at all. Now, if you wanted to make a RRANDstack, I'm sure there would be a lot of support for that. For some small projects I've played with, we've found quite a bit of success just using the neo4j-javascript driver to deal with the few things that were REST-oriented. But the main focus is the synergy between GraphQL and a native graph DB.

With all due respect, Sir,

internally, I am using 100% GraphQL and a graph DB. The underlying issue of the post was entirely related to integrating an external legacy REST service from a business partner. Because we do not want to deal with REST or SOAP in our internal projects, let alone expose them to our partner developers, it is my job to integrate all external services and expose them through one single unified business graph API.

After having developed a fair codebase with the GRANDstack, I am increasingly underwhelmed by its lack of flexibility beyond CRUD operations. I don't believe it will survive the test of time.

That’s an interesting point. I don’t work for Neo4j or anything and I’ve also done quite a bit with the GRANDstack. I’m curious what all do you find lacking? I’m wondering what limitations others have found in it as well as some I’ve run across. This might be a good thread to share some of that info in. Maybe get a broader dialogue started?

The GRANDstack lacks:

  1. Proper documentation for system integration. My next avenue is exposing SOAP as GraphQL...
  2. Proper debugging
  3. A customizable generator

Beyond that:

  1. Interfaces & type inheritance are completely broken and result in massive code duplication
  2. Data seeding breaks with even a few seed files; I just rewrote it to use async/await
  3. Apollo Schema Federation support is, well, broken for now
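The async/await seeding rewrite mentioned in point 2 might look roughly like this (runSeeds, seedFiles, and the session shape are assumed names, not the actual code): running the seed files strictly one after another instead of firing them in parallel keeps the driver's session from being overwhelmed.

```javascript
// Hypothetical sequential seeder: each seed file holds one Cypher statement
// plus its parameters; await forces strict one-at-a-time execution order.
async function runSeeds(session, seedFiles) {
  const results = [];
  for (const file of seedFiles) {
    // The next statement only starts once the previous one has committed.
    const summary = await session.run(file.cypher, file.params);
    results.push(summary);
  }
  return results;
}
```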

The last point inflicts some significant pain as my master schema keeps growing and service modularity will eventually become a hard requirement, but I guess by then the GRANDstack will have been retired.

All that would almost be excusable if it weren't for JavaScript. I wasted so much time debugging internals of the stack just to find a missing configuration in Babel. JS is just way too fragile for anything other than browser scripting.

For operations, the GRANDstack in its current form is a nightmare, at best. However, I believe the idea in itself is truly transformative with proper execution in a statically checked programming language.

The idea of a schema first development system with a statically typed language that has strong and stable tooling would make the entire idea a complete no-brainer.

I can only think of Go at this point because Swift, while a good and modern language, lacks the server-side tooling required for serious operations. Luckily, excellent GraphQL support exists for Go, so it all comes down to building a Cypher query generator to get the best of both worlds: schema-first development with a statically typed language, plus the flexibility of statically generating all query / mutation scripts for Neo4j with the option of customization.

A GoGraphQL stack built on Go and gqlgen for Neo4j would certainly elevate the entire idea from a "nice, but PoC at most" project to a dead-serious contender for mission-critical projects.



Thanks for sharing your thoughts. I'll have to look more into Go. I haven't had the same issues with JavaScript that you have, but my use cases are pretty simple. If/when I start to experience some growing pains, I'll definitely explore this route.


Just a quick question: have you, by any chance, dealt with extracting fields from one GraphQL query and using those fields as input for another query before returning the result to the client?

This one really bugs me because I simply cannot figure out whether I am too stupid or whether that task is just unreasonably cumbersome in the GRANDstack.
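One way to express that chaining is a single custom resolver that awaits the first query, plucks the fields it needs, and feeds them into the second call before anything is returned to the client. A minimal sketch, where every name (enrichedCustomer, fetchCustomer, fetchOffers, region, segment) is an assumption for illustration:

```javascript
// Hypothetical chained resolver: query one supplies the inputs for query two.
async function enrichedCustomer(_parent, args, context) {
  // First query: fetch the customer record.
  const customer = await context.fetchCustomer(args.id);
  // Extract the fields the second query needs as input.
  const { region, segment } = customer;
  // Second query: use those fields to fetch the matching offers.
  const offers = await context.fetchOffers({ region, segment });
  // Only now does the combined result go back to the client.
  return { ...customer, offers };
}
```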


First of all, thanks for your thoughts @marvin-hansen! You've said just about everything I have been thinking for a little while now, especially with respect to the customization of the generators and Go as an apt ecosystem for developing that.

It might be a little hackish to implement, but with regard to your problem and the limited extent to which you'd have to do it, I might suggest using the custom @cypher directive to call the REST API you need through APOC library functions:

WITH "https://random.site/api/resource/$id" AS url
CALL apoc.load.jsonParams(url, {`Method`: 'POST', `Accept`: 'application/json', `Other-Headers`: $context.variables}, <payload>) YIELD value AS res
CALL apoc.create.vNode(['ResourceType'], res) YIELD node AS resNode
RETURN resNode

If there are any relationships between your REST data and the data housed in Neo4j, this approach would allow you to mock up synthetic relationships as well. The concept also works for direct database-to-database connections or even other GraphQL endpoints.
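For completeness, in neo4j-graphql-js a statement like that hangs off a schema field via the @cypher directive. A rough sketch, where the type and field names are assumptions and the `<payload>` placeholder is left as-is:

```graphql
type Query {
  resource(id: ID!): ResourceType
    @cypher(statement: """
      WITH "https://random.site/api/resource/$id" AS url
      CALL apoc.load.jsonParams(url, {`Method`: 'POST', `Accept`: 'application/json'}, <payload>) YIELD value AS res
      CALL apoc.create.vNode(['ResourceType'], res) YIELD node AS resNode
      RETURN resNode
    """)
}
```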

Maybe this helps, maybe not, but good luck in any case!

Thank you @imkleats,

that APOC function is pretty damn cool. IMHO, APOC is one of the top three reasons to use Neo4j.

That said, I am already done with the REST integration at this point. I have two more SOAP legacy services to integrate into my core system, so let's see how that goes. GraphQL in itself remains invaluable for heterogeneous online system integration. It just needs proper middleware.

Speaking of that middleware issue, I had, and still have, to battle certain issues:

  1. Weaving functions and resolvers together
  2. Enforcing a strict order of function calls
  3. Maintaining a modular & reusable codebase, b/c mine is growing way faster than I wish it would

Addressing these issues, the folks from Prisma Labs came up with some solid solutions I want to share.
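The core idea behind resolver middleware, as popularized by Prisma's graphql-middleware package, can be sketched with plain higher-order functions (applyMiddleware and the example layers below are illustrative, not the library's actual API): each layer wraps the next, which both weaves cross-cutting functions into resolvers and enforces a strict, explicit execution order.

```javascript
// Compose middleware layers around a resolver; the first layer listed runs first.
const applyMiddleware = (resolver, ...layers) =>
  layers.reduceRight(
    (next, layer) => (parent, args, context, info) =>
      layer(next, parent, args, context, info),
    resolver
  );

// Example layer: record that it ran, then delegate to the next layer.
const logCalls = (trace) => (next, parent, args, context, info) => {
  trace.push('log');
  return next(parent, args, context, info);
};

// Example layer: reject unauthenticated calls before the resolver runs.
const requireAuth = (trace) => (next, parent, args, context, info) => {
  trace.push('auth');
  if (!context.user) throw new Error('not authenticated');
  return next(parent, args, context, info);
};
```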


The main drivers of those core issues are the following applied best practices:

  1. Implement (public) business logic as resolvers and expose it as a GraphQL API
  2. Implement workflows across the API as resolvers and expose the workflow entry points as a GraphQL API
  3. Integrate all external (non-GraphQL) web services as GraphQL and abstract them away as workflows interwoven with the remaining API

Obviously, on paper, the GRANDstack seems like a good idea because you get all the CRUD operations generated while still being able to manually override custom queries & mutations and link them to custom resolvers.

On paper.

Three weeks in, I sincerely regret my decision. I just wish Neo4j would actually build their own projects with the GRANDstack to get an idea of how painful it really is.

When you don't eat your own dog food, how can you expect anyone else to do so?

That said, once this project is over, the first and foremost top-priority follow-up project will be a proper Go-based GraphQL software infrastructure that replaces the GRANDstack entirely with something we can rely on in operations.

Lesson learned.

I'm not sure it's fair to criticize the Neo4j team quite so harshly :-p, if only because it diminishes the effort their folks, like William Lyon, have put into building the starter projects/demos and into evangelizing the stack (all while continuing to support additional features/baseline functionality). That being said, it's hard to argue that it wouldn't be nice to have some more documentation/examples for advanced use cases. At some point, the community needs to take the reins, but I think the foundation might need to be a little stronger before that can happen.

This might be an incorrect impression, but it looks like a lot of the architecture behind the package was inspired by Join-Monster's approach to SQL transpilation. One of the issues with this is that the AST for an SQL query is quite a bit different in structure from the GraphQL query AST, which requires more bookkeeping in a recursive depth-first traversal. That bookkeeping has been done through arguments in the recursive function calls.

When you stop to think about the Cypher AST, it is practically the same shape as the GraphQL query AST already. Moreover, because of this similarity, when you traverse the GraphQL query AST with visit (or visitWithTypeInfo, really), you have access to all the information you need to construct the Cypher AST node the moment you enter it (i.e., the visit function has already abstracted away the bookkeeping) and can simply use whatever construct floats your boat to await the results of nested child nodes (e.g., Promise, RxJS Observable, channels in Go).
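A toy illustration of that one-to-one shape (a plain object stands in for the GraphQL query AST so there is no graphql-js dependency; real code would walk the parsed document with visit): every selection node translates directly into one Cypher map-projection entry, so the traversal needs no extra bookkeeping.

```javascript
// Translate a mock selection tree into a Cypher-style map projection.
// Scalars become ".field"; nested selections recurse into a pattern
// comprehension, mirroring the nesting of the GraphQL query itself.
function toCypherProjection(varName, selection) {
  const parts = selection.fields.map((field) => {
    if (!field.fields) return `.${field.name}`; // scalar field
    const child = field.name;
    return `${field.name}: [(${varName})-->(${child}) | ${toCypherProjection(child, field)}]`;
  });
  return `${varName} { ${parts.join(', ')} }`;
}
```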

I think the Neo4j team is aware of some of the limitations around extensibility and customization that the current architecture causes. Whether they pursue a visitor pattern like I've described above to address it, I do not know, but I'd be happy to work with you on your Go port. I was actually thinking of doing it independently on my own anyway. Let me know.