Needed: Cypher Examples and resources

I am in need of several Cypher examples of various sizes and functions to see what I'm supposed to make.

I'm still in the copy/paste-and-butcher stage of learning to code. The tutorials gave a decent understanding of the tools to use, but I would greatly appreciate a wider selection of examples than the few I have found so far.

So I'm looking for anything you think might help. Send everything — I would like the moderators to curate the responses to this post and append them to their collection.

I suggest you start with the Cypher manual. It has examples of each of the things taught. I learned a lot from starting there.

Also look into the "Beginner" level classes in GraphAcademy. There is a data modeling class and a Cypher class.

Free, Self-Paced, Hands-on Online Training | Free Neo4j Courses from GraphAcademy

I can provide Cypher examples if you have a specific case you want to discuss.


I will go over those resources you suggested. I would really appreciate help with my specific case too.

I'm taking inspiration from the Panama Papers and the Wikidata process. I'm trying to apply a variant of clean_wiki, which took Wikipedia and turned it into Wikidata, and use it on news articles, with each article's metadata stored as property values.

The goal is to break an article into "facts", then merge multiple related articles into one graph, with the duplication of facts across multiple sources "confirming" them and showing trends across the field.
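One way that "duplicate facts confirm each other" idea could be sketched in Cypher — note the `Fact` label, its property keys, and the `ASSERTS` relationship are all hypothetical names for illustration, not something from the tutorial:

```cypher
// MERGE deduplicates: a fact asserted by several articles exists once,
// and the count of ASSERTS relationships acts as a confirmation score.
MERGE (f:Fact {subject: "Acme Corp", predicate: "ACQUIRED", object: "Widget Inc"})
WITH f
MATCH (a:Article {uri: $articleUri})
MERGE (a)-[:ASSERTS]->(f)
WITH f
MATCH (:Article)-[r:ASSERTS]->(f)
RETURN f.subject, f.predicate, f.object, count(r) AS confirmations
```

Running this for the same fact from two different articles would leave one `Fact` node with `confirmations` = 2.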

As of now I'm using the "Tutorial: Build a Knowledge Graph using NLP and Ontologies" as a starting point.

Like the tutorial, my test set is currently set up so that each node is a full article with metadata as property values. The next step is to reduce it further into "facts". I was going to use this snippet as a starting point for the NLP extraction. I am not trying to merge in the ontology portion, so I was working on how to simplify it down when I decided to look for examples.

Thank you for the help

// Find the article, run Google Cloud NLP entity extraction on its body,
// then link it to a Resource node for each entity that has a Wikipedia URL.
MATCH (a:Article {uri: "https://dev.to/lirantal/securing-a-nodejs--rethinkdb--tls-setup-on-docker-containers"})
CALL apoc.nlp.gcp.entities.stream(a, {
  nodeProperty: 'body',
  key: $key
})
YIELD node, value
SET node.processed = true
WITH node, value
UNWIND value.entities AS entity
WITH node, entity
WHERE entity.metadata.wikipedia_url IS NOT NULL
MERGE (page:Resource {uri: entity.metadata.wikipedia_url})
SET page:WikipediaPage
MERGE (node)-[:HAS_ENTITY]->(page)

Is there something specific in this code, or some other code, that you want to investigate?

Right now I need to unpack new nodes and connections from the existing full-article nodes, and then write them to the graph.

The example does that, and it also works in the ontology overlap from Wikidata, which I'm not trying to set up right now.

My first priority is just unpacking connected nodes and figuring out how big the chunks I pass should be for the best results.
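For experimenting with chunk size, one option is to wrap the extraction in `apoc.periodic.iterate` so the number of articles per transaction is a knob you can turn. This is only a sketch: the `batchSize` of 25 is arbitrary, and it assumes the same `processed` flag and property names as the snippet above:

```cypher
// Process unprocessed articles in batches; adjust batchSize to tune chunking.
CALL apoc.periodic.iterate(
  "MATCH (a:Article) WHERE a.processed IS NULL RETURN a",
  "CALL apoc.nlp.gcp.entities.stream(a, {nodeProperty: 'body', key: $key})
   YIELD node, value
   SET node.processed = true
   WITH node, value
   UNWIND value.entities AS entity
   WITH node, entity
   WHERE entity.metadata.wikipedia_url IS NOT NULL
   MERGE (page:Resource {uri: entity.metadata.wikipedia_url})
   SET page:WikipediaPage
   MERGE (node)-[:HAS_ENTITY]->(page)",
  {batchSize: 25, params: {key: $key}}
)
YIELD batches, total, errorMessages
RETURN batches, total, errorMessages
```

The returned `errorMessages` is useful here because NLP calls on very large `body` properties can fail individually without aborting the whole run.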