[First of these livestreams is TOMORROW, see below]
We just announced a few new Online Meetups. If you're not familiar with Online Meetups, they're livestream events with members of the Neo4j Community. Most Online Meetups are hosted by @mark.needham or @karin.wolok.
To stay up to date with all events, please join the Neo4j Online Meetup group.
We hope you enjoy them, and please let us know if you have suggestions for future presentations!
Football (Soccer) exploration with Neo4j and RLang
Tomorrow @ 8:30am SF, 16:30 London
@chucheria Bea Hernandez, Data Scientist at Olympic Channel and co-organiser of R-Ladies Madrid.
Bea will be showing how she used the new Neo4j R driver to analyze home advantage and competitiveness in football.
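To give a flavour of this kind of analysis (this is not Bea's actual code or data), here is a minimal sketch of estimating home advantage as the share of league points won by the home side, using a hypothetical handful of results:

```python
# Hypothetical match results: (home_team, away_team, home_goals, away_goals)
matches = [
    ("Real Madrid", "Barcelona", 2, 1),
    ("Barcelona", "Sevilla", 3, 0),
    ("Sevilla", "Real Madrid", 1, 1),
    ("Real Madrid", "Sevilla", 0, 2),
]

def home_points_share(matches):
    """Fraction of all awarded points won by the home side (0.5 = no advantage)."""
    home_pts = away_pts = 0
    for _, _, hg, ag in matches:
        if hg > ag:
            home_pts += 3      # home win
        elif hg < ag:
            away_pts += 3      # away win
        else:
            home_pts += 1      # draw: a point each
            away_pts += 1
    return home_pts / (home_pts + away_pts)

print(round(home_points_share(matches), 2))  # → 0.64
```

With a real dataset loaded into Neo4j, the same ratio could be computed over `(:Team)-[:PLAYED_IN]->(:Match)` patterns in Cypher; a value well above 0.5 suggests a home advantage.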
Event-driven Graph Analytics using Neo4j and Apache Kafka
Next Thursday, 30 May @ 8:30am SF, 16:30 London
@ljubica.lazarevic, Engineer at Neo4j
We often want to derive insight from analytical processing of our operational data. For example, we may want to leverage the connectedness of customers, products, and their networks to identify recommendation opportunities. However, running analytical workloads on an operational database is seldom a good idea, so there will usually be a separate database for each task. We may also want to stream insights as soon as they become available.
This brings its own challenges: How do we keep the data in both database instances in sync? How do we stream results back onto our transactional database as our analysis generates them?
In this talk we'll describe a scenario where graph databases in a cluster-and-read-replica configuration are used both for operational work and for delivering the analytical work. We'll show how this architectural pattern can be combined with Kafka to stream analytical results back to the operational databases as soon as they're available, while ensuring all of the databases stay up to date with the same data. The example uses the newly released Apache Kafka plugin for Neo4j.
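Ahead of the talk, here is a rough sketch of the "stream results back" half of that pattern. The topic name, payload shape, and Cypher template below are illustrative assumptions, not the plugin's required format; the idea is that the analytical side emits an event, and a Kafka sink applies a parameterised Cypher statement to the operational database:

```python
import json

# Hedged sketch: payload shape and topic are assumptions for illustration.
def recommendation_event(customer_id, product_id, score):
    """Serialise an analytical result for publishing to a Kafka topic
    (e.g. a hypothetical 'recommendations' topic)."""
    return json.dumps({
        "customerId": customer_id,
        "productId": product_id,
        "score": score,
    })

# A Kafka producer would publish this message; on the consuming side, a sink
# could apply a Cypher template along these lines to the operational graph:
#   MATCH (c:Customer {id: event.customerId}),
#         (p:Product  {id: event.productId})
#   MERGE (c)-[r:RECOMMENDED]->(p)
#   SET r.score = event.score
msg = recommendation_event("c42", "p7", 0.87)
print(msg)
```

The key design point is that the operational database never runs the analytics itself; it only applies small, idempotent writes (`MERGE` plus `SET`) driven by the event stream.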
Getting Started with Provenance and Neo4j
Thursday, 6 June @ 8:30am SF, 16:30 London
Stefan Bieliauskas, Software Engineer at casecheck GmbH
We all want to know "Where does our meat come from?" or "Is this information reliable, or is it fake news?". Questions like these are always about the provenance of information or physical objects.
In this session we'll learn how to use Neo4j to store and query provenance data.