Export a whole Neo4j database / Cypher result to GraphJSON

I've already had a look at different posts like this and this, but nothing seems to be answered 100%.
My current problem is that I want to visualize, and ideally analyze, my Neo4j graph with a library (or software/tool).
The database server is running on a remote (virtual) machine, and it seems there is no way to export the database to a format I can work with.
I've tried exporting the graph as a .graphml file to import into Gephi, but Gephi doesn't find the properties. Gephi streaming with the APOC procedures and the graph-streaming plugin also does not work, because it's a remote server (the same goes for the tool mentioned here).
Now I'm experimenting with Alchemy.js... So far, so good. But it seems there's no way to export the graph/query to the GraphJSON format?
Is there really no "easy" way to export the data?
Thanks for your help in advance!

This is how I would proceed:
Run this query (from the post you mentioned) in the Neo4j Browser or via any Bolt driver:
MATCH (a)-[r]->(b)
WITH collect({
  source: id(a),
  target: id(b),
  caption: type(r)
}) AS edges
RETURN edges
Now that you have the data loaded, you can simply download it as JSON using the download button (if you are using a Bolt driver, skip this step).
Whether you manually downloaded the JSON from the Neo4j Browser or used a Bolt driver, you will end up with something like this:
{
  "columns": [
    "edges"
  ],
  "data": [
    {
      "row": [
        [
          {
            "source": 31288,
            "target": 152,
            "caption": "HAS_PAYMENT_METHOD"
          }
        ]
      ],
      "meta": [
        null
      ],
      "graph": {
        "nodes": [],
        "relationships": []
      }
    }
  ]
}
Now all you have to do is filter out the data.row results and you are done. Using a Bolt driver is probably the better choice, as you have to clean up the data anyway and it doesn't run into issues with returning a lot of data to the browser (which might crash).
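For illustration, the filtering step could look roughly like this, assuming you saved the Browser result to a file named export.json (the file name is just an example) and that it has the shape shown above:

import json

# Load the JSON exported from the Neo4j Browser (file name is an example).
with open("export.json") as f:
    exported = json.load(f)

# Each entry in "data" has a "row" holding the single "edges" column
# returned by the query above.
edges = []
for entry in exported["data"]:
    edges.extend(entry["row"][0])

print(edges[:3])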
Update: added a Python version.
# Requires the official Neo4j Bolt driver (neo4j-driver 1.x);
# in newer driver versions the import is: from neo4j import GraphDatabase
from neo4j.v1 import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))
session = driver.session()
result = session.run("MATCH (a)-[r]->(b) "
                     "WITH collect({source: id(a), target: id(b), caption: type(r)}) AS edges "
                     "RETURN edges")
for record in result:
    print(record["edges"])
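If the goal is a GraphJSON file that Alchemy.js can load, one possible extension is to also collect the nodes and write both lists out with json.dump. This is only a sketch: the output keys ("nodes"/"edges") and the node fields ("id", "caption") are assumptions based on typical GraphJSON samples rather than something guaranteed by Alchemy.js, and coalesce(n.name, '') simply picks a display name if one exists.

import json
from neo4j.v1 import GraphDatabase  # newer drivers: from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))
session = driver.session()

# Collect nodes; "caption" falls back to an empty string if there is no name property.
nodes = session.run(
    "MATCH (n) RETURN collect({id: id(n), caption: coalesce(n.name, '')}) AS nodes"
).single()["nodes"]

# Collect edges, same shape as in the answer above.
edges = session.run(
    "MATCH (a)-[r]->(b) "
    "RETURN collect({source: id(a), target: id(b), caption: type(r)}) AS edges"
).single()["edges"]

# Write a GraphJSON-style file (key names are an assumption, see note above).
with open("graph.json", "w") as f:
    json.dump({"nodes": nodes, "edges": edges}, f, indent=2)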
Hope this helps

Related

Room object in Revit files

I followed the instructions in the link below to extract Room objects from Revit models:
https://forge.autodesk.com/blog/new-rvt-svf-model-derivative-parameter-generates-additional-content-including-rooms-and-spaces
I made the changes as instructed and tested the sample Revit file (rac_basic_sample_project.rvt), but I still don't see the rooms or the viewables (phases). Below is the request I posted. Am I missing anything?
{
  "input": {
    "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YzQ4ZDUxNDNhMDRiNDAxNmI3ODYxY2NlMzQ2ZDkyNjdfZmFjaWxpdHlfOTUvZWIyYzMzNDgtNDAxYS00ZjQ3LTgwM2EtMjM1OGYwYmI0YjY2LnJ2dA"
  },
  "output": {
    "destination": {
      "region": "us"
    },
    "formats": [
      {
        "type": "svf",
        "views": [
          "3d"
        ],
        "advanced": {
          "generateMasterViews": true
        }
      }
    ]
  }
}
I just tested the feature and I can see the room data.
The JSON payload seems ok, so try checking the following things:
Make sure you use the x-ads-force header (explained in the blog post you linked to); if you had already processed your Revit model before, triggering a new Model Derivative job will not do anything unless you force the translation (see the request sketch below).
Try using another design, ideally from a newer version of Revit; in my screenshot I'm using one of the official samples for Revit 2020, although I remember being able to get the room data from older samples as well.
The room data is only available in certain "viewables", so make sure you're looking at the right one; in my sample project, for example, the room data is not available in the "{3D}" viewable but it is available in the "Working Drawings" viewable.
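For reference, resubmitting the job with the force flag could look roughly like this in Python (a sketch using the requests library; the access token and URN are placeholders, and the endpoint and x-ads-force header are the documented Model Derivative ones):

# Sketch of re-triggering the Model Derivative job with x-ads-force.
# ACCESS_TOKEN and the urn below are placeholders.
import requests

ACCESS_TOKEN = "<your 2-legged OAuth token>"
payload = {
    "input": {"urn": "<base64-encoded urn>"},
    "output": {
        "destination": {"region": "us"},
        "formats": [{
            "type": "svf",
            "views": ["3d"],
            "advanced": {"generateMasterViews": True}
        }]
    }
}

resp = requests.post(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
    json=payload,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # force a re-translation even if the model was already processed
        "x-ads-force": "true",
    },
)
print(resp.status_code, resp.json())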

getDegree()/isOutgoing() functions don't work in GraphAware/neo4j-to-elasticsearch mapping.json

Neo4j Version: 3.2.2
Operating System: Ubuntu 16.04
I use the getDegree() function in the mapping.json file, but the returned value is always null. I'm using the Movie/Actor dataset from the Neo4j tutorial.
Output from the Elasticsearch request:
mapping.json
{
  "defaults": {
    "key_property": "uuid",
    "nodes_index": "default-index-node",
    "relationships_index": "default-index-relationship",
    "include_remaining_properties": true
  },
  "node_mappings": [
    {
      "condition": "hasLabel('Person')",
      "type": "getLabels()",
      "properties": {
        "getDegree": "getDegree()",
        "getDegree(type)": "getDegree('ACTED_IN')",
        "getDegree(direction)": "getDegree('OUTGOING')",
        "getDegree('type', 'direction')": "getDegree('ACTED_IN', 'OUTGOING')",
        "getDegree-degree": "degree"
      }
    }
  ],
  "relationship_mappings": [
    {
      "condition": "allRelationships()",
      "type": "type"
    }
  ]
}
Also, if I use the isOutgoing(), isIncoming(), or otherNode functions in the relationship_mappings properties part, Elasticsearch never loads the relationship data from Neo4j. I think I have misunderstood this sentence: "only when one of the participating nodes 'looking' at the relationship is provided", from this page: https://github.com/graphaware/neo4j-framework/tree/master/common#inclusion-policies
mapping.json
{
  "defaults": {
    "key_property": "uuid",
    "nodes_index": "default-index-node",
    "relationships_index": "default-index-relationship",
    "include_remaining_properties": true
  },
  "node_mappings": [
    {
      "condition": "allNodes()",
      "type": "getLabels()"
    }
  ],
  "relationship_mappings": [
    {
      "condition": "allRelationships()",
      "type": "type",
      "properties": {
        "isOutgoing": "isOutgoing()",
        "isIncoming": "isIncoming()",
        "otherNode": "otherNode"
      }
    }
  ]
}
BTW, is there any page that lists all of the functions we can use in mapping.json? I know two of them:
github.com/graphaware/neo4j-framework/tree/master/common#inclusion-policies
github.com/graphaware/neo4j-to-elasticsearch/blob/master/docs/json-mapper.md
but it seems there are more, since I can use getType(), which isn't listed on either of the above pages.
Please let me know if I can provide anything further to help solve the problem.
Thanks!
The getDegree() function is not available, contrary to getType(). I will explain why:
When the mapper (the part responsible for creating the node or relationship representation as an ES document) is doing its job, it receives a DetachedGraphObject, i.e. a detached node or relationship.
Detached means that this happens outside of a transaction, so query operations against the database are no longer available. getType() is available because it is part of the relationship metadata and is cheap; if we wanted to do the same for getDegree(), it could be considerably more costly during the creation of the detached object (which happens in a transaction), depending on the number of different relationship types, etc.
This is, however, something we are working on, by externalising the mapper into a standalone Java application coupled with a broker such as Kafka or RabbitMQ between Neo4j and this app. We will not, however, offer the possibility to re-query the graph in the current version of the module, as it can have serious performance impacts if the user is not very careful.
Lastly, the only suggestion I can give you is to keep a property on your node with the degree values you need replicated to ES.
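As a rough illustration (not part of the module), such a property could be maintained with a periodic Cypher update via the Bolt driver; the property name degree and the ACTED_IN relationship type below are just examples:

# Rough sketch: recompute and store each Person's ACTED_IN degree as a node
# property, so the regular node replication picks it up. Property name and
# relationship type are examples only.
from neo4j.v1 import GraphDatabase  # newer drivers: from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))
with driver.session() as session:
    session.run(
        "MATCH (p:Person) "
        "SET p.degree = size((p)-[:ACTED_IN]->())"
    )
driver.close()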
UPDATE
Regarding this part of the documentation:
For Relationships only when one of the participating nodes "looking" at the relationship is provided:
This applies only when you are not using the JSON definition, so you can use one or the other; the JSON definition was added later, and the two cannot be used together.
To answer this part: it means that the nodes on the incoming or outgoing side, depending on the definition, should be included in the inclusion policy for nodes, e.g. hasLabel('Employee') || hasProperty('form') || getProperty('age', 0) > 20. If you have an allNodes() policy then it is fine.

Which file should I keep: .io-config.json or ionic.config.json?

I recently migrated my Ionic app to Ionic Cloud and, after running ionic io init on the command line, I noticed that I ended up with two (config?) JSON files that seem to have the same purpose. However, they have different names and I am not sure which one should be kept. The contents are as follows:
.io-config.json
{
  "app_id": "id",
  "api_key": "key"
}
ionic.config.json
{
  "name": "name",
  "app_id": "id",
  "watchPatterns": [
    "www/**/*",
    "!www/lib/**/*"
  ]
}
Which one should be kept?
According to an expert on Ionic's Slack, both files should be kept; they each have their own purpose. .io-config.json holds the Ionic Cloud credentials (app_id and api_key), while ionic.config.json holds the project configuration, such as the name and the watch patterns.

How to export a large MongoDB collection to a CSV file on button click via Node/Express

I have a DB containing a large dataset of JSON objects (an array), around ~10k for now. I want to fetch them all from the DB, generate a CSV, and download it via a route.
Here's a sample JSON object:
{
  "_id" : ObjectId("56bc3a7da30befd952349542"),
  "asin" : "B00T2Q1S18",
  "searchRank" : 113,
  "name" : "FREEing Racing Miku 2014 (EV Mirai Version) Figma Action Figure",
  "createdAt" : ISODate("2016-02-11T07:38:37.774Z"),
  "updatedAt" : ISODate("2016-02-11T07:44:07.667Z"),
  "linkIds" : [
    "25b1071a9e908806338c4106"
  ],
  "price" : {
    "amazon" : 50.49
  },
  "ranks" : [
    {
      "number" : 43619,
      "category" : "Baby Toys"
    }
  ],
  "upc" : ""
}
Is there a better npm (Node) library which can convert my JSON collection to CSV?
I have tried the following, but they don't work on a large dataset:
papaparse / babyparse
json2csv
Are there any other libraries you know of, or a better approach?
Thanks.
I have done this before using an npm library called csv-builder. Based on my experience I can say that it gives good performance and it is quite easy to implement.
I have made a CSV of about 200,000 (2 lakh) rows and around 8-10 columns, with manipulation in between, using this library.
I tried many libraries and at last I found a great npm module which handles the large-dataset problem nicely:
https://www.npmjs.com/package/csvwriter
I exported upwards of 500,000 (5 lakh) JSON objects with it (for now).
Here is my small demo app that exports a large JSON dataset from MongoDB to CSV via Node and Express.
Hope this helps others as well when they come across this.
Cheers,
Thanks.

Play application.conf: where to consult the list of all possible variables?

Where can I find the list of all possible variables that can be set in Play's application.conf?
I can't find this information on the Play Framework website.
Thank you
If you use an IDE such as Eclipse or IntelliJ, you can inspect Play.application().configuration() at runtime while debugging, and it will contain all possible configuration key/value pairs. It briefly looks as follows:
{
  "akka": { },
  "application": { },
  "applyEvolutions": { },
  "awt": { },
  "db": { },
  "dbplugin": "disabled",
  "evolutionplugin": "enabled",
  "file": { },
  "java": { },
  "jline": { },
  "line": { },
  "logger": { },
  "os": { },
  "path": { },
  "play": { },
  "promise": { },
  "report": { },
  "sbt": { },
  "sun": { },
  "user": { }
}
There is no such list of all possible variables, since the application.conf is arbitrarily extensible by all sorts of tools and components, most of them third party, and can contain any config the user wants.
For example: the configuration detailing Play's thread pools is really just Akka configuration.
The key things (DB config, languages, evolutions) are in the template, either with default values or commented out, when you initialise a new Play application.
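As an illustration, the template application.conf of an older Play 2.x project contains entries roughly like the following (an excerpt from memory, so treat the exact keys and values as indicative rather than authoritative):

# Secret key and languages
application.secret = "changeme"
application.langs = "en"

# Database configuration (commented out until you enable a database)
# db.default.driver = org.h2.Driver
# db.default.url = "jdbc:h2:mem:play"

# Evolutions (enabled by default; disable explicitly if needed)
# evolutionplugin = disabled

# Logging
logger.root = ERROR
logger.play = INFO
logger.application = DEBUG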
The config page on the site discusses some additional configuration you might need, but this mostly relates to concerns external to the application, like launching and logging.