Load JSON file to Neo4j causal cluster in Google Cloud

I have a huge JSON file (1 GB) and I want to insert its data into a Neo4j database on Google Cloud Platform. I have uploaded the file to the /var/lib/neo4j/import directory and tried the following Cypher code to create my nodes in Neo4j. Unfortunately, I receive the error below.
CALL apoc.load.json('file:///nov.json') YIELD value
UNWIND value AS val
MERGE (a:Article {title: val.title, text: val.text, url: val.url})
This is the error:
Failed to invoke procedure `apoc.load.json`: Caused by: java.lang.RuntimeException: Can't read url or key file:/var/lib/neo4j/import/nov.json as json: /var/lib/neo4j/import/nov.json (No such file or directory)
I would like to know the best way to load this JSON file into the Neo4j database.

Once apoc.import.file.enabled=true is set, imports should use file URLs relative to the dbms.directories.import directory.
I believe the URL should be file://nov.json (and not the absolute path file:///nov.json).
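Following that suggestion, here is a sketch of the adjusted import. Since the file is around 1 GB, it may also be worth batching the MERGE with apoc.periodic.iterate so the whole import does not run in a single transaction (the batchSize of 1000 is just an illustrative value):
CALL apoc.periodic.iterate(
  "CALL apoc.load.json('file://nov.json') YIELD value UNWIND value AS val RETURN val",
  "MERGE (a:Article {title: val.title, text: val.text, url: val.url})",
  {batchSize: 1000})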

Related

Dump dictionary as JSON file to Google Cloud Storage from Jupyter Notebook on Dataproc

I am using Spark on a Google Dataproc cluster. I have created a dictionary in a Jupyter notebook which I want to dump into my GCS bucket. However, the usual way of dumping to JSON using fopen() does not seem to work on GCP. So how can I write my dictionary as a .json file to GCS? Or is there another way to save the dictionary?
It's funny: I can write a Spark dataframe to GCS without any hassle, but apparently I can't get JSON onto GCS unless I have it on my local system!
Please help!
Thank you.
The file in GCS is not in your local file system, which is why you cannot call "fopen" on it. You can either save to GCS directly using a GCS client (for example, this tutorial), or treat the GCS location as an HDFS destination (for example, saveAsTextFile("gs://...")).
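A minimal sketch of the client-library route, assuming the google-cloud-storage package is available on the cluster and using placeholder bucket and object names:
import json
from google.cloud import storage

my_dict = {"key": "value"}                 # the dictionary built in the notebook

client = storage.Client()                  # picks up the cluster's default credentials
bucket = client.bucket("my-bucket")        # placeholder bucket name
blob = bucket.blob("output/my_dict.json")  # placeholder object path
blob.upload_from_string(json.dumps(my_dict), content_type="application/json")
After this runs, the file is readable at gs://my-bucket/output/my_dict.json.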

Encoding error in JSON file when copied from SharePoint to Azure Blob Storage

I have a JSON file in SharePoint, and I am using Logic Apps to get the file and dump it into Blob Storage. I then need to open that JSON file in Databricks Python using this code:
blobstring = blob_service.get_blob_to_bytes(INPUTCONTAINERNAME, INPUTFILEPATH)
myJson = blobstring.decode('utf8')
data = json.loads(myJson)
When I try to open the JSON in Python, it gives me the following error:
JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig)
After decoding with "utf-8-sig" instead, I get this error:
JSONDecodeError: Unterminated string starting at: line 1 column 103775708
IMPORTANT: When the Logic App dumps the JSON from SharePoint to the blob, the lease state on the blob storage is expired. I downloaded the JSON from the blob and re-uploaded the same file manually (which made the lease state available), and the Python code opened the JSON perfectly.
I thought it was an issue with the lease state, so after the Logic App dumped the JSON to the blob, I made the lease state available from code (so I don't have to download and re-upload manually) and tried to open the JSON, but received the same errors again.
In my Logic App I am using Get file content to read the content of the .json file and Create blob to create a blob from that content.
Can anybody point me in the right direction?
It seems your storage blob has some setting that causes the lease state to be expired; the lease state becomes "expired" just after the lease is acquired. I tested this in my Logic App (get a file from SharePoint and create a blob), and its lease state is "available".
And it works fine when I get this file from code. Given that your "Unterminated string" error points at a byte offset around 100 MB, the blob content itself may also have been incomplete at the time you first read it.
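For the BOM part specifically, the decode the first error message asks for looks like this (a sketch mirroring the question's variables; utf-8-sig strips a leading byte-order mark if one is present and otherwise behaves like plain utf-8):
blobstring = blob_service.get_blob_to_bytes(INPUTCONTAINERNAME, INPUTFILEPATH)
myJson = blobstring.decode('utf-8-sig')  # tolerates an optional UTF-8 BOM
data = json.loads(myJson)
A remaining "Unterminated string" error would then point at incomplete blob content rather than at the encoding.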

In JMeter, data is not being read from the CSV file

Data from the CSV file is not being read on the JMeter slave system.
Please find the details of the issue below:
Thread Group
HTTP Request
CSV Data Set Config
View Results Tree
CSV file path
Actually, CSV files are not copied to slave systems automatically; you need to place the required CSV on each slave manually, at the path configured in the CSV Data Set Config element. Use an absolute path, as in the sketch below.
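As a sketch of that setup (hostnames, paths, and the test plan name are placeholders): copy the file to the same absolute path on every slave, point the Filename field of the CSV Data Set Config at that path, and start the distributed run from the master:
scp data.csv user@slave1:/data/jmeter/data.csv
jmeter -n -t test_plan.jmx -R slave1,slave2
Here -n runs JMeter in non-GUI mode, -t names the test plan, and -R lists the remote slaves.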

Neo4j Loading CSV file

I am trying to load a CSV into Neo4j Desktop (version 3.3.5 Enterprise). Here is my code:
LOAD CSV WITH HEADERS FROM "file///:C:/Users/dr-gouda/.Neo4jDesktop/neo4jDatabases/database-22e2ad52-6882-472b-abc6-6c1594e733f2/installation-3.3.5/import/test.csv" as types
create (a1:Type {Label: types.Label, Name: types.Name, Age: types.Age})
I got this error message:
Neo.ClientError.Statement.ExternalResourceFailed: Invalid URL 'file///:C:/Users/dr-gouda/.Neo4jDesktop/neo4jDatabases/database-22e2ad52-6882-472b-abc6-6c1594e733f2/installation-3.3.5/import/test.csv': no protocol: file///:C:/Users/dr-gouda/.Neo4jDesktop/neo4jDatabases/database-22e2ad52-6882-472b-abc6-6c1594e733f2/installation-3.3.5/import/test.csv
What is going on and what can I do?
Files for import are resolved relative to the import directory of the database in question (otherwise a user could use a load query to read an arbitrary file on the server, which would be a huge security problem).
Since you already have the file there, use the relative path ... FROM "file:///test.csv" ...
Also, the : was in the wrong place: the scheme must be file://, not file///:.
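Putting both fixes together, the query from the question would become (keeping its original labels and properties):
LOAD CSV WITH HEADERS FROM "file:///test.csv" AS types
CREATE (a1:Type {Label: types.Label, Name: types.Name, Age: types.Age})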

Neo4j LOAD CSV returns "Couldn't load external resource"

Trying a CSV import into Neo4j - it doesn't seem to be working.
I'm loading a local file using this syntax:
LOAD CSV WITH HEADERS FROM "file:///location/local/my.csv" AS csvDoc
I am wondering if there is something wrong with my CSV file, or if there is some syntax problem here.
In case you didn't read the title, the error is:
Couldn't load the external resource at: file:/location/local/my.csv
[Neo.TransientError.Statement.ExternalResourceFailure]
Neo4j seems to need a full path spec to get a file on the local system.
On Linux or macOS try
LOAD CSV FROM "file:/Users/you/location/local/my.csv"
On Windows try
LOAD CSV FROM "file://c:/location/local/my.csv"
In the browser interface (Neo4j 3.0.3, macOS 10.11) it looks like Neo4j prefixes your file path with $path_to_graph_database/import, so you could move your files there. If you are using a command-line tool, then see this SO question.
Easy solution:
Once you choose your database location (in my case ReactomeGraphDB60), go to that folder and create a folder called "import" inside it.
Then write your Cypher query (as an example):
LOAD CSV WITH HEADERS FROM "file:///ILClasiffStruct.csv" AS row
CREATE (n:Interleukines)
SET n = row