How to load a CSV file with header and data types in Neo4j?

Am I correct that one cannot use Cypher to load a CSV file whose header also specifies data types?
(By "with data type", I mean a header with something like this:
For entities:
orderId:ID(Order) customerId:IGNORE
For relationships:
:START_ID(Order) :END_ID(Product)
)
According to these two websites: https://neo4j.com/developer/guide-import-csv/, http://jexp.de/blog/2015/04/how-to-neo4j-data-import-minimal-example/
it seems that I could import data together with headers like this from either PowerShell or the command prompt (I am using a Windows computer):
path\to\neo4j-community-3.1.1\bin\neo4j-import --into graph.db \
--nodes:Person C:\SavedNewest\people_header.csv, C:\SavedNewest\people.csv \
--relationships:KNOWS C:\SavedNewest\friendships_header.csv,C:\SavedNewest\friendships.csv
(The CSV files were constructed according to this website: http://jexp.de/blog/2015/04/how-to-neo4j-data-import-minimal-example/)
Error from PowerShell:
At line:2 char:3
+ --nodes:Person C:\SavedNewest\people_header.csv,https://gist.githubus ...
+ ~
Missing expression after unary operator '--'.
At line:2 char:3
+ --nodes:Person C:\SavedNewest\people_header.csv,https://gist.githubus ...
+ ~~~~~~~~~~~~
Unexpected token 'nodes:Person' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingExpressionAfterOperator
Error from Command prompt:
WARNING: This command does not appear to be running with administrative rights.
Some commands may fail e.g. Start/Stop
WARNING: neo4j-import is deprecated and support for it will be removed in a future
version of Neo4j; please use neo4j-admin import instead.
Input error: Expected '--relationships' to have at least 1 valid item, but had 0 []
Caused by:Expected '--relationships' to have at least 1 valid item, but had 0 []
java.lang.IllegalArgumentException: Expected '--relationships' to have at least 1 valid item, but had 0 []
at org.neo4j.kernel.impl.util.Validators.lambda$atLeast$6(Validators.java:125)
at org.neo4j.helpers.Args.validated(Args.java:640)
at org.neo4j.helpers.Args.interpretOptionsWithMetadata(Args.java:608)
at org.neo4j.tooling.ImportTool.extractInputFiles(ImportTool.java:508)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:389)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:334)
What is the cause of the error, and how should I load a CSV file with a header and data types correctly?
Edit:
New Input for cmd:
C:\Users\tsutomu\Desktop\MSS\Bachelorarbeit\neo4j-community-3.1.1\bin\neo4j-import --into graph.db --nodes:Person "file:c:/SavedNewest/people_header.csv,file:c:/SavedNewest/people.csv" --relationships:KNOWS "file:c:/SavedNewest/friendships_header.csv,file:c:/SavedNewest/friendships.csv"
The Error:
Input error: Directory of file:c:\SavedNewest\people_header.csv doesn't exist
Caused by:Directory of file:c:\SavedNewest\people_header.csv doesn't exist
java.lang.IllegalArgumentException: Directory of file:c:\SavedNewest\people_header.csv doesn't exist
at org.neo4j.kernel.impl.util.Validators.matchingFiles(Validators.java:48)
at org.neo4j.kernel.impl.util.Converters.lambda$regexFiles$7(Converters.java:76)
at org.neo4j.kernel.impl.util.Converters.lambda$toFiles$8(Converters.java:95)
at org.neo4j.helpers.Args.interpretOptionsWithMetadata(Args.java:608)
at org.neo4j.tooling.ImportTool.extractInputFiles(ImportTool.java:508)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:388)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:334)
Same error for PowerShell.
The path of people_header.csv: C:\SavedNewest\people_header.csv
Is there anything I should add to the environment PATH?

You have a couple of issues in your command line:
You cannot have embedded spaces in your command line arguments. For example, C:\SavedNewest\people_header.csv, C:\SavedNewest\people.csv should be a single argument, so you need to either remove the space after the comma or double-quote the entire argument.
The file path URLs must be formatted appropriately. To quote from the developer guide:
Make sure to use the right URLs, esp. file URLs. On OSX and Unix use file:///path/to/data.csv; on Windows, please use file:c:/path/to/data.csv.
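Putting those two points together, a sketch of the corrected call from the Windows command prompt might look like the lines below. The paths are the asker's own; the trailing ^ is cmd's line-continuation character (the Unix-style trailing \ in the blog post is likely part of what PowerShell tripped over), and whether the tool expects plain Windows paths or file:c:/ URLs can depend on the Neo4j version, so treat this as a starting point rather than a guaranteed fix:
REM Sketch only: each comma-separated file list is quoted so it is passed as a
REM single argument with no embedded spaces; adjust the paths or URL style as needed.
path\to\neo4j-community-3.1.1\bin\neo4j-import --into graph.db ^
  --nodes:Person "C:\SavedNewest\people_header.csv,C:\SavedNewest\people.csv" ^
  --relationships:KNOWS "C:\SavedNewest\friendships_header.csv,C:\SavedNewest\friendships.csv"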

Related

Error finding and loading a file in Octave

I tried converting my .csv file to .dat format and tried to load the file into Octave. It throws an error:
unable to find file filename
I also tried to load the file in .csv format using the syntax
x = csvread(filename)
and it throws the error:
'filename' undefined near line 1 column 13.
I also tried opening the file in the editor and loading it from there, and now it shows me:
warning: load: 'filepath' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'.
How can I load my data?
>> load Salary_Data.dat
error: load: unable to find file Salary_Data.dat
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> x = csvread(Salary_Data)
error: 'Salary_Data' undefined near line 1 column 13
>> x = csvread(Salary_Data.csv)
error: 'Salary_Data' undefined near line 1 column 13
>> load Salary_Data.dat
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.dat' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'
>> load Salary_Data.csv
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.csv' found by searching load path
error: load: unable to determine file format of 'Salary_Data.csv'
Salary_Data.csv
YearsExperience,Salary
1.1,39343.00
1.3,46205.00
1.5,37731.00
2.0,43525.00
2.2,39891.00
2.9,56642.00
3.0,60150.00
3.2,54445.00
3.2,64445.00
3.7,57189.00
3.9,63218.00
4.0,55794.00
4.0,56957.00
4.1,57081.00
4.5,61111.00
4.9,67938.00
5.1,66029.00
5.3,83088.00
5.9,81363.00
6.0,93940.00
6.8,91738.00
7.1,98273.00
7.9,101302.00
8.2,113812.00
8.7,109431.00
9.0,105582.00
9.5,116969.00
9.6,112635.00
10.3,122391.00
10.5,121872.00
Ok, you've stumbled through a whole pile of issues here.
It would help if you didn't give us error messages without the commands that produced them.
The first message means you were telling Octave to open something called filename and it couldn't find anything called filename. Did you define the variable filename? Your second command and its error message suggest you didn't.
Do you know what Octave's working directory is? Is it the same as where the file is located? From the response to your load commands, I'd guess not. The file is located at C:/Users/vaith/Desktop. Octave's working directory is probably somewhere else.
(Try the pwd command and see what it tells you. Use the file browser or the cd command to navigate to the same location as the file. help pwd and help cd commands would also provide useful information.)
The load command, used in command form (load file.txt), takes its argument as a literal filename whether or not it is written as a string. In function form (load('file.txt') or csvread('file.txt')) the argument must be a string, hence the quotes around file.txt. So all of your csvread calls thought you were giving them variable names, not filenames.
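As a quick sketch of that distinction, using the asker's own file name (assuming it sits in the current directory):
% Function form: the argument must be a string, hence the quotes; without
% them Octave looks for a variable named Salary_Data and fails.
x = csvread('Salary_Data.csv');    % filename passed as a string
% x = csvread(Salary_Data);        % error: 'Salary_Data' undefined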
Last, the fact that load couldn't read your data isn't overly surprising. Octave is trying to guess what kind of file it is and how to load it. I assume you tried help load to see what the different command options are? You can give it different options to help Octave figure it out. If it actually is a csv file though, and is all numbers not text, then csvread might still be your best option if you use it correctly. help csvread would be good information for you.
It looks from your data like you have a header line that is probably confusing the load command. For data formatted as simply as this, the csvread command can bring it in. It will replace your header text with zeros.
So, first, navigate to the location of the file:
>> cd C:/Users/vaith/Desktop
then open the file:
>> mydata = csvread('Salary_Data.csv')
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
If you plan to reuse the filename, you can assign it to a variable, then open the file:
>> myfile = 'Salary_Data.csv'
myfile = Salary_Data.csv
>> mydata = csvread(myfile)
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
Notice how the filename is stored and used as a string with quotation marks, but the variable name is not. Also, csvread converted non-numeric header data to 'zeros'. The help for csvread and dlmread show you how to change it to something other than zero, or to skip a certain number of rows. If you want to preserve the text, you'll have to use some other input function.
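For example, a minimal sketch (assuming the same Salary_Data.csv in the current directory) that skips the header row instead of turning it into zeros:
% csvread(filename, R0, C0) starts reading at row offset R0 and column
% offset C0 (zero-based), so this skips the YearsExperience,Salary header.
mydata = csvread('Salary_Data.csv', 1, 0);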

Not able to read JSON file in JMeter

I am trying to read JSON from a text file using the command below:
{__FileToString((${JSON_FILE},,)).replaceAll(' ','')}
File not readable.
Error: {"timestamp":1586945558777,"status":400,"error":"Bad Request","exception":"org.springframework.http.converter.HttpMessageNotReadableException","message":"Could not read document: Unexpected character ('' (code 95)): was expecting double-quote to start field name\n at [Source: java.io.PushbackInputStream#6df97f39; line: 1, column: 3]; nested exception is com.fasterxml.jackson.core.JsonParseException: Unexpected character ('' (code 95)): was expecting double-quote to start field name\n at [Source: java.io.PushbackInputStream#6df97f39; line: 1, column: 3]","path":"/service"}
I have gone through all the related posts too but am still not able to find a solution. Can anyone please help?
References:
JMeter - How to read JSON file?
https://devqa.io/perf/jmeter-send-json-file-as-request-in-body
https://www.360logica.com/blog/how-to-use-http-request-to-send-multiple-json-files
Thanks,
Mrinalini
The correct syntax would be:
${__strReplace(${__FileToString(${JSON_FILE})}, ,,)}
You cannot append normal Java functions like String.replaceAll() to JMeter Functions. If you want to get rid of whitespace characters, you need to invoke the __strReplace() function as shown above (it is a Custom JMeter Function and can be installed using the JMeter Plugins Manager).
If you cannot use JMeter Plugins for some reason you can achieve the same using __groovy() function like:
${__groovy(new File(vars.get('JSON_FILE')).text.replaceAll(' '\, ''),)}

Neo4j throws an error for the "\" character

I have an exported CSV file and need to read it line by line.
One of the lines in the CSV file contains the string "C:\Program Files\". Because of this line, it throws the error below.
At D:\workdir\Neo4j_Database\Database1\import\Data.csv:22798 - there's
a field starting with a quote and whereas it ends that quote there
seems to be characters in that field after that ending quote. That
isn't supported. This is what I read: 'CMM 10.0.1 Silent Installation
will install SW always in "C:\Program Files"",V10.0,
,,,,,,,,105111,AVASAAIS AG,E,,"G,"'
If I remove the last \ of the line then it does not throw this error.
I am not sure how to resolve this without modifying the CSV file.
Note: the CSV is loaded using LOAD CSV.
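For context, a minimal sketch of the kind of statement involved, using the asker's file name; the neo4j.conf line at the end is an assumption about the likely cause (in Neo4j 3.x LOAD CSV by default treats \" inside a quoted field as an escaped quote, which would explain why the trailing backslash matters), not a confirmed fix:
// Minimal LOAD CSV of the file mentioned in the question.
LOAD CSV FROM 'file:///Data.csv' AS line
RETURN line LIMIT 5;
// Assumption: in neo4j.conf, this setting switches LOAD CSV to strict
// RFC 4180 quoting, where a backslash is not an escape character:
// dbms.import.csv.legacy_quote_escaping=false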

How to import Google Maps API data into PostgreSQL?

I am trying to transfer data from a JSON file produced by the Google Maps API onto my PostgreSQL database. This is done through cURL and I made sure that the permissions have been correctly set.
The url:
https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=AIza-[key-redacted]-3z6ho-o
The query:
copy bookings.import(info) from program 'C:/temp/mycurl/curl "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=AIza-[key-redacted]-3z6ho-o" --insecure'
However, when I try to do this on my table with column 'info' of type 'json', I get the following error:
ERROR: invalid input syntax for type json
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {
COPY import, line 1, column info: "{"
********** Error **********
ERROR: invalid input syntax for type json
SQL state: 22P02
Detail: The input string ended unexpectedly.
Context: JSON data, line 1: {
COPY import, line 1, column info: "{"
I am currently trying to avoid bringing in things such as PHP or any other tool, but if that is the only option I would certainly consider it.
What exactly do you guys think I am doing wrong? Is it the syntax, the format or am I missing something?
Thanks!
COPY assumes that each newline indicates a new record. Unfortunately, the Google Maps DistanceMatrix API is pretty-printing your response, which means it comes through as 23 rows, none of which is valid JSON on its own.
You can get around this by piping the curl response through something like jq.
copy imports(info) from program 'curl "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=<my_key>" --insecure | /usr/local/bin/jq "." -c'
jq has lots of useful features if you want to massage the response a bit more before stashing it in the database.
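For completeness, a minimal sketch of the kind of target table this assumes: a single json column, along the lines of the bookings.import(info) table from the question (the exact definition is an assumption, and the answer's COPY example targets a table named imports instead):
-- Hypothetical target table with one json column for the raw API response.
CREATE SCHEMA IF NOT EXISTS bookings;
CREATE TABLE IF NOT EXISTS bookings.import (
    info json
);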

Setting Jenkins build name from package.json version value

I want to include the value of the "version" parameter in package.json as part of the Jenkins build name.
I'm using the Jenkins Build Name Setter plugin - https://wiki.jenkins-ci.org/display/JENKINS/Build+Name+Setter+Plugin
So far I've tried to use PROPFILE syntax in the "Build name macro template" step:
${PROPFILE,file="./mainline/projectDirectory/package.json",property="\"version\""}
This successfully creates a build, but includes the quotes and comma surrounding the value of the version property in package.json, for example:
"0.0.1",
I want just the value inside returned, so it reads
0.0.1
How can I do this? Is there a different plugin that would work better for parsing package.json and getting it into the template, or should I resort to some sort of regex for removing the characters I don't want?
UPDATE:
I tried using token transforms based on reading the Token Macro Plugin documentation, but it's not working:
${PROPFILE%\"\,#\",file="./mainline/projectDirectory/package.json",property="\"version\""}
still just returns
However, using only one escaped character and only one of # or % works. No other combinations I tried work.
${PROPFILE%\,,file="./mainline/projectDirectory/package.json",property="\"version\""}
which returns "0.0.1" (comma removed)
${PROPFILE#\"%\"\,,file="./mainline/projectDirectory/package.json",property="\"version\""}
which returns "0.0.1", (no characters removed)
UPDATE:
Tried to use the new Jenkins Token Macro plugin's JSON macro with no luck.
Jenkins Build Name Setter set to update the build name with Macro:
${JSON,file="./mainline/pathToFiles/package.json",path="version"}-${P4_CHANGELIST}
Jenkins build logs for this job show:
10:57:55 Evaluated macro: 'Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input position (line 1, pos 74):
10:57:55 ${JSON,file="./mainline/pathToFiles/package.json",path="version"}-334319
10:57:55 ^
10:57:55
10:57:55 java.io.IOException: Unable to serialize org.jenkinsci.plugins.tokenmacro.impl.JsonFileMacro$ReadJSON#2707de37'
I implemented a new macro JSON, which takes a file and a path (which is the key hierarchy in the JSON for the value you want) in token-macro-2.1. You can only use a single transform per macro usage.
Try the token transformations # and % (see the Token Macro Plugin):
${PROPFILE#"%",file="./mainline/projectDirectory/package.json",property="\"version\""}
(This will only help if you are using pipelines. But for what it's worth...)
What works for me is a combination of readJSON from the Pipeline Utility Steps plugin and directly setting currentBuild.displayName, thusly:
script {
    // readJSON from "Pipeline Utility Steps"
    def packageJson = readJSON file: 'package.json'
    def version = packageJson.version
    echo "Setting build version: ${packageJson.version}"
    currentBuild.displayName = env.BUILD_NUMBER + " - " + packageJson.version
    // currentBuild.description = "other cool stuff"
}
Omitting error handling etc obvs.