Can't import CSV into Neo4j

My code:
LOAD CSV FROM "C:\Users\Elmar\Desktop\tmp-raise.csv" AS line
WITH line
RETURN line
The error that it gives:
Invalid input ':': expected 'o/O' (line 1, column 18 (offset: 17))
"LOAD CSV FROM "C:\Users\Elmar\Desktop\tmp-raise.csv" AS line"
^
I have also tried:
USING PERIODIC COMMIT 10000
LOAD CSV FROM ""C:\Users\Elmar\Desktop\tmp-raise.csv" AS line
WITH line
RETURN line
What is the problem? Can anyone help me?

According to the CSV import guide, the path should be prefixed with file: and should use forward slashes. The example path given in the guide for Windows is file:c:/path/to/data.csv (though I have seen example paths starting with file://). Give this a try:
USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:c:/Users/Elmar/Desktop/tmp-raise.csv' AS line
WITH line
RETURN line
If that doesn't work, give it a try with file:// as the path prefix.
EDIT: Looks like CSV loads use a path relative to the default.graphdb/import folder. I had thought that was for Mac/Unix only, but it looks like Windows behaves the same way. If you move the CSVs you want to import into the import folder, you should be able to load them using file:///theFileName.csv

Load csv from "file:///C:/xyz.csv" as line
return line
The above code works, but you must comment out the configuration line
dbms.directories.import=import
in the neo4j.conf settings file.
Another solution is to drop a .txt, .cyp, or .cql file into the drag-to-import box.

Related

File not found when appending a csv file

Stata version: 12.1
I get an error "file not found" using this code:
cd "$path_in"
insheet using "df_mcd_clean.csv", comma clear
append using "df_mcd15_clean.csv" // error happens here
append using "df_ingram_liu1998_clean.csv"
append using "df_wccd_clean.csv"
I double checked that the file is indeed called that and located in the directory.
append is for appending .dta files. Therefore, if you ask to append foo.csv, Stata assumes you are referring to foo.csv.dta, which it can't find.
The solutions include:
Combine the .csv files outside Stata.
Read in each .csv file, save as .dta, then append.
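The first option (combining the .csv files outside Stata) can be sketched in Python. This is only an illustrative helper, not something from the question; the function name append_csvs is hypothetical, and it assumes every input file repeats an identical header row:

```python
import csv

def append_csvs(paths, out_path):
    """Concatenate CSV files that share an identical header row,
    writing the header only once in the output file."""
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        header_written = False
        for path in paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)  # each file starts with the header
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                writer.writerows(reader)  # copy the remaining data rows

# e.g. append_csvs(["df_mcd_clean.csv", "df_mcd15_clean.csv"], "combined.csv")
```

The combined file can then be read into Stata in one insheet call.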
The current version of the help for append says this:
append appends Stata-format datasets stored on disk to the end of the dataset in memory. If any filename is
specified without an extension, .dta is assumed.
and that was true in Stata 12 too. (Whether the wording was identical, I can't say.)

Neo4j: Import csv - quote error, but no quotes in file

I would like to import a CSV file into Neo4j. I created this file myself using the Mac TextEdit app. The options in my TextEdit app are set in such a way that there is "no css".
When I try to import the file neo4j says:
At [URL] # position 16471 - there's a field starting with a quote and whereas it ends that quote there seems to be characters in that field after that ending quote. That isn't supported. This is what I read: 'stylesheet"='
The problem is, there is no such line in the file. What's wrong?
Finally I solved the issue: the URL was a Dropbox URL, and instead of ?dl=0 one must use the option ?raw=1
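If you generate such import URLs in a script, the same rewrite can be done programmatically. A minimal Python sketch; the helper name dropbox_raw_url and the example URL are hypothetical, not from the question:

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def dropbox_raw_url(url):
    """Rewrite a Dropbox share link so it serves the raw file (?raw=1)
    instead of the HTML download page (?dl=0)."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.pop("dl", None)   # drop the ?dl=0 download-page flag if present
    query["raw"] = "1"      # ask Dropbox for the raw file contents
    return urlunparse(parts._replace(query=urlencode(query)))

print(dropbox_raw_url("https://www.dropbox.com/s/abc/file.csv?dl=0"))
# → https://www.dropbox.com/s/abc/file.csv?raw=1
```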

JSON file without line breaks, can't import file to SAS

I have a large JSON file (250 MB) that has no line breaks in it when I open the file in Notepad or SAS. But if I open it in WordPad, I get the correct line breaks. I suppose this could mean the JSON file uses Unix line breaks, which Notepad can't read but WordPad can, from what I have read.
I need to import the file to SAS. One way of doing this might be to open the file in WordPad and save it as a text file, which will hopefully retain the correct line breaks, so that I can read the file in SAS. I have tried reading the file as-is, but without line breaks I only get the first observation, and I can't get the program to find the next observation.
I have tried getting WordPad to save the file, but WordPad crashes each time, probably because of the file size. I also tried doing this through PowerShell, but I can't figure out how to save the file once it is opened, and I see no reason why it should work, seeing as WordPad crashes when I try it through point and click.
Is there another way to fix this JSON file? Is there a way to view the Unix code for line breaks and replace it with Windows line breaks, or something to that effect?
EDIT:
I have tried adding the TERMSTR=LF option both in filename and infile, without any luck:
filename test "C:\path";
data datatest;
infile test lrecl = 32000 truncover scanover TERMSTR=LF;
input #'"Id":' ID $9.;
run;
However, if I manually edit a small portion of the file to have line breaks, it works. The TERMSTR option doesn't seem to do much for me.
EDIT 2:
Solved using RECFM=F
data datatest;
infile test lrecl = 42000 truncover scanover RECFM=F ;
input #'"Id":' ID $9.;
run;
EDIT 3:
Turns out it didn't solve the problem after all. RECFM=F means all records have a fixed length, which they don't, so my data gets mixed up and a lot of info is skipped. I tried RECFM=V(ariable), but this is not working either.
I guess you're using windows, so try:
TYPE input_filename | MORE /P > output_filename
This should replace the Unix-style text file with a Windows/DOS one.
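If Python is available, the same conversion can be done directly by replacing the line-ending bytes. This is an illustrative sketch, not part of the original answer; the helper name lf_to_crlf is hypothetical, and it loads the whole file into memory, which should be fine for 250 MB on most machines:

```python
def lf_to_crlf(src, dst):
    """Convert Unix (LF) line endings to Windows (CRLF)."""
    with open(src, "rb") as fin:
        data = fin.read()  # whole file in memory; ok for a 250 MB file
    # normalize any existing CRLF first so we never emit \r\r\n
    data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
    with open(dst, "wb") as fout:
        fout.write(data)
```

After conversion, SAS should see one record per line without any TERMSTR or RECFM workarounds.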
250 Mbytes is not too long to treat as a single record.
data want ;
infile json lrecl=250000000; *250 Mb ;
input #'"Id":' ID :$9. ##;
run;

Use Julia to read a list of paths from a txt file and open each one

I have a txt file with some csv paths, for instance the file Links.txt which contains
/home/someone/something/aplha1.csv
/home/someone/something/aplha2.csv
/home/someone/something/aplha3.csv
/home/someone/something/aplha4.csv
I would like to read the file line by line using Julia and then, for each line, read the CSV file. I am using the code below:
open("Links.txt") do f
for line in eachline(f)
rawnames = readcsv(line)
println("read line: ", line)
end
end
Unfortunately I am getting the error
ERROR: opening file /home/someone/something/aplha1.csv
: No such file or directory
Any ideas?
Thanks!
The problem is that line contains the end-of-line character at the end. Try changing it to
readcsv(chomp(line))
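The same pitfall exists outside Julia. As a cross-check, here is the equivalent loop sketched in Python (the function name read_csvs_from_list is hypothetical): strip the trailing newline, the counterpart of chomp, before opening each path:

```python
import csv

def read_csvs_from_list(list_path):
    """Open each CSV whose path is listed, one per line, in list_path."""
    tables = []
    with open(list_path) as f:
        for line in f:
            path = line.rstrip("\n")  # drop the trailing newline, like chomp
            if not path:
                continue              # skip blank lines
            with open(path, newline="") as csv_file:
                tables.append(list(csv.reader(csv_file)))
    return tables
```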

How to load ALL the columns from a *.csv into Neo4j nodes

Suppose I need to load a csv file c:\myData.csv
alfa,beta,gamma
0001,1000,thousant
0002,2000,two-K
...
in nodes
(:myData {alfa:0001, beta:1000, gamma:'thousant'})
(:myData {alfa:0002, beta:2000, gamma:'two-K'})
Is there a way to import ALL the columns into properties without specifying them one by one?
Something like
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line set line:myData create line
or
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line create (:myData {line.*})
The following worked for me after trying different options, on Neo4j 3.3.2:
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///apples.csv' AS appleAllLineProperties
CREATE(apple:Apple)
set apple += appleAllLineProperties
A couple of observations:
CREATE(apple: {appleAllLineProperties}) results in an error, since Neo4j expects appleAllLineProperties to be a parameter - which also isn't valid in this position.
Neo4j expects the file to be in the following folder
C:\Users\\AppData\Roaming\Neo4j Desktop\Application\neo4jDatabases\database-\installation-3.3.2\import
You can use
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line
create (:MyData {line})
or
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line
MATCH (m:MyData {id:line.id})
SET m += {line}
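For reference, each line that LOAD CSV WITH HEADERS yields is a map keyed by the header row. Python's csv.DictReader produces the same header-keyed shape, which can be handy for previewing which properties would land on each node; the sample string below mirrors the myData.csv from the question:

```python
import csv
import io

# Stand-in for the contents of c:\myData.csv from the question
sample = "alfa,beta,gamma\n0001,1000,thousant\n0002,2000,two-K\n"

for row in csv.DictReader(io.StringIO(sample)):
    print(dict(row))
# → {'alfa': '0001', 'beta': '1000', 'gamma': 'thousant'}
# → {'alfa': '0002', 'beta': '2000', 'gamma': 'two-K'}
```

Note that every value arrives as a string, which is also how LOAD CSV delivers fields unless you convert them with toInteger() or similar.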