Labels on Nodes and Relationships from a CSV file

I have a problem when I want to add a label to a Node or a Relationship.
I do this in Neo4j with Cypher:
LOAD CSV WITH HEADERS FROM "file:c:/Users/Test/test.csv" AS line
CREATE (n:line.FROM)
and I get this error:
Invalid input '.': expected an identifier character, whitespace, NodeLabel, a property map, ')' or a relationship pattern (line 2, column 15 (offset: 99))
"CREATE (n:line.FROM)"
If there is no way of doing this with Cypher, can you recommend another way to do the job?
It is very important for me to find a solution to this problem, whether in Cypher or through some Java-based approach.

It depends on how dynamic you need it to be; for small variability:
LOAD CSV WITH HEADERS FROM "file:c:/Users/Test/test.csv" AS line
WITH line WHERE line.FROM = "Foo"
CREATE (n:Foo)
From Java you can use node.addLabel(DynamicLabel.label(line.from))
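A minimal sketch of that Java approach, assuming an embedded GraphDatabaseService on the Neo4j 2.x API (the method and variable names are illustrative; labelName would come from the FROM column of the current CSV line):

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

void createNodeWithDynamicLabel(GraphDatabaseService db, String labelName) {
    try (Transaction tx = db.beginTx()) {
        Node node = db.createNode();
        node.addLabel(DynamicLabel.label(labelName)); // label resolved at runtime
        tx.success();
    }
}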
Otherwise you can look into my neo4j-shell-tools, which allow dynamic labels and rel-types with #{FROM}.
see: https://github.com/jexp/neo4j-shell-tools#cypher-import

Thank you all for your answers, but none of them solved my problem.
I found a solution that does exactly what I wanted: the Neo4jImport tool (link from the official manual: Neo4jImport tool Manual), not Cypher or Java.
Here is an example of what I did and what worked for me.
The test.csv file contains the headers "PropertyTest" and ":LABEL". The import first creates one node with the label "TEST" and then adds "proptest" as the value of the "PropertyTest" property on that node. So, to add a label to your node you use the ":LABEL" header, and to add a property to the same node you add any name you want as a header in the .csv file.
Example of test.csv file:
PropertyTest,:LABEL
proptest,TEST
On Windows I used the Neo4jImport.bat command as described in the Neo4j manual. You can find Neo4jImport.bat at "C:\Program Files\Neo4j Community\bin" and you run it from the command line (cmd).
In detail, I opened cmd, navigated to Neo4jImport.bat, and finally ran:
Neo4jImport.bat --into path-to-save-your-neo4j-database --nodes path-to-your-csv\test.csv --delimiter ","
The default delimiter of Neo4jImport is "," but you can change it. For example, if the information in your .csv file is separated by tabs, you can do the following:
Neo4jImport.bat --into path-to-save-your-neo4j-database --nodes path-to-your-csv\test.csv --delimiter "TAB"
That is how I dynamically loaded a whole model of almost 2,000 nodes with different labels and properties.
Keep in mind (from the manual) that you can add as many labels and as many properties as you want to a node by adding more headers to your csv.
Example of two Labels in a node:
PropertyTest,:LABEL,:LABEL
proptest,TEST,SECOND_LABEL
Example of Neo4jImport.bat for two labels and a comma-separated CSV file:
Neo4jImport.bat --into path-to-save-your-neo4j-database --nodes path-to-your-csv\test.csv --delimiter ","
I hope you will find this useful for this particular problem of labels from .csv files. Please also read the official manual; it helped me a lot in finding a solution to my problem.

Below is an approach for two csv files, MIP_nodes.csv and MIP_edges.csv:
//Load csv data into the database - with dynamic label(s)
WITH "file:///MIP_nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
WITH * WHERE row.label <> ""
call apoc.merge.node ([row.label],{nodeId:row.nodeId, name: row.name, type: row.type, created: row.created, property1: row.property1, property2: row.property2})
YIELD node as n1
//RETURN n1
WITH * WHERE row.label = ""
call apoc.merge.node (['DefaultNode'],{nodeId:row.nodeId, name: row.name, type: row.type, created: row.created, property1: row.property1, property2: row.property2})
YIELD node as n2
RETURN n1, n2
//Load csv data into the database - with dynamic relationship(s)
//:auto USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS FROM 'file:///MIP_edges.csv' AS row
MATCH (s)
WHERE s.nodeId = row.sourceId
//RETURN s
MATCH (d)
WHERE d.nodeId = row.destinationId
//RETURN d
CALL apoc.merge.relationship(s, row.label,{type:row.type, created: row.created, property1: row.property1, property2: row.property2},{}, d,{})
YIELD rel
//REMOVE rel.noOp;
RETURN rel;
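For reference, the two queries above assume CSV headers roughly along these lines (a sketch inferred from the property names used in the apoc calls; adjust to your actual files):

MIP_nodes.csv:
nodeId,label,name,type,created,property1,property2

MIP_edges.csv:
sourceId,destinationId,label,type,created,property1,property2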


Writing a CSV based on another CSV file creates an additional empty row? [duplicate]

import csv
import collections

with open('thefile.csv', 'rb') as f:
    data = list(csv.reader(f))

counter = collections.defaultdict(int)
for row in data:
    counter[row[10]] += 1

with open('/pythonwork/thefile_subset11.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    for row in data:
        if counter[row[10]] >= 504:
            writer.writerow(row)
This code reads thefile.csv, makes changes, and writes results to thefile_subset1.
However, when I open the resulting csv in Microsoft Excel, there is an extra blank line after each record!
Is there a way to make it not put an extra blank line?
The csv writer controls line endings itself and writes \r\n into the file directly. In Python 3 the file must be opened in untranslated text mode with the parameters 'w', newline='' (empty string), or it will write \r\r\n on Windows, where the default text mode translates each \n into \r\n.
#!python3
with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
In Python 2, use binary mode to open outfile with mode 'wb' instead of 'w' to prevent Windows newline translation. Python 2 also has problems with Unicode and requires other workarounds to write non-ASCII text. See the Python 2 link below and the UnicodeReader and UnicodeWriter examples at the end of the page if you have to deal with writing Unicode strings to CSVs on Python 2, or look into the 3rd party unicodecsv module:
#!python2
with open('/pythonwork/thefile_subset11.csv', 'wb') as outfile:
    writer = csv.writer(outfile)
Documentation Links
https://docs.python.org/3/library/csv.html#csv.writer
https://docs.python.org/2/library/csv.html#csv.writer
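Putting it together, a minimal Python 3 sketch of the original script with the newline='' fix applied (the paths and column index are the ones from the question):

import csv
import collections

# Text mode with newline='' is also the recommended way to read CSV files.
with open('thefile.csv', newline='') as f:
    data = list(csv.reader(f))

counter = collections.defaultdict(int)
for row in data:
    counter[row[10]] += 1

# newline='' prevents the extra \r on Windows, so Excel shows no blank lines.
with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    for row in data:
        if counter[row[10]] >= 504:
            writer.writerow(row)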
Opening the file in binary mode "wb" will not work in Python 3+. Or rather, you'd have to convert your data to binary before writing it. That's just a hassle.
Instead, you should keep it in text mode, but override the newline as empty. Like so:
with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile:
Note: It seems this is not the preferred solution because of how the extra line was being added on a Windows system. As stated in the Python documentation:
If csvfile is a file object, it must be opened with the 'b' flag on platforms where that makes a difference.
Windows is one such platform where that makes a difference. While changing the line terminator, as I describe below, may have fixed the problem, the problem could be avoided altogether by opening the file in binary mode. One might say that solution is more "elegant". "Fiddling" with the line terminator would likely have resulted in code that is not portable between systems, whereas opening a file in binary mode has no effect on a Unix system, i.e. it produces cross-system compatible code.
From the Python docs:
On Windows, 'b' appended to the mode opens the file in binary mode, so there are also modes like 'rb', 'wb', and 'r+b'. Python on Windows makes a distinction between text and binary files; the end-of-line characters in text files are automatically altered slightly when data is read or written. This behind-the-scenes modification to file data is fine for ASCII text files, but it'll corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files. On Unix, it doesn't hurt to append a 'b' to the mode, so you can use it platform-independently for all binary files.
Original:
As part of the optional parameters for csv.writer, if you are getting extra blank lines you may have to change the lineterminator (info here). The example below is adapted from the csv docs on the Python page. Change it from '\n' to whatever it should be. As this is just a stab in the dark at the problem, it may or may not work, but it's my best guess.
>>> import csv
>>> spamWriter = csv.writer(open('eggs.csv', 'w'), lineterminator='\n')
>>> spamWriter.writerow(['Spam'] * 5 + ['Baked Beans'])
>>> spamWriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
The simple answer is that csv files should always be opened in binary mode whether for input or output, as otherwise on Windows there are problems with the line ending. Specifically on output the csv module will write \r\n (the standard CSV row terminator) and then (in text mode) the runtime will replace the \n by \r\n (the Windows standard line terminator) giving a result of \r\r\n.
Fiddling with the lineterminator is NOT the solution.
A lot of the other answers have become out of date in the ten years since the original question. For Python3, the answer is right in the documentation:
If csvfile is a file object, it should be opened with newline=''
The footnote explains in more detail:
If newline='' is not specified, newlines embedded inside quoted fields will not be interpreted correctly, and on platforms that use \r\n line endings on write an extra \r will be added. It should always be safe to specify newline='', since the csv module does its own (universal) newline handling.
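A small sketch illustrating that footnote: a field containing an embedded newline round-trips correctly when the file is opened with newline='':

import csv

# Write a row whose second field contains an embedded newline.
with open('demo.csv', 'w', newline='') as f:
    csv.writer(f).writerow(['id1', 'line one\nline two'])

# Read it back; the embedded newline is preserved inside the quoted field.
with open('demo.csv', newline='') as f:
    rows = list(csv.reader(f))

print(rows)  # [['id1', 'line one\nline two']]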
Use the approach below to write data to the CSV file: just add a newline='' parameter inside the open() call:
open('outputFile.csv', 'a', newline='')
import csv

def writePhoneSpecsToCSV():
    rowData = ["field1", "field2"]
    with open('outputFile.csv', 'a', newline='') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(rowData)
This will write CSV rows without creating additional rows!
I'm writing this answer with respect to Python 3, as I initially ran into the same problem.
I was supposed to get data from an Arduino using PySerial and write it to a .csv file. Each reading in my case ended with '\r\n', so a newline was always separating each line.
In my case, the newline='' option didn't work, because it showed an error like:
with open('op.csv', 'a',newline=' ') as csv_file:
ValueError: illegal newline value: ''
So it seemed that omitting the newline value was not accepted here.
Following one of the answers here, I specified the line terminator in the writer object instead, like:
writer = csv.writer(csv_file, delimiter=' ',lineterminator='\r')
and that worked for me for skipping the extra newlines.
with open(destPath + '\\' + csvXML, 'a+') as csvFile:
    writer = csv.writer(csvFile, delimiter=';', lineterminator='\r')
    writer.writerows(xmlList)
The lineterminator='\r' argument lets the writer move to the next row without an empty row in between.
Borrowing from this answer, it seems like the cleanest solution is to use io.TextIOWrapper. I managed to solve this problem for myself as follows:
from io import TextIOWrapper
...
with open(filename, 'wb') as csvfile, TextIOWrapper(csvfile, encoding='utf-8', newline='') as wrapper:
    csvwriter = csv.writer(wrapper)
    for data_row in data:
        csvwriter.writerow(data_row)
The above answer is not compatible with Python 2. To have compatibility, I suppose one would simply need to wrap all the writing logic in an if block:
if sys.version_info < (3,):
    # Python 2 way of handling CSVs
else:
    # The above logic
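For instance, a minimal sketch of such a wrapper (the file name and rows are illustrative):

import csv
import sys

def write_rows(path, rows):
    # Write rows to a CSV file without blank lines on both Python 2 and 3.
    if sys.version_info < (3,):
        # Python 2: binary mode prevents the Windows newline translation.
        with open(path, 'wb') as f:
            csv.writer(f).writerows(rows)
    else:
        # Python 3: text mode with newline='' achieves the same.
        with open(path, 'w', newline='') as f:
            csv.writer(f).writerows(rows)

write_rows('output.csv', [['a', 1], ['b', 2]])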
I used writerow:
import csv
from itertools import permutations

def write_csv(writer, var1, var2, var3, var4):
    """
    write four variables into a csv file
    """
    writer.writerow([var1, var2, var3, var4])

numbers = set([1, 2, 3, 4, 5, 6, 7, 2, 4, 6, 8, 10, 12, 14, 16])
rules = list(permutations(numbers, 4))
#print(rules)
selection = []
with open("count.csv", 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for rule in rules:
        number1, number2, number3, number4 = rule
        if (number1 + number2 + number3 + number4) % 5 == 0:
            #print(rule)
            selection.append(rule)
            write_csv(writer, number1, number2, number3, number4)
When using Python 3, the empty lines can be avoided by using the codecs module. As stated in the documentation, files are opened in binary mode, so no change of the newline kwarg is necessary. I was running into the same issue recently and that worked for me:
with codecs.open(csv_file, mode='w', encoding='utf-8') as out_csv:
    # DictWriter needs the list of column names as fieldnames
    csv_out_file = csv.DictWriter(out_csv, fieldnames=fieldnames)

CSV Data Set Config not looping

I'm using v5.1.1 of JMeter and attempting to use the "CSV Data Set Config". The file is read correctly as I can tell from the Debug Sampler/Results Tree, but the file is not being read line by line. In other words, it reads the first line and never proceeds to the next line for processing.
I would like to use the data inside the CSV to iterate over a series of HTTP Requests to an external API. I currently have a single thread with only the "CSV Data Set Config" and "HTTP Request".
Do I need to wrap this with a ForEach controller or another looping construct? Perhaps I'm missing it but I do not see in the documentation that would indicate it's necessary.
Thanks
You don't need to wrap this in a ForEach controller. The first line in the CSV file holds the variable names.
Let's say your csv file looks like this:
foo, bar
1, John
2, George
3, Laura
and you use an HTTP Request sampler, then ${foo} and ${bar} will be iterated sequentially. However, please be mindful of the CSV Data Set Config options (Recycle on EOF, Stop thread on EOF, Sharing mode), since they control how the lines are consumed.
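For example, with the csv above, the sampler fields can reference the variables directly (the endpoint and parameter names here are hypothetical):

Path: /api/users/${foo}
Parameters: name=${bar}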
By default the CSV Data Set Config doesn't trigger any "looping"; it reads the next line from the CSV file for each thread (virtual user) on each iteration.
So if you want to see more values from the CSV file, either add more users, more loops, or both.
Given this CSV file:
line1
line2
line3
a CSV Data Set Config pointing at that file, and a Thread Group with several threads and loop iterations (setup screenshots not reproduced here), each thread reads the next value on each iteration. You can use the __threadNum() function to visualise the current virtual user number and the ${__jm__Thread Group__idx} pre-defined variable to show the current Thread Group iteration.
Check out the JMeter Parameterization - The Complete Guide article for more information on various approaches to parameterizing JMeter tests using external data sources.

using a variable to identify file in 'print -dpdf file_name'

I am trying to use a formatted string to identify the file location when using 'print -dpdf file_name' to write a plot (or figure) to a file.
I've tried:
k=1;
file_name = sprintf("\'/home/user/directory to use/file%3.3i.pdf\'",k);
print -dpdf file_name;
but that only gets me a figure written to ~/file_name.pdf, which is not what I want. I've tried several other approaches but I cannot find one that causes the third term (file_name, in this example) to be evaluated. I have not found any other printing function that will allow me to perform a formatted write (the '-dpdf' option) of a plot (or figure) to a file.
I need the single quotes because the path name to the location where I want to write the file contains spaces. (I'm working on a Linux box running Fedora 24 updated daily.)
If I compute the file name using the line above, then cut and paste it into the print statement, everything works exactly as I wish it to. I've tried using
k=1;
file_name = sprintf("\'/home/user/directory to use/file%3.3i.pdf\'",k);
print ("-dpdf", '/home/user/directory to use/file001.pdf');
But simply switching to a different form of the print statement doesn't solve the problem, although now I get an error message:
GPL Ghostscript 9.16: **** Could not open the file '/home/user/directory to use/file001.pdf' .
**** Unable to open the initial device, quitting.
warning: broken pipe
If you use foo a b, this is the same as foo ("a", "b"). In your case you effectively called print ("-dpdf", "file_name"). Instead, do:
k = 1;
file_name = sprintf ("/home/user/directory to use/file%3.3i.pdf", k);
print ("-dpdf", file_name);
Observe:
>> k=1;
>> file_name = sprintf ('/home/tasos/Desktop/a folder with spaces in it/this is file number %3.3i.pdf', k)
file_name = /home/tasos/Desktop/a folder with spaces in it/this is file number 001.pdf
>> plot (1 : 10);
>> print (gcf, file_name, '-dpdf')
Tadaaa!
So yeah, no single quotes needed. The reason single quotes work when you're "typing it by hand" is that you're literally creating the string on the spot with the single quotes.
Having said that, it's generally a good idea when generating absolute paths to use the fullfile command instead. Have a look at it.
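For instance, a quick sketch of that approach, reusing the directory from the question:

k = 1;
% fullfile joins the path components with the correct separator, spaces included
file_name = fullfile('/home/user/directory to use', sprintf('file%3.3i.pdf', k));
plot(1:10);
print(gcf, file_name, '-dpdf');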
Tasos Papastylianou (@TasosPapastylianou) provided great help. My problem is now solved.

Entry delimiter of JSON files for Hive table

We are collecting JSON data (public social media posts in particular) via REST API invocations, which we plan to dump into HDFS and then abstract a Hive table on top of it using a SerDe. I wonder, though, what would be the appropriate delimiter per JSON entry in a file? Is it a new line ("\n")? So it would look like this:
{ id: entry1 ... post: }
{ id: entry2 ... post: }
...
{ id: entryn ... post: }
What about when we encounter a newline character within the JSON data itself, for example in post?
The best way would be one record per line, separated by "\n" exactly as you guessed.
This also means that you should be careful to escape "\n" that may be inside the JSON elements.
Indented JSON won't work well with hadoop/hive, since to distribute processing, hadoop must be able to tell where a record ends, so it can split processing of a file of N bytes among W workers into W chunks of size roughly N/W.
The splitting is done by the particular InputFormat that's been used, in case of text, TextInputFormat.
TextInputFormat will basically split the file at the first instance of "\n" found after byte i*N/W (for i from 1 to W-1).
For this reason, having other "\n" around would confuse Hadoop and it will give you incomplete records.
As an alternative, I wouldn't recommend it, but if you really wanted you could use a character other than "\n" by configuring the property "textinputformat.record.delimiter" when reading the file through hadoop/hive, using a character that won't be in JSON (for instance, \001 or CTRL-A is commonly used by Hive as a field delimiter) but that can be tricky since it has to also be supported by the SerDe.
Also, if you change the record delimiter, anybody who copies/uses the file on HDFS must be aware of the delimiter, or they won't be able to parse it correctly, and will need special code to do it, while keeping "\n" as a delimiter, the files will still be normal text files and can be used by other tools.
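For illustration, the property mentioned above can be set for a session before querying (a sketch only; the table name is hypothetical, and the exact escaping of a control character such as \001 may need adjusting for your Hadoop/Hive version):

-- use a CTRL-A (\001) record delimiter instead of "\n"
SET textinputformat.record.delimiter='\u0001';
SELECT * FROM social_posts LIMIT 10;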
As for the SerDe, I'd recommend this one, with the disclaimer that I wrote it :)
https://github.com/rcongiu/Hive-JSON-Serde
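For completeness, a minimal sketch of a table definition using that SerDe (table name, columns, and location are illustrative):

CREATE EXTERNAL TABLE social_posts (
  id STRING,
  post STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/data/social_posts';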

Load a JSON file from the BigQuery command line

Is it possible to load data from a JSON file (not just CSV) using the BigQuery command line tool? I am able to load a simple JSON file using the GUI; however, the command line tool assumes CSV, and I don't see any documentation on how to specify JSON.
Here's the simple JSON file I'm using:
{"col":"value"}
With schema
col:STRING
As of version 2.0.12, bq does allow uploading newline-delimited JSON files. This is an example command that does the job:
bq load --source_format NEWLINE_DELIMITED_JSON datasetName.tableName data.json schema.json
As mentioned above, "bq help load" will give you all of the details.
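For the single-column example from the question, the two files could look like this (a sketch; the schema file uses the JSON-array form described in the help output further below):

data.json:
{"col": "value"}

schema.json:
[
  {"name": "col", "type": "STRING"}
]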
1) Yes you can.
2) The documentation is here. Go to step 3, "Upload the table", in the documentation.
3) You have to use the --source_format flag to tell bq that you are uploading a JSON file and not a CSV.
4) The complete command structure is:
bq load [--source_format=NEWLINE_DELIMITED_JSON] [--project_id=your_project_id] destination_data_set.destination_table data_source_uri table_schema
bq load --source_format=NEWLINE_DELIMITED_JSON --project_id=my_project_bq dataset_name.bq_table_name gs://bucket_name/json_file_name.json path_to_schema_on_your_machine
5) You can find other bq load variants with:
bq help load
It did not support JSON-formatted data loading as of bq version 2.0.9, the latest at the time of this answer. Here is the documentation (bq help load) for the load command:
USAGE: bq [--global_flags] <command> [--command_flags] [args]
load Perform a load operation of source into destination_table.
Usage:
load <destination_table> <source> [<schema>]
The <destination_table> is the fully-qualified table name of table to create, or append to if the table already exists.
The <source> argument can be a path to a single local file, or a comma-separated list of URIs.
The <schema> argument should be either the name of a JSON file or a text schema. This schema should be omitted if the table already has one.
In the case that the schema is provided in text form, it should be a comma-separated list of entries of the form name[:type], where type will default
to string if not specified.
In the case that <schema> is a filename, it should contain a single array object, each entry of which should be an object with properties 'name',
'type', and (optionally) 'mode'. See the online documentation for more detail:
https://code.google.com/apis/bigquery/docs/uploading.html#createtable
Note: the case of a single-entry schema with no type specified is
ambiguous; one can use name:string to force interpretation as a
text schema.
Examples:
bq load ds.new_tbl ./info.csv ./info_schema.json
bq load ds.new_tbl gs://mybucket/info.csv ./info_schema.json
bq load ds.small gs://mybucket/small.csv name:integer,value:string
bq load ds.small gs://mybucket/small.csv field1,field2,field3
Arguments:
destination_table: Destination table name.
source: Name of local file to import, or a comma-separated list of
URI paths to data to import.
schema: Either a text schema or JSON file, as above.
Flags for load:
/usr/local/bin/bq:
--[no]allow_quoted_newlines: Whether to allow quoted newlines in CSV import data.
-E,--encoding: <UTF-8|ISO-8859-1>: The character encoding used by the input file. Options include:
ISO-8859-1 (also known as Latin-1)
UTF-8
-F,--field_delimiter: The character that indicates the boundary between columns in the input file. "\t" and "tab" are accepted names for tab.
--max_bad_records: Maximum number of bad records allowed before the entire job fails.
(default: '0')
(an integer)
--[no]replace: If true erase existing contents before loading new data.
(default: 'false')
--schema: Either a filename or a comma-separated list of fields in the form name[:type].
--skip_leading_rows: The number of rows at the beginning of the source file to skip.
(an integer)
gflags:
--flagfile: Insert flag definitions from the given file into the command line.
(default: '')
--undefok: comma-separated list of flag names that it is okay to specify on the command line even if the program does not define a flag with that name.
IMPORTANT: flags in this list that have arguments MUST use the --flag=value format.
(default: '')