How to load ALL the columns from a *.csv into Neo4j nodes

Suppose I need to load a csv file c:\myData.csv
alfa,beta,gamma
0001,1000,thousant
0002,2000,two-K
...
into nodes like
(:myData {alfa: 0001, beta: 1000, gamma: 'thousant'})
(:myData {alfa: 0002, beta: 2000, gamma: 'two-K'})
Is there a way to import ALL the columns into properties without specifying them one by one?
Something like
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line set line:myData create line
or
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line create (:myData {line.*})

The following worked for me after trying different options (Neo4j 3.3.2):
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///apples.csv' AS appleAllLineProperties
CREATE (apple:Apple)
SET apple += appleAllLineProperties
A couple of observations:
CREATE (apple: {appleAllLineProperties}) results in an error, since Neo4j expects appleAllLineProperties to be a parameter - which also isn't valid in this position.
Neo4j expects the file to be in the following folder
C:\Users\\AppData\Roaming\Neo4j Desktop\Application\neo4jDatabases\database-\installation-3.3.2\import

You can use
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line
CREATE (m:MyData)
SET m = line
or, to update existing nodes,
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line
MATCH (m:MyData {id: line.id})
SET m += line
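One caveat: LOAD CSV hands every field to Cypher as a string, so properties imported this way are all string-typed. A minimal sketch (against the myData.csv sample above) that still imports all the columns but converts one of them afterwards:
LOAD CSV WITH HEADERS FROM 'file:/c:/myData.csv' AS line
CREATE (m:MyData)
SET m = line, m.beta = toInteger(line.beta)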

Related

CSV Data Set Config not looping

I'm using v5.1.1 of JMeter and attempting to use the "CSV Data Set Config". The file is read correctly as I can tell from the Debug Sampler/Results Tree, but the file is not being read line by line. In other words, it reads the first line and never proceeds to the next line for processing.
I would like to use the data inside the CSV to iterate over a series of HTTP Requests to an external API. I currently have a single thread with only the "CSV Data Set Config" and "HTTP Request".
Do I need to wrap this with a ForEach controller or another looping construct? Perhaps I'm missing it, but I do not see anything in the documentation that would indicate it's necessary.
Thanks
You don't need to wrap this in a ForEach controller. The first line of the CSV file holds the variable names:
Let's say your csv file looks like
foo, bar
1, John
2, George
3, Laura
If you then use an HTTP Request sampler, ${foo} and ${bar} will be iterated sequentially, one CSV line per iteration. However, please be mindful of the CSV Data Set Config options (sharing mode and the recycle/stop-on-EOF settings), as they control this behaviour.
By default the CSV Data Set Config doesn't trigger any "looping"; it reads the next line from the CSV file for each thread (virtual user) on each iteration.
So if you want to see more values from the CSV file, either add more users, more loops, or both.
Given this CSV file:
line1
line2
line3
a CSV Data Set Config pointing at it, and a Thread Group running more than one thread and/or iteration, each thread reads the next line on each iteration. (The __threadNum() function can be used to visualize the current virtual user number and the ${__jm__Thread Group__idx} pre-defined variable to show the current Thread Group iteration; the original screenshots of the CSV Data Set Config, Thread Group setup, and resulting values are not reproduced here.)
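For example (a worked sketch, assuming the JMeter defaults of sharing mode "All threads" and Recycle on EOF = True): with 2 threads and 2 loops, the file above is read four times, and the lines are handed out in file order across the threads - line1, line2, line3, then line1 again once the end of file is recycled.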
Check out the JMeter Parameterization - The Complete Guide article for more information on the various approaches to parameterizing JMeter tests with external data sources.

How to set the header from a single file when reading multiple csv files with spark?

I have multiple .csv files with the same format. They are named like file_#.csv, and the header is only in the first file (file_1.csv).
I read these files with Spark using this code:
spark.read.csv('*.csv', header=True)
When I show the result, the header is not the header of the first file; it is one of the data rows.
How can I tell Spark which file contains the header?
If you know the file that has the header row, then you can generate the schema by reading the schema from the header file and then use the same schema to read all other files.
from pyspark.sql import functions as F

df1 = spark.read.csv('a.csv', header=True)              # column names come from the file with the header
header = spark.read.csv('a.csv', header=False).first()  # the header line itself, read as a data row
df2 = (spark.read.schema(df1.schema).csv('*.csv', header=False)
       .filter(F.col(df1.columns[0]) != header[0]))     # drop the header row from the data
The code also removes the header line from the data. You can improve upon the filter condition if a few fields suffice to distinguish the header from the data.
This is not possible in any generic, elegant way using the standard spark.read APIs.

NEO4J Load csv not working

I use the following Neo4J query to attempt to load from a .csv placed in the 'import' directory of the relevant database
load csv with headers from "file:///fb4.csv" as line with line
create (:Entry {name:"Co-ordinates", X:toInteger(line.`X`), Y:toInteger(line.`Y`), Z:toInteger(line.`Z`), rock_type:line.`GM fault block 4`})
The file 'fb4.csv' from which I load it from has the following first few lines:
# Exported from Leapfrog Geo - UTF-8 encoding
X,Y,Z,GM fault block 4
1492275,5215985,165,Enys Formation
1492285,5215985,165,Enys Formation
After waiting the roughly 40 seconds the query needs to run completely, I end up with 4.6 million co-ordinate nodes with only the 'name' property set - none of the others are set. That is, none of the fields imported from fb4.csv are set at all.
How does one sort this problem out properly?
The load csv statement does not process the first line as a comment, but interprets it as a header:
load csv with headers from "file:///fb4.csv" as line with line
return line
==>
{
"# Exported from Leapfrog Geo - UTF-8 encoding": "X"
}
{
"# Exported from Leapfrog Geo - UTF-8 encoding": "1492275"
}
{
"# Exported from Leapfrog Geo - UTF-8 encoding": "1492285"
}
Remove the comment line from the csv-file.
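If you cannot edit the file, a workaround (a sketch: read without headers, skip the comment and header rows by position, and address the columns by index) is:
load csv from "file:///fb4.csv" as row
with row skip 2
create (:Entry {name:"Co-ordinates", X:toInteger(row[0]), Y:toInteger(row[1]), Z:toInteger(row[2]), rock_type:row[3]})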

Spark - How to write a single csv file WITHOUT folder?

Suppose that df is a dataframe in Spark. The way to write df into a single CSV file is
df.coalesce(1).write.option("header", "true").csv("name.csv")
This will write the dataframe into a CSV file contained in a folder called name.csv but the actual CSV file will be called something like part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv.
I would like to know if it is possible to avoid the folder name.csv and to have the actual CSV file called name.csv and not part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv. The reason is that I need to write several CSV files which I will later read together in Python, and my Python code relies on the actual CSV names and needs all the individual CSV files in a single folder (not a folder of folders).
Any help is appreciated.
A possible solution is to convert the Spark dataframe to a pandas dataframe and save it as CSV:
df.toPandas().to_csv("<path>/<filename>")
EDIT: As caujka and snark suggest, this only works for small dataframes that fit into the driver's memory. It is fine for real cases where you want to save aggregated data or a sample of the dataframe; don't use this method for big datasets.
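If you want a rough safety net first (a sketch; the row-count threshold is an arbitrary assumption, tune it to your driver's memory):
# hypothetical guard before collecting everything onto the driver
if df.count() <= 1_000_000:
    df.toPandas().to_csv("<path>/<filename>", index=False)
else:
    raise ValueError("dataframe too large to collect on the driver")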
If you want to use only the Python standard library, here is an easy function that writes to a single file. You don't have to mess with temp files or go through another directory.
import csv

def spark_to_csv(df, file_path):
    """Converts spark dataframe to a CSV file on the driver"""
    with open(file_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=df.columns)
        writer.writeheader()                 # header row from the dataframe's columns
        for row in df.toLocalIterator():     # stream rows through the driver
            writer.writerow(row.asDict())
If the result size is comparable to the Spark driver node's free memory, you may have problems converting the dataframe to pandas.
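For example, called with a hypothetical output path:
spark_to_csv(df, "/tmp/name.csv")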
I would tell Spark to save to some temporary location, and then copy the individual csv file into the desired folder. Something like this:
import os
import shutil
TEMPORARY_TARGET="big/storage/name"
DESIRED_TARGET="/export/report.csv"
df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)
part_filename = next(entry for entry in os.listdir(TEMPORARY_TARGET) if entry.startswith('part-'))
temporary_csv = os.path.join(TEMPORARY_TARGET, part_filename)
shutil.copyfile(temporary_csv, DESIRED_TARGET)
If you work with Databricks, Spark operates on paths like dbfs:/mnt/...; to use Python's file operations on them, you need to change the path to /dbfs/mnt/..., or (more native to Databricks) replace shutil.copyfile with dbutils.fs.cp.
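For instance (a sketch with a hypothetical mount point):
spark_path = "dbfs:/mnt/big/storage/name"            # the path as Spark sees it
local_path = spark_path.replace("dbfs:/", "/dbfs/")  # the same location for Python file APIs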
A more Databricks-native solution:
import os

TEMPORARY_TARGET = "dbfs:/my_folder/filename"
DESIRED_TARGET = "dbfs:/my_folder/filename.csv"
spark_df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)
# dbutils.fs.ls lists _SUCCESS, _committed, _started and then the part file,
# so [3][1] is typically the name of the single part file written by coalesce(1)
temporary_csv = os.path.join(TEMPORARY_TARGET, dbutils.fs.ls(TEMPORARY_TARGET)[3][1])
dbutils.fs.cp(temporary_csv, DESIRED_TARGET)
Note: if you are working with a Koalas dataframe, you can replace spark_df with koalas_df.to_spark().
For pyspark, you can convert to a pandas dataframe and then save it:
df.toPandas().to_csv("<path>/<filename.csv>", header=True, index=False)
There is no Spark dataframe API which writes/creates a single file instead of a directory as the result of a write operation.
Both options below will create one single data file inside a directory along with the standard files (_SUCCESS, _committed, _started).
1. df.coalesce(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
2. df.repartition(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
If you don't use coalesce(1) or repartition(1), and instead take advantage of Spark's parallelism for writing files, it will create multiple data files inside the directory.
You then need to write a function on the driver which combines all the data file parts into a single file (cat part-00000* singlefilename) once the write operation is done, as sketched below.
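A minimal sketch of such a merge step (assuming the part files live on a local filesystem and were written without headers, so nothing gets duplicated):
import glob
import shutil

def merge_parts(folder, out_path):
    # concatenate all part files, in name order, into a single CSV
    with open(out_path, "wb") as out:
        for part in sorted(glob.glob(folder + "/part-*")):
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)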
I had the same problem and used Python's tempfile.NamedTemporaryFile to solve this.
import glob
import boto3
from tempfile import NamedTemporaryFile

s3 = boto3.resource('s3')
with NamedTemporaryFile() as tmp:
    # Spark writes a directory of part files at tmp.name, so overwrite the empty temp file
    df.coalesce(1).write.format('csv').options(header=True).mode('overwrite').save(tmp.name)
    part = glob.glob(tmp.name + '/part-*.csv')[0]  # the single part produced by coalesce(1)
    s3.meta.client.upload_file(part, S3_BUCKET, S3_FOLDER + 'name.csv')
See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html for more info on upload_file().
Create a temp folder inside the output folder, copy the part-00000* file into the output folder under the desired file name, then delete the temp folder. Here is a Python snippet that does this in Databricks:
fpath = output + '/' + 'temp'

def file_exists(path):
    try:
        dbutils.fs.ls(path)
        return True
    except Exception as e:
        if 'java.io.FileNotFoundException' in str(e):
            return False
        else:
            raise

if file_exists(fpath):
    dbutils.fs.rm(fpath, True)  # clear any leftover temp folder (recursive)
df.coalesce(1).write.option("header", "true").csv(fpath)
fname = [x.name for x in dbutils.fs.ls(fpath) if x.name.startswith('part-00000')]
dbutils.fs.cp(fpath + "/" + fname[0], output + "/" + "name.csv")
dbutils.fs.rm(fpath, True)
You can go with pyarrow, as it provides a file handle for the HDFS file system; you can write your content to it as with ordinary file writing. Code example:
import pyarrow.fs as fs

HDFS_HOST: str = 'hdfs://<your_hdfs_name_service>'
FILENAME_PATH: str = '/user/your/hdfs/file/path/<file_name>'

hadoop_file_system = fs.HadoopFileSystem(host=HDFS_HOST)
with hadoop_file_system.open_output_stream(path=FILENAME_PATH) as f:
    f.write("Hello from pyarrow!".encode())
This will create a single file with the specified name.
To initialize pyarrow you should define the CLASSPATH environment variable properly: set it to the output of hadoop classpath --glob (e.g. export CLASSPATH=$(hadoop classpath --glob)).
df.write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
You can use this, and if you don't want to give the name of the CSV every time, you can write a helper function or build an array of CSV file names and pass them in; it will work.

Load csv file with integers in Octave 3.2.4 under Windows

I am trying to import into Octave a file (e.g. data.txt) containing 2 columns of integers, such as:
101448,1077
96906,924
105704,1017
I use the following command:
data = load('data.txt')
However, the "data" matrix that results has a 1 x 1 dimension, with all the content of the data.txt file saved in just one cell. If I adjust the numbers to look like floats:
101448.0,1077.0
96906.0,924.0
105704.0,1017.0
the loading works as expected, and I obtain a matrix with 3 rows and 2 columns.
I looked at the various options that can be set for the load command but none of them seem to help. The data file has no headers, just plain integers, comma separated.
Any suggestions on how to load this type of data? How can I force Octave to cast the data as numeric?
The load function is not meant to read csv files; it is meant to load files saved from Octave itself, which define variables.
To read a csv file use csvread("data.txt"). Also, 3.2.4 is a very old version that is no longer supported; you should upgrade.
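For the sample file above this looks as follows (csvread returns a numeric matrix of doubles, so the integers are cast to numbers as desired):
data = csvread("data.txt");  % 3x2 numeric matrix
data(1, :)                   % first row: 101448 1077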