Accessing a csv file placed in HDFS using spark-csv

I have placed a csv file into the HDFS filesystem using the hadoop fs -put command. I now need to access the csv file from PySpark with spark-csv. Its format is something like
`plaintext_rdd = sc.textFile('hdfs://x.x.x.x/blah.csv')`
I am a newbie to HDFS. How do I find the address to put in place of hdfs://x.x.x.x?
Here's the output when I entered:
hduser@remus:~$ hdfs dfs -ls /input
Found 1 items
-rw-r--r-- 1 hduser supergroup 158 2015-06-12 14:13 /input/test.csv
Any help is appreciated.

You need to provide the full path of your file in HDFS, and the URL is defined in your Hadoop configuration.
Check your core-site.xml and hdfs-site.xml (the fs.defaultFS / fs.default.name property) to get the details about the URL.
An easy way to find the URL is to open your HDFS web UI in a browser and read the path from there.
If you are using a path on the local file system, use file:///<your path>
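To find the URL programmatically instead, here is a minimal sketch (the core-site.xml location is an assumption; adjust it to your install) that reads the fs.defaultFS value and builds the full path for sc.textFile:
# Sketch: read the default FS URL from core-site.xml; the path below is an
# assumed common location, not necessarily yours.
import xml.etree.ElementTree as ET
conf = ET.parse('/usr/local/hadoop/etc/hadoop/core-site.xml')
default_fs = next(
    prop.findtext('value')
    for prop in conf.getroot().iter('property')
    if prop.findtext('name') in ('fs.defaultFS', 'fs.default.name')
)
# e.g. default_fs == 'hdfs://localhost:9000'
plaintext_rdd = sc.textFile(default_fs + '/input/test.csv')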

Try specifying the absolute path without hdfs://:
plaintext_rdd = sc.textFile('/input/test.csv')
When Spark runs on the same cluster as HDFS, it uses hdfs:// as the default FS.
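If you do want the explicit form, a hedged example (host and port are placeholders; take the real value from fs.defaultFS in core-site.xml):
plaintext_rdd = sc.textFile('hdfs://localhost:9000/input/test.csv')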

Start the spark shell or spark-submit by pointing to a package which can read csv files, like below:
spark-shell --packages com.databricks:spark-csv_2.11:1.2.0
And in the spark code, you can read the csv file as below:
val data_df = sqlContext.read.format("com.databricks.spark.csv")
.option("header", "true")
.schema(<pass schema if required>)
.load(<location in HDFS/S3>)
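A rough PySpark equivalent, as a sketch (assuming pyspark was started with the same --packages flag, and using the test file from the question as the location):
# pyspark --packages com.databricks:spark-csv_2.11:1.2.0
data_df = sqlContext.read.format('com.databricks.spark.csv') \
    .option('header', 'true') \
    .load('hdfs:///input/test.csv')
data_df.show()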

Related

How is an HDFS directory by year, month and day created?

Following the question in this link, there is another question about creating the directory on Hadoop HDFS.
I am new to Hadoop/Flume and I have picked up a project which uses Flume to save csv data into HDFS. The setting for the Flume sink is as follows:
contract-snapshot.sinks.hdfs-sink-contract-snapshot.hdfs.path = /dev/wimp/contract-snapshot/year=%Y/month=%n/day=%e/snapshottime=%k%M
With this Flume setting, the corresponding csv file will be saved into HDFS, under the folder:
"/wimp/contract-snapshot/year=2020/month=6/day=10/snapshottime=1055/contract-snapshot.1591779548475.csv"
I am trying to set up the whole system locally. I have Hadoop installed locally on my Windows PC; how can I create a directory such as "/wimp/contract-snapshot/year=2020/month=6/day=10/snapshottime=1055/" on the local HDFS?
In the cmd terminal, the command:
hadoop fs -mkdir /wimp/contract-snapshot
can create a folder /wimp/contract-snapshot. However, the following command does not work in the cmd terminal:
hadoop fs -mkdir /wimp/contract-snapshot/year=2020
How to create hdfs directory by year, month, day?
hadoop fs -mkdir "/wimp/contract-snapshot/year=2020"
Adding quotation marks solves the problem.
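If you script this instead of typing it into cmd, passing the path as a single argument avoids the quoting problem entirely; a minimal sketch in Python (the path is the one from the question):
# Each argv element is passed to hadoop as-is, so no shell quoting is needed;
# -p also creates the parent directories in one go.
import subprocess
path = '/wimp/contract-snapshot/year=2020/month=6/day=10/snapshottime=1055'
subprocess.run(['hadoop', 'fs', '-mkdir', '-p', path], check=True)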

Cloudera quickstart: loading a csv file into HDFS from the terminal

I am new to all this as I am only in my second semester, and I just need help understanding a command. I am trying to load a local csv file into HDFS on Cloudera using the terminal. I have to use that data and work with Pig for an assignment. I have tried everything and it still gives me 'no such file or directory'. I have turned off safe mode, checked the directories and even made sure the file could be read. Here are the commands I have tried to load the data:
hadoop fs -copyFromLocal 2008.csv
hdfs dfs -copyFromLocal 2008.csv
hdfs dfs -copyFromLocal 2008.csv /user/root
hdfs dfs -copyFromLocal 2008.csv /home/cloudera/Desktop
Nothing at all has worked and it keeps giving me '2008.csv': no such file or directory. What could I do to fix this? Thank you very much.
I have to use that data and work with Pig for an assignment
You can run Pig without HDFS.
pig -x local
I have tried everything and it still gives me 'no such file or directory'
Well, that error is not from HDFS, it seems to be from your local shell.
ls shows you the files available in the current directory; those are the ones -copyFromLocal or -put can find without an absolute path.
For complete assurance about what you are copying, as well as where it goes, use full paths in both arguments. The second path is always HDFS when using those two flags.
Try this
hadoop fs -mkdir -p /user/cloudera # just in case
hadoop fs -copyFromLocal ./2008.csv /user/cloudera/
Or even
hadoop fs -copyFromLocal /home/cloudera/Desktop/2008.csv /user/cloudera/
What I think you are having issues with is that /user/root is not correct unless you are running commands as the root user, and neither is /home/cloudera/Desktop, because HDFS has no concept of a Desktop.
The default behavior without the second path is
hadoop fs -copyFromLocal <file> /user/$(whoami)/
(Without the trailing slash, or a pre-existing directory, it will copy <file> literally as a file, which can be unexpected in certain situations, for example when trying to copy a file into a user directory that doesn't exist yet.)
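A scripted version of the same advice, as a sketch (the file and user names are the ones from the question); it copies with full paths on both sides and lists the destination to confirm:
# Create the target, copy with absolute paths, then verify the result.
import subprocess
subprocess.run(['hadoop', 'fs', '-mkdir', '-p', '/user/cloudera'], check=True)
subprocess.run(['hadoop', 'fs', '-copyFromLocal',
                '/home/cloudera/Desktop/2008.csv', '/user/cloudera/'], check=True)
subprocess.run(['hadoop', 'fs', '-ls', '/user/cloudera'], check=True)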
I believe you already checked and made sure that 2008.csv exists. That's why I think the permissions on this file are not allowing you to copy it.
Try: sudo -u hdfs cat 2008.csv
If you get a permission denied error, this is your issue: fix the permissions of the file, or create a new one. If you again get a "no file" error, try using the whole path to the file, like:
hdfs dfs -copyFromLocal /user/home/csvFiles/2008.csv /user/home/cloudera/Desktop

I have a graph.db folder from neo4j. It contains a lot of neostore*.* files. How do I export a csv file from this?

I have a graph.db folder from neo4j. It contains a lot of neostore*.* files. How do I export a csv file from this?
Note: this graph.db was sent to me by a friend.
Download and install Neo4j if you haven't already
Move the graph.db directory that you have now into the data/ directory of the fresh Neo4j installation, replacing the existing graph.db directory in the fresh Neo4j instance. (Note: If you are using the desktop Neo4j application you can simply choose the location of your existing graph.db directory when starting Neo4j).
Start Neo4j server
To generate CSVs you have a few options:
Export from Neo4j Browser: with Neo4j running, open your web browser and navigate to http://localhost:7474. Execute a Cypher query, then click on the "Download" icon and choose "Export CSV" to download a CSV representation of the data returned.
neo4j-shell-tools: use neo4j-shell-tools to export the results of a Cypher query. Use -o file.csv to specify that output should be written to a CSV file.
See this blog post for more info.
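If you would rather script the export than click through the browser, here is a sketch using the official neo4j Python driver (the bolt URL, credentials, and query are assumptions; adjust them to your setup):
# Sketch: run a Cypher query and write the results to a CSV file.
# Assumes `pip install neo4j` and a running server with default local auth.
import csv
from neo4j import GraphDatabase
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))
with driver.session() as session, open('export.csv', 'w', newline='') as f:
    result = session.run('MATCH (n) RETURN n.name AS name LIMIT 100')
    writer = csv.writer(f)
    writer.writerow(result.keys())
    for record in result:
        writer.writerow(record.values())
driver.close()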

Importing CSV file into Hadoop

I am new to Hadoop. I have a file to import into Hadoop via the command line (I access the machine through SSH).
How can I import the file in hadoop?
How can I check afterward (command)?
Two steps to import a csv file:
Move the csv file to the hadoop sandbox (/home/username) using winscp or cyberduck.
Use the -put command to move the file from the local location to HDFS.
hdfs dfs -put /home/username/file.csv /user/data/file.csv
There are three flags that we can use to load data from the local machine into HDFS:
-copyFromLocal
We use this flag to copy data from the local file system to the Hadoop directory.
hdfs dfs -copyFromLocal /home/username/file.csv /user/data/file.csv
If the folder has not been created yet, we can create it (as the hdfs or root user):
hdfs dfs -mkdir /user/data
-put
As @Sam mentioned in the answer above, we also use the -put flag to copy data from the local file system to the Hadoop directory.
hdfs dfs -put /home/username/file.csv /user/data/file.csv
-moveFromLocal
We also use the -moveFromLocal flag to copy data from the local file system to the Hadoop directory, but this removes the file from the local directory:
hdfs dfs -moveFromLocal /home/username/file.csv /user/data/file.csv
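To answer the "How can I check afterward" part: list the target directory and print the file back, e.g. with hdfs dfs -ls and hdfs dfs -cat. A scripted sketch using the paths from the examples above:
# Verify the upload: list the directory, then print the file contents.
import subprocess
subprocess.run(['hdfs', 'dfs', '-ls', '/user/data'], check=True)
subprocess.run(['hdfs', 'dfs', '-cat', '/user/data/file.csv'], check=True)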

Cypher Neo4j Couldn't load the external resource

In a Windows environment, I'm trying to load a .csv file with the statement:
LOAD CSV WITH HEADERS FROM "file:///E:/Neo4j/customers.csv" AS row
It seems not to work properly and returns:
Couldn't load the external resource at:
file:/E:/Neo4j/Customers.csv
Neo.TransientError.Statement.ExternalResourceFailure
What am I doing wrong? Thanks in advance.
I was getting this error on Community Edition 3.0.1 on Mac OS X 10.10.
It appears that LOAD CSV file:/// looks for files in a predefined directory. One would think that the argument given to the Cypher statement is the full path, but that is not the case.
file:/// (in my situation) meant that neo4j appends the argument you give to a path that is already predefined, and then goes looking for that combined path.
The file:/// predefined directory, /Users/User/Documents/Neo4j/default.graphdb/import, did not exist entirely: in my computer's directory structure the "/import" folder was missing, as it was not created at install.
To fix this on my system, I created an "import" directory and put the file to be read into it. When I executed the Cypher load statement I put ONLY the name of the file to be read in the file argument, i.e.
LOAD CSV FROM "file:///data.csv" AS line
This worked for me.
It appears to be a security configuration. Here's the original answer I found: https://stackoverflow.com/a/37444571/327004
You can add the following setting in conf/neo4j.conf in order to bypass this:
dbms.security.allow_csv_import_from_file_urls=true
Or change the import directory dbms.directories.import=import
You can find the answer in the file
"C:\Users\Jack\AppData\Roaming\Neo4j Community Edition\neo4j.conf"
(the line "dbms.directories.import=import").
For version neo4j-community_windows-x64_3_1_1 you have to comment out this line, or you have to create the folder \import (which isn't created by the installation) and add your file to that folder.
It is written there that, for security reasons, file loads are only allowed from the \Documents\Neo4j\default.graphdb\import folder.
After commenting out # dbms.directories.import=import, you can execute e.g.:
LOAD CSV FROM "file:///C:/Users/Jack/Documents/products.csv" AS row
In neo4j.conf I didn't have to add/set
dbms.security.allow_csv_import_from_file_urls=true
On (Arch) Linux + neo4j-community-3.4.0-alpha09, edit $NEO4J_HOME/conf/neo4j.conf:
uncomment or add: dbms.security.allow_csv_import_from_file_urls=true
comment out: #dbms.directories.import=import
Restart neo4j (in terminal: neo4j restart), and reload the Neo4j Browser (http://localhost:7474/browser/) if you are using a web browser as your Neo4j interface/GUI.
Then, you should be able to load a csv from outside your $NEO4J_HOME/... directory
E.g.,
LOAD CSV WITH HEADERS FROM "file:///mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS ...
where my $NEO4J_HOME/ is /mnt/Vancouver/apps/neo4j/neo4j-community-3.4.0-alpha09/
LOAD CSV WITH HEADERS FROM "file:/mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS ...
also works, but not
LOAD CSV WITH HEADERS FROM "file://mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS...
or
LOAD CSV WITH HEADERS FROM "/mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS...
i.e. use ...file:/... or ...file:///...
It's probably a URL issue; try file:c:/path/to/data.csv
See my blog posts:
http://jexp.de/blog/2014/10/load-cvs-with-success/
http://jexp.de/blog/2014/06/load-csv-into-neo4j-quickly-and-successfully/
For the Ubuntu system, I placed the file in /usr/lib/neo4j, which solved the issue. In every other location I tried giving full permissions (777), but the problem remained the same. After going through another stackoverflow post, I realized that the file should be kept in the neo4j directory.
In the Neo4j Desktop, select the database you are using, go to the settings, and there you will find the solution... just comment out the "dbms.directories.import=import" line:
# This setting constrains all LOAD CSV import files to be under the import directory. Remove or comment it out to
# allow files to be loaded from anywhere in the filesystem; this introduces possible security problems. See the
# LOAD CSV section of the manual for details.
dbms.directories.import=import ### COMMENT THIS LINE
For macOS Mojave v 10.14.5:
Actually, I had to uncomment dbms.directories.import=import in ~/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-e2dd2a9c-d450-4639-861b-1e7e42b56b31/installation-3.5.5/conf/neo4j.conf and restart the service. Then it worked. All files have to be placed in the import directory.
Run the command: LOAD CSV WITH HEADERS FROM 'FILE:/<yourCSV>.csv' AS l RETURN l
I am using the Neo4j Desktop and, as others have said, the default graph database has a predefined import location. You can find the location by using the UI. If you put the CSV into the import directory, you can use the relative path directly from your load csv command.
Neo4j version is 3.1.1, OS is win10.
For me, LOAD CSV would read from Neo4j_Database_Location/testDB/import/artists.csv.
At first, I put the csv file on the path F:\code\java\helloworld\artists.csv, and my cypher statement was
LOAD CSV FROM 'file:///F:\\code\\java\\helloworld\\artists.csv' AS line
CREATE(:Artist {name:line[1],year:toInt(line[2])})
Then I got the error message returned as follows:
Couldn't load the external resource at: file:/D:/Neo4j/db/testDB/import/code/java/helloworld/artists.csv
It means neo4j itself concatenates the file path.
"D:/Neo4j/db/testDB/import/" is the Neo4j database location, and "code/java/helloworld/artists.csv" is the csv file location.
For example, I installed Neo4j on the path D:\Neo4j\Neo4j CE 3.1.1, and the database location is D:\Neo4j\db. I put the CSV file on the path D:\Neo4j\db\testDB\import\artist.csv. If you don't have the folder "import" on the path, you should create it yourself and put your file in it.
Then, put your csv file in the path and input the cypher statement:
LOAD CSV FROM 'file:///artist.csv' AS line
CREATE(:Artist {name:line[1],year:toInt(line[2])})
In a word, once you put the CSV file in the right path, the problem is solved.
Related explanation in the LOAD CSV section of the developer manual:
If dbms.directories.import is set to the default value import, using the above URLs in LOAD CSV would read from /import/myfile.csv and import/myproject/myfile.csv respectively.
If it is set to /data/csv, using the above URLs in LOAD CSV would read from /data/csv/myfile.csv and /data/csv/myproject/myfile.csv respectively.
Set the property "dbms.directories.import=import".
Create the folder 'import' explicitly at "/Users/User/Documents/Neo4j/default.graphdb/", because the predefined directory did not exist entirely.
Place the csv data set in the import folder.
Then run the code, like: LOAD CSV FROM "file:///C:/customers.csv" AS row
In addition, after you run the line you can analyze what is going wrong in the code section to get a better understanding.
Put your dataset into the import directory under the neo4j-community path.
Then re-run your command.
Add your csv file to the import folder of the neo4j installation. Guide to do this:
Open neo4j and start the graph of your project.
Then, in the open-folders tab, open the import folder.
Copy your csv file into this folder.
Use that path in your load syntax, like file:///C:/neo4j_module_datasets/test.csv, since your neo4j is running in the C drive.
Use the following syntax:
LOAD CSV WITH HEADERS FROM "file:///my_collection.csv" AS row CREATE (n:myCollection) SET n = row
If you are running in Docker, run these commands before running the above query:
docker run \
-p=7474:7474 \
-p=7687:7687 \
-v=$HOME/neo4j/data:/data \
-v=$HOME/neo4j/logs:/logs \
-v=$HOME/local_import_dir:/var/lib/neo4j/import \
neo4j:3.0
Then,
sudo cp my_collection.csv /home/bajju/local_import_dir/
One of the following should solve the LOAD CSV errors (assuming you have dbms.security.allow_csv_import_from_file_urls=true)
If using Linux, check the permissions on the file. Change them using chmod 777 file_name.csv.
Check that the format of the file and of its contents is correct.
The easiest way (beware of security) is to serve your directory over http and use the http import:
In the command line, go to the folder where the csv files are located.
Run one of the following, depending on your Python environment:
Python 2
$ python -m SimpleHTTPServer 8000
Python 3
$ python3 -m http.server 8000
Now you can load your files from localhost:
LOAD CSV FROM 'http://localhost:8000/mycsvfile.csv' AS row
return row
You can actually expose files on one host and load them where your DB is running, by exposing the folder and replacing localhost with your IP.
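Putting it together, a sketch that issues the LOAD CSV over the local HTTP server from the neo4j Python driver (the bolt URL, credentials, and :Row label are assumptions):
# Sketch: load the HTTP-served CSV into hypothetical (:Row) nodes.
# Assumes `pip install neo4j`, default local auth, and the http.server
# from above still running on port 8000.
from neo4j import GraphDatabase
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))
with driver.session() as session:
    session.run(
        "LOAD CSV WITH HEADERS FROM 'http://localhost:8000/mycsvfile.csv' AS row "
        "CREATE (n:Row) SET n = row"
    )
driver.close()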