Load JSON file in Hive using JSON SerDe

I am trying to load a JSON file into Hive on Hadoop using a JSON SerDe. I have uploaded the jar to Hadoop but get an error while running a Hive command.
I have uploaded the JSON SerDe jar file to the /apps/hive/warehouse/lib path. Now, when I try to run this command
ADD JAR /apps/hive/warehouse/lib/json-serde-1.3-jar-with-dependencies.jar;
I got this error
H110 Unable to submit statement. Error while processing statement:
/apps/hive/warehouse/lib/json-serde-1.3.7-SNAPSHOT-jar-with-dependencies.jar
does not exist [ERROR_STATUS]

Looks like your jar is in an HDFS location. Use:
add jar hdfs:///apps/hive/warehouse/lib/json-serde-1.3-jar-with-dependencies.jar;

Try using a URL scheme (add file://) before the filename:
ADD JAR file:///apps/hive/warehouse/lib/json-serde-1.3-jar-with-dependencies.jar;
You should also be able to add a jar straight from a repository if your Hive version is 1.2.0 or above.
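Once the jar is added, a minimal table definition using this SerDe might look like the sketch below; the table name, columns, and location are illustrative, while org.openx.data.jsonserde.JsonSerDe is the SerDe class shipped in the json-serde-*-jar-with-dependencies.jar builds:
-- illustrative table; adjust the columns to match your JSON keys
CREATE EXTERNAL TABLE json_events (
  id STRING,
  payload STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/apps/hive/warehouse/json_events';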

Related

How to ingest a .JSON file using Flume?

I have a .json file with data and I need to ingest/send all that information to HDFS using Flume. Do you know a script for the console to do that?
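One way to do this is with a spooling-directory source feeding an HDFS sink. Below is a minimal agent configuration sketch; the agent name, local directory, and namenode URL are all illustrative:
# json-agent.conf: minimal Flume agent for moving JSON files into HDFS
a1.sources = src1
a1.channels = ch1
a1.sinks = sink1
# watch a local directory for new files
a1.sources.src1.type = spooldir
a1.sources.src1.spoolDir = /home/user/json-input
a1.sources.src1.channels = ch1
a1.channels.ch1.type = memory
# write the events to HDFS as plain text
a1.sinks.sink1.type = hdfs
a1.sinks.sink1.hdfs.path = hdfs://namenode:8020/user/flume/json
a1.sinks.sink1.hdfs.fileType = DataStream
a1.sinks.sink1.channel = ch1
Start it with: flume-ng agent --conf conf --conf-file json-agent.conf --name a1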

How to install and use po2json?

I want to convert some WordPress .po language files to .json format. I used wp-cli, but the PO files were converted to multiple JSON files, and I need a single JSON file.
So I installed po2json using:
npm install po2json
I am getting this error:
C:\Users\Mehdi\Desktop\po2json 1.0.0>po2json translation.po translation.json
'po2json' is not recognized as an internal or external command,
operable program or batch file.
Can anybody help me to use po2json easily?
I have tried to install https://openbase.com/js/@myrotvorets/po2json using:
npm i @myrotvorets/po2json
And finally I got the output with the following command:
po2json sourcefile.po > destfile.json
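For reference, the "'po2json' is not recognized" error usually means the binary of a locally installed package is not on PATH; a global install or npx avoids that. A sketch, assuming npm is set up:
:: install globally so the po2json CLI lands on PATH
npm install -g po2json
po2json translation.po translation.json
:: or run the locally installed binary via npx, without a global install
npx po2json translation.po translation.json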

Accessing csv file placed in hdfs using spark

I have placed a csv file into the HDFS filesystem using the hadoop fs -put command. I now need to access the csv file from pyspark. Its format is something like
`plaintext_rdd = sc.textFile('hdfs://x.x.x.x/blah.csv')`
I am a newbie to hdfs. How do I find the address to be placed in hdfs://x.x.x.x?
Here's the output when I entered
hduser@remus:~$ hdfs dfs -ls /input
Found 1 items
-rw-r--r-- 1 hduser supergroup 158 2015-06-12 14:13 /input/test.csv
Any help is appreciated.
You need to provide the full path of your files in HDFS; the URL is defined in your Hadoop configuration (core-site or hdfs-site).
Check your core-site.xml and hdfs-site.xml to get the details about the URL.
An easy way to find the URL is to access your HDFS from your browser and take the path from there.
If you are using an absolute path on your local file system, use file:///<your path>
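As a hedged illustration, the relevant entry in core-site.xml typically looks like the following; the host and port here are placeholders:
<!-- core-site.xml: fs.defaultFS (fs.default.name on older Hadoop 1.x)
     holds the URL that replaces hdfs://x.x.x.x in your paths -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>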
Try specifying the absolute path without hdfs://:
plaintext_rdd = sc.textFile('/input/test.csv')
When Spark runs on the same cluster as HDFS, it uses hdfs:// as the default FS.
Start the Spark shell or spark-submit by pointing to the package that can read csv files, like below:
spark-shell --packages com.databricks:spark-csv_2.11:1.2.0
And in the spark code, you can read the csv file as below:
val data_df = sqlContext.read.format("com.databricks.spark.csv")
.option("header", "true")
.schema(<pass schema if required>)
.load(<location in HDFS/S3>)
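Since the question uses pyspark, a rough Python equivalent is sketched below, assuming pyspark was started with the same --packages argument and using the /input/test.csv path from the listing above:
# started with: pyspark --packages com.databricks:spark-csv_2.11:1.2.0
data_df = (sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .load("hdfs:///input/test.csv"))  # or simply '/input/test.csv'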

Cypher Neo4j Couldn't load the external resource

In a Windows environment, I'm trying to load a .csv file with statement:
LOAD CSV WITH HEADERS FROM "file:///E:/Neo4j/customers.csv" AS row
It seems not to work properly and returns:
Couldn't load the external resource at:
file:/E:/Neo4j/Customers.csv
Neo.TransientError.Statement.ExternalResourceFailure
What am I doing wrong? Thanks in advance.
I was getting this error on Community Edition 3.0.1 on Mac OS X 10.10
It appears that LOAD CSV with file:/// looks for files in a predefined directory. One would think that the argument given to the Cypher statement would be the full path, but that is not the case.
The file:/// prefix, in my situation, meant that Neo4j would append the argument you gave to a path that was already predefined, and then go look for that combined path.
That predefined directory, /Users/User/Documents/Neo4j/default.graphdb/import, did not exist entirely: in my computer's directory structure the /import folder was missing, as it is not created at install.
To fix this on my system, I created an import directory and put the file to be read in that directory. When I executed the Cypher load statement, I put ONLY the name of the file to be read in the file argument, i.e.
LOAD CSV FROM "file:///data.csv" AS row
This worked for me.
It appears to be a security configuration. Here's the original answer I found: https://stackoverflow.com/a/37444571/327004
You can add the following setting in conf/neo4j.conf in order to bypass this:
dbms.security.allow_csv_import_from_file_urls=true
Or change the import directory dbms.directories.import=import
You can find the answer in the file
"C:\Users\Jack\AppData\Roaming\Neo4j Community Edition\neo4j.conf"
(above "dbms.directories.import=import").
For version neo4j-community_windows-x64_3_1_1 you have to comment out this line, or you have to create the \import folder (which isn't created by the installation) and add your file into that folder.
It's written there that, for security reasons, they only allow file loads from the \Documents\Neo4j\default.graphdb\import folder.
After commenting out # dbms.directories.import=import, you can execute e.g.:
LOAD CSV FROM "file:///C:/Users/Jack/Documents/products.csv" AS row
In neo4j.conf I didn't have to add or set
dbms.security.allow_csv_import_from_file_urls=true
On (Arch) Linux + neo4j-community-3.4.0-alpha09, edit $NEO4J_HOME/conf/neo4j.conf:
uncomment or add: dbms.security.allow_csv_import_from_file_urls=true
comment out: #dbms.directories.import=import
Restart neo4j (in terminal: neo4j restart), and reload the Neo4j Browser (http://localhost:7474/browser/) if you are using a web browser as your Neo4j interface/GUI.
Then, you should be able to load a csv from outside your $NEO4J_HOME/... directory
E.g.,
LOAD CSV WITH HEADERS FROM "file:///mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS ...
where my $NEO4J_HOME/ is /mnt/Vancouver/apps/neo4j/neo4j-community-3.4.0-alpha09/
LOAD CSV WITH HEADERS FROM "file:/mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS ...
also works, but not
LOAD CSV WITH HEADERS FROM "file://mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS...
or
LOAD CSV WITH HEADERS FROM "/mnt/Vancouver/Programming/data/metabolism/practice/a.csv" AS...
i.e. use ...file:/... or ...file:///...
It's probably a URL issue; try file:c:/path/to/data.csv
See my blog posts:
http://jexp.de/blog/2014/10/load-cvs-with-success/
http://jexp.de/blog/2014/06/load-csv-into-neo4j-quickly-and-successfully/
For an Ubuntu system, I placed the file in /usr/lib/neo4j, which solved the issue. In every other location I tried giving full permissions (777), but the problem remained the same. After going through another Stack Overflow post, I realized that the file should be kept in the neo4j directory.
In the Neo4j Desktop, select the database you are using, go to the settings, and there you will find the solution... just comment out the "dbms.directories.import=import" line:
# This setting constrains all LOAD CSV import files to be under the import directory. Remove or comment it out to
# allow files to be loaded from anywhere in the filesystem; this introduces possible security problems. See the
# LOAD CSV section of the manual for details.
dbms.directories.import=import ### COMMENT THIS LINE
For macOS Mojave v10.14.5:
Actually, I had to uncomment dbms.directories.import=import in ~/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-e2dd2a9c-d450-4639-861b-1e7e42b56b31/installation-3.5.5/conf/neo4j.conf and restart the service. Then it worked. All files have to be placed in the import directory.
Run the command: LOAD CSV WITH HEADERS FROM 'FILE:/<yourCSV>.csv' AS l RETURN l
I am using the Neo4j Desktop and, as others have said, the default graph database has a predefined import location. You can find the location by using the UI. If you put the CSV into the import directory, then you can use the relative path directly in your LOAD CSV command.
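For example, a minimal sketch, assuming a file named customers.csv was copied into that import directory:
LOAD CSV WITH HEADERS FROM "file:///customers.csv" AS row
RETURN row LIMIT 5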
Neo4j version is 3.1.1, OS is win10.
For me, LOAD CSV would read from Neo4j_Database_Location/testDB/import/artists.csv.
At first, I put the csv file at the path F:\code\java\helloworld\artists.csv, and my Cypher statement was
LOAD CSV FROM 'file:///F:\\code\\java\\helloworld\\artists.csv' AS line
CREATE(:Artist {name:line[1],year:toInt(line[2])})
Then I got the following error message:
Couldn't load the external resource at: file:/D:/Neo4j/db/testDB/import/code/java/helloworld/artists.csv
This means Neo4j itself concatenates the file path.
"D:/Neo4j/db/testDB/import/" is the Neo4j database location, and "code/java/helloworld/artists.csv" is the csv file location.
For example, I installed Neo4j at the path D:\Neo4j\Neo4j CE 3.1.1, and the database location is D:\Neo4j\db. I put the CSV file at the path D:\Neo4j\db\testDB\import\artist.csv. If you don't have the folder "import" on that path, you should create it yourself and put your file in it.
Then, put your csv file in that path and enter the Cypher statement:
LOAD CSV FROM 'file:///artist.csv' AS line
CREATE (:Artist {name: line[1], year: toInt(line[2])})
In short, once you put the CSV file in the right path, the problem is solved.
Related explanation in the LOAD CSV developer manual (the URLs referred to are file:///myfile.csv and file:///myproject/myfile.csv):
If dbms.directories.import is set to the default value import, using the above URLs in LOAD CSV would read from <NEO4J_HOME>/import/myfile.csv and <NEO4J_HOME>/import/myproject/myfile.csv respectively.
If it is set to /data/csv, using the above URLs in LOAD CSV would read from /data/csv/myfile.csv and /data/csv/myproject/myfile.csv respectively.
Set the property dbms.directories.import=import.
Create the folder 'import' explicitly at "/Users/User/Documents/Neo4j/default.graphdb/", because the predefined directory does not exist entirely.
Place the csv data set in that import folder.
Then run the code, e.g.: LOAD CSV FROM "file:///C:/customers.csv" AS row
In addition, after you run the line you can analyze what is going wrong in the code section, to get a better understanding.
Put your dataset into the import directory in the neo4j-community path.
Then re-run your command.
Add your csv file to the import folder of the Neo4j installation. To do this:
Open Neo4j and start the graph of your project.
In the open-folders tab, open the import folder.
Copy your csv file into this folder.
Use that path in your load syntax, e.g. file:///C:/neo4j_module_datasets/test.csv, since your Neo4j is running in the C drive.
Use the following syntax:
LOAD CSV WITH HEADERS FROM "file:///my_collection.csv" AS row CREATE (n:myCollection) SET n = row
If you are running Neo4j in Docker, run the following commands before running the above query:
docker run \
-p=7474:7474 \
-p=7687:7687 \
-v=$HOME/neo4j/data:/data \
-v=$HOME/neo4j/logs:/logs \
-v=$HOME/local_import_dir:/var/lib/neo4j/import \
neo4j:3.0
Then,
sudo cp my_collection.csv /home/bajju/local_import_dir/
One of the following should solve the LOAD CSV errors (assuming you have dbms.security.allow_csv_import_from_file_urls=true):
If using Linux, check the permissions on the file; change them using chmod 777 file_name.csv.
Check whether the file format and the contents within the file are correct.
The easiest way (beware of security) is to serve your directory over HTTP and use the http import:
In the command line, go to the folder where the csv files are located.
Run one of the following, depending on your Python environment.
Python 2
$ python -m SimpleHTTPServer 8000
Python 3
$ python3 -m http.server 8000
Now you can load your files from localhost:
LOAD CSV FROM 'http://localhost:8000/mycsvfile.csv' AS row
return row
You can actually expose files on one host and load them where your DB is running, by exposing the folder and replacing localhost with your IP.

Hadoop ConnectException

I recently installed Hadoop on my local Ubuntu machine. I have started the data node by invoking the bin/start-all.sh script. However, when I try to run the word count program
bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /home/USER/Desktop/books /home/USER/Desktop/books-output
I always get a ConnectException. The folder 'books' is on my desktop (local filesystem). Any suggestions on how to overcome this?
I have followed every step in this tutorial. I am not sure how to get rid of that error. All help will be appreciated.
Copy your books file into HDFS,
and for the input path argument use the HDFS path of your copied books file.
For more detail, go through the link below.
http://cs.smith.edu/dftwiki/index.php/Hadoop_Tutorial_1_--_Running_WordCount#Basic_Hadoop_Admin_Commands
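A minimal sketch of those two steps, with illustrative HDFS paths:
# copy the local books folder into HDFS
bin/hadoop fs -mkdir /user/USER/books
bin/hadoop fs -put /home/USER/Desktop/books/* /user/USER/books
# run wordcount against the HDFS paths instead of the local ones
bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /user/USER/books /user/USER/books-output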
There is a bit of confusion here: when you run the hadoop ... command, the default filesystem it uses is the Hadoop distributed filesystem, hence the files must be located on HDFS for Hadoop to access them.
To copy files from the local filesystem to the Hadoop filesystem, you have to use the following command:
hdfs dfs -copyFromLocal /path/in/local/file/system /destination/on/hdfs
One more thing: if you want to run the program directly from your IDE, you can sometimes get this issue, which can be solved by adding the core-site.xml and hdfs-site.xml files to the conf variable, something like:
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));
Change the paths above so that core-site.xml and hdfs-site.xml point to your local files.
The above configuration can also be provided from the command line by adding it to the classpath with the -cp flag.
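As a hedged illustration of that last point (the driver class name MyWordCount is hypothetical):
# put the Hadoop conf directory on the classpath so core-site.xml and
# hdfs-site.xml are picked up
java -cp wordcount.jar:/usr/local/hadoop/etc/hadoop:$(hadoop classpath) MyWordCount /user/USER/books /user/USER/books-output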