Python: how to read json file and export to MySQL database - mysql

My backend code, written in Python 2.7, can convert a DataFrame to JSON using df.to_json(), but I need to export this JSON into a MySQL database, since the frontend code, written in Angular 2, is JavaScript.
import pandas as pd
from sqlalchemy import create_engine
# df is an existing DataFrame
df.to_csv("abc.csv")
df.to_json("abc_json.json")
# note the '@' separating the credentials from the host in the connection URL
engine = create_engine('mysql+mysqldb://user:pw@sbc.mysql.pythonanywhere-services.com/abc$default')
df.to_sql(name='KLSE', con=engine, if_exists='replace')
The code above runs without a problem, but I want the data in the MySQL database to be in JSON format so that the frontend code can query it.
I could not find a related link on Google or Stack Overflow covering a similar issue. Thanks for the help.
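A minimal sketch of one way to do this: serialize each DataFrame row to a JSON document and store it in a MySQL table with a JSON column, which the backend can then serve to the Angular frontend. The table name KLSE_json is hypothetical, df is assumed to be an existing DataFrame as in the code above, the connection URL is the same placeholder, and MySQL 5.7+ is assumed for the native JSON column type.
import json
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.dialects.mysql import JSON
engine = create_engine('mysql+mysqldb://user:pw@sbc.mysql.pythonanywhere-services.com/abc$default')
# one JSON document per DataFrame row
records = [json.dumps(r) for r in df.to_dict(orient='records')]
payload = pd.DataFrame({'doc': records})
# declare the column as a native MySQL JSON type so the query layer can use
# MySQL's JSON functions on it
payload.to_sql(name='KLSE_json', con=engine, if_exists='replace',
               index=False, dtype={'doc': JSON})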

Related

Why is pyspark unable to read this csv file?

I was unable to find this problem in the numerous similar-sounding Stack Overflow questions along the lines of "how to read csv into a pyspark dataframe?" (see the list of similar but different questions at the end).
The CSV file in question resides in the tmp directory of the cluster's driver; note that this CSV file is intentionally NOT in Databricks DBFS cloud storage. Using DBFS will not work for the use case that led to this question.
Note I am trying to get this working on Databricks runtime 10.3 with Spark 3.2.1 and Scala 2.12.
y_header = ['fruit', 'color', 'size', 'note']
y = [('apple', 'red', 'medium', 'juicy')]
y.append(('grape', 'purple', 'small', 'fresh'))
import csv
with open('/tmp/test.csv', 'w') as f:
    w = csv.writer(f)
    w.writerow(y_header)
    w.writerows(y)
Then use the Python os module to verify the file was created:
import os
list(filter(lambda f: f == 'test.csv',os.listdir('/tmp/')))
Now verify that the Databricks Spark API can see the file; note that the file:/// prefix has to be used:
dbutils.fs.ls('file:///tmp/test.csv')
Now, as an optional step, specify a dataframe schema for Spark to apply to the CSV file:
from pyspark.sql.types import *
csv_schema = StructType([
    StructField('fruit', StringType()),
    StructField('color', StringType()),
    StructField('size', StringType()),
    StructField('note', StringType()),
])
Now define the PySpark dataframe:
x = spark.read.csv('file:///tmp/test.csv', header=True, schema=csv_schema)
The above line runs without errors, but remember that, due to lazy execution, the Spark engine still has not read the file. So next we give Spark a command that forces it to actually evaluate the dataframe:
display(x)
And the error is:
FileReadException: Error while reading file file:/tmp/test.csv. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If Delta cache is stale or the underlying files have been removed, you can invalidate Delta cache manually by restarting the cluster.
Caused by: FileNotFoundException: File file:/tmp/test.csv does not exist. . .
Digging into the error, I found this: java.io.FileNotFoundException: File file:/tmp/test.csv does not exist. I already tried restarting the cluster; the restart did not clear the error.
But I can prove the file does exist; for some reason Spark and Java are simply unable to access it, because I can read the same file with pandas with no problem:
import pandas as p
p.read_csv('/tmp/test.csv')
So how do I get Spark to read this CSV file?
appendix - list of similar spark read csv questions I searched through that did not answer my question: 1 2 3 4 5 6 7 8
I guess the Databricks file loader does not recognize the absolute path /tmp/.
You can try the following workaround:
Read the file into a pandas DataFrame using its local path.
Pass the pandas DataFrame to Spark using the createDataFrame function.
Code:
import pandas as pd
# read from the local path on the driver; pandas does not need the file:/// prefix
df_pd = pd.read_csv('/tmp/test.csv')
sparkDF = spark.createDataFrame(df_pd)
sparkDF.display()
I made email contact with a Databricks architect, who confirmed that Databricks can only read locally (from the cluster) in a single-node setup.
So DBFS is the only option for random writing/reading of text data files in a typical cluster that contains more than one node.
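A minimal sketch of that DBFS route (the path is hypothetical, it reuses the csv_schema defined earlier, and it assumes the /dbfs FUSE mount is available on the cluster): write the CSV under /dbfs so every node can reach it, then read it back through dbfs:/.
import csv
y_header = ['fruit', 'color', 'size', 'note']
y = [('apple', 'red', 'medium', 'juicy'), ('grape', 'purple', 'small', 'fresh')]
# /dbfs/... is the FUSE mount of DBFS on Databricks clusters, so this write
# lands in cloud-backed storage rather than on the driver's local disk
with open('/dbfs/tmp/test.csv', 'w') as f:
    w = csv.writer(f)
    w.writerow(y_header)
    w.writerows(y)
# every executor can resolve the dbfs:/ path, so the lazy read no longer fails
x = spark.read.csv('dbfs:/tmp/test.csv', header=True, schema=csv_schema)
display(x)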

How to import data from json file to mongodb atlas collection

I wanted to import data into my collection in MongoDB Atlas, and I was following the documentation: https://docs.mongodb.com/compass/beta/import-export/ but there is no "ADD DATA" option, and I don't know whether I'm using some other version or doing something else wrong.
I need to import a whole file which is a JSON array.
The docs you referenced are for a future version of Compass. If you want to import from EJSON at the command line you can use mongoimport.
Here's the simplest syntax, but there are many variations possible.
mongoimport --db=users --collection=contacts --file=contacts.json
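Since the file is a JSON array, the mongoimport command above will also need the --jsonArray flag. As an alternative, here is a minimal pymongo sketch (the connection string, database, and collection names are placeholders):
import json
from pymongo import MongoClient
# placeholder Atlas connection string
client = MongoClient('mongodb+srv://<user>:<password>@<cluster>.mongodb.net/')
collection = client['users']['contacts']
with open('contacts.json') as f:
    docs = json.load(f)  # the file is a JSON array, so this yields a list of documents
collection.insert_many(docs)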

DHIS2 Import Data

I'm using DHIS2 Live with the embedded database.
I have an Excel file with some data; I transformed it to CSV and tried to import it using the import tools.
(screenshots: Import Tools, Import Example)
It doesn't print any error; it just stops and does not insert anything.
I tried to import some data in JSON too, but I don't know whether I should import it as data or as metadata.
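For reference, data values go in through the data import side rather than metadata. A minimal sketch of posting a data value set through the DHIS2 Web API is below; the base URL, credentials, and UIDs are placeholders, so check the payload shape against your server's documentation.
import requests
# placeholder UIDs and period; these must match data elements and org units
# that already exist as metadata on the DHIS2 server
payload = {
    "dataSet": "<dataSetUid>",
    "period": "202201",
    "orgUnit": "<orgUnitUid>",
    "dataValues": [
        {"dataElement": "<dataElementUid>", "value": "10"},
    ],
}
# placeholder base URL and credentials
r = requests.post("https://dhis2.example.org/api/dataValueSets",
                  json=payload, auth=("admin", "district"))
print(r.status_code, r.text)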

how to import json data in neo4j

I have JSON data and I want to import it into Neo4j.
There is an option to export data from Neo4j, but how do I import JSON data into Neo4j?
This is the link to the jsfiddle: http://jsfiddle.net/harmeetsingh090/mkdm4t44/
Please help if you know how.
You can use jq to manipulate your data into CSV format and then use the LOAD CSV command.
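If you take that route but prefer Python over jq, a sketch of the JSON-to-CSV step might look like the following (the id and name fields are hypothetical; substitute the keys your JSON actually uses):
import csv
import json
with open('example.json') as src, open('example.csv', 'w', newline='') as dst:
    rows = json.load(src)  # assumes the file is a JSON array of objects
    writer = csv.DictWriter(dst, fieldnames=['id', 'name'])
    writer.writeheader()
    writer.writerows({k: r.get(k) for k in ('id', 'name')} for r in rows)
# example.csv can then be loaded with Cypher, e.g.:
# LOAD CSV WITH HEADERS FROM 'file:///example.csv' AS row
# MERGE (e:ExampleNode {id: row.id}) SET e.name = row.name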
Neo4j doesn't have a native way of doing this, but the APOC plugin for Neo4j provides apoc.load.json. You can load data by doing the following:
CALL apoc.load.json("file:///<path_to_file>/example.json") YIELD value as document
UNWIND document.root AS root
MERGE (e:ExampleNode {id: root.id})
...
You can find more information on the plugin here: https://neo4j-contrib.github.io/neo4j-apoc-procedures/. I've recently used this and found it to be quite intuitive.

Does MongoDB have a mechanism like MySQL's, which simply imports a .sql file into the database?

As the title says, I wonder whether MongoDB has a data file format that can be imported directly. I know that MySQL has the "sql" file format that it can import directly. I am now on a project with the same requirement. Can anyone tell me?
MongoDB can import data in JSON, CSV, and TSV formats using the mongoimport tool, as you can see here.
MongoDB internally represents data as binary-encoded JSON (BSON), so importing and exporting in JSON format is really fast and intuitive.
Of course. MongoDB uses mongodump/mongoexport to export data to external files and mongorestore/mongoimport to import data into its databases. Note that mongodump and mongoexport, and likewise mongorestore and mongoimport, do have some differences; for more details, please refer to the MongoDB docs.