I have a notebook where I used Python and I want to export the data from there into a Json file. Do you know how this would be possible?
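For what it's worth, a minimal sketch of the usual approach, assuming the data is sitting in a pandas DataFrame inside the notebook; the variable and file names here are placeholders:

import json
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "score": [1, 2]})  # stand-in for the notebook's data

# Option 1: let pandas serialize the DataFrame straight to a file.
df.to_json("export.json", orient="records")

# Option 2: build a plain Python structure and dump it with the json module.
with open("export.json", "w") as f:
    json.dump(df.to_dict(orient="records"), f, indent=2)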
Related
I want to upload records of university students from a CSV file. I have uploaded the CSV file using react-native-document-picker. The problem is that I am unable to read the CSV data. My main goal is to upload the CSV data to Firebase. How can I read CSV data in React Native, or convert CSV to JSON?
You need to convert the CSV to JSON before pushing the data to Firebase. There are numerous utility libraries for that; you can try https://www.npmjs.com/package/csvtojson
I have created a mount in databricks which connects to my blob storage and I am able to read files from blob to databricks using a notebook.
I then converted a .txt file to JSON format using PySpark, and now I would like to load it back to the blob storage. Does anyone know how I would do that?
Here are a few things I have tried:
my_json.write.option("header", "true").json("mnt/my_mount/file_name.json")
write.json(my_json, mnt/my_mount)
Neither works. I can load a CSV file from Databricks to blob using:
my_data_frame.write.option("header", "true").csv("mnt/my_mount_name/file name.csv")
This works fine, but I can't find a solution for writing out a JSON.
Any ideas?
Disclaimer: I am new to PySpark, but this is what I have done after referencing the docs for pyspark.sql.DataFrameWriter.json:
# Write the DataFrame as JSON (this produces a directory of part files)
my_dataframe.write.json("/mnt/my_mount/my_json_file_name.json")
# For a single JSON part file, repartition to 1 first
my_dataframe.repartition(1).write.json("/mnt/my_mount/my_json_file_name.json")
# Parquet, overwriting existing output and partitioning by a column
my_dataframe.write.mode("overwrite").partitionBy("myCol").parquet("/mnt/my_mount/my_parquet_file_name.parquet")
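One caveat worth adding: write.json() produces a directory of part files rather than a single flat file, even with repartition(1). If you need one plain .json object in the mount, here is a rough sketch using Databricks dbutils; the mount and file names are placeholders:

output_dir = "/mnt/my_mount/my_json_file_name.json"  # Spark treats this path as a directory
my_dataframe.repartition(1).write.mode("overwrite").json(output_dir)

# Locate the single part file Spark wrote and copy it to a flat path.
part_file = [f.path for f in dbutils.fs.ls(output_dir) if f.name.startswith("part-")][0]
dbutils.fs.cp(part_file, "/mnt/my_mount/my_json_file_name_flat.json")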
I know that to read in the CSV I can use:
pd.read_csv("s3://data-science/misc/survey.csv")
But I am trying to export the results there using:
filex.to_csv("s3://data-science/misc/filex.csv")
and this does not work - how can this be done?
If you just want the CSV saved next to where you run the script, try using only the name of the new file; it will be saved in the current working directory (the one from which you execute the script):
df1.to_csv('df1.csv', sep=',', encoding='utf-8')
and I recommend paying attention to the arguments of to_csv.
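If the goal really is to write straight to S3, here is a rough sketch of two common approaches, assuming your AWS credentials are already configured; the bucket and key come from the question, everything else is an assumption:

import pandas as pd
import boto3

# Option 1: with the s3fs package installed, pandas can write to an s3:// URL directly.
filex.to_csv("s3://data-science/misc/filex.csv", index=False)

# Option 2: write locally first, then upload with boto3.
filex.to_csv("filex.csv", index=False)
boto3.client("s3").upload_file("filex.csv", "data-science", "misc/filex.csv")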
I have a CSV file in the media folder inside my Django project. I want to read the data from the CSV file and store it in JSON format, either directly in the database or by converting the CSV to JSON first and then storing it, so that I can view it on an HTML page in my Django web application.
As I don't know the data format in your CSV file, I assumed it has two columns. The code below reads the CSV into a list of dictionaries, so you can do whatever you want with it afterwards:
import pandas as pd

# Read the CSV (two columns assumed)
csv = pd.read_csv('./your_file.csv')
dataset = []
for i, (item1, item2) in csv.iterrows():
    dataset.append({"item1": item1, "item2": item2})
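To get from that list of dictionaries to JSON inside Django, a minimal sketch of a view; the view name and CSV path are assumptions, not from the question:

import pandas as pd
from django.http import JsonResponse

def survey_data(request):
    df = pd.read_csv("media/your_file.csv")   # CSV sitting in the media folder
    records = df.to_dict(orient="records")    # list of dicts, one per CSV row
    # Return it as JSON; a template or JavaScript on the HTML page can consume this.
    return JsonResponse(records, safe=False)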
I imported a JSON file into Firebase Realtime db, but cannot see the db schema change as per my JSON. Where will my JSON file data be seen in the Firebase db?
I think you already know how to import or export the database JSON. If your JSON file is valid, Firebase will import it into the database and automatically create the database structure according to the JSON file. Otherwise it will show you an error; check your JSON file and try again.
Here is the command I'm using to import JSON files into a database:
firebase database:set /location/ file.json -P project-id
You can replace set with push or update, depending on what you are trying to achieve (see the Firebase CLI reference for more options).
Your data will then be uploaded to the "location" you mentioned in the above command.
Note: pay attention to your JSON file, as it can contain the location too, which can result in your data being uploaded to the wrong location.
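If you would rather do the import from Python instead of the CLI, a hedged sketch using the firebase-admin SDK; the service-account file, database URL, and target location are placeholders:

import json
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-project-id.firebaseio.com"})

with open("file.json") as f:
    data = json.load(f)

# Equivalent of database:set at /location/ -- this overwrites whatever is there.
db.reference("/location").set(data)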