How to convert multiple nested JSON files into a single CSV file using Python?

I have about 200 nested JSON files with nesting levels varying from one to three. Each JSON file consists of more than a thousand data points, and the keys are the same across all the files. My objective is to combine the data from all the files in tabular form in a single CSV file so that I can read and analyze all the data. I am looking for simple Python code with a brief explanation of each step to help me understand the whole sequence.

You can use this code snippet.
First of all, install pandas using
pip install pandas
After that, you can use this code to convert the JSON files to CSV.
# code to save all data to a single file
import glob
import pandas as pd
path = './path to directory/*.json'
files = glob.glob(path)
data_frames = []
for file in files:
    with open(file, 'r') as f:
        data_frames.append(pd.read_json(f))
pd.concat(data_frames).to_csv("data.csv", index=False)
# code to save each JSON file's data to its own CSV file
import glob
import pandas as pd
path = './path to directory/*.json'
files = glob.glob(path)
for file in files:
    with open(file, 'r') as f:
        pd.read_json(f).to_csv(file + ".csv")
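If some of the files are deeply nested, pd.read_json may fail or produce columns containing dicts. In that case you can load each file with the standard json module and flatten it with pandas.json_normalize before writing the CSV. Here is a minimal sketch; the file name and record structure are made up for illustration:

```python
import json

import pandas as pd

# Write a small nested example file (a stand-in for one of the 200 files)
records = [
    {"id": 1, "meta": {"site": "A", "sensor": {"type": "temp"}}},
    {"id": 2, "meta": {"site": "B", "sensor": {"type": "rh"}}},
]
with open("sample.json", "w") as fh:
    json.dump(records, fh)

# Load and flatten: nested keys become dotted column names
with open("sample.json") as fh:
    df = pd.json_normalize(json.load(fh))

print(list(df.columns))  # ['id', 'meta.site', 'meta.sensor.type']
df.to_csv("sample_flat.csv", index=False)
```

Applied in the loop above, each flattened dataframe would be appended to data_frames before the final pd.concat.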

Related

Splitting sentences from a .txt file to .csv using NLTK

I have a corpus of newspaper articles in a .txt file, and I'm trying to split the sentences from it to a .csv in order to annotate each sentence.
I was told to use NLTK for this purpose, and I found the following code for sentence splitting:
import nltk
from nltk.tokenize import sent_tokenize
sent_tokenize("Here is my first sentence. And that's a second one.")
However, I'm wondering:
How does one use a .txt file as input for the tokenizer (so that I don't have to copy and paste everything), and
How does one output a .csv file instead of just printing the sentences to my terminal?
Reading a .txt file & tokenizing its sentences
Assuming the .txt file is located in the same folder as your Python script, you can read a .txt file and tokenize the sentences using NLTK as shown below:
from nltk import sent_tokenize  # if this raises a LookupError, run nltk.download('punkt') first
with open("myfile.txt") as file:
    textFile = file.read()
tokenTextList = sent_tokenize(textFile)
print(tokenTextList)
# Output: ['Here is my first sentence.', "And that's a second one."]
Writing a list of sentence tokens to .csv file
There are a number of options for writing a .csv file. Pick whichever is most convenient (e.g. if you already have pandas loaded, use the pandas option).
To write a .csv file using the pandas module:
import pandas as pd
df = pd.DataFrame(tokenTextList)
df.to_csv("myCSVfile.csv", index=False, header=False)
To write a .csv file using the numpy module (note that savetxt does not quote fields, so sentences containing commas will break the CSV structure):
import numpy as np
np.savetxt("myCSVfile.csv", tokenTextList, delimiter=",", fmt="%s")
To write a .csv file using the csv module:
import csv
with open('myCSVfile.csv', 'w', newline='') as file:
    write = csv.writer(file, lineterminator='\n')
    # write.writerows([tokenTextList])  # writes all sentences on a single row
    write.writerows([[token] for token in tokenTextList])  # one sentence per row, like the pandas output

Create Multiple .json Files from an Excel file with multiple sheets using Pandas

I'm trying to convert a very large number of Excel files with multiple sheets (some of them very big as well) to .json files. So I created a list with the names of the sheets and then made a loop to create a dataframe for each sheet and write that dataframe to a .json file. My code is:
from zipfile import ZipFile
from bs4 import BeautifulSoup
import pandas as pd
file = 'filename.xlsx'
with ZipFile(file) as zipped_file:
    summary = zipped_file.open(r'xl/workbook.xml').read()
soup = BeautifulSoup(summary, "xml")
sheets = [sheet.get("name") for sheet in soup.find_all("sheet")]
for i in sheets:
    df = pd.read_excel(file, sheet_name=i, index=False, header=1)
    json_file = df.to_json("{}.json".format(i))
This code works like a charm when the sheets are not very big. When I run it on an Excel file, it creates the JSON files I want, up to the point where it hits a very big sheet with a lot of data, and then it crashes.
So my question is: is there a different, more efficient way to do this without crashing the program? When I run the df = pd.read_excel command separately for each sheet it works without a problem, but I need this to happen in a loop.
Import numpy. Declare an empty numpy array, out_array. Then, given a list of paths, paths, for each path in paths, read the file into a temporary dataframe, temp_df; get the values of the temporary dataframe via its .values attribute, store them in a temporary numpy array, temp_array, and concatenate out_array and temp_array using numpy.concatenate.
Once the loop completes, convert out_array to a dataframe, out_df, using pandas.DataFrame. Finally, set the column names of your new dataframe.
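The steps above can be sketched as follows. The file names, column count, and column names are assumptions for illustration, and small JSON files stand in for the real inputs only to keep the sketch self-contained; the actual loop would call pd.read_excel for each sheet instead:

```python
import json

import numpy as np
import pandas as pd

# Create two small example files standing in for the real inputs
for name, rows in [("part1.json", [{"a": 1, "b": 2}]),
                   ("part2.json", [{"a": 3, "b": 4}])]:
    with open(name, "w") as fh:
        json.dump(rows, fh)

paths = ["part1.json", "part2.json"]
out_array = np.empty((0, 2))            # start empty; assumes 2 columns per file
for path in paths:
    temp_df = pd.read_json(path)        # temporary dataframe for this file
    temp_array = temp_df.values         # .values is an attribute, not a method
    out_array = np.concatenate((out_array, temp_array))

out_df = pd.DataFrame(out_array, columns=["a", "b"])  # set the column names
```

Accumulating into a raw numpy array avoids building a new dataframe per file; the single pandas.DataFrame conversion happens once at the end.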

How to read a csv file using pyarrow in python

I have made a connection to my HDFS using the following command
import pyarrow as pa
import pyarrow.parquet as pq
fs = pa.hdfs.connect(self.namenode, self.port, user=self.username, kerb_ticket = self.cert)
I'm using the following command to read a parquet file
fs.read_parquet()
but there is no read method for regular text files (e.g. a CSV file). How can I read a CSV file using pyarrow?
You need to open the file as a file-like object and use pyarrow's CSV module directly. See pyarrow.csv.read_csv.
You can set up a Spark session to connect to HDFS, then read the file from there.
from pyspark.sql import SparkSession
ss = SparkSession.builder.appName(...).getOrCreate()
csv_file = ss.read.csv('/user/file.csv')
Another way is to open the file first, then read it using pyarrow's csv.read_csv.
Here is what I used in the end.
from pyarrow import csv
file = 'hdfs://user/file.csv'
with fs.open(file, 'rb') as f:
    csv_file = csv.read_csv(f)

Loading a CSV file from Blob Storage Container using PySpark

I am unable to load a CSV file directly from Azure Blob Storage into an RDD using PySpark in a Jupyter Notebook.
I have read through just about all of the other answers to similar problems, but I haven't found specific instructions for what I am trying to do. I know I could also load the data into the notebook using pandas, but then I would need to convert the pandas DataFrame into an RDD afterwards.
My ideal solution would look something like this, but this specific code gives me an error saying it can't infer a schema for CSV.
#Load Data
source = <Blob SAS URL>
elog = spark.read.format("csv").option("inferSchema", "true").option("url",source).load()
I have also taken a look at this answer: reading a csv file from azure blob storage with PySpark
but I am having trouble defining the correct path.
Thank you very much for your help!
Here is my sample code that uses Pandas to read a blob URL with a SAS token and then converts the Pandas dataframe to a PySpark one.
First, get a Pandas dataframe object by reading the blob URL.
import pandas as pd
source = '<a csv blob url with SAS token>'
df = pd.read_csv(source)
print(df)
Then, you can convert it to a PySpark one.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("testDataFrame").getOrCreate()
spark_df = spark.createDataFrame(df)
spark_df.show()
Or, you can get the same result with the code below.
from pyspark.sql import SQLContext
from pyspark import SparkContext
sc = SparkContext()
sqlContext = SQLContext(sc)
spark_df = sqlContext.createDataFrame(df)
spark_df.show()
Hope it helps.

Spark - How to write a single csv file WITHOUT folder?

Suppose that df is a dataframe in Spark. The way to write df into a single CSV file is
df.coalesce(1).write.option("header", "true").csv("name.csv")
This will write the dataframe into a CSV file contained in a folder called name.csv but the actual CSV file will be called something like part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv.
I would like to know if it is possible to avoid the folder name.csv and to have the actual CSV file called name.csv and not part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv. The reason is that I need to write several CSV files which later on I will read together in Python, but my Python code makes use of the actual CSV names and also needs to have all the single CSV files in a folder (and not a folder of folders).
Any help is appreciated.
A possible solution is to convert the Spark dataframe to a pandas dataframe and save it as CSV:
df.toPandas().to_csv("<path>/<filename>")
EDIT: As caujka and snark point out, this only works for small dataframes that fit into the driver's memory. It works for real cases where you want to save aggregated data or a sample of the dataframe. Don't use this method for big datasets.
If you want to use only the Python standard library, this is an easy function that will write to a single file. You don't have to mess with temp files or going through another directory.
import csv
def spark_to_csv(df, file_path):
    """Converts a Spark dataframe to a single CSV file."""
    with open(file_path, "w") as f:
        writer = csv.DictWriter(f, fieldnames=df.columns)
        writer.writeheader()  # write the header row from df.columns
        for row in df.toLocalIterator():
            writer.writerow(row.asDict())
If the result size is comparable to the Spark driver node's free memory, you may have problems converting the dataframe to pandas.
I would tell spark to save to some temporary location, and then copy the individual csv files into desired folder. Something like this:
import os
import shutil
TEMPORARY_TARGET="big/storage/name"
DESIRED_TARGET="/export/report.csv"
df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)
part_filename = next(entry for entry in os.listdir(TEMPORARY_TARGET) if entry.startswith('part-'))
temporary_csv = os.path.join(TEMPORARY_TARGET, part_filename)
shutil.copyfile(temporary_csv, DESIRED_TARGET)
If you work with databricks, spark operates with files like dbfs:/mnt/..., and to use python's file operations on them, you need to change the path into /dbfs/mnt/... or (more native to databricks) replace shutil.copyfile with dbutils.fs.cp.
A more Databricks-native solution is here:
TEMPORARY_TARGET="dbfs:/my_folder/filename"
DESIRED_TARGET="dbfs:/my_folder/filename.csv"
spark_df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)
temporary_csv = os.path.join(TEMPORARY_TARGET, dbutils.fs.ls(TEMPORARY_TARGET)[3][1])
dbutils.fs.cp(temporary_csv, DESIRED_TARGET)
Note if you are working from Koalas data frame you can replace spark_df with koalas_df.to_spark()
For pyspark, you can convert to pandas dataframe and then save it.
df.toPandas().to_csv("<path>/<filename.csv>", header=True, index=False)
There is no Spark dataframe API that writes/creates a single file instead of a directory as the result of a write operation.
Both options below will create one single file inside a directory along with the standard files (_SUCCESS, _committed, _started):
1. df.coalesce(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
2. df.repartition(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
If you don't use coalesce(1) or repartition(1), and instead take advantage of Spark's parallelism for writing files, it will create multiple data files inside the directory.
You need to write a function in the driver that combines all the data file parts into a single file (cat part-00000* > singlefilename) once the write operation is done.
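That driver-side merge can be sketched in plain Python. The directory layout is an assumption for illustration, as is the header handling (each part file written with header="true" repeats the header row):

```python
import glob

def merge_parts(parts_dir, out_path):
    """Concatenate all part-* files in parts_dir into a single CSV,
    keeping the header row only from the first part."""
    parts = sorted(glob.glob(parts_dir + "/part-*"))
    with open(out_path, "w") as out:
        for i, part in enumerate(parts):
            with open(part) as src:
                lines = src.readlines()
            # Skip the repeated header on every part after the first
            out.writelines(lines if i == 0 else lines[1:])
```

This runs on the driver after the Spark write completes, so it only makes sense when the merged result fits comfortably on the driver's disk.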
I had the same problem and used Python's NamedTemporaryFile class from the tempfile module to solve this.
import boto3
from tempfile import NamedTemporaryFile
s3 = boto3.resource('s3')
with NamedTemporaryFile() as tmp:
    df.coalesce(1).write.format('csv').options(header=True).save(tmp.name)
    s3.meta.client.upload_file(tmp.name, S3_BUCKET, S3_FOLDER + 'name.csv')
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html for more info on upload_file()
Create a temp folder inside the output folder, copy the file part-00000* to the output folder under the desired file name, then delete the temp folder. Python code snippet to do the same in Databricks:
fpath = output + '/' + 'temp'

def file_exists(path):
    try:
        dbutils.fs.ls(path)
        return True
    except Exception as e:
        if 'java.io.FileNotFoundException' in str(e):
            return False
        else:
            raise

if file_exists(fpath):
    dbutils.fs.rm(fpath)
df.coalesce(1).write.option("header", "true").csv(fpath)
fname = [x.name for x in dbutils.fs.ls(fpath) if x.name.startswith('part-00000')]
dbutils.fs.cp(fpath + "/" + fname[0], output + "/" + "name.csv")
dbutils.fs.rm(fpath, True)
You can go with pyarrow, as it provides a file pointer to the HDFS file system, and you can write your content through that file pointer as with a regular file. Code example:
import pyarrow.fs as fs
HDFS_HOST: str = 'hdfs://<your_hdfs_name_service>'
FILENAME_PATH: str = '/user/your/hdfs/file/path/<file_name>'
hadoop_file_system = fs.HadoopFileSystem(host=HDFS_HOST)
with hadoop_file_system.open_output_stream(path=FILENAME_PATH) as f:
    f.write("Hello from pyarrow!".encode())
This will create a single file with the specified name.
To initialize pyarrow you should define the CLASSPATH environment variable properly; set it to the output of hadoop classpath --glob.
df.write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
You can use this, and if you don't want to give the CSV a name every time, you can write a UDF or create an array of CSV file names and pass it in; it will work.