How to read a csv file using pyarrow in python - pyarrow

I have made a connection to my HDFS using the following command
import pyarrow as pa
import pyarrow.parquet as pq
fs = pa.hdfs.connect(self.namenode, self.port, user=self.username, kerb_ticket = self.cert)
I'm using the following command to read a parquet file
fs.read_parquet()
but there is no read method for regular text files (e.g. a CSV file). How can I read a CSV file using pyarrow?

You need to create a file-like object and use pyarrow's CSV reader on it directly; see pyarrow.csv.read_csv.
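A minimal sketch of that approach, assuming the fs handle from the question is already connected (the HDFS path below is a placeholder):
from pyarrow import csv

# assumes `fs` is the pa.hdfs.connect(...) handle created above
path = '/user/file.csv'      # placeholder path
with fs.open(path, 'rb') as f:
    table = csv.read_csv(f)  # returns a pyarrow.Table
df = table.to_pandas()       # optional: convert to a pandas DataFrame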

You can set up a Spark session to connect to HDFS, then read it from there.
from pyspark.sql import SparkSession
ss = SparkSession.builder.appName(...).getOrCreate()
csv_file = ss.read.csv('/user/file.csv')
Another way is to open the file first, then read it using pyarrow.csv.read_csv.
Here is what I used at the end.
from pyarrow import csv
file = 'hdfs://user/file.csv'
with fs.open(file, 'rb') as f:
    csv_file = csv.read_csv(f)

Related

Can you import a file by using a variable

I have a Python script that uses JSON to store data. In the data, there are also file names, so I was wondering if I could import a file using a variable. Example:
file = "apps/messanger"
import file as msg
If this isn't possible, that would confirm my hypothesis and I'll just import all of my files separately. But if it is possible, I would like to know how, just because it would make my life easier.
Thanks for any help!
-Jester
I'm not too good with Python, but when you handle files you normally use
file = open("path to file", 'r or w') # r for read, w for write
file.close() # when you are done with the file you must close it
If you are going to name it msg, then change the variable from file to msg, like
msg = open("apps/messenger", 'r')
msg.close() # when finished with the file
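A minimal sketch of the same pattern using a with statement, which closes the file automatically (the path is the hypothetical one from the question):
# the file is closed automatically when the block ends
with open("apps/messenger", "r") as msg:
    contents = msg.read()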

How to convert multiple nested JSON files into single CSV file using python?

I have about 200 nested JSON files with nesting levels varying from one to three. Each JSON file consists of more than a thousand data points. The keys are the same in all the files. My objective is to combine the data from all the files in tabular format in a single CSV file so that I can read all the data and analyze it. I am looking for simple Python code with a brief explanation of each step to help me understand the whole sequence.
You can use this code snippet.
First of all install pandas using
pip install pandas
After that, you can use this code to convert JSON files to CSV.
# code to save all data to a single file
import pandas as pd
import glob
path = './path to directory/*.json'
files = glob.glob(path)
data_frames = []
for file in files:
    f = open(file, 'r')
    data_frames.append(pd.read_json(f))
    f.close()
pd.concat(data_frames).to_csv("data.csv")
# code to save each JSON file as an individual CSV file
import pandas as pd
import glob
path = './path to directory/*.json'
files = glob.glob(path)
for file in files:
    f = open(file, 'r')
    jsonData = pd.read_json(f.read())
    jsonData.to_csv(f.name + ".csv")
    f.close()
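If the nested levels should be flattened into separate columns rather than left as nested objects, pandas.json_normalize can help. A minimal sketch under that assumption (the path pattern is a placeholder, as above):
import glob
import json
import pandas as pd

frames = []
for file in glob.glob('./path to directory/*.json'):
    with open(file, 'r') as f:
        raw = json.load(f)
    # json_normalize flattens nested keys into dotted column names
    frames.append(pd.json_normalize(raw))
pd.concat(frames, ignore_index=True).to_csv("data_flat.csv", index=False)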

How to use gsutil compose in GoogleShell and skip first rows?

I am trying to use the "compose" command in the shell to merge the files I get in my GCP bucket. The problem is that this command merges the CSV files but does not skip their headers.
What I end up with is a merge of 24 CSV files, but also 24 header rows.
I have tried doing this in Python as well, but found no solution.
Any help?
There is no gsutil flag to skip CSV headers, but here is a Python script that works around it.
The script downloads the CSV files from the bucket, appends them while skipping the headers, and then uploads the combined file back to the bucket.
import csv
from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('YOUR.BUCKET.NAME')
blob = bucket.get_blob('FILE1.NAME')
blob.download_to_filename('FILE1.NAME')
blob2 = bucket.get_blob('FILE2.NAME')
blob2.download_to_filename('FILE2.NAME')
csvs = ["FILE1.NAME", "FILE2.NAME"]
writer = csv.writer(open('appended_output.csv', 'wt'))
for x in csvs:
    with open(x, "rt") as files:
        reader = csv.reader(files)
        next(reader, None)  # skip the header row
        for data in reader:
            writer.writerow(data)
blob = bucket.blob("appended_output.csv")
blob.upload_from_filename("appended_output.csv")
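Since the question mentions 24 files, here is a minimal sketch that lists every CSV under a prefix instead of naming each blob by hand; the bucket name and prefix are placeholders, and it assumes a recent google-cloud-storage client (for download_as_text):
import csv
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('YOUR.BUCKET.NAME')  # placeholder bucket

with open('appended_output.csv', 'wt', newline='') as out:
    writer = csv.writer(out)
    header_written = False
    for blob in bucket.list_blobs(prefix='YOUR/PREFIX/'):  # placeholder prefix
        if not blob.name.endswith('.csv'):
            continue
        reader = csv.reader(blob.download_as_text().splitlines())
        header = next(reader, None)
        if header and not header_written:
            writer.writerow(header)  # keep a single header row
            header_written = True
        writer.writerows(reader)     # append only the data rows

bucket.blob('appended_output.csv').upload_from_filename('appended_output.csv')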

Loading a CSV file from Blob Storage Container using PySpark

I am unable to load a CSV file directly from Azure Blob Storage into an RDD using PySpark in a Jupyter Notebook.
I have read through just about all of the other answers to similar problems, but I haven't found specific instructions for what I am trying to do. I know I could also load the data into the Notebook using Pandas, but then I would need to convert the Pandas DF into an RDD afterwards.
My ideal solution would look something like this, but this specific code gives me the error that it can't infer a schema for CSV.
#Load Data
source = <Blob SAS URL>
elog = spark.read.format("csv").option("inferSchema", "true").option("url",source).load()
I have also taken a look at this answer: reading a csv file from azure blob storage with PySpark
but I am having trouble defining the correct path.
Thank you very much for your help!
Here is my sample code that uses Pandas to read a blob URL with a SAS token and then converts the Pandas dataframe to a PySpark one.
First, get a Pandas dataframe object by reading the blob URL.
import pandas as pd
source = '<a csv blob url with SAS token>'
df = pd.read_csv(source)
print(df)
Then, you can convert it to a PySpark one.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("testDataFrame").getOrCreate()
spark_df = spark.createDataFrame(df)
spark_df.show()
Or, the same result with the code below.
from pyspark.sql import SQLContext
from pyspark import SparkContext
sc = SparkContext()
sqlContext = SQLContext(sc)
spark_df = sqlContext.createDataFrame(df)
spark_df.show()
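Since the question asks for an RDD, note that once you have the PySpark dataframe you can get at its underlying RDD directly; a minimal sketch:
rdd = spark_df.rdd  # RDD of Row objects
print(rdd.take(5))  # e.g. inspect the first few rows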
Hope it helps.

Spark - How to write a single csv file WITHOUT folder?

Suppose that df is a dataframe in Spark. The way to write df into a single CSV file is
df.coalesce(1).write.option("header", "true").csv("name.csv")
This will write the dataframe into a CSV file contained in a folder called name.csv but the actual CSV file will be called something like part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv.
I would like to know if it is possible to avoid the folder name.csv and to have the actual CSV file called name.csv and not part-00000-af091215-57c0-45c4-a521-cd7d9afb5e54.csv. The reason is that I need to write several CSV files which later on I will read together in Python, but my Python code makes use of the actual CSV names and also needs to have all the single CSV files in a folder (and not a folder of folders).
Any help is appreciated.
A possible solution could be to convert the Spark dataframe to a pandas dataframe and save it as CSV:
df.toPandas().to_csv("<path>/<filename>")
EDIT: As caujka or snark suggest, this works for small dataframes that fit into the driver. It works for real cases where you want to save aggregated data or a sample of the dataframe. Don't use this method for big datasets.
If you want to use only the Python standard library, here is a simple function that will write to a single file. You don't have to mess with temp files or going through another directory.
import csv
def spark_to_csv(df, file_path):
    """Converts a Spark dataframe to a CSV file via the driver."""
    with open(file_path, "w") as f:
        writer = csv.DictWriter(f, fieldnames=df.columns)
        writer.writeheader()  # write the header row once
        for row in df.toLocalIterator():
            writer.writerow(row.asDict())
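A usage sketch, assuming df is a Spark dataframe and the output path (a hypothetical name) is writable on the driver:
spark_to_csv(df, "output.csv")  # streams all rows through the driver into one local file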
If the result size is comparable to the Spark driver node's free memory, you may have problems converting the dataframe to pandas.
I would tell Spark to save to some temporary location, and then copy the individual CSV file into the desired folder. Something like this:
import os
import shutil
TEMPORARY_TARGET="big/storage/name"
DESIRED_TARGET="/export/report.csv"
df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)
part_filename = next(entry for entry in os.listdir(TEMPORARY_TARGET) if entry.startswith('part-'))
temporary_csv = os.path.join(TEMPORARY_TARGET, part_filename)
shutil.copyfile(temporary_csv, DESIRED_TARGET)
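If the temporary Spark output directory is not needed afterwards, it can be removed as well; a one-line follow-up to the snippet above:
shutil.rmtree(TEMPORARY_TARGET)  # clean up the temporary directory and its part files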
If you work with Databricks, Spark operates with paths like dbfs:/mnt/..., and to use Python's file operations on them you need to change the path to /dbfs/mnt/... or (more native to Databricks) replace shutil.copyfile with dbutils.fs.cp.
A more Databricks-style solution is here:
TEMPORARY_TARGET="dbfs:/my_folder/filename"
DESIRED_TARGET="dbfs:/my_folder/filename.csv"
spark_df.coalesce(1).write.option("header", "true").csv(TEMPORARY_TARGET)
temporary_csv = os.path.join(TEMPORARY_TARGET, dbutils.fs.ls(TEMPORARY_TARGET)[3][1])
dbutils.fs.cp(temporary_csv, DESIRED_TARGET)
Note: if you are working with a Koalas dataframe, you can replace spark_df with koalas_df.to_spark().
For PySpark, you can convert to a pandas dataframe and then save it:
df.toPandas().to_csv("<path>/<filename.csv>", header=True, index=False)
There is no Spark dataframe API that writes/creates a single file instead of a directory as the result of a write operation.
Both options below will create one single data file inside the directory, along with the standard marker files (_SUCCESS, _committed, _started).
1. df.coalesce(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
2. df.repartition(1).write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
If you don't use coalesce(1) or repartition(1) and instead take advantage of Spark's parallelism for writing files, it will create multiple data files inside the directory.
You then need to write a function on the driver that combines all the data file parts into a single file (cat part-00000* > singlefilename) once the write operation is done, as in the sketch below.
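A minimal sketch of such a driver-side merge, assuming the output directory is on a filesystem the driver can read directly (the paths are placeholders); if each part file carries its own header row you would also need to skip the duplicates:
import glob
import shutil

OUTPUT_DIR = "PATH/FOLDER_NAME/x.csv"        # directory Spark wrote into
MERGED_FILE = "PATH/FOLDER_NAME/merged.csv"  # single file to produce

with open(MERGED_FILE, "wb") as out:
    for part in sorted(glob.glob(OUTPUT_DIR + "/part-*")):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)     # concatenate each part file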
I had the same problem and used Python's tempfile.NamedTemporaryFile to solve it.
from tempfile import NamedTemporaryFile
import boto3

s3 = boto3.resource('s3')
with NamedTemporaryFile() as tmp:
    df.coalesce(1).write.format('csv').options(header=True).save(tmp.name)
    s3.meta.client.upload_file(tmp.name, S3_BUCKET, S3_FOLDER + 'name.csv')
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html for more info on upload_file()
Create a temp folder inside the output folder, copy the part-00000* file to the output folder under the desired file name, then delete the temp folder. Here is a Python snippet to do this in Databricks.
fpath = output + '/' + 'temp'

def file_exists(path):
    try:
        dbutils.fs.ls(path)
        return True
    except Exception as e:
        if 'java.io.FileNotFoundException' in str(e):
            return False
        else:
            raise

if file_exists(fpath):
    dbutils.fs.rm(fpath, True)  # remove the old temp folder recursively
    df.coalesce(1).write.option("header", "true").csv(fpath)
else:
    df.coalesce(1).write.option("header", "true").csv(fpath)

fname = [x.name for x in dbutils.fs.ls(fpath) if x.name.startswith('part-00000')]
dbutils.fs.cp(fpath + "/" + fname[0], output + "/" + "name.csv")
dbutils.fs.rm(fpath, True)
You can go with pyarrow, as it provides a file handle for the HDFS file system. You can write your content to the file handle as with usual file writing. Code example:
import pyarrow.fs as fs
HDFS_HOST: str = 'hdfs://<your_hdfs_name_service>'
FILENAME_PATH: str = '/user/your/hdfs/file/path/<file_name>'
hadoop_file_system = fs.HadoopFileSystem(host=HDFS_HOST)
with hadoop_file_system.open_output_stream(path=FILENAME_PATH) as f:
    f.write("Hello from pyarrow!".encode())
This will create a single file with the specified name.
To initialize pyarrow's HDFS support you must define the CLASSPATH environment variable properly: set it to the output of hadoop classpath --glob, as in the sketch below.
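A minimal sketch of that CLASSPATH setup from Python, assuming the hadoop CLI is available on the PATH; it must run before the HadoopFileSystem is created:
import os
import subprocess

# set CLASSPATH to the output of `hadoop classpath --glob`
os.environ['CLASSPATH'] = subprocess.check_output(
    ['hadoop', 'classpath', '--glob']).decode().strip()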
df.write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv("PATH/FOLDER_NAME/x.csv")
You can use this, and if you don't want to give the CSV name every time, you can write a UDF or create an array of CSV file names and pass it to this; it will work.