How to merge two JSON files into one in Python 3.6

I tried data1.update(data2), but it didn't work:
import json

with open("test.json") as fin1:
    data1 = json.load(fin1)
with open("test_userz.json") as fin2:
    data2 = json.load(fin2)

data1.update(data2)

with open("merged.json", "w") as fout:
    json.dump(data1, fout)

You could merge them like so:
>>> data1=json.loads('{"test1":"one"}')
>>> data2=json.loads('{"test2":"two"}')
>>> data3=[]
>>> data3.append(data1)
>>> data3.append(data2)
>>> json.dumps(data3)
'[{"test1": "one"}, {"test2": "two"}]'

Related

How to convert a PyArrow table to an in-memory CSV

I'm searching for a way to convert a PyArrow table to a CSV in memory so that I can dump the CSV object directly into a database. With pyarrow.csv.write_csv() it is possible to create a CSV file on disk, but is it somehow possible to create a CSV object in memory? I have difficulty understanding the documentation. Thanks a lot in advance for the help!
Yes, it is possible. You can use Python's io module to write to memory:
>>> import pyarrow as pa
>>> from pyarrow import csv
>>> import io
# Create a Table
>>> t = pa.Table.from_arrays([[1, 2, 3], ["a", "b", "c"]], ["c1", "c2"])
# Write to memory
>>> buf = io.BytesIO()
>>> csv.write_csv(t, buf, csv.WriteOptions(include_header=True))
>>> buf.seek(0)
0
# Read from memory for demo purposes
>>> csv.read_csv(buf)
pyarrow.Table
c1: int64
c2: string
----
c1: [[1,2,3]]
c2: [["a","b","c"]]

Code Workbooks - File not found using hadoop_path

I have a Python transform in Code Workbooks that is running this code:
import pandas as pd

def contents(dataset_with_files):
    fs = dataset_with_files.filesystem()
    filenames = [f.path for f in fs.ls()]
    fp = fs.hadoop_path + "/" + filenames[0]
    with open(fp, 'r') as f:
        t = f.read()
    rows = {"text": [t]}
    return pd.DataFrame(rows)
But I am getting the error FileNotFoundError: [Errno 2] No such file or directory:
My understanding is that this is the correct way to access a file in HDFS. Is this a repository-versus-Code-Workbooks limitation?
This documentation helped me figure it out:
https://www.palantir.com/docs/foundry/code-workbook/transforms-unstructured/
It was actually a pretty small change. If you are using filesystem(), you only need the relative path:
import pandas as pd

def contents_old(pycel_test):
    fs = pycel_test.filesystem()
    filenames = [f.path for f in fs.ls()]
    with fs.open(filenames[0], 'r') as f:
        value = ...
    rows = {"values": [value]}
    return pd.DataFrame(rows)
There is also this option, but I found it 10x slower:
from pyspark.sql import Row

def contents(dataset_with_files):
    fs = dataset_with_files.filesystem()  # This is the FileSystem object.
    MyRow = Row("column")

    def process_file(file_status):
        with fs.open(file_status.path, 'r') as f:
            ...

    rdd = fs.files().rdd
    rdd = rdd.flatMap(process_file)
    df = rdd.toDF()
    return df
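For completeness, a hypothetical body for process_file (the original elides it; this assumes you simply want one row per file's full contents, mirroring the first snippet):

from pyspark.sql import Row

def contents(dataset_with_files):
    fs = dataset_with_files.filesystem()
    MyRow = Row("column")

    def process_file(file_status):
        # Hypothetical: read the whole file and emit a single Row,
        # so flatMap yields one row per file
        with fs.open(file_status.path, 'r') as f:
            yield MyRow(f.read())

    rdd = fs.files().rdd.flatMap(process_file)
    return rdd.toDF()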

Convert a dict of dicts into a dataframe

I have a slightly complicated json that I need to convert into a dataframe. This is a standard output json from another API and hence the field names will not change.
I have the dict below, which is more complicated than anything I have worked with till now:
>>> import pandas as pd
>>> data = [{'annotation_spec': {'description': 'Story_Driven',
... 'display_name': 'Story_Driven'},
... 'segments': [{'confidence': 0.52302074,
... 'segment': {'end_time_offset': {'nanos': 973306000, 'seconds': 14},
... 'start_time_offset': {}}}]},
... {'annotation_spec': {'description': 'real', 'display_name': 'real'},
... 'segments': [{'confidence': 0.5244379,
... 'segment': {'end_time_offset': {'nanos': 973306000, 'seconds': 14},
... 'start_time_offset': {}}}]}]
I looked through all the related SO posts, and the closest I can get to a dataframe is this:
from pandas.io.json import json_normalize

pd.DataFrame.from_dict(json_normalize(data, record_path=['segments'],
                                      meta=[['annotation_spec', 'description'],
                                            ['annotation_spec', 'display_name']],
                                      errors='ignore'))
This gives me an output like this:
   confidence                                            segment annotation_spec.description annotation_spec.display_name
0    0.523021  {u'end_time_offset': {u'nanos': 973306000, u's...                Story_Driven                 Story_Driven
1    0.524438  {u'end_time_offset': {u'nanos': 973306000, u's...                        real                         real
I want to break down the "segment" column above into its components as well. How can I do that?
Basically, json_normalize takes care of nested dicts; here we have a problem because of the list in the segments key.
So if the length of that list is always 1, we can just unwrap it and then apply json_normalize:
# Function to remove the list: for each value, check if it is a list and, if so, take the first element
remove_list = lambda dct: {k: (v[0] if type(v) == list else v) for k, v in dct.items()}
data_clean = [remove_list(entry) for entry in data]

json_normalize(data_clean, sep="__")
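As a quick sanity check (a sketch using the sample data above), you can inspect the flattened result:

df = json_normalize(data_clean, sep="__")
# Expect one row per annotation, with fully flattened columns along the lines of:
#   annotation_spec__description, annotation_spec__display_name,
#   segments__confidence, segments__segment__end_time_offset__nanos, ...
print(df.columns.tolist())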

How to convert this JSON file to a pandas dataframe

The format in the file looks like this:
{ 'match' : 'a', 'score' : '2'},{......}
I've tried pd.DataFrame, and I've also tried reading it line by line, but it gives me everything in one cell.
I'm new to python
Thanks in advance
Expected result is a pandas dataframe
Try the json_normalize() function.
Example:
from pandas.io.json import json_normalize
values = [{'match': 'a', 'score': '2'}, {'match': 'b', 'score': '3'}, {'match': 'c', 'score': '4'}]
df = json_normalize(values)
print(df)
Output:
  match score
0     a     2
1     b     3
2     c     4
If one line of your file corresponds to one JSON object, you can do the following:
# import libraries for working with JSON and pandas
import json
import pandas as pd

# make an empty list
data = []

# open your file and add every row as a dict to the data list
with open("/path/to/your/file", "r") as file:
    for line in file:
        data.append(json.loads(line))

# make a pandas data frame
df = pd.DataFrame(data)
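For this line-delimited case there is also a one-call alternative built into pandas:

import pandas as pd

# lines=True makes pandas parse the file as one JSON object per line
df = pd.read_json("/path/to/your/file", lines=True)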
If there is more than one JSON object on a row of your file, then you first need to find those objects within the line; one way is to scan it with json.JSONDecoder.raw_decode. That solution would look like this:
# import all you will need
import pandas as pd
from json import JSONDecoder

# define a generator that yields every JSON object found in a string
def extract_json_objects(text, decoder=JSONDecoder()):
    pos = 0
    while True:
        match = text.find('{', pos)
        if match == -1:
            break
        try:
            result, index = decoder.raw_decode(text[match:])
            yield result
            pos = match + index
        except ValueError:
            pos = match + 1

# make an empty list
data = []

# open your file and add every JSON object as a dict to the data list
with open("/path/to/your/file", "r") as file:
    for line in file:
        for item in extract_json_objects(line):
            data.append(item)

# make a pandas data frame
df = pd.DataFrame(data)
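For example, running the extractor over a line shaped like the sample in the question (a quick sketch; note that the objects must use double quotes to be valid JSON, so the single-quoted form shown in the question would be skipped by raw_decode):

>>> line = '{"match": "a", "score": "2"},{"match": "b", "score": "3"}'
>>> list(extract_json_objects(line))
[{'match': 'a', 'score': '2'}, {'match': 'b', 'score': '3'}]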

Export JSON to CSV using Python

I wrote code to extract some information from a website. The output is in JSON, and I want to export it to CSV, so I tried to convert it to a pandas dataframe and then export it to CSV with pandas. I can print the results, but it still doesn't convert to a pandas dataframe. Do you know what the problem with my code is?
# -*- coding: utf-8 -*-
# To create http request/session
import requests
import re, urllib
import pandas as pd
from BeautifulSoup import BeautifulSoup

url = "https://www.indeed.com/jobs?q=construction%20manager&l=Houston&start=10"

# create session
s = requests.session()
html = s.get(url).text

# extract job IDs
job_ids = ','.join(re.findall(r"jobKeysWithInfo\['(.+?)'\]", html))
ajax_url = 'https://www.indeed.com/rpc/jobdescs?jks=' + urllib.quote(job_ids)

# do Ajax request and convert the response to json
ajax_content = s.get(ajax_url).json()
print(ajax_content)

# Convert to pandas dataframe
df = pd.read_json(ajax_content)

# Export to CSV
df.to_csv("c:\\users\\Name\\desktop\\newcsv.csv")
The error message is:
Traceback (most recent call last):
  File "C:\Users\Mehrdad\Desktop\Indeed 06.py", line 21, in <module>
    df = pd.read_json(ajax_content)
  File "c:\python27\lib\site-packages\pandas\io\json\json.py", line 408, in read_json
    path_or_buf, encoding=encoding, compression=compression,
  File "c:\python27\lib\site-packages\pandas\io\common.py", line 218, in get_filepath_or_buffer
    raise ValueError(msg.format(_type=type(filepath_or_buffer)))
ValueError: Invalid file path or buffer object type:
The problem was that nothing was going into the dataframe when you called read_json(): that function expects a file path, URL, or JSON string, but it was passed the nested dict that .json() returns:
import requests
import re, urllib
import pandas as pd
from pandas.io.json import json_normalize

url = "https://www.indeed.com/jobs?q=construction%20manager&l=Houston&start=10"

s = requests.session()
html = s.get(url).text

job_ids = ','.join(re.findall(r"jobKeysWithInfo\['(.+?)'\]", html))
ajax_url = 'https://www.indeed.com/rpc/jobdescs?jks=' + urllib.quote(job_ids)
ajax_content = s.get(ajax_url).json()

df = json_normalize(ajax_content).transpose()
df.to_csv('your_output_file.csv')
Note that I called json_normalize() to collapse the nested columns from the JSON. I also called transpose() so that the rows are labelled with the job IDs rather than the columns. This will give you a dataframe that looks like this:
0079ccae458b4dcf <p><b>Company Environment: </b></p><p>Planet F...
0c1ab61fe31a5c62 <p><b>Commercial Construction Project Manager<...
0feac44386ddcf99 <div><div>Trendmaker Homes is currently seekin...
...
It's not really clear what your expected output is, though ... what are you expecting the DataFrame/CSV file to look like? If you were actually looking for just a single row/Series with the job IDs as column labels, just remove the call to transpose().
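As a side note: in pandas 1.0 and later, json_normalize is available at the top level (the pandas.io.json import is deprecated), so the same answer today would end with:

import pandas as pd

df = pd.json_normalize(ajax_content).transpose()
df.to_csv('your_output_file.csv')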