Creating a Pandas DataFrame from a CSV file with JSON in it

I have a Postgres database where two columns contain jsonb data. I used this command to get a CSV copy of the table: \copy (SELECT * FROM articles) TO articles.csv CSV DELIMITER ',' HEADER
I am using Python 3.6. When I load this CSV file into a Pandas DataFrame with read_csv, I get what appears to be a doubly encoded string for all the JSON data:
e.g. articles.iloc[0]['word_count'] gives me:
'"{\\"he\\":8,\\"is\\":8,\\"a\\":26,\\"wealthy\\":1,\\"international\\":2,\\"entrepreneur\\":1,\\"known\\":3,\\"for\\":9,\\"generous\\":1,\\"donations\\":2,\\"to\\":17,\\"his\\":6,\\"alma\\":1,\\"mater\\":1,\\"harvard\\":11,\\"now\\":2,\\"court\\":12,\\"says\\":1,\\"the\\":51,\\"university\\":3,\\"must\\":2,\\"cooperate\\":1,\\"in\\":21,\\"hunt\\":1,\\"assets\\":3,\\"federal\\":2,\\"judge\\":2,\\"boston\\":3,\\"has\\":4,\\"ruled\\":2,\\"that\\":10,\\"provide\\":1,\\"testimony\\":1,\\"and\\":11,\\"produce\\":1,\\"documents\\":3,\\"disclosing\\":1,\\"bank\\":1,\\"accounts\\":1,\\"routing\\":1,\\"numbers\\":1,\\"wire\\":1,\\"transfers\\":1,\\"other\\":2,\\"interbank\\":1,\\"messages\\":1,\\"used\\":1,\\"by\\":11,\\"an\\":6,\\"alumnus\\":1,\\"charles\\":1,\\"c\\":2,\\"spackman\\":19,\\"send\\":1,\\"money\\":2,\\"mr\\":19,\\"hong\\":5,\\"kongbased\\":1,\\"businessman\\":1,\\"leads\\":2,\\"group\\":4,\\"global\\":1,\\"investment\\":1,\\"holding\\":1,\\"company\\":10,\\"with\\":3,\\"billion\\":1,\\"under\\":1,\\"management\\":1,\\"ruling\\":4,\\"places\\":1,\\"ivy\\":1,\\"league\\":1,\\"college\\":1,\\"uncomfortable\\":1,\\"predicament\\":1,\\"of\\":19,\\"revealing\\":1,\\"confidential\\":1,\\"financial\\":1,\\"information\\":3,\\"gleaned\\":1,\\"from\\":2,\\"influential\\":1,\\"benefactor\\":1,\\"no\\":2,\\"small\\":1,\\"donor\\":1,\\"according\\":2,\\"website\\":1,\\"sponsors\\":1,\\"scholarship\\":2,\\"fund\\":1,\\"asian\\":1,\\"students\\":1,\\"at\\":2,\\"harvardasia\\":1,\\"council\\":1,\\"served\\":1,\\"as\\":2,\\"cochairman\\":1,\\"reunion\\":1,\\"gifts\\":1,\\"class\\":1,\\"year\\":1,\\"also\\":2,\\"korean\\":6,\\"name\\":1,\\"yoo\\":1,\\"shin\\":1,\\"choi\\":1,\\"obtained\\":1,\\"undergraduate\\":1,\\"degree\\":1,\\"economics\\":1,\\"spokeswoman\\":1,\\"melodie\\":1,\\"jackson\\":1,\\"said\\":8,\\"would\\":2,\\"not\\":5,\\"comment\\":2,\\"on\\":4,\\"order\\":1,\\"part\\":1,\\"longfought\\":1,\\"quest\\":1,\\"aggrieved\\":1,\\"investor\\":2,\\"sang\\":1,\\"cheol\\"
:1,\\"woo\\":3,\\"collect\\":2,\\"judgment\\":4,\\"against\\":1,\\"involving\\":1,\\"south\\":4,\\"business\\":3,\\"deal\\":1,\\"case\\":3,\\"could\\":2,\\"have\\":3,\\"furtherreaching\\":1,\\"implications\\":1,\\"douglas\\":1,\\"kellner\\":2,\\"manhattan\\":1,\\"lawyer\\":2,\\"who\\":2,\\"specializes\\":1,\\"recovering\\":1,\\"hidden\\":1,\\"worldwide\\":1,\\"if\\":2,\\"diverted\\":1,\\"funds\\":1,\\"when\\":1,\\"should\\":1,\\"been\\":3,\\"paying\\":1,\\"thats\\":1,\\"fraudulent\\":1,\\"transfer\\":1,\\"they\\":2,\\"sue\\":2,\\"get\\":2,\\"back\\":2,\\"theyd\\":1,\\"be\\":1,\\"entitled\\":1,\\"it\\":4,\\"can\\":1,\\"show\\":1,\\"was\\":6,\\"fraudulently\\":1,\\"transferred\\":1,\\"john\\":1,\\"han\\":1,\\"firm\\":2,\\"kobre\\":1,\\"kim\\":1,\\"which\\":5,\\"handling\\":1,\\"investors\\":1,\\"had\\":3,\\"plans\\":1,\\"unwittingly\\":1,\\"entangled\\":1,\\"dispute\\":1,\\"collection\\":1,\\"effort\\":1,\\"dates\\":1,\\"stock\\":2,\\"collapse\\":2,\\"littauer\\":2,\\"technologies\\":1,\\"ltd\\":1,\\"technology\\":1,\\"seoul\\":1,\\"high\\":2,\\"major\\":1,\\"fled\\":1,\\"korea\\":3,\\"amid\\":1,\\"claims\\":1,\\"price\\":1,\\"manipulation\\":1,\\"departing\\":1,\\"before\\":3,\\"authorities\\":2,\\"arrested\\":1,\\"partner\\":1,\\"later\\":3,\\"insiders\\":1,\\"profited\\":1,\\"selling\\":1,\\"their\\":1,\\"shares\\":1,\\"while\\":2,\\"minority\\":1,\\"shareholders\\":1,\\"including\\":1,\\"suffered\\":1,\\"enormous\\":1,\\"losses\\":1,\\"ordered\\":1,\\"pay\\":1,\\"million\\":2,\\"mushroomed\\":1,\\"because\\":5,\\"accumulating\\":1,\\"interest\\":1,\\"managing\\":1,\\"director\\":1,\\"richard\\":1,\\"lee\\":1,\\"related\\":1,\\"lawsuit\\":1,\\"pending\\":1,\\"kong\\":4,\\"filed\\":1,\\"appeared\\":1,\\"unaware\\":1,\\"until\\":2,\\"just\\":1,\\"overturned\\":1,\\"supreme\\":1,\\"all\\":1,\\"defendants\\":1,\\"except\\":1,\\"upheld\\":1,\\"him\\":1,\\"did\\":2,\\"appear\\":1,\\"defend\\":1,\\"himself\\":1,\\"acknowledging\\":1,\\"fined\\":1,\\"connection\\":1,\\"mat
ter\\":1,\\"maintains\\":1,\\"commit\\":1,\\"offenses\\":1,\\"woos\\":1,\\"lawyers\\":1,\\"argue\\":1,\\"efforts\\":1,\\"hampered\\":1,\\"what\\":1,\\"papers\\":2,\\"called\\":1,\\"mazelike\\":1,\\"network\\":1,\\"offshore\\":1,\\"nominees\\":1,\\"trusts\\":1,\\"many\\":1,\\"are\\":1,\\"managed\\":1,\\"close\\":1,\\"family\\":1,\\"members\\":1,\\"classmates\\":1,\\"example\\":1,\\"estate\\":1,\\"where\\":1,\\"lives\\":1,\\"section\\":1,\\"forbes\\":1,\\"described\\":1,\\"wealthiest\\":1,\\"neighborhood\\":1,\\"earth\\":1,\\"owned\\":2,\\"through\\":1,\\"series\\":1,\\"shell\\":1,\\"companies\\":1,\\"turn\\":3,\\"british\\":1,\\"virgin\\":1,\\"islands\\":1,\\"say\\":1,\\"entered\\":1,\\"feb\\":1,\\"william\\":1,\\"g\\":1,\\"young\\":1,\\"district\\":1,\\"gives\\":1,\\"march\\":1,\\"over\\":2,\\"banking\\":1,\\"orders\\":1,\\"spackmans\\":2,\\"daughter\\":1,\\"claire\\":1,\\"sophomore\\":1,\\"testify\\":1,\\"records\\":1,\\"about\\":1,\\"her\\":1,\\"fathers\\":1,\\"american\\":1,\\"citizen\\":1,\\"permanent\\":1,\\"resident\\":1,\\"well\\":1,\\"partly\\":1,\\"son\\":1,\\"james\\":1,\\"adopted\\":1,\\"americans\\":1,\\"after\\":1,\\"biological\\":1,\\"parents\\":1,\\"died\\":1,\\"during\\":1,\\"war\\":1,\\"advanced\\":1,\\"world\\":1,\\"become\\":1,\\"chief\\":1,\\"prudentials\\":1,\\"insurance\\":1,\\"holdings\\":1,\\"younger\\":1,\\"include\\":1,\\"entertainment\\":1,\\"produced\\":1,\\"science\\":1,\\"fiction\\":1,\\"movie\\":1,\\"snowpiercer\\":1,\\"starring\\":1,\\"tilda\\":1,\\"swinton\\":1,\\"octavia\\":1,\\"spencer\\":1}"'
In order to get a Python dictionary from the above string, I have to call json.loads() twice on it. Since I want to convert the whole column to dictionaries, I tried articles['word_count'].apply(lambda x: json.loads(json.loads(x))), but this gives me an error:
TypeError: the JSON object must be str, bytes or bytearray, not 'float'
How do I fix this? OR am I missing a command when I export to CSV from my database? OR am I missing a command when I call read_csv in Pandas?
Note: I have tried the converters option of read_csv, and I get this error: JSONDecodeError: Expecting value: line 1 column 1 (char 0). My converter function is:
def dec(s):
    return json.loads(json.loads(s))

Use pd.io.json.json_normalize() to convert an entire column of JSON data into a separate DataFrame with the same number of rows:
http://pandas.pydata.org/pandas-docs/version/0.19.0/generated/pandas.io.json.json_normalize.html
For your case it'd be something like this:
pd.io.json.json_normalize(articles.word_count)
You might have to preprocess it if Pandas doesn't understand the escaping in your input data.
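A minimal preprocessing sketch, assuming the float in your TypeError is a NaN coming from an empty cell (json.loads only accepts strings, so missing values have to be passed through):

```python
import json
import pandas as pd

# Tiny stand-in for the real column: one doubly encoded JSON cell, one missing cell.
df = pd.DataFrame({'word_count': ['"{\\"he\\": 8}"', None]})

def dec(s):
    if not isinstance(s, str):   # NaN/None cells are not JSON strings
        return None
    return json.loads(json.loads(s))  # undo the double encoding

decoded = df['word_count'].apply(dec)
```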
Beyond all that, since your data comes from a database, you should consider just loading it directly, without the CSV intermediary. Pandas has functions for this, such as read_sql_query() and read_sql_table().
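A sketch of that direct route, with SQLite standing in for Postgres so the example is self-contained; with Postgres you would pass a psycopg2 connection or SQLAlchemy engine instead:

```python
import json
import sqlite3
import pandas as pd

# Load the table straight into a DataFrame, skipping the CSV round-trip.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE articles (id INTEGER, word_count TEXT)")
conn.execute('INSERT INTO articles VALUES (1, \'{"he": 8}\')')
articles = pd.read_sql_query("SELECT * FROM articles", conn)

# jsonb arrives as plain text, so a single json.loads is enough
word_counts = articles['word_count'].apply(json.loads)
```

Because the data never passes through CSV quoting, the double-encoding problem disappears entirely.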

Related

Error when importing GeoJson into BigQuery

I'm trying to load GeoJson data [1] into BigQuery via Cloud Shell but I'm getting the following error:
Failed to parse JSON: Top-level GeoJson 'type' member should have value 'Feature', but was 'FeatureCollection'.; ParsedString returned false; Could not parse value; Parser terminated before end of string
It feels like the GeoJson file is not formatted properly for BQ but I have no idea if that's true or how to fix it.
[1] https://github.com/tonywr71/GeoJson-Data/blob/master/australian-suburbs.geojson
Expounding on @scespinoza's answer, I was able to convert to newline-delimited GeoJSON and load it into BigQuery with the following steps:
geojson2ndjson geodata.txt > geodata_converted.txt
Running this command on the full file, I encountered an error, but I was able to work around it by splitting the data into two tables and applying the same command to each. Both tables then loaded into BigQuery without issue.
Your file is in standard GeoJSON format, but BigQuery only accepts newline-delimited GeoJSON: individual GeoJSON objects, one per line (see the documentation: https://cloud.google.com/bigquery/docs/geospatial-data#geojson-files). So you should first convert the dataset to the appropriate format. Here is a good and simple explanation of how it works: https://stevage.github.io/ndgeojson/.
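The conversion itself is mechanical: write each Feature of the FeatureCollection on its own line, which is what geojson2ndjson does from the command line. A sketch in Python (the coordinates and names here are made up for illustration):

```python
import json

# A miniature FeatureCollection standing in for the real dataset.
fc = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [151.21, -33.87]},
         "properties": {"name": "Sydney"}},
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [150.89, -34.43]},
         "properties": {"name": "Wollongong"}},
    ],
}

# One Feature per line: the newline-delimited form BigQuery expects.
ndjson = "\n".join(json.dumps(feat) for feat in fc["features"])
```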

Method Error in Julia: thinks CSV is boolean and won't convert to string

I am trying to read a CSV file into Julia. When I open the file in Excel, it's a 199×7 matrix of numbers. I am using the following code to create a variable, Xrel:
Xrel = CSV.read(joinpath(data_path,"Xrel.csv"), header=false)
However, when I try to do this, Julia produces:
"MethodError: Cannot 'convert' an object of type Bool to an object of type String."
data_path is defined in previous code to save space.
I've checked my paths and opened the CSV without a problem in R - it's only in Julia that I am having an issue.
I am confused as to why Julia is saying that my data is Boolean when it's a matrix of numbers.
How can I resolve this to read in my CSV file?
Thanks!!
I think you should either use CSV.File() or add a sink argument to CSV.read():
CSV.File(joinpath(data_path,"Xrel.csv"), header=false)
# or with a DataFrame as sink
using DataFrames
CSV.read(joinpath(data_path,"Xrel.csv"), DataFrame, header=false)
from the docs:
?CSV.read
CSV.read(source, sink::T; kwargs...) => T
Read and parses a delimited file, materializing directly using the sink function.
CSV.read supports all the same keyword arguments as CSV.File.

How to read a csv in pyspark using error_bad_line = False as we use in pandas

I am trying to read a CSV into pyspark, but the problem is that it has a text column, which is producing some bad lines in the data. This text column also contains newline characters, which corrupt the data in the subsequent columns.
I have tried using pandas and use some extra parameters to load my csv
a = pd.read_csv("Mycsvname.csv",sep = '~',quoting=csv.QUOTE_NONE, dtype = str,error_bad_lines=False, quotechar='~', lineterminator='\n' )
It is working fine in pandas but I want to load the csv in pyspark
So, is there any similar way to load a csv in pyspark with all the above parameters?
In the current version of Spark (from Spark 2.2 onwards, I believe), you can also read multi-line records from CSV.
If the newline is your only problem with the text column you can use a read command like this:
spark.read.csv("YOUR_FILE_NAME", header="true", escape="\"", quote="\"", multiLine=True)
Note: in our case the escape and quotation characters were both ", so you might want to edit those options to use your ~ and include sep='~'.
You can also look at the documentation (http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html?highlight=csv#pyspark.sql.DataFrameReader.csv) for more details.

How open and read JSON file?

I have a JSON file, but the file weighs 186 MB. I tried to read it via Python:
import json
f = open('file.json','r')
r = json.loads(f.read())
ValueError: Extra data: line 88 column 2 -...
How can I open and read it? Please help.
Your JSON file isn't a JSON file, it's several JSON files mashed together.
The first instance of this occurs in the 1630070th character:
'шова"}]}]}{"response":[{"count'
^ here
That said, jq appears to be able to handle it, so the individual parts are fine.
You'll need to split the file at the boundaries of the individual JSON objects. Try catching the JSONDecodeError and using its .colno to slice the text into correct chunks.
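An alternative to catching the error: json.JSONDecoder.raw_decode returns each parsed object together with the offset where parsing stopped, so you can walk the whole string without slicing on error positions. A sketch:

```python
import json

def split_concatenated_json(text):
    """Parse several JSON documents mashed together in one string."""
    decoder = json.JSONDecoder()
    objs, idx = [], 0
    while idx < len(text):
        # raw_decode returns (object, index just past the object)
        obj, end = decoder.raw_decode(text, idx)
        objs.append(obj)
        idx = end
        while idx < len(text) and text[idx].isspace():
            idx += 1  # skip whitespace between documents
    return objs

parts = split_concatenated_json('{"response": [1]}{"response": [2]}')
```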
It should be:
r = json.load(f)

Saving Pandas DataFrame and meta-data to JSON format

I have a need to save a Pandas DataFrame, along with some metadata to a file in JSON format. (The JSON format is a requirement.)
Background
A) I can successfully read/write my rather large Pandas Dataframe from/to JSON using DataFrame.to_json() and DataFrame.from_json(). No problems.
B) I have no problems saving my metadata (dict) to JSON using json.dump()/json.load()
My first attempt
Since Pandas does not support DataFrame metadata directly, my first thought was to
top_level_dict = {}
top_level_dict['data'] = df.to_dict()
top_level_dict['metadata'] = {'some':'stuff'}
json.dump(top_level_dict, fp)
Failure modes
C) I have found that even the simplified case of
df_dict = df.to_dict()
json.dump(df_dict, fp)
fails with:
TypeError: key (u'US', 112, 5, 80, 'wl') is not a string
D) Investigating, I've found that the complement also fails.
df.to_json(fp)
json.load(fp)
fails with
384 raise ValueError("No JSON object could be decoded")
ValueError: Expecting : delimiter: line 1 column 17 (char 16)
So it appears that the Pandas JSON format and Python's json library are not compatible.
My first thought is to chase down a way to modify the df.to_dict() output of C to make it amenable to Python's JSON library, but I keep hearing "If you're struggling to do something in Python, you're probably doing it wrong." in my head.
Question
What is the canonical/recommended method for adding metadata to a Pandas DataFrame and storing it to a JSON-formatted file?
Python 2.7.10
Pandas 0.17
Edit 1:
While trying out Evan Wright's great answer, I found the source of my problems: Pandas (as of 0.17) does not like saving Multi-Indexed DataFrames to JSON. The library I had created to save my (Multi-Indexed) DataFrames is quietly performing a df.reset_index() before calling DataFrame.to_json(). My newer code was not. So it was DataFrame.to_json() burping on the MultiIndex.
Lesson: Read the documentation kids, even when it's your own documentation.
Edit 2:
If you need to store both the DataFrame and the metadata in a single JSON object, see my answer below.
You should be able to just put the data on separate lines.
Writing:
f = open('test.json', 'w')
df.to_json(f)
print >> f  # write a newline so the two JSON documents sit on separate lines
json.dump(metadata, f)
Reading:
f = open('test.json')
df = pd.read_json(next(f))
metadata = json.loads(next(f))
In my question, I erroneously stated that I needed the JSON in a file. In that situation, Evan Wright's answer is my preferred solution.
In my case, I actually need to store the JSON output as a single "blob" in a database, so my dictionary-wrangling approach appears to be necessary.
If you similarly need to store the data and metadata in a single JSON blob, the following code will work:
top_level_dict = {}
top_level_dict['data'] = df.to_dict()
top_level_dict['metadata'] = {'some':'stuff'}
with open(FILENAME, 'w') as outfile:
    json.dump(top_level_dict, outfile)
Just make sure the DataFrame is singly-indexed. If it's Multi-Indexed, reset the index (i.e. df.reset_index()) before doing the above.
Reading the data back in:
with open(FILENAME, 'r') as infile:
    top_level_dict = json.load(infile)
df_as_dict = top_level_dict.pop('data', {})
df = pandas.DataFrame.from_dict(df_as_dict)
meta = top_level_dict['metadata']
At this point, you'll need to re-create your MultiIndex (if applicable).
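Putting the pieces together, the whole round trip on a small singly-indexed frame might look like the sketch below. Note one wrinkle: JSON turns the integer index keys into strings ('0', '1', ...), which is yet another reason to reset any MultiIndex before serializing.

```python
import json
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})

# Wrap the DataFrame dict and the metadata in one top-level dict
# and serialize to a single JSON blob.
blob = json.dumps({'data': df.to_dict(), 'metadata': {'some': 'stuff'}})

# Rebuild both halves from the blob.
top = json.loads(blob)
df2 = pd.DataFrame.from_dict(top.pop('data'))
meta = top['metadata']
```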