How do you read a file inside a zip file as text, not bytes? - csv

A simple program for reading a CSV file inside a ZIP archive:
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
    pass
works in Python 2.7:
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
but not in Python 3.2:
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
The csv module in Python 3 wants to see a text file, but zipfile.ZipFile.open returns a zipfile.ZipExtFile that is always treated as binary data.
How does one make this work in Python 3?

I just noticed that Lennart's answer didn't work with Python 3.1, but it does work with Python 3.2. They've enhanced zipfile.ZipExtFile in Python 3.2 (see the release notes). These changes appear to make zipfile.ZipExtFile work nicely with io.TextIOWrapper.
Incidentally, it works in Python 3.1, if you uncomment the hacky lines below to monkey-patch zipfile.ZipExtFile, not that I would recommend this sort of hackery. I include it only to illustrate the essence of what was done in Python 3.2 to make things work nicely.
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
# items_file.readable = lambda: True
# items_file.writable = lambda: False
# items_file.seekable = lambda: False
# items_file.read1 = items_file.read
items_file = io.TextIOWrapper(items_file)
for idx, row in enumerate(csv.DictReader(items_file)):
    print('Processing row {0} -- row = {1}'.format(idx, row))
If I had to support py3k < 3.2, then I would go with the solution in my other answer.
Update for 3.6+
Starting w/3.6, support for mode='U' was removed:
Changed in version 3.6: Removed support of mode='U'. Use io.TextIOWrapper for reading compressed text files in universal newlines mode.
Starting w/3.8, a Path object was added, which gives us an open() method that we can call like the built-in open() function (passing newline='' in the case of our CSV), and we get back an io.TextIOWrapper object the csv readers accept. See Yuri's answer below.

You can wrap it in an io.TextIOWrapper.
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
Should work.

And if you'd just like to read a file into a string:
from zipfile import ZipFile

with ZipFile('spam.zip') as myzip:
    with myzip.open('eggs.txt') as myfile:
        eggs = myfile.read().decode('UTF-8')

Lennart's answer is on the right track (Thanks, Lennart, I voted up your answer) and it almost works:
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(items_file, encoding='iso-8859-1', newline='')
for idx, row in enumerate(csv.DictReader(items_file)):
    print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 7, in <module>
items_file = io.TextIOWrapper(items_file,
encoding='iso-8859-1',
newline='')
AttributeError: readable
The problem appears to be that io.TextIOWrapper's first required parameter is a buffer, not a file object.
This appears to work:
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
This seems a little complex and also it seems annoying to have to read in a whole (perhaps huge) zip file into memory. Any better way?
Here it is in action:
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
for idx, row in enumerate(csv.DictReader(items_file)):
    print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Processing row 0
Processing row 1
Processing row 2
...
Processing row 250

Starting with Python 3.8, the zipfile module has the Path object, which we can use with its open() method to get an io.TextIOWrapper object, which can be passed to the csv readers:
import csv, sys, zipfile
# Give a string path to the ZIP archive, and
# the archived file to read from
items_zipf = zipfile.Path(sys.argv[1], at='items.csv')
# Then use the open method, like you'd usually
# use the built-in open()
items_f = items_zipf.open(newline='')
# Pass the TextIO-like file to your reader as normal
for row in csv.DictReader(items_f):
    print(row)

Here's a minimal recipe to open a zip file and read a text file inside that zip. I found the trick to be the TextIOWrapper read() method, not mentioned in any answers above (BytesIO.read() was mentioned above, but Python docs recommend TextIOWrapper).
import zipfile
import io
# Create the ZipFile object
zf = zipfile.ZipFile('my_zip_file.zip')
# Read a file that is inside the zip...reads it as a binary file-like object
my_file_binary = zf.open('my_text_file_inside_zip.txt')
# Convert the binary file-like object directly to text using TextIOWrapper and its read() method
my_file_text = io.TextIOWrapper(my_file_binary, encoding='utf-8', newline='').read()
I wish they had kept the mode='U' parameter in the ZipFile open() method to do this same thing, since that was so succinct, but alas, it is gone.
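On 3.8+, zipfile.Path also gives a comparably succinct one-liner for the read-it-all-as-text case (a small sketch; the file names are the same hypothetical ones as above):
import zipfile
# Read an archived text file straight into a str (Python 3.8+).
text = zipfile.Path('my_zip_file.zip', at='my_text_file_inside_zip.txt').read_text(encoding='utf-8')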

Related

Has PackageLoader changed with Jinja2 (3.0.1) and Python3 (3.9.5)?

I'm using Jinja2 (3.0.1), Python3 (3.9.5), and macOS (11.3.1).
These lines used to work:
from jinja2 import Environment, PackageLoader
e = Environment(loader = PackageLoader("__main__", "."))
but now produce:
Traceback (most recent call last):
File "/Users/downing/Dropbox/jinja2/Jinja2.py", line 36, in <module>
main()
File "/Users/downing/Dropbox/jinja2/Jinja2.py", line 31, in main
e = Environment(loader = PackageLoader("__main__", "."))
File "/usr/local/lib/python3.9/site-packages/jinja2/loaders.py", line 286, in __init__
spec = importlib.util.find_spec(package_name)
File "/usr/local/Cellar/python#3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/util.py", line 114, in find_spec
raise ValueError('{}.__spec__ is None'.format(name))
ValueError: __main__.__spec__ is None
Just discovered that FileSystemLoader still works and looks more appropriate for my use anyway.
The PackageLoader docs say:
Changed in version 3.0: No longer uses setuptools as a dependency.
which seems significant.
__main__.__spec__ can be None if __main__ isn't imported from a module. Probably python -m Jinja2 would still work for you, even if python Jinja2.py doesn't.
Like you, I switched to FileSystemLoader for this case, but I added a conditional to continue using PackageLoader when possible. That allows python -m path.to.my.module to continue working if the module is installed in a ZIP file or egg:
import os
import jinja2

if __spec__ is not None:
    loader = jinja2.PackageLoader('__main__')
else:
    loader = jinja2.FileSystemLoader(
        os.path.join(os.path.dirname(__file__), 'templates')
    )
env = jinja2.Environment(loader=loader)

Automation of python file in bash [duplicate]

In Python, how can we find out the command line arguments that were provided for a script, and process them?
For some more specific examples, see Implementing a "[command] [action] [parameter]" style command-line interfaces? and How do I format positional argument help using Python's optparse?.
import sys
print("\n".join(sys.argv))
sys.argv is a list that contains all the arguments passed to the script on the command line. sys.argv[0] is the script name.
Basically,
import sys
print(sys.argv[1:])
The canonical solution in the standard library is argparse (docs):
Here is an example:
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_argument("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="don't print status messages to stdout")
args = parser.parse_args()
argparse supports (among other things):
Multiple options in any order.
Short and long options.
Default values.
Generation of a usage help message.
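For instance, assuming the parser above is saved as prog.py (a hypothetical name), a quick sketch of what comes back; parse_args() also accepts an explicit argument list, which is handy for testing:
args = parser.parse_args(['-f', 'report.txt', '-q'])
print(args.filename)  # report.txt
print(args.verbose)   # False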
Just going around evangelizing for argparse, which is better for these reasons... essentially:
(copied from the link)
argparse can handle positional and optional arguments, while optparse can handle only optional arguments
argparse isn't dogmatic about what your command line interface should look like - options like -file or /file are supported, as are required options. optparse refuses to support these features, preferring purity over practicality
argparse produces more informative usage messages, including command-line usage determined from your arguments, and help messages for both positional and optional arguments. The optparse module requires you to write your own usage string, and has no way to display help for positional arguments.
argparse supports actions that consume a variable number of command-line args, while optparse requires that the exact number of arguments (e.g. 1, 2, or 3) be known in advance
argparse supports parsers that dispatch to sub-commands, while optparse requires setting allow_interspersed_args and doing the parser dispatch manually
And my personal favorite:
argparse allows the type and action parameters to add_argument() to be specified with simple callables, while optparse requires hacking class attributes like STORE_ACTIONS or CHECK_METHODS to get proper argument checking
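To illustrate that last point, a minimal sketch: any one-argument callable works as type=, and it can reject a value by raising argparse.ArgumentTypeError (the even_int validator here is hypothetical):
import argparse
def even_int(text):
    # Accept only even integers; argparse turns the error into a clean message.
    value = int(text)
    if value % 2:
        raise argparse.ArgumentTypeError('expected an even integer')
    return value
parser = argparse.ArgumentParser()
parser.add_argument('count', type=even_int)
print(parser.parse_args(['4']).count)  # 4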
There is also the argparse stdlib module (an "improvement" on stdlib's optparse module). Example from the introduction to argparse:
# script.py
import argparse
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'integers', metavar='int', type=int, choices=range(10),
        nargs='+', help='an integer in the range 0..9')
    parser.add_argument(
        '--sum', dest='accumulate', action='store_const', const=sum,
        default=max, help='sum the integers (default: find the max)')
    args = parser.parse_args()
    print(args.accumulate(args.integers))
Usage:
$ script.py 1 2 3 4
4
$ script.py --sum 1 2 3 4
10
If you need something fast and not very flexible
main.py:
import sys
first_name = sys.argv[1]
last_name = sys.argv[2]
print("Hello " + first_name + " " + last_name)
Then run python main.py James Smith
to produce the following output:
Hello James Smith
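Since indexing sys.argv past its length raises an IndexError, a small guard keeps the failure readable when arguments are missing (just a sketch of the same script):
import sys
# Exit with a usage hint instead of a traceback when arguments are missing.
if len(sys.argv) < 3:
    sys.exit("usage: main.py FIRST_NAME LAST_NAME")
first_name = sys.argv[1]
last_name = sys.argv[2]
print("Hello " + first_name + " " + last_name)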
The docopt library is really slick. It builds an argument dict from the usage string for your app.
Eg from the docopt readme:
"""Naval Fate.
Usage:
naval_fate.py ship new <name>...
naval_fate.py ship <name> move <x> <y> [--speed=<kn>]
naval_fate.py ship shoot <x> <y>
naval_fate.py mine (set|remove) <x> <y> [--moored | --drifting]
naval_fate.py (-h | --help)
naval_fate.py --version
Options:
-h --help Show this screen.
--version Show version.
--speed=<kn> Speed in knots [default: 10].
--moored Moored (anchored) mine.
--drifting Drifting mine.
"""
from docopt import docopt
if __name__ == '__main__':
arguments = docopt(__doc__, version='Naval Fate 2.0')
print(arguments)
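For illustration, an invocation like the one below prints the parsed dict, roughly along these lines (abridged; the exact keys follow the usage string):
$ python naval_fate.py ship Guardian move 10 50 --speed=20
{'--speed': '20',
 '<name>': ['Guardian'],
 '<x>': '10',
 '<y>': '50',
 'move': True,
 'ship': True,
 ...}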
One way to do it is using sys.argv. This will print the script name as the first argument and all the other parameters that you pass to it.
import sys
for arg in sys.argv:
    print(arg)
# Set default args as -h, if no args:
if len(sys.argv) == 1:
    sys.argv[1:] = ["-h"]
I use optparse myself, but really like the direction Simon Willison is taking with his recently introduced optfunc library. It works by:
"introspecting a function
definition (including its arguments
and their default values) and using
that to construct a command line
argument parser."
So, for example, this function definition:
def geocode(s, api_key='', geocoder='google', list_geocoders=False):
is turned into this optparse help text:
Options:
  -h, --help            show this help message and exit
  -l, --list-geocoders
  -a API_KEY, --api-key=API_KEY
  -g GEOCODER, --geocoder=GEOCODER
I like getopt from stdlib, eg:
import getopt
import sys

try:
    opts, args = getopt.getopt(sys.argv[1:], 'h', ['help'])
except getopt.GetoptError as err:
    usage(err)
for opt, arg in opts:
    if opt in ('-h', '--help'):
        usage()
if len(args) != 1:
    usage("specify thing...")
Lately I have been wrapping something similar to this to make things less verbose (eg; making "-h" implicit).
As you can see, optparse is deprecated: "The optparse module is deprecated and will not be developed further; development will continue with the argparse module."
Pocoo's click is more intuitive, requires less boilerplate, and is at least as powerful as argparse.
The only weakness I've encountered so far is that you can't do much customization to help pages, but that usually isn't a requirement and docopt seems like the clear choice when it is.
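For comparison, a minimal click sketch (the option and argument names here are made up for illustration):
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.argument('name')
def hello(count, name):
    """Greet NAME the given number of times."""
    for _ in range(count):
        click.echo('Hello, %s!' % name)
if __name__ == '__main__':
    hello()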
import argparse
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
                    help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
                    const=sum, default=max,
                    help='sum the integers (default: find the max)')
args = parser.parse_args()
print(args.accumulate(args.integers))
Assuming the Python code above is saved into a file called prog.py
$ python prog.py -h
Ref-link: https://docs.python.org/3.3/library/argparse.html
You may be interested in a little Python module I wrote to make handling of command line arguments even easier (open source and free to use) - Commando
Yet another option is argh. It builds on argparse, and lets you write things like:
import argh
# declaring:
def echo(text):
    "Returns given word as is."
    return text
def greet(name, greeting='Hello'):
    "Greets the user with given name. The greeting is customizable."
    return greeting + ', ' + name
# assembling:
parser = argh.ArghParser()
parser.add_commands([echo, greet])
# dispatching:
if __name__ == '__main__':
    parser.dispatch()
It will automatically generate help and so on, and you can use decorators to provide extra guidance on how the arg-parsing should work.
I recommend looking at docopt as a simple alternative to these others.
docopt is a new project that works by parsing your --help usage message rather than requiring you to implement everything yourself. You just have to put your usage message in the POSIX format.
Also, with python3 you might find it convenient to use Extended Iterable Unpacking to handle optional positional arguments without additional dependencies:
import sys

try:
    _, arg1, arg2, arg3, *_ = sys.argv + [None] * 2
except ValueError:
    print("Not enough arguments", file=sys.stderr)  # unhandled exception traceback is meaningful enough also
    exit(-1)
The above argv unpack makes arg2 and arg3 "optional" - if they are not specified in argv, they will be None, while if the first is not specified, ValueError will be thrown:
Traceback (most recent call last):
File "test.py", line 3, in <module>
_, arg1, arg2, arg3, *_ = sys.argv + [None] * 2
ValueError: not enough values to unpack (expected at least 4, got 3)
My solution is entrypoint2. Example:
from entrypoint2 import entrypoint
@entrypoint
def add(file, quiet=True):
    ''' This function writes report.
    :param file: write report to FILE
    :param quiet: don't print status messages to stdout
    '''
    print(file, quiet)
help text:
usage: report.py [-h] [-q] [--debug] file
This function writes report.
positional arguments:
  file         write report to FILE
optional arguments:
  -h, --help   show this help message and exit
  -q, --quiet  don't print status messages to stdout
  --debug      set logging level to DEBUG
import sys
# Command line arguments are stored into sys.argv
# print(sys.argv[1:])
# I used the slice [1:] to print all the elements except the first
# This is because the first element of sys.argv is the program name
# So the first argument is sys.argv[1], the second is sys.argv[2], etc.
print("File name: " + sys.argv[0])
print("Arguments:")
for i in sys.argv[1:]:
    print(i)
Let's name this file command_line.py and let's run it:
C:\Users\simone> python command_line.py arg1 arg2 arg3 ecc
File name: command_line.py
Arguments:
arg1
arg2
arg3
ecc
Now let's write a simple program, sum.py:
import sys
try:
    print(sum(map(float, sys.argv[1:])))
except:
    print("An error has occurred")
Result:
C:\Users\simone> python sum.py 10 4 6 3
23
This handles simple switches, value switches with optional alternative flags.
import sys
# [IN] argv - array of args
# [IN] switch - switch to seek
# [IN] val - expecting value
# [IN] alt - switch alternative
# returns value or True if val not expected
def parse_cmd(argv, switch, val=None, alt=None):
    for idx, x in enumerate(argv):
        if x == switch or x == alt:
            if val:
                if len(argv) > (idx+1):
                    if not argv[idx+1].startswith('-'):
                        return argv[idx+1]
            else:
                return True
# expecting a value for -i
i = parse_cmd(sys.argv[1:], "-i", True, "--input")
# no value needed for -p
p = parse_cmd(sys.argv[1:], "-p")
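For example, with a hypothetical invocation like
$ python parse_demo.py -i data.txt -p
i would be 'data.txt' and p would be True; a switch that never appears on the command line yields None.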
Several of our biotechnology clients have posed these two questions recently:
How can we execute a Python script as a command?
How can we pass input values to a Python script when it is executed as a command?
I have included a Python script below which I believe answers both questions. Let's assume the following Python script is saved in the file test.py:
#
#----------------------------------------------------------------------
#
# file name: test.py
#
# input values: data - location of data to be processed
# date - date data were delivered for processing
# study - name of the study where data originated
# logs - location where log files should be written
#
# macOS usage:
#
# python3 test.py "/Users/lawrence/data" "20220518" "XYZ123" "/Users/lawrence/logs"
#
# Windows usage:
#
# python test.py "D:\data" "20220518" "XYZ123" "D:\logs"
#
#----------------------------------------------------------------------
#
# import needed modules...
#
import sys
import datetime
def main(argv):
    #
    # print message that process is starting...
    #
    print("test process starting at", datetime.datetime.now().strftime("%Y%m%d %H:%M"))
    #
    # set local values from input values...
    #
    data = sys.argv[1]
    date = sys.argv[2]
    study = sys.argv[3]
    logs = sys.argv[4]
    #
    # print input arguments...
    #
    print("data value is", data)
    print("date value is", date)
    print("study value is", study)
    print("logs value is", logs)
    #
    # print message that process is ending...
    #
    print("test process ending at", datetime.datetime.now().strftime("%Y%m%d %H:%M"))
#
# call main() to begin processing...
#
if __name__ == '__main__':
    main(sys.argv)
The script can be executed on a macOS computer in a Terminal shell as shown below and the results will be printed to standard output (be sure the current directory includes the test.py file):
$ python3 test.py "/Users/lawrence/data" "20220518" "XYZ123" "/Users/lawrence/logs"
test process starting at 20220518 16:51
data value is /Users/lawrence/data
date value is 20220518
study value is XYZ123
logs value is /Users/lawrence/logs
test process ending at 20220518 16:51
The script can also be executed on a Windows computer in a Command Prompt as shown below and the results will be printed to standard output (be sure the current directory includes the test.py file):
D:\scripts>python test.py "D:\data" "20220518" "XYZ123" "D:\logs"
test process starting at 20220518 17:20
data value is D:\data
date value is 20220518
study value is XYZ123
logs value is D:\logs
test process ending at 20220518 17:20
This script answers both questions posed above and is a good starting point for developing scripts that will be executed as commands with input values.
Reason for the new answer:
Existing answers specify multiple options.
The standard option is to use argparse; a few answers provided examples from the documentation, and one answer suggested its advantages. But none of them explains the answer to the OP's actual question adequately or clearly, at least for newbies.
An example of argparse:
import argparse
def load_config(conf_file):
    pass
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Specifies one argument from the command line
    # You can have any number of arguments like this
    parser.add_argument("conf_file", help="configuration file for the application")
    args = parser.parse_args()
    config = load_config(args.conf_file)
The above program expects a config file as an argument. If you provide it, it will execute happily. If not, it will print the following:
usage: test.py [-h] conf_file
test.py: error: the following arguments are required: conf_file
You can also make an argument optional.
You can specify the expected type for an argument using the type key:
parser.add_argument("age", type=int, help="age of the person")
You can specify a default value for an argument using the default key.
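For example (a hypothetical --retries option with a default):
parser.add_argument("--retries", type=int, default=3,
                    help="number of retries (default: 3)")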
This document will help you to understand it to an extent.

Convert a pipeline_pb2.TrainEvalPipelineConfig to JSON or YAML file for tensorflow object detection API

I want to convert a pipeline_pb2.TrainEvalPipelineConfig to JSON or YAML file format for the tensorflow object detection API. I tried converting the protobuf file using:
import tensorflow as tf
from google.protobuf import text_format
import yaml
from object_detection.protos import pipeline_pb2
def get_configs_from_pipeline_file(pipeline_config_path, config_override=None):
    '''
    read .config and convert it to proto_buffer_object
    '''
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    with tf.gfile.GFile(pipeline_config_path, "r") as f:
        proto_str = f.read()
        text_format.Merge(proto_str, pipeline_config)
    if config_override:
        text_format.Merge(config_override, pipeline_config)
    #print(pipeline_config)
    return pipeline_config
def create_configs_from_pipeline_proto(pipeline_config):
    '''
    Returns the configurations as dictionary
    '''
    configs = {}
    configs["model"] = pipeline_config.model
    configs["train_config"] = pipeline_config.train_config
    configs["train_input_config"] = pipeline_config.train_input_reader
    configs["eval_config"] = pipeline_config.eval_config
    configs["eval_input_configs"] = pipeline_config.eval_input_reader
    # Keeps eval_input_config only for backwards compatibility. All clients should
    # read eval_input_configs instead.
    if configs["eval_input_configs"]:
        configs["eval_input_config"] = configs["eval_input_configs"][0]
    if pipeline_config.HasField("graph_rewriter"):
        configs["graph_rewriter_config"] = pipeline_config.graph_rewriter
    return configs
configs = get_configs_from_pipeline_file('pipeline.config')
config_as_dict = create_configs_from_pipeline_proto(configs)
But when I try converting this returned dictionary to YAML with yaml.dump(config_as_dict) it says
TypeError: can't pickle google.protobuf.pyext._message.RepeatedCompositeContainer objects
For json.dumps(config_as_dict) it says:
Traceback (most recent call last):
File "config_file_parsing.py", line 48, in <module>
config_as_json = json.dumps(config_as_dict)
File "/usr/lib/python3.5/json/__init__.py", line 230, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.5/json/encoder.py", line 198, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.5/json/encoder.py", line 256, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.5/json/encoder.py", line 179, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: label_map_path: "label_map.pbtxt"
shuffle: true
tf_record_input_reader {
input_path: "dataset.record"
}
is not JSON serializable
Would appreciate some help here.
JSON can only dump a subset of the Python primitives and the dict and list collections (with limitations on self-referencing).
YAML is more powerful, and can be used to dump arbitrary Python objects. But only if those objects can be "investigated" during the representation phase of the dump, which essentially limits that to instances of pure Python classes. For objects created at the C level, one can make explicit dumpers, and if not available Python will try and use the pickle protocol to dump the data to YAML.
Inspecting protobuf on PyPI shows me that there are non-generic wheels available, which is always an indication of some C code optimization. Inspecting one of these files indeed shows a pre-compiled shared object.
Although you make a dict out of the config, this dict can of course only be dumped when all its keys and all its values can be dumped. Since your keys are strings (necessary for JSON), you need to look at each of the values, to find the one that doesn't dump, and convert that to a dumpable object structure (dict/list for JSON, pure Python class for YAML).
You might want to take a look at the json_format module (google.protobuf.json_format).
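As a sketch of that route (assuming pipeline_config is the parsed message returned by get_configs_from_pipeline_file above), google.protobuf.json_format can serialize the whole message, and the resulting plain dict is then safe for yaml.dump as well:
from google.protobuf import json_format
# Serialize the whole message instead of a dict of sub-messages.
config_as_json = json_format.MessageToJson(pipeline_config)  # JSON string
config_as_dict = json_format.MessageToDict(pipeline_config)  # plain dicts/lists
config_as_yaml = yaml.dump(config_as_dict)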

python 3 read csv UnicodeDecodeError

I have a very simple bit of code that takes in a CSV and puts it into a 2D array. It runs fine on Python 2, but in Python 3 I get the error below. Looking through the documentation, I think I need to use .decode(). Could someone please explain how to use it in the context of my code, and why I don't need to do anything in Python 2?
Error:
line 21, in
    for row in datareader:
  File "/usr/lib/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa9 in position 5002: invalid start byte
import csv
import sys
fullTable = sys.argv[1]
datareader = csv.reader(open(fullTable, 'r'), delimiter=',')
full_table = []
for row in datareader:
    full_table.append(row)
print(full_table)
open(sys.argv[1], encoding='ISO-8859-1')
The CSV contained characters which were not UTF-8, which seemed to be the default. I am, however, surprised that Python 2 dealt with this issue without any problems.
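Put together with the question's code, the fix is just the encoding argument (ISO-8859-1 assumed here; 0xa9 is the © character in Latin-1):
import csv
import sys
# Decode as Latin-1 instead of the UTF-8 default; newline='' is the
# csv module's recommended way to open files.
with open(sys.argv[1], 'r', encoding='ISO-8859-1', newline='') as f:
    full_table = [row for row in csv.reader(f, delimiter=',')]
print(full_table)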

Converting JSON files to .csv

I've found some data that someone is downloading into a JSON file (I think! - I'm a newb!). The file contains data on nearly 600 football players.
Here you can find the file
In the past, I have downloaded the json file and then used this code:
import csv
import json
json_data = open("file.json")
data = json.load(json_data)
f = csv.writer(open("fix_hists.csv","wb+"))
arr = []
for i in data:
    fh = data[i]["fixture_history"]
    array = fh["all"]
    for j in array:
        try:
            j.insert(0, str(data[i]["first_name"]))
        except:
            j.insert(0, 'error')
        try:
            j.insert(1, data[i]["web_name"])
        except:
            j.insert(1, 'error')
        try:
            f.writerow(j)
        except:
            f.writerow(['error', 'error'])
json_data.close()
Sadly, when I do this now in the command prompt, I get the following error:
Traceback (most recent call last):
File"fix_hist.py", line 12 (module)
fh = data[i]["fixture_history"]
TypeError: list indices must be integers, not str
Can this be fixed, or is there another way I can grab some of the data and convert it to .csv? Specifically the 'fixture_history', and then 'first_name', 'type_name', etc.
Thanks in advance for any help :)
Try this tool: http://www.convertcsv.com/json-to-csv.htm
You will need to configure a few things, but it should be easy enough.
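Alternatively, the TypeError itself points at a fix: the top level of that JSON is evidently a list, so iterate over its elements directly instead of indexing with a string key. A sketch under that assumption (field names taken from the question's code):
import csv
import json
with open("file.json") as json_data:
    data = json.load(json_data)
# The traceback suggests data is a list of player dicts, so loop over
# the players themselves rather than using data[i]["..."].
with open("fix_hists.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for player in data:
        for match in player["fixture_history"]["all"]:
            writer.writerow([player.get("first_name", "error"),
                             player.get("web_name", "error")] + match)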