I am trying to use the Colab notebook from this GitHub page to extract the triplets [term, opinion, value] from sentences in my custom dataset.
Here is an overview of the system architecture:
While I can use the sample offered in the Colab and also train the model with my data, I don't know how I should re-use the trained model on an unlabeled sample.
If I try to run the Colab as-is, changing only the test and dev data to unlabeled data, I encounter this error:
DEVICE=0 { "names": "sample", "seeds": [
0 ], "sep": ",", "name_out": "results", "kwargs": {
"trainer__cuda_device": 0,
"trainer__num_epochs": 10,
"trainer__checkpointer__num_serialized_models_to_keep": 1,
"model__span_extractor_type": "endpoint",
"model__modules__relation__use_single_pool": false,
"model__relation_head_type": "proper",
"model__use_span_width_embeds": true,
"model__modules__relation__use_distance_embeds": true,
"model__modules__relation__use_pair_feature_multiply": false,
"model__modules__relation__use_pair_feature_maxpool": false,
"model__modules__relation__use_pair_feature_cls": false,
"model__modules__relation__use_span_pair_aux_task": false,
"model__modules__relation__use_span_loss_for_pruners": false,
"model__loss_weights__ner": 1.0,
"model__modules__relation__spans_per_word": 0.5,
"model__modules__relation__neg_class_weight": -1 }, "root": "aste/data/triplet_data" } { "root": "/content/Span-ASTE/aste/data/triplet_data/sample", "train_kwargs": {
"seed": 0,
"trainer__cuda_device": 0,
"trainer__num_epochs": 10,
"trainer__checkpointer__num_serialized_models_to_keep": 1,
"model__span_extractor_type": "endpoint",
"model__modules__relation__use_single_pool": false,
"model__relation_head_type": "proper",
"model__use_span_width_embeds": true,
"model__modules__relation__use_distance_embeds": true,
"model__modules__relation__use_pair_feature_multiply": false,
"model__modules__relation__use_pair_feature_maxpool": false,
"model__modules__relation__use_pair_feature_cls": false,
"model__modules__relation__use_span_pair_aux_task": false,
"model__modules__relation__use_span_loss_for_pruners": false,
"model__loss_weights__ner": 1.0,
"model__modules__relation__spans_per_word": 0.5,
"model__modules__relation__neg_class_weight": -1 }, "path_config": "/content/Span-ASTE/training_config/aste.jsonnet", "repo_span_model": "/content/Span-ASTE", "output_dir": "model_outputs/aste_sample_c7b00b66bf7ec669d23b80879fda043d", "model_path": "models/aste_sample_c7b00b66bf7ec669d23b80879fda043d/model.tar.gz", "data_name": "sample", "task_name": "aste" }
# of original triplets: 11
# of triplets for current setup: 11
# of original triplets: 7
# of triplets for current setup: 7
Traceback (most recent call last):
  File "/usr/lib/python3.7/pdb.py", line 1699, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.7/pdb.py", line 1568, in _runscript
    self.run(statement)
  File "/usr/lib/python3.7/bdb.py", line 578, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/content/Span-ASTE/aste/main.py", line 1, in <module>
    import json
  File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 468, in _Fire
    target=component.__name__)
  File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/content/Span-ASTE/aste/main.py", line 278, in main
    scores = main_single(p, overwrite=True, seed=seeds[i], **kwargs)
  File "/content/Span-ASTE/aste/main.py", line 254, in main_single
    trainer.train(overwrite=overwrite)
  File "/content/Span-ASTE/aste/main.py", line 185, in train
    self.setup_data()
  File "/content/Span-ASTE/aste/main.py", line 177, in setup_data
    data.load()
  File "aste/data_utils.py", line 214, in load
    opinion_offset=self.opinion_offset,
  File "aste/evaluation.py", line 165, in read_inst
    o_output = line[2].split()  # opinion
IndexError: list index out of range
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /content/Span-ASTE/aste/evaluation.py(165)read_inst()
-> o_output = line[2].split()  # opinion
(Pdb)
From my understanding, it seems that it is searching for the labels to start the evaluation. The problem is that I don't have those labels, although I have provided a training set with similar data and its associated labels.
I am new to deep learning and also to AllenNLP, so I am probably missing some background. I have been trying to solve this for the past two weeks but I am still stuck, so here I am.
KeyPi, this is a supervised learning model; it needs labelled data for your text corpus, in the form of a sentence (e.g. "I charge it at night and skip taking the cord with me because of the good battery life .") followed by '#### #### ####' as a separator and a list of labels (the aspect/target word indices in the first list, the opinion token indices in the second, followed by 'POS' for positive or 'NEG' for negative): [([16, 17], [15], 'POS')]
Indices 16 and 17 are "battery life", and at index 15 we have the opinion word "good".
I am not sure if you have figured this out already and found some way to label the corpus.
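To make that format concrete, here is a minimal sketch that builds one labelled line the way the answer above describes it (the separator and triplet layout are taken from the answer, so treat them as assumptions to check against the repo's sample files):

```python
import ast

# One labelled example per line: the sentence, a "#### #### ####" separator,
# then the triplet list (aspect indices, opinion indices, sentiment tag).
sentence = ("I charge it at night and skip taking the cord with me "
            "because of the good battery life .")
triplets = [([16, 17], [15], 'POS')]  # tokens 16-17 'battery life', token 15 'good'
line = sentence + "#### #### ####" + str(triplets)
print(line)

# Round-trip check: the label part parses back into Python objects.
assert ast.literal_eval(line.split("####")[-1]) == triplets
```

Writing one such line per sentence into the train/dev/test files should satisfy the reader that fails in `read_inst` above, since it expects a label field after the separator.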
I wanted to use HTTPie or Postman to piece together a request for the Akamai FastPurge API, so I could see the structure and the HTTP request as a whole, with the aim of building a Java module that builds the same HTTP request.
I've tried several tutorials provided by Akamai* and nothing seemed to work.
The credentials for the FastPurge API are in their own edgegrid file, and I edited the config.json of HTTPie as suggested by Akamai.
It doesn't matter whether CP codes or URLs are used; the result is the same.
The API does have read-write authorisation and the credentials are correctly read from the file.
I also tried using Postman, but Postman doesn't integrate the EdgeGrid authorization, so I would have to do it by hand without knowing what it looks like. There is a tutorial on how to set up an Akamai API in Postman, but it lacks the critical part, the pre-request script.**
*: https://developer.akamai.com/akamai-101-basics-purging , https://community.akamai.com/customers/s/article/Exploring-Akamai-OPEN-APIs-from-the-command-line-using-HTTPie-and-jq?language=en_US , https://developer.akamai.com/api/core_features/fast_purge/v3.html , https://api.ccu.akamai.com/ccu/v2/docs/index.html
**: https://community.akamai.com/customers/s/article/Using-Postman-to-access-Akamai-OPEN-APIs?language=en_US
The errors I receive in HTTPie are either this:
C:\python>http --auth-type=edgegrid -a default: :/ccu/v3/invalidate/url/production objects:='["https://www.example.com","http://www.example.com"]' --debug
HTTPie 1.0.3-dev
Requests 2.21.0
Pygments 2.3.1
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)]
c:\python\python.exe
Windows 10
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/doc#config",
"httpie": "1.0.3-dev"
},
"default_options": "['--verbose', '--traceback', '--auth-type=edgegrid', '--print=Hhb', '--timeout=300', '--style=autumn', '-adefault:']"
},
"config_dir": "%home",
"is_windows": true,
"stderr": "<colorama.ansitowin32.StreamWrapper object at 0x066D1A70>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>",
"stdin_encoding": "utf-8",
"stdin_isatty": true,
"stdout": "<colorama.ansitowin32.StreamWrapper object at 0x066D19D0>",
"stdout_encoding": "utf-8",
"stdout_isatty": true
}>
Traceback (most recent call last):
File "c:\python\lib\site-packages\httpie\input.py", line 737, in parse_items
value = load_json_preserve_order(value)
File "c:\python\lib\site-packages\httpie\utils.py", line 7, in load_json_preserve_order
return json.loads(s, object_pairs_hook=OrderedDict)
File "c:\python\lib\json\__init__.py", line 361, in loads
return cls(**kw).decode(s)
File "c:\python\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "c:\python\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Scripts\http-script.py", line 11, in <module>
load_entry_point('httpie==1.0.3.dev0', 'console_scripts', 'http')()
File "c:\python\lib\site-packages\httpie\__main__.py", line 11, in main
sys.exit(main())
File "c:\python\lib\site-packages\httpie\core.py", line 210, in main
parsed_args = parser.parse_args(args=args, env=env)
File "c:\python\lib\site-packages\httpie\input.py", line 152, in parse_args
self._parse_items()
File "c:\python\lib\site-packages\httpie\input.py", line 358, in _parse_items
data_class=ParamsDict if self.args.form else OrderedDict
File "c:\python\lib\site-packages\httpie\input.py", line 739, in parse_items
raise ParseError('"%s": %s' % (item.orig, e))
httpie.input.ParseError: "objects:='[https://www.example.com,http://www.example.com]'": Expecting value: line 1 column 1 (char 0)
or that:
C:\python>http POST --auth-type=edgegrid -a default: :/ccu/v3/invalidate/cpcode/production < file.json
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest,edgegrid}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: unrecognized arguments: :/ccu/v3/invalidate/cpcode/production
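The first error looks like a Windows quoting problem rather than an API problem: cmd.exe does not treat single quotes as quote characters, so it keeps the literal `'...'` and consumes the inner double quotes, and HTTPie then receives a value that is not valid JSON, which is exactly what `objects:='[https://...]'` in the ParseError shows. A small Python sketch of what happens:

```python
import json

# What cmd.exe actually hands to HTTPie: the double quotes are stripped and
# the literal single quotes are kept, so the value is no longer valid JSON.
delivered = "'[https://www.example.com,http://www.example.com]'"
try:
    json.loads(delivered)
except json.JSONDecodeError as exc:
    print("parse failed:", exc)

# Escaping the inner double quotes for cmd delivers valid JSON instead:
escaped = '["https://www.example.com","http://www.example.com"]'
print(json.loads(escaped))
```

On cmd that would mean writing the request item as `objects:="[\"https://www.example.com\",\"http://www.example.com\"]"`, or putting the whole body in a file and redirecting it in; both are assumptions based on cmd quoting rules, not on the Akamai tutorials.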
A typical successful response looks like this:
{
"httpStatus" : 201,
"detail" : "Request accepted.",
"estimatedSeconds" : 420,
"purgeId" : "95b5a092-043f-4af0-843f-aaf0043faaf0",
"progressUri" : "/ccu/v2/purges/95b5a092-043f-4af0-843f-aaf0043faaf0",
"pingAfterSeconds" : 420,
"supportId" : "17PY1321286429616716-211907680"
}
I would really appreciate it if somebody could either help me correct my setup in HTTPie or Postman, or give me the complete structure of an Akamai FastPurge HTTP request, so I can skip the hassle with HTTPie or Postman.
Thank you very much, your help is much appreciated!
Postman now has an Akamai EdgeGrid authorization type. It takes the contents of your .edgerc file.
I set it on the collection and let all of the calls inherit the authorization.
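For reference, an .edgerc file typically has the following shape (placeholder values shown; the section name, commonly `default`, has to match the one you select in the client):

```ini
[default]
client_secret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
host = akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx.luna.akamaiapis.net
access_token = akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
client_token = akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
```

These four values come from the API client you create in Akamai Control Center.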
Cannot extract components of data parsed from JSON to Python dictionary.
I attempted to print the value corresponding to a dictionary entry, but I get an error.
import urllib, json, requests
url = "https://storage.googleapis.com/osbuddy-exchange/summary.json"
response = urllib.urlopen(url)
data = json.loads(response.read())
print type(data)
for key, value in data.iteritems():
    print value
    print ''

print "data['entry']: ", data['99']
print "name: ", data['name']
I was hoping I could get the attributes of an entry, say the 'buy_average', given a specific key. Instead I get an error when referencing specific components.
<type 'dict'>
22467 {u'sell_average': 3001, u'buy_average': 0, u'name': u'Bastion potion(2)', u'overall_average': 3001, u'sp': 180, u'overall_quantity': 2, u'members': True, u'sell_quantity': 2, u'buy_quantity': 0, u'id': 22467}
22464 {u'sell_average': 4014, u'buy_average': 0, u'name': u'Bastion potion(3)', u'overall_average': 4014, u'sp': 270, u'overall_quantity': 612, u'members': True, u'sell_quantity': 612, u'buy_quantity': 0, u'id': 22464}
5745 {u'sell_average': 0, u'buy_average': 0, u'name': u'Dragon bitter(m)', u'overall_average': 0, u'sp': 2, u'overall_quantity': 0, u'members': True, u'sell_quantity': 0, u'buy_quantity': 0, u'id': 5745}
...
data['entry']: {u'sell_average': 7843, u'buy_average': 7845, u'name': u'Ranarr potion (unf)', u'overall_average': 7844, u'sp': 25, u'overall_quantity': 23838, u'members': True, u'sell_quantity': 15090, u'buy_quantity': 8748, u'id': 99}
name:
Traceback (most recent call last):
File "C:/Users/Michael/PycharmProjects/osrsGE/osrsGE.py", line 16, in <module>
print "name: ", data['name']
KeyError: 'name'
Process finished with exit code 1
There is no key named 'name' in the dict named 'data'.
The first-level keys are numbers like "6", "2", "8", etc.
The second-level objects have a key named 'name', so code like:
print(data['2']['name']) # Cannonball
should work
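In other words, attributes such as 'buy_average' are reached through the item-id key first. A minimal sketch with made-up values shaped like the API response above:

```python
# Hypothetical sample shaped like the summary.json response:
data = {
    "2":  {"name": "Cannonball", "buy_average": 190, "sell_average": 188},
    "99": {"name": "Ranarr potion (unf)", "buy_average": 7845, "sell_average": 7843},
}

# First index with the item-id string, then with the attribute name:
print(data["99"]["name"])         # Ranarr potion (unf)
print(data["99"]["buy_average"])  # 7845

# Or walk every entry:
for item_id, attrs in data.items():
    print(item_id, attrs["name"], attrs["buy_average"])
```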
I am attempting to transfer a database from sqlite to mysql.
I've googled this error and found Stack Overflow matches, but haven't seen how to debug/identify the offending "0 value AutoField" fields. I've tried skirting the issue by dumping/loading different tables, but all seem to generate the same error.
I've attempted appending -e contenttypes, --natural-foreign, and --natural-primary to my dumpdata command, e.g.,
python manage.py dumpdata -e contenttypes --natural-foreign --natural-primary --indent=4 > datadump_3-7-18.json
After running python manage.py loaddata --traceback datadump_3-7-18.json
It produces the traceback error:
(venv) ➜ bikerental git:(additional-features-march) ✗ python manage.py loaddata --traceback datadump_3-7-18.json
Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line
utility.execute()
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 365, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **cmd_options)
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/base.py", line 335, in execute
output = self.handle(*args, **options)
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/commands/loaddata.py", line 72, in handle
self.loaddata(fixture_labels)
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/commands/loaddata.py", line 113, in loaddata
self.load_label(fixture_label)
File "/rentals/venv/lib/python3.6/site-packages/django/core/management/commands/loaddata.py", line 177, in load_label
obj.save(using=self.using)
File "/rentals/venv/lib/python3.6/site-packages/django/core/serializers/base.py", line 205, in save
models.Model.save_base(self.object, using=using, raw=True, **kwargs)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/base.py", line 759, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/base.py", line 842, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/base.py", line 880, in _do_insert
using=using, raw=raw)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/query.py", line 1125, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1280, in execute_sql
for sql, params in self.as_sql():
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1233, in as_sql
for obj in self.query.objs
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1233, in <listcomp>
for obj in self.query.objs
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1232, in <listcomp>
[self.prepare_value(field, self.pre_save_val(field, obj)) for field in fields]
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1172, in prepare_value
value = field.get_db_prep_save(value, connection=self.connection)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/fields/__init__.py", line 767, in get_db_prep_save
return self.get_db_prep_value(value, connection=connection, prepared=False)
File "/rentals/venv/lib/python3.6/site-packages/django/db/models/fields/__init__.py", line 940, in get_db_prep_value
value = connection.ops.validate_autopk_value(value)
File "/rentals/venv/lib/python3.6/site-packages/django/db/backends/mysql/operations.py", line 163, in validate_autopk_value
raise ValueError('The database backend does not accept 0 as a '
ValueError: Problem installing fixture '/rentals/bikerental/datadump_3-7-18.json': The database backend does not accept 0 as a value for AutoField.
I've noticed this seems to have something to do with ForeignKey values, so I'll post the one I have:
bike = models.ManyToManyField(Bike, blank=True)
Is there any way to more easily identify where in the database this is coming from?
I've solved the problem by manually editing the data dump. In the table to which the ManyToManyField refers, there was a record whose ID was 0; this was due to my initial manually entered set of records, where I began the increment at 0. Removing this record, and removing references to it in the ManyToManyField, allowed the loaddata command to work without error. For the record, error handling could and should be improved here by being more explicit, for it had me scratching my head for nearly an entire work day.
For illustration in datadump_3-7-18.json:
{ <----- I GOT DELETED
"model": "inventory.bike", <----- I GOT DELETED
"pk": 0, <----- I GOT DELETED
"fields": { <----- I GOT DELETED
... <----- I GOT DELETED
} <----- I GOT DELETED
}, <----- I GOT DELETED
{
"model": "inventory.bike",
"pk": 1,
"fields": {
...
}
},
{
"model": "inventory.bike",
"pk": 2,
"fields": {
...
}
},
And later on in datadump_3-7-18.json, the records containing the 0 ManyToManyField foreignkey:
{
"model": "reservations.reservation",
"pk": 55,
"fields": {
...
"bike": [
0, <----- I GOT DELETED
1,
2
]
}
},
{
"model": "reservations.reservation",
"pk": 28,
"fields": {
...
"bike": [
0, <----- I GOT DELETED
1,
2,
3
]
}
},
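To find such records without eyeballing the whole dump, a short script can scan the fixture for `pk == 0` and for many-to-many lists that reference id 0. This is a sketch, not part of the original answer; the demo fixture and its model names mirror the excerpts above:

```python
import json

def find_zero_pk_refs(fixture_path):
    """Report records with pk == 0 and list fields that reference id 0."""
    with open(fixture_path) as f:
        records = json.load(f)
    offenders = []
    for rec in records:
        if rec.get("pk") == 0:
            offenders.append((rec["model"], "pk is 0"))
        for field, value in rec.get("fields", {}).items():
            # Many-to-many fields are serialized as plain lists of ids.
            if isinstance(value, list) and 0 in value:
                offenders.append((rec["model"], "field %r references id 0" % field))
    return offenders

# Tiny demo fixture mirroring the situation above:
demo = [
    {"model": "inventory.bike", "pk": 0, "fields": {}},
    {"model": "reservations.reservation", "pk": 55, "fields": {"bike": [0, 1, 2]}},
]
with open("demo_fixture.json", "w") as f:
    json.dump(demo, f)
print(find_zero_pk_refs("demo_fixture.json"))
# [('inventory.bike', 'pk is 0'), ("reservations.reservation", "field 'bike' references id 0")]
```

Running it against datadump_3-7-18.json would list every record to delete or renumber before loaddata.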
I am trying to create JSON data to pass to InfluxDB. I create it using strings, but I get errors. What am I doing wrong? I am using json.dumps, as has been suggested in various posts.
Here is basic Python code:
json_body = "[{'points':["
json_body += "['appx', 1, 10, 0]"
json_body += "], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]"
print("Write points: {0}".format(json_body))
client.write_points(json.dumps(json_body))
The output I get is
Write points: [{'points':[['appx', 1, 10, 0]], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]
Traceback (most recent call last):
line 127, in main
client.write_points(json.dumps(json_body))
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 173, in write_points
return self.write_points_with_precision(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 197, in write_points_with_precision
status_code=200
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 127, in request
raise error
influxdb.client.InfluxDBClientError
I have tried with double quotes too but get the same error. This is stub code (to minimize the example); I realize the points list here contains just one list object, but in reality it contains multiple. I am generating the JSON code by reading through the outputs of various API calls.
json_body = '[{\"points\":['
json_body += '[\"appx\", 1, 10, 0]'
json_body += '], \"name\": \"WS1\", \"columns\": [\"RName\", \"RIn\", \"SIn\", \"OIn\"]}]'
print("Write points: {0}".format(json_body))
client.write_points(json.dumps(json_body))
I understand if I used the below things would work:
json_body = [{ "points": [["appx", 1, 10, 0]], "name": "WS1", "columns": ["Rname", "RIn", "SIn", "OIn"]}]
You don't need to create JSON manually. Just pass an appropriate Python structure into the write_points function. Try something like this:
data = [{'points':[['appx', 1, 10, 0]],
'name': 'WS1',
'columns': ['RName', 'RIn', 'SIn', 'OIn']}]
client.write_points(data)
Please visit JSON.org for proper JSON structure. I can see several errors with your self-generated JSON:
The outer-most item can be an unordered object, enclosed by curly braces {}, or an ordered array, enclosed by brackets []. Don't use both. Since your data is structured like a dict, the curly braces are appropriate.
All strings need to be enclosed in double quotes, not single. "This is valid JSON". 'This is not valid'.
Your 'points' value array is surrounded by double brackets, which is unnecessary. Only use a single set.
Please check out the documentation of the json module for details on how to use it. Basically, you can feed json.dumps() your Python data structure, and it will output it as valid JSON.
In [1]: my_data = {'points': ["appx", 1, 10, 0], 'name': "WS1", 'columns': ["RName", "RIn", "SIn", "OIn"]}
In [2]: my_data
Out[2]: {'points': ['appx', 1, 10, 0], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}
In [3]: import json
In [4]: json.dumps(my_data)
Out[4]: '{"points": ["appx", 1, 10, 0], "name": "WS1", "columns": ["RName", "RIn", "SIn", "OIn"]}'
You'll notice the value of using a Python data structure first: because it's Python, you don't need to worry about single vs. double quotes, json.dumps() will automatically convert them. However, building a string with embedded single quotes leads to this:
In [5]: op_json = "[{'points':[['appx', 1, 10, 0]], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]"
In [6]: json.dumps(op_json)
Out[6]: '"[{\'points\':[[\'appx\', 1, 10, 0]], \'name\': \'WS1\', \'columns\': [\'RName\', \'RIn\', \'SIn\', \'OIn\']}]"'
since you fed the string to json.dumps(), not the data structure.
So next time, don't attempt to build JSON yourself; rely on the dedicated module to do it.
I'm trying to create a test fixture using custom manager methods, as my app uses a subset of the DB tables and fewer records, so I dropped the idea of using initial_data. In the manager (Managers.py) I'm doing something like this:
import csv

sitedict = Site.objects.filter(pk=1234).values()[0]
custdict = Customer.objects.filter(custid=123456).values()[0]
customer = {"pk": 123456, "model": "myapp.customer", "fields": custdict}
site = {"pk": 0001, "model": "myapp.site", "fields": sitedict}
csvfile = open('shoppingcart/bsofttestdata.csv', 'wb')
csv_writer = csv.writer(csvfile)
csv_writer.writerow([customer, site])
csvfile.close()
Then I modified my CSV file to replace single quotes with double quotes, etc., and saved that file as JSON. Sorry if this is too dumb a way, but this is the first time I'm creating test data; I'd love to learn a better way. Sample data from the file (myapp/fixtures/testdata.json) looks like this:
[{"pk": 123456, "model": "myapp.customer", "fields": {"city": "abc", "maritalstatus": None, "zipcode": "12345", "lname": "fdfdf", "state": "AZ", "agentid": 1111, "fname": "sdadsad", "email": "abcd#xxx.com", "phone": "0000000000", "custid":123456,"datecreate": datetime.datetime(2011, 3, 29, 11, 40, 18, 157612)}},{"pk":0001, "model": "myapp.site", "fields": {"url": "http://google.com", "websitecode": "", "notify": True, "fee": 210.0, "id":0001}}]
I used this to run my tests, but I got the following error:
EProblem installing fixture '/var/lib/django/myproject/myapp/fixtures/testdata.json':
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.6/django/core/management/commands/loaddata.py", line 150, in handle
for obj in objects:
File "/usr/lib/pymodules/python2.6/django/core/serializers/json.py", line 41, in Deserializer
for obj in PythonDeserializer(simplejson.load(stream)):
File "/usr/lib/pymodules/python2.6/simplejson/__init__.py", line 267, in load
parse_constant=parse_constant, **kw)
File "/usr/lib/pymodules/python2.6/simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/pymodules/python2.6/simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/pymodules/python2.6/simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Instead of using raw find-and-replace, it's better to use something like the approach shown here for datatypes that JSON doesn't support. That helps get rid of `TypeError: xxxxxxx is not JSON serializable`; specifically, the Stack Overflow post on the datetime problem will be helpful.
EDIT:
Instead of writing to CSV and then manually modifying it, I did the following:
with open('myapp/fixtures/customer_testdata.json',mode = 'w') as f:
json.dump(customer,f,indent=2)
Here is the small piece of code I used to get past the `TypeError: xxxx is not JSON serializable` problem:
for key in cust.keys():
    if isinstance(cust[key], datetime.datetime):
        temp = cust[key].timetuple()  # converts datetime.datetime to time.struct_time
        cust.update({key: {'__class__': 'time.asctime', '__value__': time.asctime(temp)}})
return cust
If we convert datetime.datetime to any other type, then we have to change the class accordingly, e.g. timestamp --> float. Here is a fantastic reference for datetime conversions.
Hope this is helpful.
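An alternative to the per-key conversion above is to subclass json.JSONEncoder so datetime values are handled automatically. This is a Python 3 sketch, not the exact code from the post; the ISO format shown is one reasonable choice, not the only one:

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    """json.dumps calls default() for any object it cannot serialize."""
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()  # e.g. "2011-03-29T11:40:18.157612"
        return super().default(obj)

record = {"custid": 123456,
          "datecreate": datetime.datetime(2011, 3, 29, 11, 40, 18, 157612)}
print(json.dumps(record, cls=DateTimeEncoder))
# {"custid": 123456, "datecreate": "2011-03-29T11:40:18.157612"}
```

This way no field-by-field cleanup is needed: any dict containing datetimes can be dumped directly with `cls=DateTimeEncoder`.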