Avro Namespace Error - namespaces

I have a question.
I made the following Avro schema:
{
  "namespace": "foo",
  "fields": [
    {
      "type": [
        "string",
        "null"
      ],
      "name": "narf"
    },
    {
      "namespace": "foo.run",
      "fields": [
        {
          "type": [
            "string",
            "null"
          ],
          "name": "baz"
        }
      ],
      "type": "record",
      "name": "foo"
    }
  ],
  "type": "record",
  "name": "run"
}
When I try to parse this I get the following error:
/usr/bin/python3.4 /home/marius/PycharmProjects/AvroTest/avroTest.py
Traceback (most recent call last):
File "/home/marius/PycharmProjects/AvroTest/avroTest.py", line 11, in
schema = avro.schema.Parse(open("simple.avsc").read())
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1283, in Parse
return SchemaFromJSONData(json_data, names)
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1254, in SchemaFromJSONData
return parser(json_data, names=names)
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1182, in _SchemaFromJSONObject
other_props=other_props,
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1061, in __init__
fields = make_fields(names=nested_names)
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1173, in MakeFields
return tuple(RecordSchema._MakeFieldList(field_desc_list, names))
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 986, in _MakeFieldList
yield RecordSchema._MakeField(index, field_desc, names)
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 957, in _MakeField
names=names,
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1254, in SchemaFromJSONData
return parser(json_data, names=names)
File "/usr/local/lib/python3.4/dist-packages/avro_python3_snapshot-1.7.7-py3.4.egg/avro/schema.py", line 1135, in _SchemaFromJSONString
% (json_string, sorted(names.names)))
avro.schema.SchemaParseException: Unknown named schema 'record', known names: ['foo.run'].
And I have no idea why. In my mind the culprit is the record called "foo", but the namespace I gave it ("foo.run") is in the list of known names, yet it raises an error anyway. I guess I misunderstand something regarding namespaces, but I could not figure out what.
Greetings
Marius

Ok, found the error: I generated this schema with my own program, and it had a bug when a nested record appears within another nested record.
See: How to nest records in an Avro schema?
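For the record, the fix in Avro terms: a nested record has to appear as the value of a field's "type", not as the field object itself. A sketch of the corrected schema (same names as above; only checked for JSON well-formedness here with the standard library, but `avro.schema.Parse` should then accept it):

```python
import json

# Corrected schema: the inner record definition is the *type* of the
# field "foo", instead of being merged into the field object itself.
corrected = json.loads("""
{
  "namespace": "foo",
  "type": "record",
  "name": "run",
  "fields": [
    {"name": "narf", "type": ["string", "null"]},
    {"name": "foo",
     "type": {
       "type": "record",
       "name": "foo",
       "namespace": "foo.run",
       "fields": [
         {"name": "baz", "type": ["string", "null"]}
       ]
     }}
  ]
}
""")

# The nested record now lives under the field's "type" key.
assert corrected["fields"][1]["type"]["type"] == "record"
```

In the original schema the parser hit the field's `"type": "record"` entry, looked up `record` as a named schema, and failed, which is exactly what the "Unknown named schema 'record'" message says.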

Related

Apache Atlas: why is there a single quote at the end of the flink model file? The file path is addon/models/1000-Hadoop/1110-flink_model.json

Failed to parse this JSON file. Is it a typo?
...
"endDef2": {
"type": "flink_process",
"name": "application",
"cardinality": "SINGLE"
},
"propagateTags": "NONE"
}
]
}'

jsonschema expecting allOf to be of type 'object' or 'boolean'

I'm using the PyPI jsonschema package to validate my test schema.
Here are the contents of test_schema.json:
{
  "$schema": "https://json-schema.org/draft/2020-12/schema#",
  "title": "Single IV Run JSON Schema",
  "description": "Schema for single IV run sequence",
  "type": "object",
  "properties": {
    "allOf": [
      { "$ref": "/schemas/test/dut.json" },
      {
        "properties": {
          "IV": { "$ref": "/schemas/test/iv.json" }
        }
      }
    ]
  },
  "required": ["IV"]
}
Using jsonschema.Draft7Validator.check_schema(test_schema), I receive the following error, even though I copied the general schema structure from the official draft 7 example:
Traceback (most recent call last):
File "C:/Users/william.su/workspaces/speedit_sandbox/schema_parser.py", line 129, in <module>
Draft7Validator.check_schema(schema)
File "C:\Users\william.su\Miniconda3\lib\site-packages\jsonschema\validators.py", line 294, in check_schema
raise exceptions.SchemaError.create_from(error)
jsonschema.exceptions.SchemaError: [{'$ref': '/schemas/test/dut.json'}, {'properties': {'IV': {'$ref': '/schemas/test/iv.json'}}}] is not of type 'object', 'boolean'
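The problem is that `allOf` sits inside `properties`, so the validator treats the array as the schema for a property literally named "allOf", and property schemas must be objects or booleans. A sketch of the corrected layout with `allOf` moved to the top level (the `$ref` targets are copied from the question and assumed to resolve elsewhere; only checked structurally here with the standard library):

```python
import json

corrected = json.loads("""
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Single IV Run JSON Schema",
  "description": "Schema for single IV run sequence",
  "type": "object",
  "allOf": [
    {"$ref": "/schemas/test/dut.json"},
    {"properties": {"IV": {"$ref": "/schemas/test/iv.json"}}}
  ],
  "required": ["IV"]
}
""")

# "allOf" is now a top-level applicator keyword, so every entry of the
# array is in a position where a schema (object or boolean) is expected.
assert isinstance(corrected["allOf"], list)
```

With the jsonschema package installed, `jsonschema.Draft202012Validator.check_schema(corrected)` should then pass; that validator also matches the 2020-12 `$schema` declared in the file, which `Draft7Validator` does not.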

Python error: Extra data: line 1 in loading a big Json file

I am trying to read a JSON file which is 370 MB
import json
data = open( "data.json" ,"r")
json.loads(data.read())
and I cannot easily find the root cause of the following error:
json.decoder.JSONDecodeError: Extra data: line 1 column 1024109 (char 1024108)
I looked at similar questions and tried the following Stack Overflow answer:
import json
data = [json.loads(line) for line in open('data.json', 'r')]
But it didn't resolve the issue. I am wondering if there is any solution to find where the error happens in the file. I am getting some other files from the same source and they run without any problem.
A small piece of the JSON file is a list of dicts like:
{
  "uri": "p",
  "source": {
    "uri": "dail",
    "dataType": "pr",
    "title": "Daily"
  },
  "authors": [
    {
      "type": "author",
      "isAgency": false
    }
  ],
  "concepts": [
    {
      "amb": false,
      "imp": true,
      "date": "2019-05-23",
      "textStart": 2459,
      "textEnd": 2467
    },
    {
      "amb": false,
      "imp": true,
      "date": "2019-05-09",
      "textStart": 2684,
      "textEnd": 2691
    }
  ],
  "shares": {},
  "wgt": 100,
  "relevance": 100
}
The problem with the json library is that it loads everything into memory and parses it in full before anything is handled, which is clearly problematic for such a large amount of data.
Instead, I would suggest taking a look at https://github.com/henu/bigjson
import bigjson
with open('data.json', 'rb') as f:
json_data = bigjson.load(f)
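To actually locate where the "Extra data" starts, one standard-library option (a sketch, independent of bigjson) is `json.JSONDecoder.raw_decode`: it parses one document from a given offset and returns the index just past it, so concatenated documents can be pulled out one at a time:

```python
import json

def iter_documents(text):
    """Yield each top-level JSON document found in `text`.

    raw_decode() parses one document starting at `pos` and returns the
    index just past it; whatever follows is the "extra data".
    """
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        # Skip whitespace between documents.
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        obj, pos = decoder.raw_decode(text, pos)
        yield obj

docs = list(iter_documents('{"a": 1}\n{"b": 2}'))  # two concatenated docs
```

If a document is malformed rather than merely concatenated, `raw_decode` raises a `JSONDecodeError` whose `.pos` attribute gives the failing offset, which is the "char ..." number from the original error message.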

Parsing and cleaning text file in Python?

I have a text file which contains raw data. I want to parse that data and clean it so that it can be used further. The following is the raw data.
"{\x0A \x22identifier\x22: {\x0A \x22company_code\x22: \x22TSC\x22,\x0A \x22product_type\x22: \x22airtime-ctg\x22,\x0A \x22host_type\x22: \x22android\x22\x0A },\x0A \x22id\x22: {\x0A \x22type\x22: \x22guest\x22,\x0A \x22group\x22: \x22guest\x22,\x0A \x22uuid\x22: \x221a0d4d6e-0c00-11e7-a16f-0242ac110002\x22,\x0A \x22device_id\x22: \x22423e49efa4b8b013\x22\x0A },\x0A \x22stats\x22: [\x0A {\x0A \x22timestamp\x22: \x222017-03-22T03:21:11+0000\x22,\x0A \x22software_id\x22: \x22A-ACTG\x22,\x0A \x22action_id\x22: \x22open_app\x22,\x0A \x22values\x22: {\x0A \x22device_id\x22: \x22423e49efa4b8b013\x22,\x0A \x22language\x22: \x22en\x22\x0A }\x0A }\x0A ]\x0A}"
I want to remove all the hexadecimal escape sequences. I tried parsing the data, storing it in an array, and cleaning it with re.sub(), but it gives back the same data.
for line in f:
    new_data = re.sub(r'[^\x00-\x7f],\x22', r'', line)
    data.append(new_data)
\x0A is the hex code for newline. After s = <your json string>, print(s) gives
>>> print(s)
{
"identifier": {
"company_code": "TSC",
"product_type": "airtime-ctg",
"host_type": "android"
},
"id": {
"type": "guest",
"group": "guest",
"uuid": "1a0d4d6e-0c00-11e7-a16f-0242ac110002",
"device_id": "423e49efa4b8b013"
},
"stats": [
{
"timestamp": "2017-03-22T03:21:11+0000",
"software_id": "A-ACTG",
"action_id": "open_app",
"values": {
"device_id": "423e49efa4b8b013",
"language": "en"
}
}
]
}
You should parse this with the json module's load (from a file) or loads (from a string) function. You will get a dict containing two dicts and a list with one dict.
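One way to do that (a sketch; the payload is shortened here): if the line is read from the file exactly as shown, the `\x22`/`\x0A` escapes have not been interpreted yet, so interpret them first with `ast.literal_eval`, then hand the result to `json.loads`:

```python
import ast
import json

# The line as it sits in the file: a quoted string whose \x escapes are
# still literal backslash sequences (shortened from the original).
raw = r'"{\x0A \x22identifier\x22: {\x0A \x22company_code\x22: \x22TSC\x22\x0A }\x0A}"'

# literal_eval interprets the escapes (\x22 -> ", \x0A -> newline),
# turning the line into plain JSON text...
decoded = ast.literal_eval(raw)

# ...which json.loads parses into ordinary dicts.
data = json.loads(decoded)
assert data["identifier"]["company_code"] == "TSC"
```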

loading a json file using python [duplicate]

I am getting some data from a JSON file "new.json", and I want to filter some data and store it into a new JSON file. Here is my code:
import json

with open('new.json') as infile:
    data = json.load(infile)

for item in data:
    iden = item.get("id")
    a = item.get("a")
    b = item.get("b")
    c = item.get("c")
    if c == 'XYZ' or "XYZ" in item["text"]:
        filename = 'abc.json'
        try:
            outfile = open(filename, 'ab')
        except:
            outfile = open(filename, 'wb')
        obj_json = {}
        obj_json["ID"] = iden
        obj_json["VAL_A"] = a
        obj_json["VAL_B"] = b
And I am getting an error, the traceback is:
File "rtfav.py", line 3, in <module>
data = json.load(infile)
File "/usr/lib64/python2.7/json/__init__.py", line 278, in load
**kw)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 369, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 88 column 2 - line 50607 column 2 (char 3077 - 1868399)
Here is a sample of the data in new.json; there are about 1500 more such dictionaries in the file:
{
  "contributors": null,
  "truncated": false,
  "text": "#HomeShop18 #DreamJob to professional rafter",
  "in_reply_to_status_id": null,
  "id": 421584490452893696,
  "favorite_count": 0,
  "source": "Mobile Web (M2)",
  "retweeted": false,
  "coordinates": null,
  "entities": {
    "symbols": [],
    "user_mentions": [
      {
        "id": 183093247,
        "indices": [
          0,
          11
        ],
        "id_str": "183093247",
        "screen_name": "HomeShop18",
        "name": "HomeShop18"
      }
    ],
    "hashtags": [
      {
        "indices": [
          12,
          21
        ],
        "text": "DreamJob"
      }
    ],
    "urls": []
  },
  "in_reply_to_screen_name": "HomeShop18",
  "id_str": "421584490452893696",
  "retweet_count": 0,
  "in_reply_to_user_id": 183093247,
  "favorited": false,
  "user": {
    "follow_request_sent": null,
    "profile_use_background_image": true,
    "default_profile_image": false,
    "id": 2254546045,
    "verified": false,
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/413952088880594944/rcdr59OY_normal.jpeg",
    "profile_sidebar_fill_color": "171106",
    "profile_text_color": "8A7302",
    "followers_count": 87,
    "profile_sidebar_border_color": "BCB302",
    "id_str": "2254546045",
    "profile_background_color": "0F0A02",
    "listed_count": 1,
    "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png",
    "utc_offset": null,
    "statuses_count": 9793,
    "description": "Rafter. Rafting is what I do. Me aur mera Tablet. Technocrat of Future",
    "friends_count": 231,
    "location": "",
    "profile_link_color": "473623",
    "profile_image_url": "http://pbs.twimg.com/profile_images/413952088880594944/rcdr59OY_normal.jpeg",
    "following": null,
    "geo_enabled": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/2254546045/1388065343",
    "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png",
    "name": "Jayy",
    "lang": "en",
    "profile_background_tile": false,
    "favourites_count": 41,
    "screen_name": "JzayyPsingh",
    "notifications": null,
    "url": null,
    "created_at": "Fri Dec 20 05:46:00 +0000 2013",
    "contributors_enabled": false,
    "time_zone": null,
    "protected": false,
    "default_profile": false,
    "is_translator": false
  },
  "geo": null,
  "in_reply_to_user_id_str": "183093247",
  "lang": "en",
  "created_at": "Fri Jan 10 10:09:09 +0000 2014",
  "filter_level": "medium",
  "in_reply_to_status_id_str": null,
  "place": null
}
As you can see in the following example, json.loads (and json.load) does not decode multiple JSON objects.
>>> json.loads('{}')
{}
>>> json.loads('{}{}') # == json.loads(json.dumps({}) + json.dumps({}))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\json\__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "C:\Python27\lib\json\decoder.py", line 368, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 3 - line 1 column 5 (char 2 - 4)
If you want to dump multiple dictionaries, wrap them in a list and dump the list (instead of dumping each dictionary separately):
>>> dict1 = {}
>>> dict2 = {}
>>> json.dumps([dict1, dict2])
'[{}, {}]'
>>> json.loads(json.dumps([dict1, dict2]))
[{}, {}]
Iterate over the file, loading each line as JSON in the loop:
tweets = []
for line in open('tweets.json', 'r'):
    tweets.append(json.loads(line))
This avoids storing intermediate Python objects. As long as each line holds one full tweet, this should work.
I came across this because I was trying to load a JSON file dumped from MongoDB. It was giving me an error
JSONDecodeError: Extra data: line 2 column 1
The MongoDB JSON dump has one object per line, so what worked for me is:
import json
data = [json.loads(line) for line in open('data.json', 'r')]
This may also happen if your JSON file is not just one JSON record.
A JSON record looks like this:
[{"some data": 1, "next key": "another value"}]
It opens and closes with a bracket [ ]; within the brackets are the braces { }. There can be many pairs of braces, but it all ends with a closing bracket ].
If your JSON file contains more than one of those:
[{"some data": 1, "next key": "another value"}]
[{"2nd record data": 2, "2nd record key": "another value"}]
then loads() will fail.
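A minimal sketch of that failure mode, and of the fix of wrapping everything in one enclosing array:

```python
import json

# Two top-level arrays back to back: loads() parses the first one and
# then rejects the rest as "Extra data".
two_records = '[{"a": 1}]\n[{"b": 2}]'
try:
    json.loads(two_records)
except json.JSONDecodeError as err:
    assert err.msg == 'Extra data'

# Wrapped in a single enclosing array, the same data parses fine.
merged = json.loads('[[{"a": 1}], [{"b": 2}]]')
```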
I verified this with my own file that was failing.
import json
guestFile = open("1_guests.json",'r')
guestData = guestFile.read()
guestFile.close()
gdfJson = json.loads(guestData)
This works because 1_guests.json has one record bookended by [ ]. The original file I was using, all_guests.json, had 6 records separated by newlines. I deleted 5 records (which I had already checked to be bookended by brackets) and saved the file under a new name. Then the loads statement worked.
Error was
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 10 column 1 (char 261900 - 6964758)
PS. I use the word record, but that's not the official name. Also, if your file has newline characters like mine, you can loop through it and loads() one record at a time into a JSON variable.
I just got the same error while my JSON file looked like this:
{"id":"1101010","city_id":"1101","name":"TEUPAH SELATAN"}
{"id":"1101020","city_id":"1101","name":"SIMEULUE TIMUR"}
And I found it malformed, so I changed it to:
{
  "datas": [
    {"id":"1101010","city_id":"1101","name":"TEUPAH SELATAN"},
    {"id":"1101020","city_id":"1101","name":"SIMEULUE TIMUR"}
  ]
}
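Editing the file by hand does not scale; the same wrapping can be done programmatically. A sketch that reads the newline-delimited form and emits one valid document (the "datas" key matches the example above):

```python
import json

# The original file: one JSON object per line (newline-delimited JSON).
ndjson = (
    '{"id":"1101010","city_id":"1101","name":"TEUPAH SELATAN"}\n'
    '{"id":"1101020","city_id":"1101","name":"SIMEULUE TIMUR"}\n'
)

# Parse each line separately, then wrap the results in a single object
# so the whole payload becomes one valid JSON document.
wrapped = {"datas": [json.loads(line) for line in ndjson.splitlines() if line]}
as_one_document = json.dumps(wrapped)
```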
One-liner for your problem:
data = [json.loads(line) for line in open('tweets.json', 'r')]
If you want to solve it in a two-liner you can do it like this:
with open('data.json') as f:
    data = [json.loads(line) for line in f]
The error is due to the \n symbol if you use the read() method of the file descriptor... so don't bypass the problem by using readlines() & co; just remove that character!
import json
path = 'old.json'      # e.g. a file containing {"c": 4}, possibly spread over multiple lines
path2 = 'merged.json'  # e.g. where to save the result (illustrative name)
new_d = {'new': 5}
with open(path, 'r') as fd:
    d_old_str = fd.read().replace('\n', '')  # remove all \n
old_d = json.loads(d_old_str)
# update new_d (Python 3.9+; otherwise new_d.update(old_d))
new_d |= old_d
with open(path2, 'w') as fd:
    fd.write(json.dumps(new_d))  # save the dictionary to file (in case needed)
... and if you really, really want to use readlines(), here is an alternative solution:
new_d = {'new': 5}
with open('some_path', 'r') as fd:
    d_old_str = ''.join(fd.readlines())  # concatenate the lines
d_old = json.loads(d_old_str)
# then as above
# then as above
I think saving dicts in a list, as proposed by @falsetru, is not an ideal solution here.
A better way is to iterate through the dicts and save them to .json, adding a newline after each.
Our 2 dictionaries are
d1 = {'a':1}
d2 = {'b':2}
you can write them to .json
import json
with open('sample.json', 'a') as sample:
    for d in [d1, d2]:
        sample.write('{}\n'.format(json.dumps(d)))
And you can read the JSON file back without any issues:
with open('sample.json', 'r') as sample:
    for line in sample:
        line = json.loads(line.strip())
Simple and efficient
My JSON file was formatted exactly like the one in the question, but none of the solutions here worked out. Finally I found a workaround on another Stack Overflow thread. Since this post is the first link in a Google search, I put that answer here so that other people who come to this post in the future can find it more easily.
As it is said there, a valid JSON file needs "[" at the beginning and "]" at the end of the file. Moreover, after each JSON item, instead of "}" there must be "},". All brackets without quotation marks! That piece of code just modifies the malformed JSON file into its correct format.
https://stackoverflow.com/a/51919788/2772087
If your data comes from a source outside your control, use this:
import json
from json import JSONDecodeError

def load_multi_json(line: str) -> list:
    """
    Fix some files with multiple objects on one line.
    """
    try:
        return [json.loads(line)]
    except JSONDecodeError as err:
        if err.msg == 'Extra data':
            # Parse the leading object, then recurse on the remainder.
            head = [json.loads(line[0:err.pos])]
            tail = load_multi_json(line[err.pos:])
            return head + tail
        else:
            raise