How to convert os.stat_result to a JSON object?

I know that json.dumps can be used to convert variables into a JSON representation. Sadly, the conversion of Python 3's os.stat_result class is a string containing an array of the instance's values.
>>> import json
>>> import os
>>> json.dumps(os.stat('/'))
'[16877, 256, 24, 1, 0, 0, 268, 1554977084, 1554976849, 1554976849]'
I would, however, much prefer to have the os.stat_result converted to a JSON object. How can I achieve this?
It seems the trouble is that os.stat_result does not have a .__dict__ attribute.
Seeing the result of this:
>>> import os
>>> str(os.stat('/'))
'os.stat_result(st_mode=16877, st_ino=256, st_dev=24, st_nlink=1, st_uid=0, st_gid=0, st_size=268, st_atime=1554977084, st_mtime=1554976849, st_ctime=1554976849)'
makes me hope there is a swift way to turn a Python class instance (e.g. os.stat_result) into a JSON representation that is an object.

As gst mentioned, doing it manually would be this:
import os

def stat_to_json(fp: str) -> dict:
    s_obj = os.stat(fp)
    return {k: getattr(s_obj, k) for k in dir(s_obj) if k.startswith('st_')}
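Combined with json.dumps, the helper above yields a JSON object directly (a quick sketch; '.' is just an example path):

```python
import json
import os

def stat_to_json(fp: str) -> dict:
    # Collect every st_-prefixed attribute into a plain dict.
    s_obj = os.stat(fp)
    return {k: getattr(s_obj, k) for k in dir(s_obj) if k.startswith('st_')}

# All values are ints/floats, so json.dumps handles the dict natively.
print(json.dumps(stat_to_json('.'), indent=2, sort_keys=True))
```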

If by JSON you mean a dict with keys st_mode, st_ino, etc., then the answer is: manually.

With a precomputed list of keys:
# http://thepythoncorner.com/dev/writing-a-fuse-filesystem-in-python/
def dict_of_lstat(lstat_res):
    lstat_keys = ['st_atime', 'st_ctime', 'st_gid', 'st_mode', 'st_mtime', 'st_nlink', 'st_size', 'st_uid', 'st_blocks']
    return dict((k, getattr(lstat_res, k)) for k in lstat_keys)

lstat_dict = dict_of_lstat(os.lstat(path))

def dict_of_statvfs(statvfs_res):
    statvfs_keys = ['f_bavail', 'f_bfree', 'f_blocks', 'f_bsize', 'f_favail', 'f_ffree', 'f_files', 'f_flag', 'f_frsize', 'f_namemax']
    return dict((k, getattr(statvfs_res, k)) for k in statvfs_keys)

statvfs_dict = dict_of_statvfs(os.statvfs(path))

This is longer, but faster than a k.startswith('st_') filter.
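Either helper then feeds straight into json.dumps. A sketch assuming a POSIX system (os.lstat reports st_blocks there; '.' is an arbitrary path):

```python
import json
import os

def dict_of_lstat(lstat_res):
    # Explicit key list: no per-attribute startswith() filtering needed.
    lstat_keys = ['st_atime', 'st_ctime', 'st_gid', 'st_mode', 'st_mtime',
                  'st_nlink', 'st_size', 'st_uid', 'st_blocks']
    return dict((k, getattr(lstat_res, k)) for k in lstat_keys)

print(json.dumps(dict_of_lstat(os.lstat('.')), indent=2))
```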

Related

Micropython: bytearray in json-file

I'm using MicroPython in the newest version, together with a DS18B20 temperature sensor. The address of such a sensor is e.g. b'(b\xe5V\xb5\x01<:'. This is the string representation of a bytearray. If I use this to save the address in a JSON file, I run into some problems:
If I store b'(b\xe5V\xb5\x01<:' directly, after reading the JSON file there are no single backslashes, and I get b'(bxe5Vxb5x01<:' inside Python.
If I escape the backslashes, like b'(b\\xe5V\\xb5\\x01<:', I get double backslashes in Python: b'(b\\xe5V\\xb5\\x01<:'.
How do I get a single backslash?
Thank you
You can't save bytes in JSON with MicroPython. As far as JSON is concerned, that's just some string. Even if you got it to give you what you think you want (i.e. single backslashes), it still wouldn't be bytes. So you are faced with making some form of conversion, no matter what.
One idea is that you could convert it to an int, and then convert it back when you open it. Below is a simple example. Of course you don't have to use a class and staticmethods to do this; it just seemed like a good way to wrap it all into one, without even needing an instance of it hanging around. You can put the entire class in some other file, import it in the necessary file, and just call its methods as you need them.
import math, ujson, utime

class JSON(object):
    @staticmethod
    def convert(data: dict, convert_keys=None) -> dict:
        if isinstance(convert_keys, (tuple, list)):
            for key in convert_keys:
                if isinstance(data[key], (bytes, bytearray)):
                    data[key] = int.from_bytes(data[key], 'big')
                elif isinstance(data[key], int):
                    data[key] = data[key].to_bytes(1 if not data[key] else int(math.log(data[key], 256)) + 1, 'big')
        return data

    @staticmethod
    def save(filename: str, data: dict, convert_keys=None) -> None:
        # dump doesn't seem to like working directly with open
        with open(filename, 'w') as doc:
            ujson.dump(JSON.convert(data, convert_keys), doc)

    @staticmethod
    def open(filename: str, convert_keys=None) -> dict:
        return JSON.convert(ujson.load(open(filename, 'r')), convert_keys)

# example with both styles of bytes for the sake of being thorough
json_data = dict(address=bytearray(b'\xFF\xEE\xDD\xCC'), data=b'\x00\x01\x02\x03', date=utime.mktime(utime.localtime()))
keys = ['address', 'data']  # list of keys to convert to int/bytes
JSON.save('test.json', json_data, keys)
json_data = JSON.open('test.json', keys)
print(json_data)  # {'date': 1621035727, 'data': b'\x00\x01\x02\x03', 'address': b'\xff\xee\xdd\xcc'}
You may also want to note that with this method you never actually touch any JSON. You put in a dict, you get out a dict. All the JSON is managed "behind the scenes". Regardless of all of this, I would say using struct would be a better option. You said JSON though so, my answer is about JSON.
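The same bytes-to-int round trip works in plain CPython too. A minimal sketch (the address bytes are made up, and bit_length replaces the math.log width calculation to avoid float rounding):

```python
import json

# An example address (made-up bytes, not a real sensor reading).
addr = b'\xff\xee\xdd\xcc'

# bytes -> int: JSON stores the int natively.
as_int = int.from_bytes(addr, 'big')
blob = json.dumps({'address': as_int})

# int -> bytes: compute the byte width exactly instead of via math.log.
restored = json.loads(blob)['address']
width = max(1, (restored.bit_length() + 7) // 8)
print(restored.to_bytes(width, 'big') == addr)  # True

# Caveat: leading zero bytes are lost by this scheme (b'\x00\x01' -> 1 -> b'\x01').
```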

JSON cannot handle numpy's float128 type

I need to dump numpy arrays of dtype float128 to JSON. For that purpose I wrote a custom array encoder that properly handles arrays by calling tolist() on them. This works perfectly fine for all real-valued dtypes, except for float128. See the following example:
import json
import numpy as np
class ArrayEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, np.ndarray):
            out = {'__ndarray__': True,
                   '__dtype__': o.dtype.str,
                   'data': o.tolist()}
            return out
        return json.JSONEncoder.default(self, o)
arr64 = np.array([1, 2, 3], dtype='float64')
arr128 = np.array([1, 3, 4], dtype='float128')
json.dumps(arr64, cls=ArrayEncoder) # fine
json.dumps(arr128, cls=ArrayEncoder) # TypeError
TypeError: Object of type float128 is not JSON serializable
The purpose of the encoder is to provide data in a format JSON can handle. In this case the data is converted to a list of plain Python floats, which should not cause any trouble. A possible solution would be to change the conversion line in the encoder to

class ArrayEncoder(json.JSONEncoder):
    def default(self, o):
        ...
        'data': o.astype('float64').tolist()
        ...

I am, however, interested in the cause of the problem. Why does JSON raise an error even though it uses an encoder that provides the data in a serializable format?
You have a custom encoder, but it only cares about np.ndarray types - it converts them to a JSON object with their data as a Python list, and falls back to the default handling for everything else.
The .tolist method you call leaves your float128 array elements as standalone float128 objects, which are not JSON encodable by default, and your encoder explicitly only checks isinstance(..., np.ndarray).
You have two options: either check for float128, so that your encoder is called recursively for each value, or convert them on output of np.ndarray to either a plain float or a string:
class ArrayEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, np.ndarray):
            out = {'__ndarray__': True,
                   '__dtype__': o.dtype.str,
                   # the other option is to map the results of .tolist here,
                   # and convert any float128 to str/float in this step
                   'data': o.tolist()}
            return out
        elif isinstance(o, np.float128):
            return float(o)
        return json.JSONEncoder.default(self, o)
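A runnable sketch of the first option (np.longdouble is used here because np.float128 only exists on platforms where long double is 128 bits wide; note that float(o) drops any extra precision):

```python
import json
import numpy as np

class ArrayEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, np.ndarray):
            return {'__ndarray__': True,
                    '__dtype__': o.dtype.str,
                    'data': o.tolist()}
        if isinstance(o, np.longdouble):  # np.float128, where the platform has it
            return float(o)  # precision beyond float64 is lost here
        return json.JSONEncoder.default(self, o)

# default() is called again for each longdouble element left by tolist().
arr = np.array([1, 3, 4], dtype=np.longdouble)
encoded = json.dumps(arr, cls=ArrayEncoder)
print(encoded)
```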

How to save a dictionary of objects?

I have a Python 3.5 program that creates an inventory of objects. I created a class of Trampolines (color, size, spring, etc.). I constantly will create new instances of the class and I then save a dictionary of them. The dictionary looks like this:
my_dict = {name: instance} and the types are like so {"string": "object"}
My issue is that I want to know how to save this inventory list so that I can start where I left off the last time I closed the program.
I don't want to use pickle because I'm trying to learn secure ways to do this for more important versions in the future.
I thought about using sqlite3, so any tips on how to do this easily would be appreciated.
My preferred solution would state how to do it with the json module. I tried it, but the error I got was:
__main__.Trampoline object at 0x00032432... is not JSON serializable
Edit:
Below is the code I used when I got the error:
out_file = open(input("What do you want to save it as? "), "w")
json.dump(my_dict, out_file, indent=4)
out_file.close()
End of Edit
I've done a good amount of research, and saw that there's also an issue with many of these save options that you can only do one object per 'save file', but that the work around to this is that you use a dictionary of objects, such as the one I made. Any info clarifying this would be great, too!
What you might be able to do is save the instance's attributes to a file and then recreate the instances on startup. This might be a bit too much code and is possibly not the best way. One obvious problem is that it doesn't work if you don't have the same number of attributes as constructor parameters, which should be possible to fix if necessary, I believe. I just thought I might try and post and see if it helps :)
import json

class Trampoline:
    def __init__(self, color, size, height, spring):
        self.color = color
        self.size = size
        self.height = height
        self.spring = spring

    def __repr__(self):
        return "Attributes: {}, {}, {}, {}".format(self.color, self.size, self.height, self.spring)

my_dict = {
    "name1": Trampoline('red', 100, 2.3, True),
    "name2": Trampoline('blue', 50, 2.1, False),
    "name3": Trampoline('green', 25, 1.8, True),
    "name5": Trampoline('white', 10, 2.6, False),
    "name6": Trampoline('black', 0, 1.4, True),
    "name7": Trampoline('purple', -33, 3.0, True),
    "name8": Trampoline('orange', -999, 2.5, False),
}

def save(my_dict):
    with open('save_file.txt', 'w') as file:
        temp = {}
        for name, instance in my_dict.items():
            attributes = {}
            for attribute_name, attribute_value in instance.__dict__.items():
                attributes[attribute_name] = attribute_value
            temp[name] = attributes
        json.dump(temp, file)

def load():
    with open('save_file.txt', 'r') as file:
        my_dict = {}
        x = json.load(file)
        for name, attributes in x.items():
            my_dict[name] = Trampoline(**attributes)
        return my_dict

# CHECK IF IT WORKS!
save(my_dict)
my_dict = load()
print("\n".join(["{} | {}".format(name, instance) for name, instance in sorted(my_dict.items())]))
Here is an example of a class that handles datetime objects.
import datetime
import json
import isodate  # third-party package, used here for its UTC tzinfo

class CustomEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            if obj.tzinfo:
                obj = obj.astimezone(isodate.tzinfo.UTC).replace(tzinfo=None)
            return obj.isoformat()[:23] + 'Z'
        return json.JSONEncoder.default(self, obj)
When you encode to JSON, the default function of the cls is called with the object you passed. If you want to handle a type that is not part of the standard json.JSONEncoder.default, you need to intercept it and return it as a valid JSON type, handled however you want. In this example I turned the datetime into a str and returned that. If it's not one of the types I want to special-case, I just pass it along to the standard json.JSONEncoder.default handler.
To use this class you need to pass it in the cls param of json.dump or json.dumps:
json.dumps(obj, cls=CustomEncoder)
Decoding is done the same way, but with json.JSONDecoder, json.load, and json.loads. However, you cannot match on type, so you will need to either add a 'hint' during encoding or know what type it needs to decode to.
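For instance, a decoder hook that revives ISO-8601 strings like the ones the encoder above emits might look like this (a sketch; the 'Z'-suffix heuristic and the sample key name are assumptions for illustration):

```python
import datetime
import json

def decode_dates(obj):
    # Revive any string that matches the encoder's "...Z" ISO-8601 output.
    for key, value in obj.items():
        if isinstance(value, str) and value.endswith('Z'):
            try:
                obj[key] = datetime.datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
            except ValueError:
                pass  # not a timestamp after all; leave it alone
    return obj

data = json.loads('{"created": "2019-04-11T10:04:44.000Z"}', object_hook=decode_dates)
print(repr(data['created']))  # datetime.datetime(2019, 4, 11, 10, 4, 44)
```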
For a simple class, you can make an easy serializer as below. This will take all of the properties of your Trampoline object and put them into a dictionary and then into JSON.
class Trampoline(object):
    ...
    def serialize(self):
        return json.dumps(vars(self))
If your class is a bit more complicated, then write a more complicated serializer :)
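Filled in with the question's attributes, that serializer might be used like this (a sketch; the constructor arguments are illustrative):

```python
import json

class Trampoline(object):
    def __init__(self, color, size, height, spring):
        self.color = color
        self.size = size
        self.height = height
        self.spring = spring

    def serialize(self):
        # vars(self) is the instance's __dict__, so every attribute is included
        return json.dumps(vars(self))

t = Trampoline('red', 100, 2.3, True)
print(t.serialize())
```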

Haskell: Parsing JSON data into a Map or a list of tuples?

Is there a way to automatically convert JSON data into Data.Map or just a list of tuples?
Say, if I have:
{Name : "Stitch", Age : 3, Friend: "Lilo"}
I'd like it to be converted into:
fromList [("Name","Stitch"), ("Age",3), ("Friend","Lilo")]
.. without defining a Stitch data type.
I am happy to parse integers into strings in the resulting map. I can just read them into integers later.
You can use aeson. See Decoding a mixed-type object in its documentation's tutorial:
>>> import qualified Data.ByteString.Lazy.Char8 as BS
>>> :m +Data.Aeson
>>> let foo = BS.pack "{\"Name\" : \"Stitch\", \"Age\" : 3, \"Friend\": \"Lilo\"}"
>>> decode foo :: Maybe Object
Just (fromList [("Friend",String "Lilo"),("Name",String "Stitch"),("Age",Number 3.0)])
An Object is just a HashMap from Text keys to Value values, the Value type being a sum type representation of JS values.

Scala: Convert a string into a map

What is the fastest way to convert this
{"a":"ab","b":"cd","c":"cd","d":"de","e":"ef","f":"fg"}
into a mutable map in Scala? I read this input string from a ~500 MB file, which is the reason I'm concerned about speed.
If your JSON is as simple as in your example, i.e. a sequence of key/value pairs where each value is a string, you can do it in plain Scala:
myString.substring(1, myString.length - 1)
  .split(",")
  .map(_.split(":"))
  .map { case Array(k, v) => (k.substring(1, k.length - 1), v.substring(1, v.length - 1)) }
  .toMap
That looks like a JSON file, as Andrey says. You should consider this answer. It gives some example Scala code. Also, this answer gives some different JSON libraries and their relative merits.
The fastest way to read tree data structures in XML or JSON is by applying a streaming API: Jackson Streaming API To Read And Write JSON.
Streaming splits your input into tokens like 'beginning of an object' or 'beginning of an array', and you would need to build a parser for these tokens, which in some cases is not a trivial task.
Keeping it simple: if you are reading a JSON string from a file and converting it to a Scala map:

import scala.io.Source
import spray.json._
import DefaultJsonProtocol._

val jsonStr = Source.fromFile(jsonFilePath).mkString
val jsonDoc = jsonStr.parseJson
val map_doc = jsonDoc.convertTo[Map[String, JsValue]]
// Get a Map key value
val key_value = map_doc.get("key").get.convertTo[String]
// If nested JSON, re-map it.
val key_map = map_doc.get("nested_key").get.convertTo[Map[String, JsValue]]
println("Nested Value " + key_map.get("key").get)