Serializing multiple API fields into one (Django / JSON)

I have a pre-defined API, like:
{
    time: some_time,
    height: {1: 154, 2: 300, 3: 24},
    color: {1: 'red', 2: 'blue', 3: 'green'},
    age: {1: 27, 2: 324, 3: 1},
    ... many, many more keys ...
}
I have no control over this API, so I cannot change its structure.
The integer keys inside the sub-dictionaries are linked and belong to one record. For example, the object with height 154 also has colour 'red' and age 27.
I am aware that one strategy for working with this is to have a separate serializer for each field:
class MySerializer(serializers.ModelSerializer):
    # Nested serializers
    height = HeightSerializer()
    colour = ColourSerializer()
    age = AgeSerializer()
    # etc, etc, etc
But that still gives me messy data to work with and requires lots of update() logic in the serializer.
What I want instead is a single nested serializer that has access to the full request data, so it can work with height, colour and age simultaneously and return something like this from its to_internal_value() method:
[
    {'record': 1, 'height': 154, 'colour': 'red', 'age': 27},
    {'record': 2, 'height': 300, 'colour': 'blue', 'age': 324},
    {'record': 3, 'height': 24, 'colour': 'green', 'age': 1},
]
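For illustration, here is a plain-Python sketch of the reshaping I would like to_internal_value() to perform (pivot_records is just a hypothetical helper name, not working serializer code; field names are taken from the example above):

# Illustrative only: pivot the column-oriented payload into one dict per record key.
def pivot_records(data, fields=("height", "colour", "age")):
    record_ids = set()
    for field in fields:
        record_ids.update(data.get(field, {}).keys())
    return [
        {"record": rid, **{field: data.get(field, {}).get(rid) for field in fields}}
        for rid in sorted(record_ids)
    ]

# pivot_records({"height": {1: 154}, "colour": {1: "red"}, "age": {1: 27}})
# -> [{"record": 1, "height": 154, "colour": "red", "age": 27}]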
But unfortunately the height serializer only seems to have access to data under the "height" key. I am aware I can use source="foo" when constructing the field, but then it only has access to a field called "foo". I want it to have access to all fields.
I noticed there is a source='*' option, but it doesn't work: my serializer's __init__ method never gets called unless there is a "height" key in the API call.
Any ideas how I can have a nested serialiser that has access to all the data in the request?
Thanks
Joey

ECMAScript for..in loop and hasOwnProperty have a strange behaviour

I would like to transform an object retrieved from a NoSQL database using a DTO, so I inspect the object in a for..in loop to keep only what I want:
for (const attribute in result) {
    if (result.hasOwnProperty(attribute)) {
        console.log(`${attribute} belongs to object!`);
    }
}
I wonder why:
- I have to use the hasOwnProperty method while looping over the object to get its attributes
- my object has a 'nutriments' attribute, but it is never logged
Here's a portion of the original object:
...
nutriments:
{ sugars: 6.5,
'nova-group_serving': 4,
fiber_value: 2.5,
'nutrition-score-uk_100g': 1,
energy_value: 1160,
salt_100g: 1.08,
'nutrition-score-uk': 1,
fiber_100g: 2.5,
proteins: 8.5,
'nova-group_100g': 4,
carbohydrates_unit: 'g',
'saturated-fat_100g': 0.4,
'nutrition-score-fr_100g': 1,
salt_unit: 'g',
'saturated-fat_unit': 'g',
sugars_100g: 6.5,
sugars_value: 6.5,
'saturated-fat_value': 0.4,
carbohydrates_value: 49.2,
fat_unit: 'g',
fiber: 2.5,
proteins_value: 8.5,
fat_value: 4.3,
sugars_serving: 5.13,
sodium_value: 0.43200000000000005,
fiber_serving: 1.98,
sodium_unit: 'g',
energy_serving: 916,
sodium_serving: 0.34099999999999997,
proteins_unit: 'g',
carbohydrates: 49.2,
energy: 1160,
salt_value: 1.08,
sodium_100g: 0.43200000000000005,
'nova-group': 4,
'saturated-fat_serving': 0.316,
proteins_serving: 6.72,
'nutrition-score-fr': 1,
energy_100g: 1160,
energy_unit: 'kJ',
fiber_unit: 'g',
'carbon-footprint-from-known-ingredients_product': 416,
sugars_unit: 'g',
proteins_100g: 8.5,
'carbon-footprint-from-known-ingredients_100g': 75.6,
carbohydrates_serving: 38.9,
salt_serving: 0.8530000000000001,
fat_serving: 3.4,
salt: 1.08,
carbohydrates_100g: 49.2,
'saturated-fat': 0.4,
fat_100g: 4.3,
fat: 4.3,
'carbon-footprint-from-known-ingredients_serving': 59.7,
sodium: 0.43200000000000005 },
...
I edited my for..in loop to trace the attributes, and the attribute "nutriments" was listed, but... result['nutriments'] is undefined and result.hasOwnProperty('nutriments') returns false:
for (const attribute in result) {
    console.log(`Discovering ${attribute} belongs to object!`);
    if (result.hasOwnProperty(attribute)) {
        console.log(`${attribute} belongs to object!`);
    }
}
This behaviour is observable for some other object attributes too, yet I can get the attribute value with result.attributeName.
So, what can explain this behaviour?
hasOwnProperty returns false for attributes that are inherited. My guess would be that the nutriments property is in fact inherited by this object; we use hasOwnProperty precisely to skip some of the inherited properties.
Another possibility is that the object is either a Proxy or has some of its properties defined with Object.defineProperty, making them either not enumerable or not "get-able".
More info would be needed for proper diagnosis / solution.
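For the inherited-property case, a minimal illustration (a hypothetical object, not your actual Mongoose result) would be:

// A property living on the prototype is reachable through the prototype chain,
// but it is not an "own" property, so hasOwnProperty returns false for it.
const proto = { nutriments: { sugars: 6.5 } };
const result = Object.create(proto);                  // 'nutriments' is inherited
console.log(result.nutriments);                       // { sugars: 6.5 }
console.log(result.hasOwnProperty('nutriments'));     // false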
Sorry for this question... I found the solution by exploring my own Mongoose Schema: even if MongoDB returns the whole content of the document, the result is shaped by the defined Schema, and... I forgot to add the "nutriments" property to it.

A pickle with jsonpickle (Python 3.7)

I have an issue with using jsonpickle. Rather, I believe it to be working correctly but it's not producing the output I want.
I have a class called 'Node'. In 'Node' are four ints (x, y, width, height) and a StringVar called 'NodeText'.
The problem with serialising a StringVar is that there's lots of information in there and for me it's just not necessary. I use it when the program's running, but for saving and loading it's not needed.
So I changed what jsonpickle saves by adding a __getstate__ method to my Node. This way I can do this:
def __getstate__(self):
    state = self.__dict__.copy()
    del state['NodeText']
    return state
This works well so far and NodeText isn't saved. The problem comes on a load. I load the file as normal into an object (in this case a list of nodes).
The problem is this: the items loaded from JSON are not Nodes as defined in my class. They are almost the same (they have x, y, width and height), but because NodeText wasn't saved in the JSON file, these Node-like objects don't have that attribute. This then causes an error when I create a visual instance of these Nodes on screen, because the StringVar is used as the textvariable of a tkinter Entry.
I would like to know if there is a way to load this 'almost node' into my actual Nodes. I could just copy every property one at a time into a new instance but this just seems like a bad way to do it.
I could also null the NodeText StringVar before saving (thus saving the space in the file) and then reinitialise it on loading. This would mean I'd have my full object, but somehow it seems like an awkward workaround.
If you're wondering just how much more information there is with the StringVar, my test json file has just two Nodes. Just saving the basic properties (x,y,width,height), the file is 1k. With each having a StringVar, that becomes 8k. I wouldn't care so much in the case of a small increase, but this is pretty huge.
Can I force the load to be to this Node type rather than just some new type that Python has created?
Edit: if you're wondering what the json looks like, take a look here:
{
    "1": {
        "py/object": "Node.Node",
        "py/state": {
            "ImageLocation": "",
            "TextBackup": "",
            "height": 200,
            "uID": 1,
            "width": 200,
            "xPos": 150,
            "yPos": 150
        }
    },
    "2": {
        "py/object": "Node.Node",
        "py/state": {
            "ImageLocation": "",
            "TextBackup": "",
            "height": 200,
            "uID": 2,
            "width": 100,
            "xPos": 50,
            "yPos": 450
        }
    }
}
Since the class name is there, I assumed it would be an instantiation of the class. But when you load the file using jsonpickle, you get the dictionary back and can inspect each node: neither node contains the attribute 'NodeText'. That is to say, it's not present with a value of 'None'; the attribute simply isn't there.
That's because jsonpickle doesn't know which fields your object normally has; it restores only the fields present in the state, and the state doesn't contain the NodeText property. So it just misses it :)
You can add a __setstate__ magic method to restore that property on your loaded objects. This way you will be able to handle dumps with or without the property.
def __setstate__(self, state):
    state.setdefault('NodeText', None)
    for k, v in state.items():
        setattr(self, k, v)
A small example
import jsonpickle


class Node:
    def __init__(self) -> None:
        super().__init__()
        self.NodeText = None
        self.ImageLocation = None
        self.TextBackup = None
        self.height = None
        self.uID = None
        self.width = None
        self.xPos = None
        self.yPos = None

    def __setstate__(self, state):
        state.setdefault('NodeText', None)
        for k, v in state.items():
            setattr(self, k, v)

    def __getstate__(self):
        state = self.__dict__.copy()
        del state['NodeText']
        return state

    def __repr__(self) -> str:
        return str(self.__dict__)


obj1 = Node()
obj1.NodeText = 'Some heavy description text'
obj1.ImageLocation = 'test ImageLocation'
obj1.TextBackup = 'test TextBackup'
obj1.height = 200
obj1.uID = 1
obj1.width = 200
obj1.xPos = 150
obj1.yPos = 150

print('Dumping ...')
dumped = jsonpickle.encode({1: obj1})
print(dumped)

print('Restoring object ...')
print(jsonpickle.decode(dumped))
outputs
# > python test.py
Dumping ...
{"1": {"py/object": "__main__.Node", "py/state": {"ImageLocation": "test ImageLocation", "TextBackup": "test TextBackup", "height": 200, "uID": 1, "width": 200, "xPos": 150, "yPos": 150}}}
Restoring object ...
{'1': {'ImageLocation': 'test ImageLocation', 'TextBackup': 'test TextBackup', 'height': 200, 'uID': 1, 'width': 200, 'xPos': 150, 'yPos': 150, 'NodeText': None}}

Couchbase: How to maintain arrays without duplicate elements?

We have a Couchbase store which holds the customer data.
Each customer has exactly one document in this bucket.
Daily transactions result in updates to this customer data.
Sample document. Let's focus on the purchased_product_ids array.
{
    "customer_id": 1000,
    "purchased_product_ids": [1, 2, 3, 4, 5],
    # in reality this is a big array - hundreds of elements
    ...
    ... many other elements ...
    ...
}
Existing purchased_product_ids:
[1, 2, 3, 4, 5]
Products purchased today:
[1, 2, 3, 6] // 6 is a new entry, the others already exist
Expected result after the update:
[1, 2, 3, 4, 5, 6]
I am using the Sub-Document API to avoid large data transfers between the server and clients.
Option 1, arrayAppend:
customerBucket.mutateIn(customerKey)
    .arrayAppend("purchased_product_ids", /* JsonObject for [1, 2, 3, 6] */)
    .execute();
It results in duplicate elements.
"purchased_product_ids" : [1, 2, 3, 4, 5, 1, 2, 3, 6]
Option2 "arrayAddUnique" :
customerBucket.mutateIn(customerKey)
.arrayAddUnqiue("purchased_product_ids", 1 )
.arrayAddUnqiue("purchased_product_ids", 2 )
.arrayAddUnqiue("purchased_product_ids", 3 )
.arrayAddUnqiue("purchased_product_ids", 6 )
.execute();
It throws an exception most of the time, because those elements already exist.
Is there any better way to do this update?
You could use N1QL, and the ARRAY_APPEND() and ARRAY_DISTINCT() functions.
UPDATE customer USE KEYS "foo"
SET purchased_product_ids = ARRAY_DISTINCT(ARRAY_APPEND(purchased_product_ids, 9))
Presumably this would be a prepared statement and the key itself and the new value would be supplied as parameters.
Also, if you want to add multiple elements to the array at once, ARRAY_CONCAT() would be a better choice. More here:
https://docs.couchbase.com/server/6.0/n1ql/n1ql-language-reference/arrayfun.html
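For example (an untested sketch, reusing the key "foo" and today's product IDs from the question), folding in all of today's purchases at once would look something like:

UPDATE customer USE KEYS "foo"
SET purchased_product_ids = ARRAY_DISTINCT(ARRAY_CONCAT(purchased_product_ids, [1, 2, 3, 6]))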
Do you need purchased_product_ids to be ordered? If not you can convert it to a map, e.g.
{
    "customer_id": 1000,
    "purchased_product_ids": {1: {}, 3: {}, 5: {}, 2: {}, 4: {}}
}
and then write to that map with subdoc, knowing you won't be conflicting (assuming product IDs are unique):
customerBucket.mutateIn(customerKey)
    .upsert("purchased_product_ids.1", JsonObject.create()) // already exists
    .upsert("purchased_product_ids.6", JsonObject.create()) // new product
    .execute();
which will result in:
{
    "customer_id": 1000,
    "purchased_product_ids": {1: {}, 3: {}, 6: {}, 5: {}, 2: {}, 4: {}}
}
(I've used JsonObject.create() as a placeholder here in case you need to associate additional information with each purchased product, but you could equally just write null. If you do need purchased_product_ids to be ordered, you can write the timestamp of the order, e.g. 1: {date: <TIMESTAMP>}, and then order it in code when you fetch.)

How to define a function that adds the values in HashMaps in Python?

The question is: create a function which will calculate the total stock worth in the cafe. You will need to remember to loop through the appropriate maps and lists to do this.
What I have so far:
menu = ("Coffee", "Tea", "Cake", "Cookies")

stock = {
    "Coffee": 10,
    "Tea": 17,
    "Cake": 15,
    "Cookies": 5,
}

price = {
    "Coffee": 'R 12',
    "Tea": 'R 11',
    "Cake": 'R 20',
    "Cookies": 'R 8',
}

def totalstock(stock):
Now I'm stuck. I know there should be a loop and a sum, but I don't know how to convert the price strings to ints so I can add them up.
In this case your price dictionary doesn't just contain numbers, so you'll have to separate the 'R' from the number. Example:
coffee_price = int(price['Coffee'].split(' ')[1])
To explain: take the string at price['Coffee'] and split it on the space, giving a list with two values. Pass the second value to int() to convert it to an integer, which is then stored in coffee_price.
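Putting that together, a minimal sketch of the totalstock function might look like this (I pass price in explicitly here and assume every item in stock has a matching entry in price):

def totalstock(stock, price):
    total = 0
    for item, quantity in stock.items():
        # 'R 12' -> 12, then multiply by the number of units in stock
        unit_price = int(price[item].split(' ')[1])
        total += quantity * unit_price
    return total

print(totalstock(stock, price))  # 10*12 + 17*11 + 15*20 + 5*8 = 647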

ImmutableJs - compare objects but for one property

I am converting a shopping basket to an immutable structure.
Is there an easy way with immutablejs to check whether an immutable object already exists within an immutable List, except for one property, 'quantity', which could differ? List example:
[{
    id: 1,
    name: 'fish and chips',
    modifiers: [
        {
            id: 'mod1',
            name: 'Extra chips'
        }
    ],
    quantity: 2
}, {
    id: 2,
    name: 'burger and chips',
    modifiers: [
        {
            id: 'mod1',
            name: 'No salad'
        }
    ],
    quantity: 1
}]
Now, say I have another object to put in the list, but I want to check whether this exact item, with its modifiers, already exists in the list. I could just do list.findIndex(item => item === newItem), but because the quantity property may differ, that won't work. Is there a way to do an equality check that ignores one property? Or any way to do this without looping through every property (aside from quantity) to see if they are the same?
Currently, I have an awful nested loop to go through every item and check every property to see if it is the same.
Well, this should work:
list.findIndex(item => item.delete("quantity").equals(newItem.delete("quantity")))
The equals method does deep value comparison. So once you delete the quantity, you are comparing all values that matter.
PS: please ignore code formatting, I am on SO app.
PPS: the above code is not optimal; you should compare against a pre-trimmed newItem instead of deleting 'quantity' from it inside the arrow function on every iteration.
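Something like this (an untested sketch of that PPS suggestion):

const target = newItem.delete('quantity');
const index = list.findIndex(item => item.delete('quantity').equals(target));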