Mongo Embedded Document is not JSON Serializable - Python - json

I am currently using Graphene with MongoEngine. The mongo db schemas are as follows
class DocumentBModel(EmbeddedDocument):
    id = StringField(required=True)
    value = IntField()

class DocumentAModel(Document):
    id = StringField(required=True)
    documentB = MapField(EmbeddedDocumentField(DocumentBModel))
A sample documentA would be as follows:
{
    id: "id_1",
    documentB: {
        0: {
            id: "b_1",
            value: 1
        },
        1: {
            id: "b_2",
            value: 11
        }
    }
}
And correspondingly, their Graphene types are
class DocumentB(MongoengineObjectType):
    class Meta:
        model = DocumentBModel

class DocumentA(MongoengineObjectType):
    class Meta:
        model = DocumentAModel
Finally, the query looks like the following
class Query(graphene.ObjectType):
    all_document_a = graphene.List(DocumentA)

    def resolve_all_document_a(self, info):
        return list(DocumentAModel.objects.all())
However, when I query allDocumentA to get document B, I get the error
Object of type DocumentBModel is not JSON serializable
I am not sure where to marshal Document B to json.
If I change documentB from MapField(EmbeddedDocumentField(DocumentBModel)) to DictField(), it works without issue. But is there a way to use MapField?
Thanks

MapField is similar to a DictField, except that the 'value' of each item must match the specified field type, and the keys must be strings.
So in your case, the documentB field of DocumentAModel is a MapField whose value type is DocumentBModel, which means each value must provide the id and value fields. You are creating this with Graphene, but the model mapping would be the same for a normal (REST) DRF API as for a GraphQL API. MapField enforces validations such as keys being strings and values being of the type declared in the model; a DictField needs no such type validation, since it is just a plain Python dictionary field.
Check the following code snippet, and change the Graphene query and schema as required:
class DocumentBModel(fields.EmbeddedDocument):
    id = fields.StringField(required=True)
    value = fields.IntField()

class DocumentAModel(Document):
    name = fields.StringField(required=True)
    documentB = fields.MapField(fields.EmbeddedDocumentField(DocumentBModel))
Django shell
$ python manage.py shell
>>>
>>> B_obj1 = DocumentBModel(**{'id': 'b_1', 'value': 1})
>>> B_obj2 = DocumentBModel(**{'id': 'b_2', 'value': 2})
>>> data_obj = DocumentAModel.objects.create(**{"name":"akash", "documentB":{"0":B_obj1, "1":B_obj2}})
>>> data_obj._data
{'id': ObjectId('5ebd1d2cf549becd5a462924'), 'name': 'akash', 'documentB': {'0': <DocumentBModel: DocumentBModel object>, '1': <DocumentBModel: DocumentBModel object>}}
Database entry:
{
    "_id" : ObjectId("5ebd1d2cf549becd5a462924"),
    "name" : "akash",
    "documentB" : {
        "0" : {
            "id" : "b_1",
            "value" : 1
        },
        "1" : {
            "id" : "b_2",
            "value" : 2
        }
    }
}
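The behavioural difference between the two field types can be sketched in a few lines of plain Python (an analogy only, not MongoEngine's actual implementation; MapFieldLike and DocumentBLike are made-up names): a MapField-style container type-checks keys and values, while a DictField-style one accepts any dictionary.

```python
class DocumentBLike:
    # Stand-in for an EmbeddedDocument with id and value fields.
    def __init__(self, id, value):
        self.id, self.value = id, value

class MapFieldLike:
    # Toy analogue of MapField(EmbeddedDocumentField(...)): keys must be
    # strings and values must be instances of value_type.
    def __init__(self, value_type):
        self.value_type = value_type

    def validate(self, mapping):
        for key, value in mapping.items():
            if not isinstance(key, str):
                raise TypeError(f"keys must be strings, got {key!r}")
            if not isinstance(value, self.value_type):
                raise TypeError(f"values must be {self.value_type.__name__}, got {value!r}")
        return mapping

field = MapFieldLike(DocumentBLike)
field.validate({"0": DocumentBLike("b_1", 1), "1": DocumentBLike("b_2", 11)})  # passes
try:
    field.validate({0: DocumentBLike("b_1", 1)})  # non-string key
except TypeError as err:
    print("rejected:", err)
```

A DictField-style field would skip both isinstance checks, which is why swapping to DictField() makes the serialization error disappear: the raw dicts are already JSON-friendly.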

I think the issue is that GraphQL requires the response object to have a definite structure. Having a dict or a map as an attribute "confuses" GraphQL.
In the example above, the query could be
query {
    documentA {
        id
        documentB {
            0 {
                id
                value
            }
            1 {
                id
                value
            }
        }
    }
}
but documentB doesn't have any attributes 0 and 1, so it has to be a list of dicts instead. In that case, the query becomes
query {
    documentA {
        id
        documentB {
            id
            value
        }
    }
}
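Following that reasoning, one way to keep MapField on the Mongo side while giving GraphQL a fixed shape is to flatten the map into a list in the resolver. A minimal sketch with plain dicts standing in for the embedded documents (map_to_list is an illustrative helper, not part of Graphene or MongoEngine):

```python
def map_to_list(document_b_map):
    # Turn {map_key: embedded_doc} into a list of dicts, keeping the
    # map key as an ordinary field so no information is lost.
    return [dict(key=k, **doc) for k, doc in sorted(document_b_map.items())]

document_a = {
    "id": "id_1",
    "documentB": {
        "0": {"id": "b_1", "value": 1},
        "1": {"id": "b_2", "value": 11},
    },
}
print(map_to_list(document_a["documentB"]))
# [{'key': '0', 'id': 'b_1', 'value': 1}, {'key': '1', 'id': 'b_2', 'value': 11}]
```

Each element now has the fixed fields key, id and value, which is exactly the kind of definite structure the second query expects.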

Related

How do I get circe to decode nested json with kebab-case attribute names

I started with the accepted answer to SO question 53573659 which has a nested list of attrs and uses the auto-parser to get the data into case classes. I want to be able to handle the same data but with the nested fields having kebab-case rather than camel case.
Here is the same input JSON with the kebab-case fields
val sampleKebab = """{
  "parent" : {
    "name" : "title",
    "items" : [
      {
        "foo" : "foo1",
        "attrs" : {
          "attr-a" : "attrA1",
          "attr-b" : "attrB1"
        }
      },
      {
        "foo" : "foo2",
        "attrs" : {
          "attr-a" : "attrA2",
          "attr-b" : "attrB2",
          "attr-c" : "attrC2"
        }
      }
    ]
  }
}"""
I can decode the attrs data by itself using the following example
import io.circe.derivation.deriveDecoder
import io.circe.{Decoder, derivation}
import io.circe.generic.auto._
import io.circe.parser._
val attrKebabExample = """{
  "attr-a": "attrA2",
  "attr-b": "attrB2",
  "attr-c": "attrC2"
}"""
case class AttrsKebab(attrA: String, attrB: String)
implicit val decoder: Decoder[AttrsKebab] = deriveDecoder(derivation.renaming.kebabCase)
val attrKebabData = decode[AttrsKebab](attrKebabExample)
attrKebabData decodes to
Either[io.circe.Error,AttrsKebab] = Right(AttrsKebab(attrA2,attrB2))
When I try to tie this decoder into the case class hierarchy from the original question, it exposes some glue that I am missing to hold it all together
case class ItemKebab(foo: String, attrs : AttrsKebab)
case class ParentKebab(name: String, items: List[ItemKebab])
case class DataKebab(parent : ParentKebab)
case class Data(parent : Parent)
val dataKebab=decode[DataKebab](sample)
In this case, dataKebab contains a DecodingFailure
Either[io.circe.Error,DataKebab] = Left(DecodingFailure(Attempt to decode value on failed cursor, List(DownField(attr-a), DownField(attrs), DownArray, DownField(items), DownField(parent))))
My guess is that either the decoder I defined is being ignored, or I need to explicitly define more of the decode process, but I'm looking for some help to find what the solution might be.

Parsing nested JSON values with Lift-JSON

Scala 2.12 here trying to use Lift-JSON to parse a config file. I have the following myapp.json config file:
{
  "health" : {
    "checkPeriodSeconds" : 10,
    "metrics" : {
      "stores" : {
        "primary" : "INFLUX_DB",
        "fallback" : "IN_MEMORY"
      }
    }
  }
}
And the following MyAppConfig class:
case class MyAppConfig()
My myapp.json is going to evolve and potentially become very large with lots of nested JSON structures inside of it. I don't want to have to create Scala objects for each JSON object and then inject that in MyAppConfig like so:
case class Stores(primary : String, fallback : String)
case class Metrics(stores : Stores)
case class Health(checkPeriodSeconds : Int, metrics : Metrics)
case class MyAppConfig(health : Health)
etc. The reason for this is I'll end up with "config object sprawl" with dozens upon dozens of case classes that are only in existence to satisfy serialization from JSON into Scala-land.
Instead, I'd like to use Lift-JSON to read the myapp.json config file, and then have MyAppConfig just have helper functions that read/parse values out of the JSON on the fly:
import net.liftweb.json._
// Assume we instantiate MyAppConfig like so:
//
// val json = Source.fromFile(configFilePath)
// val myAppConfig : MyAppConfig = new MyAppConfig(json.mkString)
//
class MyAppConfig(json : String) {
  implicit val formats = DefaultFormats

  def primaryMetricsStore() : String = {
    // Parse "INFLUX_DB" value from health.metrics.stores.primary
  }

  def checkPeriodSeconds() : Int = {
    // Parse 10 value from health.checkPeriodSeconds
  }
}
This way I can cherry pick which configs I want to expose (make readable) to my application. I'm just not seeing from the Lift API docs how this strategy is possible; they all seem to want me to create tons of case classes. Any ideas?
Case classes are not mandatory for extracting data from JSON. You can query the parsed tree and transform the data according to your needs. The values from the example can be extracted as follows:
import net.liftweb.json._

class MyAppConfig(json : String) {
  private implicit val formats = DefaultFormats
  private val parsed = parse(json)

  def primaryMetricsStore() : String = {
    (parsed \ "health" \ "metrics" \ "stores" \ "primary").extract[String]
  }

  def checkPeriodSeconds() : Int = {
    (parsed \ "health" \ "checkPeriodSeconds").extract[Int]
  }
}
The original doc provides all the details.
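For comparison, the same cherry-picking strategy is easy to mirror in Python with the stdlib json module (a sketch of the same design, not related to Lift-JSON; AppConfig and _at are illustrative names): parse once, then walk the tree with a tiny helper instead of declaring a class per JSON object.

```python
import json
from functools import reduce

class AppConfig:
    def __init__(self, raw):
        self._tree = json.loads(raw)

    def _at(self, *path):
        # Walk nested dicts, analogous to Lift's (parsed \ "a" \ "b").
        return reduce(lambda node, key: node[key], path, self._tree)

    def primary_metrics_store(self):
        return self._at("health", "metrics", "stores", "primary")

    def check_period_seconds(self):
        return self._at("health", "checkPeriodSeconds")

raw = """{
  "health": {
    "checkPeriodSeconds": 10,
    "metrics": {"stores": {"primary": "INFLUX_DB", "fallback": "IN_MEMORY"}}
  }
}"""
cfg = AppConfig(raw)
print(cfg.primary_metrics_store())  # INFLUX_DB
print(cfg.check_period_seconds())   # 10
```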

Django Rest Framework serialize query set into dictionary with custom field

I'm trying to get this kind of JSON:
"Physical": {
    "Strength": 1,
    "Dexterity": 1,
    "Stamina": 1
},
But with my Custom serializer:
class EnumField(serializers.Field):
    """
    Enum objects are serialized into " 'label' : value " notation
    """
    def to_representation(self, obj):
        return {"{0}".format(obj.all()[0].__str__()): obj.all()[0].current_value}

class EnumListField(serializers.DictField):
    child = EnumField()
And this on my model:
@property
def physical_attributes(self):
    return [self.attributes.filter(attribute=attribute) for attribute
            in AttributeAbility.objects.physical()]
Outputs this:
"mental_attributes": [
    {
        "Intelligence": 1
    },
    {
        "Wits": 0
    },
    {
        "Resolve": 0
    }
],
What do I need to do to my field, to look like my first JSON? I don't think DictField exists anymore, which is what a few questions on SO suggested.
Your property returns a list, and you want key:value pairs, so treat them as a dictionary:
def to_representation(self, obj):
    return {str(item.get()): item.get().value for item in obj}
Obviously replace .value with whatever value you want, and if the str() representation is not what you want for the key, replace that.
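Since the property returns a list of querysets and each to_representation call produces a one-entry dict, the remaining step is just merging those one-entry dicts into a single mapping. Sketched here with plain dicts standing in for the serialized entries (merge_entries is an illustrative helper, not DRF API):

```python
def merge_entries(entries):
    # Collapse [{'Intelligence': 1}, {'Wits': 0}, ...] into one dict,
    # matching the {"Strength": 1, "Dexterity": 1, ...} target shape.
    merged = {}
    for entry in entries:
        merged.update(entry)
    return merged

print(merge_entries([{"Intelligence": 1}, {"Wits": 0}, {"Resolve": 0}]))
# {'Intelligence': 1, 'Wits': 0, 'Resolve': 0}
```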

updating Json object

I have a json object that I need to update. The original object is a list that looks like this:
[
    {
        "firstName":"Jane",
        "lastName":"Smith"
    },
    {
        "firstName":"Jack",
        "lastName":"Brown"
    }
]
For each element in the list, we have an extra field, "age", that needs to be added at run-time, so the result should look like the following:
[
    {
        "firstName":"Jane",
        "lastName":"Smith",
        "age": "21"
    },
    {
        "firstName":"Jack",
        "lastName":"Brown",
        "age": "34"
    }
]
Any suggestions how to do this so the result is still json?
Thanks.
request.body.asJson.map {
  jm => (jm.as[JsObject] ++ Json.obj("age" -> 123))
}
I would recommend deserializing the JSON array you receive into a List of case classes, then having some function fill in the missing attributes based on the current attributes of the case class, and finally serializing them as JSON and serving the response.
Let's make a Person case class with the fields that will be missing as Option:
import play.api.libs.json.{Format, Json}

case class Person(firstName: String, lastName: String, age: Option[Int])

object Person {
  implicit val format: Format[Person] = Json.format[Person]

  def addAge(person: Person): Person = {
    val age = ... // however you determine the age
    person.copy(age = Some(age))
  }
}
Within the companion object for Person I've also defined a JSON serializer/deserializer using the format macro, and a stub for a function that will find a person's age then copy it back into the person and return it.
Deep within the web service call you might then have something like this:
val jsArray = ... // The JsValue from somewhere

jsArray.validate[List[Person]].fold(
  // Handle the case for invalid incoming JSON
  error => InternalServerError("Received invalid JSON response from remote service."),
  // Handle a deserialized array of List[Person]
  people => {
    Ok(
      // Serialize as JSON, requires the implicit `format` defined earlier.
      Json.toJson(
        // Map each Person to themselves, adding the age
        people.map(person => Person.addAge(person))
      )
    )
  }
)
This method is much safer, otherwise you'll have to extract values from the array one by one and concatenate objects, which is very awkward. This will also allow you to easily handle errors when the JSON you receive is missing fields you're expecting.
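The deserialize-enrich-serialize flow above is language-agnostic; here it is sketched with Python's stdlib json module for reference (lookup_age is a hypothetical stand-in for however the age is actually determined):

```python
import json

def lookup_age(person):
    # Hypothetical age source keyed on first name.
    return {"Jane": "21", "Jack": "34"}.get(person["firstName"], "0")

def add_ages(raw):
    people = json.loads(raw)                 # deserialize the array
    for person in people:
        person["age"] = lookup_age(person)   # enrich each record
    return json.dumps(people)                # serialize back to JSON

raw = '[{"firstName":"Jane","lastName":"Smith"},{"firstName":"Jack","lastName":"Brown"}]'
print(add_ages(raw))
```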

Jackson Scala JSON Deserialization to case classes

I have a JSON which has following form:
{
    "inventory": [
        {
            "productType": "someProduct1",
            "details": {
                "productId": "Some_id",
                "description": "some description"
            }
        },
        {
            "productType": "someProduct2",
            "details": {
                "productId": "Some_id",
                "description": {"someKey": "somevalue"}
            }
        }
    ]
}
The case classes that I want the above json to deserialize look like following:
case class Inventory(products: List[Product])
case class Product(productType: String, details: ProductDetails)

abstract class ProductDetails
case class ProductDetailsSimple(productId: String, description: String) extends ProductDetails
case class ProductDetailsComplex(productId: String, description: Map[String, String]) extends ProductDetails
I am using jackson-scala module to deserialize the above JSON string as follows:
val mapper = new ObjectMapper() with ScalaObjectMapper
mapper.registerModule(DefaultScalaModule)
mapper.readValue(jsonBody, classOf[Inventory])
The error I get is as follows:
"Unexpected token (END_OBJECT), expected FIELD_NAME: missing property '#details' that is to contain type id (for class ProductDetails)\n at [Source: java.io.StringReader#12dfbabd; line: 9, column: 5]"
I have been through jackson documentation on Polymorphic deserialization and have tried combinations as mentioned but with no luck.
I would like to understand what I am doing wrong here, which needs correction with respect to deserialization using jackson module.
I think there are a few separate problems to address here, so I've listed three separate approaches.
TL;DR
Either use Jackson polymorphism correctly or, in your case, go to a simpler approach and remove the need for the polymorphism. See my code on github.
1. Custom Deserializer
Your formatted JSON is:
{ inventory:
    [ { productType: 'someProduct1',
        details:
          { productId: 'Some_id',
            description: 'some description' } },
      { productType: 'someProduct2',
        details:
          { productId: 'Some_id',
            description: { someKey: 'somevalue' } } } ] }
The field productType is misplaced, in my opinion, but if this format is a strict requirement then you could write your own deserializer that looks at the productType field and instantiates a different concrete class.
I don't think this would be the best solution, so I didn't write example code, but I like the Joda date-time package as a reference for custom serialization/deserialization.
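The idea behind such a custom deserializer is language-independent: read the productType discriminator, then construct a different concrete class. A plain-Python sketch of that dispatch (illustrative only; the class and registry names are made up, and this is not Jackson):

```python
import json

class SimpleDetails:
    def __init__(self, product_id, description):
        self.product_id, self.description = product_id, description

class ComplexDetails:
    def __init__(self, product_id, description):
        self.product_id, self.description = product_id, description

# Map discriminator values to concrete classes.
REGISTRY = {"someProduct1": SimpleDetails, "someProduct2": ComplexDetails}

def parse_product(obj):
    # Dispatch on the productType field, then build the chosen class
    # from the nested details object.
    cls = REGISTRY[obj["productType"]]
    details = obj["details"]
    return cls(details["productId"], details["description"])

doc = json.loads("""{"inventory": [
  {"productType": "someProduct1",
   "details": {"productId": "Some_id", "description": "some description"}},
  {"productType": "someProduct2",
   "details": {"productId": "Some_id", "description": {"someKey": "somevalue"}}}
]}""")
products = [parse_product(p) for p in doc["inventory"]]
print([type(p).__name__ for p in products])  # ['SimpleDetails', 'ComplexDetails']
```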
2. Jackson Polymorphism
You've separated Product from ProductDetails with a type field:
case class Product(productType:String,details:ProductDetails)
abstract class ProductDetails
I think you've confused how Jackson's polymorphic data type handling works and complicated your class design as a result.
Perhaps your business rules require that a product has a "type", in which case I'd name it "kind" or some other non-code label, and put it into what you've called ProductDetails.
But if "type" was included in an attempt to get type polymorphism working, then it isn't the right way.
I've included the below as a working example of Jackson polymorphism in Scala:
/**
 * The types here are close to the original question types but use
 * Jackson annotations to mark the polymorphic JSON treatment.
 */
import scala.Array
import com.fasterxml.jackson.annotation.JsonSubTypes.Type
import com.fasterxml.jackson.annotation.{JsonSubTypes, JsonTypeInfo}

@JsonTypeInfo(
  use = JsonTypeInfo.Id.NAME,
  include = JsonTypeInfo.As.PROPERTY,
  property = "type")
@JsonSubTypes(Array(
  new Type(value = classOf[ProductDetailsSimple], name = "simple"),
  new Type(value = classOf[ProductDetailsComplex], name = "complex")
))
abstract class Product

case class ProductDetailsSimple(productId: String, description: String) extends Product
case class ProductDetailsComplex(productId: String, description: Map[String, String]) extends Product

case class PolymorphicInventory(products: List[Product])
Note that I removed the Product vs ProductDetails distinction, so an Inventory now just has a list of Product. I left the names ProductDetailsSimple and ProductDetailsComplex, though I think they should be renamed.
Example usage:
val inv = PolymorphicInventory(
  List(
    ProductDetailsSimple(productId = "Some_id", description = "some description"),
    ProductDetailsComplex(productId = "Some_id", description = Map("someKey" -> "somevalue"))
  )
)
val s = jsonMapper.writerWithDefaultPrettyPrinter().writeValueAsString(inv)
println("Polymorphic Inventory as JSON: "+s)
Output:
Polymorphic Inventory as JSON: {
  "products" : [ {
    "type" : "simple",
    "productId" : "Some_id",
    "description" : "some description"
  }, {
    "type" : "complex",
    "productId" : "Some_id",
    "description" : {
      "someKey" : "somevalue"
    }
  } ]
}
3. Remove the polymorphism
I suggest that polymorphism in this case isn't needed at all, and that the error is in trying to make "description" either a single string or a key/value map when they are really fields with distinct intentions.
Perhaps there is a data legacy issue involved (in which case see the custom deser suggestion), but if the data is in your control, I vote for "go simpler":
case class Product(productId: String,
                   description: String = "",
                   attributes: Map[String, String] = Map.empty)

case class PlainInventory(products: List[Product])
It's more "Scala-rific" to use Option to indicate the absence of a value, so:
case class Product(productId: String,
                   description: Option[String] = None,
                   attributes: Option[Map[String, String]] = None)
Example usage:
val inv = PlainInventory(
  List(
    Product(productId = "Some_id", description = Some("some description")),
    Product(productId = "Some_id", attributes = Some(Map("someKey" -> "somevalue")))
  )
)
val s = jsonMapper.writerWithDefaultPrettyPrinter().writeValueAsString(inv)
println("Plain Inventory as JSON: "+s)
Output:
Plain Inventory as JSON: {
  "products" : [ {
    "productId" : "Some_id",
    "description" : "some description"
  }, {
    "productId" : "Some_id",
    "attributes" : {
      "someKey" : "somevalue"
    }
  } ]
}
Working minimal code on github.