Let's assume I have a case class with the following setup:
case class Place(id:java.util.UUID, name:String)
I can write a (working!) serializer for this type as follows:
import java.util.UUID
import org.json4s._

class PlaceSerializer extends CustomSerializer[Place](format => (
  {
    case JObject(JField("id", JString(s)) :: JField("name", JString(x)) :: Nil) =>
      Place(UUID.fromString(s), x)
  },
  {
    case x: Place =>
      JObject(
        JField("id", JString(x.id.toString)) ::
        JField("name", JString(x.name)) :: Nil)
  }
))
But assuming my case class eventually has many more fields, this leads to me enumerating the entire structure of the object in the AST, producing something very verbose just to encode primitives.
json4s appears to have field serializers that can act only on specific fields, with boilerplate methods included to easily transform names and discard fields. These, however, have the following signature for their serialize and deserialize partial functions:
case class FieldSerializer[A: Manifest](
  serializer: PartialFunction[(String, Any), Option[(String, Any)]] = Map(),
  deserializer: PartialFunction[JField, JField] = Map()
)
Since JField (the type that represents a key -> value pair from the JSON) is its own type and not a subclass of JValue, how can I combine these two kinds of serializers to properly decode the id key, by its name, into a UUID, while keeping the default handling of the other fields (which are primitive datatypes)?
Essentially I'd like a format chain that understands the field within Place is a UUID, without having to specify AST structure for all the fields that DefaultFormats can already handle.
What I'm looking for specifically is to mimic a pattern similar to the JSONEncoder and JSONDecoder interfaces in python, which can use the key name as well as value type to determine how to handle the marshalling for the field.
There is now a UUID serializer provided in the extras package of json4s. It will most likely be available in version 3.2.11 (which has not been released as of this writing).
You'll be able to do something like this:
import org.json4s.ext.JavaTypesSerializers
implicit val json4sFormats = Serialization.formats(NoTypeHints) ++ JavaTypesSerializers.all
This was taken from the tests for this feature's PR.
The trick is to not write a serializer for your type, but for the type that you're using inside it (in this case java.util.UUID).
Then you can add that serializer to the toolbox, and from then on any type using UUID will work exactly like types whose fields are already supported by the default formats:
case object UUIDSerialiser extends CustomSerializer[UUID](format => (
  {
    case JString(s) => UUID.fromString(s)
    case JNull => null
  },
  {
    case x: UUID => JString(x.toString)
  }
))
implicit val json4sFormats = Serialization.formats(NoTypeHints) + UUIDSerialiser
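For example, a minimal round-trip sketch (assuming the jackson backend and the Place case class from the question): with json4sFormats in scope, the UUID field and the primitive fields are all handled without any hand-written AST code.
import java.util.UUID
import org.json4s.jackson.Serialization.{read, write}
// Place(id: java.util.UUID, name: String) as defined in the question
val json = write(Place(UUID.randomUUID(), "Home"))   // {"id":"<uuid>","name":"Home"}
val place = read[Place](json)                        // Place(<uuid>,Home)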
Update: link to the PR.
Update 2: the PR was merged, and now, in the case of UUID, you can use:
import org.json4s.ext.JavaTypesSerializers
implicit val json4sFormats = Serialization.formats(NoTypeHints) ++ JavaTypesSerializers.all
Related
I am using an external service and I am expecting to receive a json containing a field with three kinds of value:
A Double
null
The field could be absent
I need to deserialize the json into a case class and somewhere else in the code I need to serialize it to a json with the same fields.
I have an implicit reads and write:
import play.api.libs.json._
import play.api.libs.functional.syntax._

implicit lazy val aReads: Reads[A] =
  (__ \ "foo").readNullable[Double].map(A.apply _)

implicit lazy val aWrites: OWrites[A] =
  (__ \ "foo").write[Option[Double]].contramap(_.foo)
and the case class:
case class A(
foo: Option[Double]
)
As you can imagine, the problem is that I am not able to "catch" when the value is absent, and if I use "foo".writeNullable[Double] as the Writes I am not able to catch when it's null (it will always be absent). How can I solve this issue?
What you need is actually a data type which reflects three states:
Present
Null
Non Existing
There is actually a nice encoding of this here, which is semantically equivalent to:
sealed trait Tristate[+A]
case class Present[+A](a: A) extends Tristate[A]
case object Absent extends Tristate[Nothing] // this can represent your "null" state
case object NonExisting extends Tristate[Nothing]
The small library linked has nice combinators over Tristate such as map, flatMap, filter, etc.
And then, you can derive a Play decoder/encoder which puts the object into the correct state and serializes it appropriately.
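For instance, a minimal hand-rolled sketch (this is not the linked library's actual API) of a Play Reads that distinguishes the three states for a top-level field:
import play.api.libs.json._

def tristateFieldReads[A](field: String)(implicit r: Reads[A]): Reads[Tristate[A]] =
  Reads {
    case obj: JsObject =>
      obj.value.get(field) match {
        case None         => JsSuccess(NonExisting)             // key absent from the JSON
        case Some(JsNull) => JsSuccess(Absent)                  // key present, value is null
        case Some(other)  => other.validate[A].map(Present(_))  // key present with a value
      }
    case _ => JsError("expected a JSON object")
  }

// {"foo": 1.5} -> Present(1.5), {"foo": null} -> Absent, {} -> NonExisting
Json.parse("""{"foo": null}""").validate(tristateFieldReads[Double]("foo"))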
How can I configure spray-json's parsing options, similarly to Jackson's parsing features?
For example, I am parsing a json that has a field that my case class has not, and it is breaking:
spray.json.DeserializationException: Object is missing required member 'myfield'
UPDATE:
A simple example:
case class MyClass(a: String, b: Long);
and try to parse an incomplete json like
val data = """{"a": "hi"}"""
with a spray-json format like:
jsonFormat2(MyClass.apply)
// ...
data.parseJson.convertTo[MyClass]
(simplified code).
But the question goes further, I want to ask about configuration options like in other parsers. More examples:
Be able to ignore fields that exist in the JSON but not in the case class.
Ways of managing nulls or nonexistent values.
etc.
spray-json allows you to define custom formats like so:
import spray.json._
import DefaultJsonProtocol._

case class Foo(a: String, b: Int)

implicit object FooJsonFormat extends RootJsonFormat[Foo] {
  override def read(json: JsValue): Foo = {
    json.asJsObject.getFields("name", "id") match {
      case Seq(JsString(name), id) =>
        Foo(name, id.convertTo[Int])
    }
  }

  // Write the fields explicitly; delegating to obj.toJson here would recurse forever.
  override def write(obj: Foo): JsValue =
    JsObject("name" -> JsString(obj.a), "id" -> JsNumber(obj.b))
}
This allows you to parse any arbitrary payload and pull out the fields "name" and "id" - other fields are ignored. If those fields are not guaranteed you can add something like:
case Seq(JsString(name), JsNull) =>
Foo(name, 0)
You should look at what's available in JsValue.scala - in particular JsArray may come in handy if you're getting payloads with anonymous arrays (i.e. the root is [{...}] instead of {"field":"value"...})
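For example, a sketch of handling a root-level array payload, assuming the FooJsonFormat above and spray-json's DefaultJsonProtocol are in scope:
import spray.json._
import DefaultJsonProtocol._

val payload = """[{"name": "a", "id": 1}, {"name": "b", "id": 2}]"""

payload.parseJson match {
  case JsArray(elements) => elements.map(_.convertTo[Foo])  // Vector(Foo(a,1), Foo(b,2))
  case other             => deserializationError(s"Expected a JSON array, got: $other")
}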
spray-json doesn't support default parameters, so you cannot have a case class like
case class MyClass(a: String, b: Int = 0)
and then parse json like {"a":"foo"}
However, if you make the second parameter an Option, then it works:
import spray.json._
case class MyClass(a: String, b: Option[Int] = None)
object MyProtocol extends DefaultJsonProtocol {
implicit val f = jsonFormat2(MyClass)
}
import MyProtocol.f
val mc1 = MyClass("foo", Some(10))
val strJson = mc1.toJson.toString
val strJson2 = """{"a": "foo"}"""
val mc2 = strJson2.parseJson.convertTo[MyClass]
println(mc2)
I'm working with JSON4S and it handles missing fields correctly when the corresponding object's field is an Option.
I'm using
implicit val formats =
Serialization.formats(NoTypeHints) +
new org.json4s.ext.EnumNameSerializer(E)
and read[T](json).
It's perfect, except for one thing: I'd like to distinguish between missing and null fields. What I'd like, for each field of my T, is something like a three-state type instead of Option, which would be either Some(t), Missing or Nullified, analogous to how Option works. I have no problem defining such a structure, but unfortunately I'm not so familiar with how JSON4S works, or how I could (maybe) "intercept" the parsing of a field to achieve such value extraction.
As an alternative, it would also be great if the parser set the corresponding field of T to null when the field is field: null, instead of setting it to None. This would not feel very Scala-ish, though, I think.
You should probably implement a custom serializer for T. I would use the following format because it allows for more flexibility and order independent input:
import org.json4s.CustomSerializer
import org.json4s.JsonAST._
import org.json4s.JsonDSL._
case class Animal(name: String, nrLegs: Int)
// `format` is marked implicit so that .extract can resolve the Formats it needs.
class AnimalSerializer extends CustomSerializer[Animal](implicit format => (
  {
    case jsonObj: JObject =>
      val name = jsonObj \ "name"     // JString
      val nrLegs = jsonObj \ "nrLegs" // JInt
      Animal(name.extract[String], nrLegs.extract[Int])
  },
  {
    case animal: Animal =>
      ("name" -> animal.name) ~
      ("nrLegs" -> animal.nrLegs)
  }
))
To handle null/empty values, take a look at the JValue trait. For null values you should match on JNull, and for non-present values on JNothing.
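For example, a small sketch with the jackson parser showing the difference:
import org.json4s._
import org.json4s.jackson.JsonMethods.parse

val withNull    = parse("""{"name": null, "nrLegs": 4}""")
val withMissing = parse("""{"nrLegs": 4}""")

withNull \ "name"     // JNull    -> key present, explicitly null
withMissing \ "name"  // JNothing -> key absent from the JSON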
The validate method on request.body matches the attribute names and value types of the JSON object against those defined in the model. Now if I add an extra attribute to the JSON object and try to validate it, it passes as a JsSuccess when it shouldn't.
{
"Name": "Bob",
"Age": 20,
"Random_Field_Not_Defined_in_Models": "Test"
}
My Person class is defined as follows:
case class Person(name: String, age: Int)
I'm assuming you've been using the built-in Reads[T] or Format[T] converters that Play gives you via Json.reads[T], e.g.:
import play.api.libs.json._
val standardReads = Json.reads[Person]
While these are super-handy, if you need additional validation, you'll have to define a custom Reads[Person] class; but fortunately we can still leverage the built-in JSON-to-case-class macro to do the basic checking and conversion, and then add an extra layer of custom checks if things seem OK:
val standardReads = Json.reads[Person]
val strictReads = new Reads[Person] {
  val expectedKeys = Set("name", "age")

  def reads(jsv: JsValue): JsResult[Person] = {
    standardReads.reads(jsv).flatMap { person =>
      checkUnwantedKeys(jsv, person)
    }
  }

  private def checkUnwantedKeys(jsv: JsValue, p: Person): JsResult[Person] = {
    val obj = jsv.asInstanceOf[JsObject]
    val keys = obj.keys
    val unwanted = keys.diff(expectedKeys)
    if (unwanted.isEmpty) {
      JsSuccess(p)
    } else {
      JsError(s"Keys: ${unwanted.mkString(",")} found in the incoming JSON")
    }
  }
}
Note how we utilize standardReads first, to make sure we're dealing with something that can be converted to a Person. No need to reinvent the wheel here.
We use flatMap to effectively short-circuit the conversion if we get a JsError from standardReads - i.e. we only call checkUnwantedKeys if needed.
checkUnwantedKeys just uses the fact that a JsObject is really just a wrapper around a Map, so we can easily check the names of the keys against a whitelist.
Note that you could also write that flatMap using a for-comprehension, which starts to look a lot cleaner if you need even more checking stages:
for {
p <- standardReads.reads(jsv)
r1 <- checkUnexpectedFields(jsv, p)
r2 <- checkSomeOtherStuff(jsv, r1)
r3 <- checkEvenMoreStuff(jsv, r2)
} yield r3
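For example, a quick usage sketch of strictReads (the extra key name here is just an illustrative stand-in):
val good = Json.parse("""{"name": "Bob", "age": 20}""")
val bad  = Json.parse("""{"name": "Bob", "age": 20, "random_field": "Test"}""")

strictReads.reads(good) // JsSuccess(Person(Bob,20),)
strictReads.reads(bad)  // JsError: Keys: random_field found in the incoming JSON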
If you want to avoid too much boilerplate it is possible to make a more generic solution using a little bit of scala reflection:
import play.api.libs.json._
import scala.reflect.runtime.universe._
def checkedReads[T](underlyingReads: Reads[T])(implicit typeTag: TypeTag[T]): Reads[T] = new Reads[T] {
  def classFields[T: TypeTag]: Set[String] = typeOf[T].members.collect {
    case m: MethodSymbol if m.isCaseAccessor => m.name.decodedName.toString
  }.toSet

  def reads(json: JsValue): JsResult[T] = {
    val caseClassFields = classFields[T]
    json match {
      case JsObject(fields) if (fields.keySet -- caseClassFields).nonEmpty =>
        JsError(s"Unexpected fields provided: ${(fields.keySet -- caseClassFields).mkString(", ")}")
      case _ => underlyingReads.reads(json)
    }
  }
}
Then you can specify your reads instances as:
implicit val reads = checkedReads(Json.reads[Person])
This leverages a fair bit of Scala type magic and also the reflection library (that lets you look at fields on classes).
Rather than relying on a fixed set of fields the classFields method gets all of the fields dynamically for the case class (type param T). It looks at all of the members and collects only the case class accessors (otherwise we'd pick up methods like toString). It returns a Set[String] of field names.
You'll notice that the checkedReads takes an implicit TypeTag[T]. This is supplied by the compiler at compile time and used by the typeOf method.
The remaining code is fairly self explanatory. If the incoming json matches our first case (it is a JsObject and there are fields not on the case class) then we return a JsError. Otherwise we pass it on to the underlying reader.
Consider this example using Play's JSON API (play.api.libs.json):
case class FooJson(
// lots of other fields omitted
location: Option[LocationJson]
)
object FooJson {
implicit val writes = Json.writes[FooJson]
}
and
case class LocationJson(latitude: Double, longitude: Double)
object LocationJson {
implicit val writes = Json.writes[LocationJson]
}
If location is None, the resulting JSON won't have a location field at all. This is fine and understandable. But what if, for some reason (say, to make my API more self-documenting), I wanted to explicitly output null in the JSON?
{
"location": null
}
I also tried defining the field as location: LocationJson and passing option.orNull to it, but it does not work (scala.MatchError: null at play.api.libs.json.OWrites$$anon$2.writes). For non-custom types such as String or Double, this approach would produce null in JSON output.
So, while using Json.writes[FooJson] as shown above (or something equally simple, i.e. not having to write a custom Writes implementation), is there a clean way to include nulls in JSON?
What I'm asking is analogous to JsonInclude.Include.ALWAYS in the Jackson library (also Jackson's default behaviour). Similarly in Gson this would be trivial
(new GsonBuilder().serializeNulls().create()).
Play 2.4.4
Greg Methvin, a Play committer, wrote this answer to me in a related GitHub issue:
The JSON macros only support one way of encoding optional values,
which is to omit None values from the JSON. This is not a bug but
rather a limitation of the implementation. If you want to include
nulls you're unfortunately going to have to implement your own Writes.
I do think we should try to provide more configurability for the
macros though.
In this case, I'll let Play exclude the field when the value is None, even if it slightly sacrifices API consistency and self-documentation. It's still such a minor thing (in this particular API) that it doesn't warrant uglifying the code as much as a custom Writes would for a case class with a dozen values.
I'm hoping they do make this more configurable in future Play versions.
Hello from the future.
As of Play 2.7, a fairly simple solution was introduced for the automated JSON codecs. We can bring an appropriate implicit JsonConfiguration into scope for the Format/Reads/Writes macros. The following configuration writes nulls for empty Options instead of omitting those fields entirely.
import play.api.libs.json._
implicit val config = JsonConfiguration(optionHandlers = OptionHandlers.WritesNull)
implicit val residentWrites = Json.writes[Resident]
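For example, assuming Resident is declared (before the writes above) as case class Resident(name: String, location: Option[String]), a None is now written as an explicit null:
Json.toJson(Resident("Alice", None)) // {"name":"Alice","location":null}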
Reference
Here's a way to do it:
object MyWrites extends DefaultWrites {
  override def OptionWrites[T](implicit fmt: Writes[T]): Writes[Option[T]] = new Writes[Option[T]] {
    override def writes(o: Option[T]): JsValue = o match {
      case Some(a) => Json.toJson(a)(fmt)
      case None    => JsNull
    }
  }
}
This overrides the default implementation, which does not create an element at all. I used it with your sample code:
case class FooJson(
// ...
location: Option[LocationJson]
)
case class LocationJson(latitude: Double, longitude: Double)
object LocationJson {
implicit val writes = Json.writes[LocationJson]
}
implicit val fooJsonWriter: Writes[FooJson] = new Writes[FooJson] {
  override def writes(o: FooJson): JsValue = {
    JsObject(Seq(
      "location" -> Json.toJson(o.location)
      // Additional fields go here.
    ))
  }
}
Json.toJson(FooJson(None))
And got this result res0: play.api.libs.json.JsValue = {"location":null}.
If we have null values, we have to declare the member as an Option in the case class, which resolves the issue:
import play.api.libs.json._

case class Response(
  name: String,
  age: Option[Int]
)

object Response {
  implicit val format = Json.format[Response]
}
Here the Option is the answer for us: if the JSON value for age comes in as null, the format handles it for us and reads it as None.
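For example, a sketch of the resulting behaviour: both a null age and a missing age are read as None.
Json.parse("""{"name": "Bob", "age": null}""").as[Response] // Response(Bob,None)
Json.parse("""{"name": "Bob"}""").as[Response]              // Response(Bob,None)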