For example, suppose I have
case class Test(a: String, b: String)
...
implicit val testFormat = jsonFormat2(Test.apply)
and a JSON string with an extra c field:
val test = "{\"a\": \"A\", \"b\": \"B\", \"c\": \"C\"}"
then I want to find a way (config/param/whatever) to make the following line throw an exception:
test.parseJson.convertTo[Test]
It's very hard to work this out from reading the source code and GitHub documentation.
I didn't see anything in the library that provides that functionality, so I created a wrapper that does a quick check on the number of fields present before delegating the read call to the supplied formatter.
import spray.json._

case class StrictRootFormat[T](format: RootJsonFormat[T], size: Int) extends RootJsonFormat[T] {
  // Reject objects that carry more fields than the wrapped format expects,
  // then delegate to the underlying format.
  def read(json: JsValue): T = {
    if (json.asJsObject.fields.size > size) deserializationError("JSON has too many fields: \n " + json.toString())
    else format.read(json)
  }
  def write(obj: T): JsValue = format.write(obj)
}
Usage:
implicit val testFormat = StrictRootFormat(jsonFormat2(Test.apply), 2)
You could enhance the read implementation so that you don't need to supply the "size" argument.
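For instance, here is a rough sketch of such an enhancement (my own illustration, not part of spray-json; StrictFieldsFormat is a hypothetical name). It derives the expected field names from the case class via runtime reflection instead of taking a hard-coded size, assuming a TypeTag is available for T:
import spray.json._
import scala.reflect.runtime.universe._

case class StrictFieldsFormat[T: TypeTag](format: RootJsonFormat[T]) extends RootJsonFormat[T] {
  // collect the case-class accessor names, e.g. Set("a", "b") for Test
  private val expectedFields: Set[String] =
    typeOf[T].members.collect {
      case m: MethodSymbol if m.isCaseAccessor => m.name.decodedName.toString
    }.toSet

  def read(json: JsValue): T = {
    val unexpected = json.asJsObject.fields.keySet -- expectedFields
    if (unexpected.nonEmpty) deserializationError("JSON has unexpected fields: " + unexpected.mkString(", "))
    else format.read(json)
  }

  def write(obj: T): JsValue = format.write(obj)
}

// usage: implicit val testFormat = StrictFieldsFormat(jsonFormat2(Test.apply))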
Related
I'm experimenting with Scala Play and don't understand why serializing null JSON values doesn't work out of the box. I wrote a simple class to encapsulate data in a response (ServiceResponse), which has a parametric, nullable data field, and I also defined an implicit Writes[Null]. What am I missing? The compiler suggests writing a Writes[ServiceResponse[Null]], which works but feels cumbersome, and I wonder whether there is a more concise way to solve the problem. Below are the code and the error.
// <!-- EDIT -->
// The data I want to transfer
case class Person (name: String, age: Int)
object Person {
def apply(name: String, age: Int): Person = new Person(name, age)
implicit val reader: Reads[Person] = Json.reads[Person]
implicit val writer: OWrites[Person] = Json.writes[Person]
}
// <!-- END EDIT -->
case class ServiceResponse[T] (data: T, errorMessage: String, debugMessage: String)
object ServiceResponse {
def apply[T](data: T, errorMessage: String, debugMessage: String): ServiceResponse[T] =
new ServiceResponse(data, errorMessage, debugMessage)
implicit def NullWriter: Writes[Null] = (_: Null) => JsNull
implicit def writer[T](implicit fmt: Writes[T]) = Json.writes[ServiceResponse[T]]
}
// <!-- EDIT -->
// Given the above code I would expect
// the call ` Json.toJson(ServiceResponse(null, "Error in deserializing json", null))`
// to produce the following Json: `{ "data": null, "errorMessage": "Error in deserializing json", "debugMessage": null }`
No Json serializer found for type models.ServiceResponse[Null]. Try to implement an implicit Writes or Format for this type.
In C:\...\EchoController.scala:19
15 val wrapAndEcho = Action(parse.json) {
16 request =>
17 request.body.validate[Person] match {
18 case JsSuccess(p, _) => Ok(Json.toJson(echoService.wrapEcho(p)))
19 case JsError(errors) => BadRequest(Json.toJson(ServiceResponse(null,
20 "Error in deserializing json", null)))
21 }
22 }
EDIT
I tried using Option instead (a more idiomatic Scala solution, indeed), but the structure of the code is the same and it reports the same error:
object ServiceResponse {
def apply[T](data: Option[T], errorMessage: Option[String], debugMessage: Option[String]): ServiceResponse[T] =
new ServiceResponse(data, errorMessage, debugMessage)
implicit def optionWriter[T](implicit fmt: Writes[T]) = new Writes[Option[T]] {
override def writes(o: Option[T]): JsValue = o match {
case Some(value) => fmt.writes(value)
case None => JsNull
}
}
implicit def writer[T](implicit fmt: Writes[Option[T]]) = Json.writes[ServiceResponse[T]]
}
I think I need some structural change, but I can't figure out what. I also tried changing the type of fmt to Writes[T] (T being the only truly variable type) and got a new error:
diverging implicit expansion for type play.api.libs.json.Writes[T1]
[error] starting with object StringWrites in trait DefaultWrites
[error] case JsError(errors) => BadRequest(Json.toJson(ServiceResponse(None,
...
I thought I had found a reasonable solution, but I'm still getting an error in the expansion of the implicit :(. I really need a hint.
object ServiceResponse {
def apply[T](data: Option[T], errorMessage: Option[String], debugMessage: Option[String]): ServiceResponse[T] =
new ServiceResponse(data, errorMessage, debugMessage)
implicit def writer[T](implicit fmt: Writes[T]): OWrites[ServiceResponse[T]] = (o: ServiceResponse[T]) => JsObject(
Seq(
("data", o.data.map(fmt.writes).getOrElse(JsNull)),
("errorMessage", o.errorMessage.map(JsString).getOrElse(JsNull)),
("debugMessage", o.debugMessage.map(JsString).getOrElse(JsNull))
)
)
}
NOTE
I think I've got the problem. Since null is a possible value for an instance of Person, I was expecting it to be handled by Writes[Person]. The fact is that Writes is defined as invariant in its type parameter (it's trait Writes[A], not trait Writes[+A]), so Writes[Null] does not match the definition of the implicit parameter, as it would if Writes were defined as Writes[+A] (which in turn would be wrong, as it would violate the Liskov substitution principle; it could have been Writes[-A], but that would not have solved the problem either, since we are trying to use Null, a subtype of Person). Summing up: there is no shorter way to handle a ServiceResponse with a null data field than writing a specific implementation of Writes[ServiceResponse[Null]], which is neither a supertype nor a subtype of Writes[ServiceResponse[Person]]. (One approach could be a union type, but I think that's overkill.) I'm 90% sure of my reasoning; correct me if I'm wrong :)
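For reference, a minimal sketch of that specific Writes[ServiceResponse[Null]] (my own illustration, assuming the original definition with a plain T data field and String messages; nullDataWriter is a hypothetical name):
implicit val nullDataWriter: Writes[ServiceResponse[Null]] = (o: ServiceResponse[Null]) => JsObject(
  Seq(
    ("data", JsNull),
    ("errorMessage", Option(o.errorMessage).map(JsString(_)).getOrElse(JsNull)),
    ("debugMessage", Option(o.debugMessage).map(JsString(_)).getOrElse(JsNull))
  )
)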
To explain it better: the ServiceResponse case class takes a type parameter T, which can be anything. play-json can only provide JSON formats for standard Scala types; for custom types, you need to define the JSON formatter yourself.
import play.api.libs.json._
case class ServiceResponse[T](
data: T,
errorMessage: Option[String],
debugMessage: Option[String]
)
def format[T](implicit format: Format[T]) = Json.format[ServiceResponse[T]]
implicit val formatString = format[String]
val serviceResponseStr = ServiceResponse[String]("Hello", None, None)
val serviceResponseJsValue = Json.toJson(serviceResponseStr)
val fromString =
Json.fromJson[ServiceResponse[String]](serviceResponseJsValue)
serviceResponseJsValue
fromString
serviceResponseJsValue.toString
Json.parse(serviceResponseJsValue.toString).as[ServiceResponse[String]]
In the above example, you can see that I wanted to create a ServiceResponse whose data is a String, so I implement formatString, which is needed by Json.toJson as well as Json.fromJson to have the readers and writers available for the type T. Since T is String, a standard type, play-json resolves its format by default.
I have added a Scastie snippet, which should help you understand this better, and you can play around with it:
Scastie snippet: https://scastie.scala-lang.org/shankarshastri/spqJ1FQLS7ym1vm1ugDFEA/8.js
The above explanation covers the use case where, in the case of None, the key won't even be present in the JSON; but the question clearly calls for having key: null when data is not found.
import play.api.libs.json._
implicit val config =
JsonConfiguration(optionHandlers = OptionHandlers.WritesNull)
case class ServiceResponse[T](
data: Option[T],
errorMessage: Option[String],
debugMessage: Option[String]
)
def format[T](implicit format: Format[T]) = Json.format[ServiceResponse[T]]
implicit val formatString = format[String]
val serviceResponseStr = ServiceResponse[String](None, None, None)
val serviceResponseJsValue = Json.toJson(serviceResponseStr)
val fromString =
Json.fromJson[ServiceResponse[String]](serviceResponseJsValue)
serviceResponseJsValue
fromString
serviceResponseJsValue.toString
Json.parse(serviceResponseJsValue.toString).as[ServiceResponse[String]]
Bringing the line below into scope ensures that nulls are written for optional fields.
implicit val config =
JsonConfiguration(optionHandlers = OptionHandlers.WritesNull)
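With that configuration in scope, the macro-generated writer should render None as an explicit null rather than dropping the key, so (assuming the ServiceResponse[String] example above) Json.toJson(ServiceResponse[String](None, None, None)) should produce {"data":null,"errorMessage":null,"debugMessage":null} instead of {}.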
Scastie snippet: https://scastie.scala-lang.org/shankarshastri/rCUmEqXLTeuGqRNG6PPLpQ/6.js
How can I configure spray-json's parsing options, similarly to Jackson's parsing features?
For example, I am parsing JSON that does not have a field that my case class has, and it breaks:
spray.json.DeserializationException: Object is missing required member 'myfield'
UPDATE:
A simple example:
case class MyClass(a: String, b: Long)
and try to parse an incomplete JSON string like
val data = "{\"a\": \"hi\"}"
with a spray-json format like:
jsonFormat2(MyClass.apply)
// ...
data.parseJson.convertTo[MyClass]
(simplified code).
But the question goes further: I want to ask about configuration options like in other parsers. More examples:
Be able to ignore fields that exist in the JSON but not in the case class.
Ways of managing nulls or nonexistent values.
etc.
spray-json allows you to define custom formats like so:
import spray.json._
import DefaultJsonProtocol._

case class Foo(a: String, b: Int)

implicit object FooJsonFormat extends RootJsonFormat[Foo] {
  override def read(json: JsValue): Foo = {
    // pull out only the "name" and "id" members; any other fields are ignored
    json.asJsObject.getFields("name", "id") match {
      case Seq(JsString(name), id) =>
        Foo(name, id.convertTo[Int])
    }
  }

  // write the object back out explicitly (delegating to obj.toJson here
  // would recurse straight back into this format)
  override def write(obj: Foo): JsValue =
    JsObject("name" -> JsString(obj.a), "id" -> JsNumber(obj.b))
}
This allows you to parse any arbitrary payload and pull out the fields "name" and "id" - other fields are ignored. If those fields are not guaranteed you can add something like:
case Seq(JsString(name), JsNull) =>
Foo(name, 0)
You should look at what's available in JsValue.scala - in particular JsArray may come in handy if you're getting payloads with anonymous arrays (i.e. the root is [{...}] instead of {"field":"value"...})
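As a rough sketch of that case (my own illustration, assuming the Foo format above is in scope), a root-level array can be handled by matching on JsArray:
val payload = """[{"name": "a", "id": 1}, {"name": "b", "id": 2}]"""
val foos: List[Foo] = payload.parseJson match {
  case JsArray(elements) => elements.map(_.convertTo[Foo]).toList
  case other             => deserializationError("Expected a JSON array but got: " + other)
}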
spray-json doesn't support default parameters, so you cannot have a case class like
case class MyClass(a: String, b: Int = 0)
and then parse JSON like {"a":"foo"}.
However, if you make the second parameter an Option, then it works:
import spray.json._
case class MyClass(a: String, b: Option[Int] = None)
object MyProtocol extends DefaultJsonProtocol {
implicit val f = jsonFormat2(MyClass.apply)
}
import MyProtocol.f
val mc1 = MyClass("foo", Some(10))
val strJson = mc1.toJson.toString
val strJson2 = """{"a": "foo"}"""
val mc2 = strJson2.parseJson.convertTo[MyClass]
println(mc2)
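The println should show MyClass(foo,None): spray-json treats a missing member as None when the corresponding case-class field is an Option.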
I have a lot of different external JSON entities that I want to parse into different internal case classes via json4s (Scala). Everything works fine via the extract function from json4s. I have implemented a parse function which takes a type and a JSON string and parses the string into that type / case class. To map the correct JSON string to the correct case class, I have implemented a pattern-matching function which looks like this:
entityName match {
case "entity1" => JsonParser.parse[Entity1](jsonString)
case "entity2" => JsonParser.parse[Entity2](jsonString)
....
I don't like the repetition here and would like to do this mapping via a map like this:
val mapping = Map(
"entity1" -> Entity1,
"entity2" -> Entity2
....
With this map in place, I could implement the JsonParser.parse function only once, like this:
JsonParser.parse[mapping(entityName)](jsonString)
This is not working, because the map refers to the companion object and not to the class type. I also tried classOf[Entity1], but that is not working either. Is there a way to do this?
Thanks!
The way you want your JsonParser.parse to work is not possible in Scala. Scala is a strongly and statically typed language, which means the compiler must know the types of values at compile time to verify that you access only valid fields and methods on them and/or pass them as valid parameters to methods. Assuming your classes are
case class Entity1(value:Int, unique1:Int)
case class Entity2(value:String, unique2:String)
and you write
val parsed = JsonParser.parse[mapping("entity1")](jsonString)
how could the compiler know the type of parsed, so as to know the type of parsed.value, or to know that parsed.unique1 is a valid field while parsed.unique2 is not? The best type the compiler could assign to such a parsed is something very generic like Any. Of course you can downcast that Any to the specific type later, but then you still have to spell out that type explicitly in the asInstanceOf, which somewhat defeats the whole purpose. Still, if returning Any is OK for you, you may try something like this:
import org.json4s.jackson.JsonMethods
implicit val formats = org.json4s.DefaultFormats // or whatever Formats you actually need
val typeMap: Map[String, scala.reflect.Manifest[_]] = Map(
"String" -> implicitly[scala.reflect.Manifest[String]],
"Int" -> implicitly[scala.reflect.Manifest[Int]]
)
def parseAny(typeName: String, jsonString: String): Any = {
val jValue = JsonMethods.parse(jsonString)
jValue.extract(formats, typeMap(typeName))
}
and then do something like this:
def testParseByTypeName(typeName: String, jsonString: String): Unit = {
try {
val parsed = parseAny(typeName, jsonString)
println(s"parsed by name $typeName => ${parsed.getClass} - '$parsed'")
} catch {
case e: Exception => println(e)
}
}
def test() = {
testParseByTypeName("String", "\"abc\"")
testParseByTypeName("Int", "123")
}
P.S. If your entityName doesn't come from the outside (i.e. you don't have to analyze data to find out the actual type), you don't actually need it at all. It is enough to use the type parameter (without any need for match/case), such as:
def parse[T](jsonString: String)(implicit mf: scala.reflect.Manifest[T]): T = {
val jValue = JsonMethods.parse(jsonString)
jValue.extract[T]
}
def testParse[T](prefix: String, jsonString: String)(implicit mf: scala.reflect.Manifest[T]): Unit = {
try {
val parsed = parse[T](jsonString)
println(s"$prefix => ${parsed.getClass} - '$parsed'")
} catch {
case e: Exception => println(e)
}
}
def test() = {
testParse[String]("parse String", "\"abc\"")
testParse[Int]("parse Int", "123")
}
Following the idea from @SergGr, here is a snippet to paste into the Ammonite REPL:
{
import $ivy.`org.json4s::json4s-native:3.6.0-M2`
import org.json4s.native.JsonMethods.parse
import org.json4s.DefaultFormats
import org.json4s.JValue
case class Entity1(name : String, value : Int)
case class Entity2(name : String, value : Long)
implicit val formats = DefaultFormats
def extract[T](input : JValue)(implicit m : Manifest[T]) = input.extract[T]
val mapping: Map[String, Manifest[_]] = Map(
"entity1" -> implicitly[Manifest[Entity1]],
"entity2" -> implicitly[Manifest[Entity2]]
)
val input = parse(""" { "name" : "abu", "value" : 1 } """)
extract(input)(mapping("entity1")) //Entity1("abu", 1)
extract(input)(mapping("entity2")) //Entity2("abu", 1L)
}
The validate method on request.body matches the attribute names and value types of the JSON object against those defined in the model. Now, if I add an extra attribute to the JSON object and try to validate it, it passes as a JsSuccess when it shouldn't.
{
"Name": "Bob",
"Age": 20,
"Random_Field_Not_Defined_in_Models": "Test"
}
My Person class is defined as follows:
case class Person(name: String, age: Int)
I'm assuming you've been using the built-in Reads[T] or Format[T] converters that Play gives you via Json.reads[T], e.g.:
import play.api.libs.json._
val standardReads = Json.reads[Person]
While these are super-handy, if you need additional validation, you'll have to define a custom Reads[Person] class; but fortunately we can still leverage the built-in JSON-to-case-class macro to do the basic checking and conversion, and then add an extra layer of custom checks if things seem OK:
val standardReads = Json.reads[Person]
val strictReads = new Reads[Person] {
val expectedKeys = Set("name", "age")
def reads(jsv:JsValue):JsResult[Person] = {
standardReads.reads(jsv).flatMap { person =>
checkUnwantedKeys(jsv, person)
}
}
private def checkUnwantedKeys(jsv:JsValue, p:Person):JsResult[Person] = {
val obj = jsv.asInstanceOf[JsObject]
val keys = obj.keys
val unwanted = keys.diff(expectedKeys)
if (unwanted.isEmpty) {
JsSuccess(p)
} else {
JsError(s"Keys: ${unwanted.mkString(",")} found in the incoming JSON")
}
}
}
Note how we utilize standardReads first, to make sure we're dealing with something that can be converted to a Person. No need to reinvent the wheel here.
We use flatMap to effectively short-circuit the conversion if we get a JsError from standardReads - i.e. we only call checkUnwantedKeys if needed.
checkUnwantedKeys just uses the fact that a JsObject is really just a wrapper around a Map, so we can easily check the names of the keys against a whitelist.
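For example (a quick illustrative check, not from the original code), feeding two payloads through strictReads directly:
strictReads.reads(Json.parse("""{"name": "Bob", "age": 20}"""))
// JsSuccess(Person("Bob", 20))
strictReads.reads(Json.parse("""{"name": "Bob", "age": 20, "extra": "Test"}"""))
// JsError: Keys: extra found in the incoming JSON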
Note that you could also write that flatMap using a for-comprehension, which starts to look a lot cleaner if you need even more checking stages:
for {
p <- standardReads.reads(jsv)
r1 <- checkUnexpectedFields(jsv, p)
r2 <- checkSomeOtherStuff(jsv, r1)
r3 <- checkEvenMoreStuff(jsv, r2)
} yield r3
If you want to avoid too much boilerplate, it is possible to make a more generic solution using a little bit of Scala reflection:
import play.api.libs.json._
import scala.reflect.runtime.universe._
def checkedReads[T](underlyingReads: Reads[T])(implicit typeTag: TypeTag[T]): Reads[T] = new Reads[T] {
def classFields[T: TypeTag]: Set[String] = typeOf[T].members.collect {
case m: MethodSymbol if m.isCaseAccessor => m.name.decodedName.toString
}.toSet
def reads(json: JsValue): JsResult[T] = {
val caseClassFields = classFields[T]
json match {
case JsObject(fields) if (fields.keySet -- caseClassFields).nonEmpty =>
JsError(s"Unexpected fields provided: ${(fields.keySet -- caseClassFields).mkString(", ")}")
case _ => underlyingReads.reads(json)
}
}
}
Then you can specify your reads instances as:
implicit val reads = checkedReads(Json.reads[Person])
This leverages a fair bit of Scala type magic and also the reflection library (that lets you look at fields on classes).
Rather than relying on a fixed set of fields the classFields method gets all of the fields dynamically for the case class (type param T). It looks at all of the members and collects only the case class accessors (otherwise we'd pick up methods like toString). It returns a Set[String] of field names.
You'll notice that the checkedReads takes an implicit TypeTag[T]. This is supplied by the compiler at compile time and used by the typeOf method.
The remaining code is fairly self explanatory. If the incoming json matches our first case (it is a JsObject and there are fields not on the case class) then we return a JsError. Otherwise we pass it on to the underlying reader.
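A brief usage sketch (illustrative values), assuming the implicit checkedReads above is in scope:
Json.parse("""{"name": "Bob", "age": 20}""").validate[Person]
// JsSuccess(Person("Bob", 20))
Json.parse("""{"name": "Bob", "age": 20, "extra": "Test"}""").validate[Person]
// JsError: Unexpected fields provided: extra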