JSON deserializer for Anorm

First of all, I'm pretty new to Play 2 Scala. I'm trying to convert my model object to/from JSON, following this blog post: http://mandubian.com/2012/10/01/unveiling-play-2-dot-1-json-api-part2-writes-format-combinators/
This is what I have tried:
case class Facility(id: Pk[Int], name: String)

object Facility {
  implicit val facilityWriter: Writes[Facility] = (
    (__ \ "id").write[Pk[Int]] and
    (__ \ "name").write[String]
  )(unlift(Facility.unapply))
}
Then it gave me an error saying that no JSON deserializer was found for Pk[Int].
So I've tried something like this (after a bit of googling around):
implicit object PkFormat extends Format[Pk[Int]] {
  def reads(json: JsValue): JsResult[Pk[Int]] = JsSuccess(Id(json.as[Int]))
  def writes(id: Pk[Int]): JsNumber = JsNumber(id.get)
}
I don't understand what exactly is happening, and I couldn't find an example of how to serialize/deserialize Anorm types.

The JSON serializer/deserializer supports all the basic values that are covered by the JSON specification. If you want to serialize a custom type you have to tell the serializer how to do that.
Play's JSON serializer uses a Scala (originally Haskell) pattern called type class. In a nutshell, it allows polymorphism without subclassing. This is achieved by bringing an implicit value in scope, i.e. to handle a new type, you define an implicit value/method/object. In your concrete example, you define a type class instance for Pk[Int].
You could convert the Pk[Int] manually in your code, or implement the conversion in the Pk class directly as many other frameworks do, but the type class approach is cleaner (JSON conversion is a separate concern) and easier to reuse: you can now convert a Pk[Int] anywhere you want, even if the Pk class itself doesn't support it (imagine extending a closed-source system).
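The type-class pattern itself can be illustrated without any library. This is a hand-rolled sketch with made-up names (`JsonWrites`, `toJson`), not Play's actual API; Play's real type class is play.api.libs.json.Writes:

```scala
// A minimal, hand-rolled type class for JSON writing (illustration only).
trait JsonWrites[A] {
  def writes(a: A): String
}

object JsonWrites {
  // Instances for existing types, no subclassing required.
  implicit val intWrites: JsonWrites[Int] = (a: Int) => a.toString
  implicit val stringWrites: JsonWrites[String] = (a: String) => "\"" + a + "\""

  // Anyone can add an instance for a type they don't own, e.g. a pair.
  implicit def pairWrites[A, B](implicit wa: JsonWrites[A],
                                wb: JsonWrites[B]): JsonWrites[(A, B)] =
    (p: (A, B)) => s"[${wa.writes(p._1)},${wb.writes(p._2)}]"
}

import JsonWrites._

// Polymorphic entry point: works for any A with an instance in scope.
def toJson[A](a: A)(implicit w: JsonWrites[A]): String = w.writes(a)

println(toJson(1 -> "one")) // prints [1,"one"]
```

Defining PkFormat for Pk[Int] is exactly this move: you add an instance for a type you don't own, and the serializer picks it up implicitly.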
As for your code, it should work fine; just make sure you have the necessary imports in scope:
import play.api.libs.json._
import play.api.libs.json.util._
import play.api.libs.json.Writes._
import play.api.libs.functional.syntax._
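For reference, here is how the two pieces fit together in one sketch. This assumes Play 2.1-era APIs (where Reads must return a JsResult, hence the JsSuccess) and Anorm's Id/Pk types:

```scala
import anorm.{Id, Pk}
import play.api.libs.json._
import play.api.libs.functional.syntax._

case class Facility(id: Pk[Int], name: String)

object Facility {
  // Type class instance: teaches the JSON serializer about Pk[Int].
  implicit val pkFormat: Format[Pk[Int]] = Format(
    Reads(json => JsSuccess(Id(json.as[Int]))),
    Writes(id => JsNumber(id.get))
  )

  implicit val facilityWrites: Writes[Facility] = (
    (__ \ "id").write[Pk[Int]] and
    (__ \ "name").write[String]
  )(unlift(Facility.unapply))
}
```

With both implicits in the companion object, Json.toJson(Facility(Id(1), "Main")) resolves everything automatically.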

Related

How to deserialize a JSON string that contains @@ with Scala

As the title already explains, I would like to deserialize a JSON string that contains a key that starts with @@. Because of the @@, my standard approach using case classes sadly does not work anymore.
val test = """{"@@key": "value"}"""
case class Test(@@key: String) // not possible
val gson = new GsonBuilder().create()
val res = gson.fromJson(test, classOf[Test])
How can I work with the @@ without preprocessing the input JSON string?
The simplest answer is to quote the field name:
case class Test(`@@key`: String)
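Backtick quoting is plain Scala and easy to verify standalone (a minimal sketch):

```scala
// Backticks allow otherwise-illegal characters in an identifier.
case class Test(`@@key`: String)

val t = Test("value")
// The field is read back with the same backtick syntax.
println(t.`@@key`) // prints "value"
```

Note that at the JVM level the name is encoded (@ becomes $at), which is why GSON's reflective mapping still won't line up without further annotation work, as the next answer explains in detail.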
I experimented a bit but it seems that GSON doesn't interoperate well with Scala case classes (or the other way around, I guess it's a matter of perspective). I tried playing around with scala.beans.BeanProperty but it doesn't seem like it makes a difference.
A possible way to go is to use a regular class and the SerializedName annotation, as in this example:
import com.google.gson.{FieldNamingPolicy, GsonBuilder}
import com.google.gson.annotations.SerializedName
final class Test(k: String) {
  @SerializedName("@@key") val key = k
  override def toString(): String = s"Test($key)"
}
val test = """{"@@key": "foobar"}"""
val gson = new GsonBuilder().create()
val res = gson.fromJson(test, classOf[Test])
println(res)
You can play around with this code here on Scastie.
You can read more on SerializedName (as well as other naming-related GSON features) here on the user guide.
I'm not a Scala programmer; I just used javap and reflection to check what the Scala compiler generates, and learnt a little about how some Scala internals work along the way.
It does not work for you for several reasons:
The Scala compiler puts case class element annotations on the constructor parameters, whereas Gson's @SerializedName can only work with fields and methods:
// does not work as expected
case class Test(@SerializedName("@@key") `@@key`: String)
From the plain Java perspective:
final Constructor<Test> constructor = Test.class.getDeclaredConstructor(String.class);
System.out.println(constructor);
System.out.println(Arrays.deepToString(constructor.getParameterAnnotations()));
public Test(java.lang.String)
[[@com.google.gson.annotations.SerializedName(alternate=[], value=@@key)]]
I'm not sure why the Scala compiler does not also copy the annotations to the fields. Note that the Java language does not allow annotating parameters with @SerializedName at all (it causes a compilation error), although the JVM itself does not treat such an annotation as a failure.
The field name is actually encoded in the class file.
From the Java perspective:
final Field field = Test.class.getDeclaredField("$at$atkey"); // the real name of the `@@key` element
System.out.println(field);
System.out.println(Arrays.deepToString(field.getDeclaredAnnotations()));
private final java.lang.String Test.$at$atkey <- this is how the field can be accessed from Java
[] <- no annotations by default
Scala allows moving annotations to fields, and this makes your code work according to how Gson's @SerializedName is designed (which, of course, was designed with no Scala in mind):
import scala.annotation.meta.field
...
case class Test(@(SerializedName @field)("@@key") `@@key`: String)
Test(value)
If for some reason you must use Gson and can't annotate each field with @SerializedName, then you can implement a custom type adapter, but I'm afraid you would need deep knowledge of how Scala works.
If I understand what Scala does, it annotates every generated class with the @ScalaSignature annotation.
The annotation provides a bytes() method that returns a payload that most likely can be used to detect whether the annotated type is a case class, and probably how its members are declared.
I didn't find such a parser/decoder, but if you find one, you can do the following in Gson:
register a type adapter factory that checks whether it can handle the given type (basically by analyzing the @ScalaSignature annotation, I believe);
if it can, create a type adapter that is aware of all case class fields and their names, possibly handling @SerializedName yourself, since you can neither extend Gson's ReflectiveTypeAdapterFactory nor inject a name-remapping strategy;
take transient fields and other exclusion strategies into account;
read/write each non-excluded field.
Too much work, right? So I see two easy options here: either use a Scala-aware JSON tool as other people are suggesting, or annotate each field that has such a special name.
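That said, if only a handful of known classes are involved, a hand-written per-class TypeAdapter is far less work than the generic factory described above. A rough sketch (the Test class and adapter are illustrative; the JSON key @@key matches the question):

```scala
import com.google.gson.{GsonBuilder, TypeAdapter}
import com.google.gson.stream.{JsonReader, JsonWriter}

final case class Test(`@@key`: String)

// Hand-written adapter mapping the JSON name "@@key" to the case class field,
// bypassing Gson's reflective field mapping entirely.
object TestAdapter extends TypeAdapter[Test] {
  def write(out: JsonWriter, value: Test): Unit = {
    out.beginObject()
    out.name("@@key").value(value.`@@key`)
    out.endObject()
  }

  def read(in: JsonReader): Test = {
    in.beginObject()
    var key: String = null
    while (in.hasNext()) {
      if (in.nextName() == "@@key") key = in.nextString()
      else in.skipValue()
    }
    in.endObject()
    Test(key)
  }
}

val gson = new GsonBuilder()
  .registerTypeAdapter(classOf[Test], TestAdapter)
  .create()
```

With the adapter registered, gson.fromJson("""{"@@key": "value"}""", classOf[Test]) produces Test("value") without any annotation or signature analysis.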

Parsing JSON based off of schema with recursive fields in Scala

I have a json-schema (https://json-schema.org) with recursive fields, and I would like to programmatically parse json that adheres to the schema in Scala.
One option is to use Argus (https://github.com/aishfenton/Argus), but the only issue is that it uses Scala macros, so a solution that uses this library isn't supported by IntelliJ.
What's the recommended way to perform a task like this in Scala, preferably something that plays well with IntelliJ?
Circe is a great library for working with JSON. The following example uses semi automatic decoding. Circe also has guides for automatic decoding and for using custom codecs.
import io.circe.Decoder
import io.circe.parser.decode
import io.circe.generic.semiauto.deriveDecoder
object Example {
  case class MyClass(name: String, code: Int, sub: MySubClass)
  case class MySubClass(value: Int)

  // Derive the inner decoder first; MyClass's decoder depends on it.
  implicit val mySubClassDecoder: Decoder[MySubClass] = deriveDecoder
  implicit val myClassDecoder: Decoder[MyClass] = deriveDecoder

  def main(args: Array[String]): Unit = {
    val input = """{"name": "Bob", "code": 200, "sub": {"value": 42}}"""
    println(decode[MyClass](input).fold(_ => "parse failed", _.toString))
  }
}
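Since the schema in question has recursive fields, it's worth noting that circe's semi-automatic derivation handles recursion too, provided the self-referencing decoder is declared lazy. A sketch (Node is an illustrative type, not from the question):

```scala
import io.circe.Decoder
import io.circe.parser.decode
import io.circe.generic.semiauto.deriveDecoder

case class Node(value: Int, children: List[Node])

// The decoder refers to itself through List[Node], so declare it lazy
// to avoid initialization-order problems.
implicit lazy val nodeDecoder: Decoder[Node] = deriveDecoder

val input = """{"value": 1, "children": [{"value": 2, "children": []}]}"""
val result = decode[Node](input)
```

decode returns an Either, so a failed parse surfaces as a Left rather than an exception.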
Have you looked at https://github.com/circe/circe ? It is pretty good at parsing JSON into typed formats.
I don't know what you mean by recursive fields, but there are lots of different libraries for parsing JSON. You could use lift-json:
https://github.com/lift/framework/tree/master/core/json
It seems popular, at least from what I've seen here on Stack Overflow. But I personally am very comfortable with, and prefer, play.json:
https://www.playframework.com/documentation/2.6.x/ScalaJson#Json
(Also, I use IntelliJ and work in the Play-framework)
If you really don't want to use any special libraries, someone tried to do that here
How to parse JSON in Scala using standard Scala classes?

Implicit serializing case class to Json with play.api.libs.json

This is my case class response API model that must be transformed to JSON (Play 2.5):
import play.api.libs.json.{Json, OFormat}

case class ResponseModel(content: NestedCaseClassModel)

object ResponseModel {
  implicit val format: OFormat[ResponseModel] = Json.format
}

case class NestedCaseClassModel(value: String)

object NestedCaseClassModel {
  implicit val format: OFormat[NestedCaseClassModel] = Json.format
}
The problem is:
When I have a very deeply nested response structure, I need to implement a companion object for each case class in my model and add an implicit format, and they all look very similar.
I'm looking for a mechanism that permits me to write this once and use it for any case class. I can't switch from the play.api.libs.json library.
Any ideas?
Ok, I'm going to tell you how to do what you want to do, and then I'm going to tell you why you shouldn't do that.
If you want a format automatically generated for you, then you just need to implement an implicit macro. We need to differentiate which types we want to have a format automatically generated for, if you just make it for Any, then it will override things like String which would be bad. So, we'll define a trait that all our case classes will implement:
trait ImplicitJsonFormat
And now we implement our implicit macro for it:
import play.api.libs.json._
import scala.language.experimental.macros
trait JsonImplicits {
  // This works for Play 2.5; in Play 2.6 it becomes JsMacroImpl.implicitConfigFormatImpl
  implicit def implicitJsonFormat[A <: ImplicitJsonFormat]: OFormat[A] = macro JsMacroImpl.formatImpl[A]
}
And so now, for anything that you want a format automatically generated for, you just need to extend ImplicitJsonFormat, and ensure that whatever needs the implicit format has JsonImplicits mixed in:
import play.api.libs.json.{Json, OFormat}

case class ResponseModel(content: NestedCaseClassModel)

object ResponseModel extends JsonImplicits {
  implicit val format: OFormat[ResponseModel] = Json.format
}

case class NestedCaseClassModel(value: String) extends ImplicitJsonFormat
And there you have it, NestedCaseClassModel has its format automatically generated. Of course, you could also automatically generate the format for ResponseModel too.
But you really shouldn't do this. Why? Sometimes being explicit has value. These JSON structures aren't just incidental things with no relevance: they form the protocol of your REST API, or of whatever else you're using. That's generally something you want to be explicit about and consistent throughout your code base. By explicitly defining the format on each companion object, you have one consistent place to look for what the format is. And when you need to customise it, you can replace the macro with a manual format declaration without changing anything about your approach to declaring formats; the answer is still to go and look at the format field on the companion object.
Sure, when you create a project up front, declaring all these formats may be a little tedious. But it's one of those things that's simple to do, easy to get right, and once it's done it's done. As your codebase evolves and progresses, you're probably going to have other things in the companion objects anyway, and you will find yourself needing to evolve the formats, and move away from the macros as you migrate your codebase and the schema of your protocol in different directions.
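For reference, replacing the Json.format macro with a manual declaration is mechanical. A sketch against play.api.libs.json, using the NestedCaseClassModel from above:

```scala
import play.api.libs.json._

case class NestedCaseClassModel(value: String)

object NestedCaseClassModel {
  // Hand-written equivalent of Json.format[NestedCaseClassModel]:
  // the wire shape is now explicit and free to diverge from the field names.
  implicit val format: OFormat[NestedCaseClassModel] = OFormat(
    Reads { js => (js \ "value").validate[String].map(NestedCaseClassModel.apply) },
    OWrites { m => Json.obj("value" -> m.value) }
  )
}
```

Nothing else in the codebase changes: callers still find the implicit on the companion object, which is exactly the consistency argument made above.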

Which JSON library to use when storing case objects?

I need to serialize Akka events to JSON. Based on
"What JSON library to use in Scala?" I tried several libraries. Since my serializer should know nothing of my concrete events, the events, consisting of case classes and case objects, should be serialized using reflection. json4s seems to match my requirements best.
class Json4sEventAdapter(system: ExtendedActorSystem) extends EventAdapter {
  implicit val formats = Serialization.formats(FullTypeHints(List(classOf[Evt])))

  override def toJournal(event: Any): Any = event match {
    case e: AnyRef =>
      write(e).getBytes(Charsets.UTF_8)
  }

  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case e: Array[Byte] =>
      EventSeq.single(read[Evt](new String(e.map(_.toChar))))
  }
}
The problem with json4s is that, no matter which implementation is used, deserialization of case objects produces different instances.
Since we heavily use pattern matching on the case objects, this breaks all our existing code.
So my question is: which JSON library could be used with scala and akka persistence when storing case objects?
Is there even one library that handles deserialization of case objects via reflection correctly? - or does anyone have a good workaround?
I can't comment on Json4s, since I've never used it, but I do know that this is a non-issue in play-json. You would do something like:
import play.api.data.validation.ValidationError
import play.api.libs.json._

sealed trait MyEventBase
case object MyEvent extends MyEventBase

implicit val myEventBaseFormat: Format[MyEventBase] = Format(
  Reads.StringReads.collect(ValidationError("must be the string `MyEvent`")) {
    case "MyEvent" => MyEvent
  },
  Writes.pure("MyEvent")
)
In this case, the serialization is to a bare string, and so I piggyback on the built-in StringReads to assert that the item should be deserializable to a string, and then use collect to narrow that down to the specific string. But the basic idea is that you provide the specific value you want back from deserialization in your Reads instance. Here, it's the singleton case object. So, whenever you deserialize a MyEventBase resulting in a MyEvent, you'll definitely get that same instance back.
In the real world, MyEventBase probably has other subtypes, and so you structure your Writes instance to create some form of type tag for serialization that your Reads instance can key off of to deserialize to the proper subtype. Like, you might serialize to a JSON object instead of a bare string, and that object would have a type field that identifies the subtype. Or just use something like Play JSON Extensions to automatically synthesize a reasonable Format for your sealed trait.
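A sketch of that type-tag approach (the field names "type"/"data" and the OtherEvent subtype are illustrative, not from the question):

```scala
import play.api.libs.json._

sealed trait MyEventBase
case object MyEvent extends MyEventBase
final case class OtherEvent(payload: String) extends MyEventBase

object MyEventBase {
  implicit val otherEventFormat: OFormat[OtherEvent] = Json.format[OtherEvent]

  // Serialize to an object with a "type" field; key off it when reading.
  implicit val format: Format[MyEventBase] = Format(
    Reads { js =>
      (js \ "type").validate[String].flatMap {
        case "MyEvent"    => JsSuccess(MyEvent) // always the same singleton back
        case "OtherEvent" => (js \ "data").validate[OtherEvent]
        case other        => JsError(s"unknown event type: $other")
      }
    },
    Writes {
      case MyEvent       => Json.obj("type" -> "MyEvent")
      case e: OtherEvent => Json.obj("type" -> "OtherEvent", "data" -> Json.toJson(e)(otherEventFormat))
    }
  )
}
```

Because the Reads returns the MyEvent singleton directly, pattern matching on deserialized case objects keeps working.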
I highly recommend you to have a look at Stamina. It's been implemented to solve most of the usual issues you will encounter with akka-persistence.
It provides a json serialiser (based on spray-json and shapeless) which supports versioning, auto-migrating at read time as well as a testkit to ensure all older versions of persistent events are still readable.

deserialize using Play framework ScalaJsonGenerics vs Jerkson

I seem to be confused about how to deserialize JSON correctly when using the Play framework. With Jerkson it looks like you just have to define a case class, which is then automatically deserialized from a JSON string (stolen from the Jerkson docs):
case class Person(id: Long, name: String)
parse[Person]("""{"id":1,"name":"Coda"}""") //=> Person(1,"Coda")
But with the Play framework you have to write a lot of boilerplate code to do the same thing. For instance, from their documentation:
case class Foo(name: String, entry: Int)

object Foo {
  implicit object FooFormat extends Format[Foo] {
    def reads(json: JsValue) = Foo(
      (json \ "name").as[String],
      (json \ "entry").as[Int])
    def writes(ts: Foo) = JsObject(Seq(
      "name" -> JsString(ts.name),
      "entry" -> JsNumber(ts.entry)))
  }
}
This seems like a lot more work, so I assume I'm either not using it correctly or don't quite understand the advantage of doing it this way. Is there a shortcut so that I don't have to write all of this code? If not, should I just be using Jerkson in my Action to parse an incoming JSON string? It seems as though asText is returning a blank string even when asJson works just fine, which leads me to believe I am definitely doing something wrong.
Thanks
I think there are two answers to your question.
For somewhat less boilerplate, you can use the Play support for handling case classes. Here's an example for a case class with three fields:
implicit val SampleSetFormat: Format[SampleSet] = productFormat3("sensorId", "times", "values")(SampleSet)(SampleSet.unapply)
I agree that there is still some annoying boilerplate. The main reason the Play folks use this approach seems to be so they can determine the correct serializer entirely at compile time, with no reflection cost as in Jerkson.
I am a total noob with Play and Jerkson, but I wholeheartedly recommend the least-boilerplate approach (using the Jerkson lib within each Action). I find that it is philosophically more in line with Scala, and it works fine.