Deserialize using Play framework Scala JSON generics vs Jerkson

I seem to be confused about how to deserialize JSON correctly when using the Play framework. With Jerkson it looks like you just have to define a case class, which is then automatically deserialized from a JSON string (example taken from the Jerkson docs).
case class Person(id: Long, name: String)
parse[Person]("""{"id":1,"name":"Coda"}""") //=> Person(1,"Coda")
But with the Play framework you have to write a lot of boilerplate code to do the same thing. For instance, from their documentation:
case class Foo(name: String, entry: Int)
object Foo {
  implicit object FooFormat extends Format[Foo] {
    def reads(json: JsValue) = Foo(
      (json \ "name").as[String],
      (json \ "entry").as[Int])
    def writes(ts: Foo) = JsObject(Seq(
      "name" -> JsString(ts.name),
      "entry" -> JsNumber(ts.entry)))
  }
}
This seems like a lot more work, so I assume I'm either not using it correctly or don't quite understand the advantage of doing it this way. Is there a shortcut so that I don't have to write all of this code? If not, should I just be using Jerkson in my Action to parse an incoming JSON string? It seems as though asText is returning a blank string even when asJson works just fine, which leads me to believe I am definitely doing something wrong.
Thanks

I think there are two answers to your question.
For somewhat less boilerplate, you can use the Play support for handling case classes. Here's an example for a case class with three fields:
implicit val SampleSetFormat: Format[SampleSet] = productFormat3("sensorId", "times", "values")(SampleSet)(SampleSet.unapply)
I agree that there is still annoying boilerplate; the main reason the Play folks seem to use this approach is so they can determine the correct serializer entirely at compile time, with none of the reflection cost you pay in Jerkson.
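For reference, later Play releases added a JSON macro that derives the whole Format in one line; this postdates the Play version in the original question, so treat it as a pointer rather than the API the answer had at hand:

```scala
import play.api.libs.json._

case class Foo(name: String, entry: Int)

object Foo {
  // Json.format inspects Foo at compile time and generates the same
  // reads/writes pair as the hand-written Format, with no reflection.
  implicit val fooFormat: Format[Foo] = Json.format[Foo]
}
```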

I am a total noob with Play and Jerkson, but I wholeheartedly recommend the least-boilerplate approach (using the Jerkson lib within each Action). I find that it is philosophically more in line with Scala, and it works fine.

Related

How to deserialize a JSON string that contains ## with Scala

As the title already explains, I would like to deserialize a json string that contains a key that starts with ##. With the ## my standard approach using case classes sadly does not work anymore.
val test = """{"##key": "value"}"""
case class Test(##key: String) // not possible
val gson = new GsonBuilder().create()
val res = gson.fromJson(test, classOf[Test])
How can I work with the ## without preprocessing the input JSON string?
The simplest answer is to quote the field name:
case class Test(`##key`: String)
I experimented a bit but it seems that GSON doesn't interoperate well with Scala case classes (or the other way around, I guess it's a matter of perspective). I tried playing around with scala.beans.BeanProperty but it doesn't seem like it makes a difference.
A possible way to go is to use a regular class and the SerializedName annotation, as in this example:
import com.google.gson.{FieldNamingPolicy, GsonBuilder}
import com.google.gson.annotations.SerializedName
final class Test(k: String) {
  @SerializedName("##key") val key: String = k
  override def toString(): String = s"Test($key)"
}
val test = """{"##key": "foobar"}"""
val gson = new GsonBuilder().create()
val res = gson.fromJson(test, classOf[Test])
println(res)
You can play around with this code here on Scastie.
You can read more on SerializedName (as well as other naming-related Gson features) in the user guide.
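As an aside, the backtick-quoted field name also works out of the box with a Scala-aware JSON library. A minimal sketch using circe's automatic derivation (circe is an assumption here; the question itself uses Gson):

```scala
import io.circe.generic.auto._
import io.circe.parser.decode

// The backticks make "##key" a legal Scala identifier, and the derived
// decoder looks up exactly that key in the JSON object.
case class Test(`##key`: String)

val res = decode[Test]("""{"##key": "value"}""")
// res holds Test("value") on the Right side if decoding succeeds
```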
I'm not a Scala programmer; I just used javap and reflection to check what the Scala compiler generates, and picked up a little of how some Scala internals work.
It does not work for you for several reasons:
The Scala compiler attaches case class element annotations to the constructor parameters, whereas Gson @SerializedName can only work with fields and methods:
// does not work as expected
case class Test(@SerializedName("##key") `##key`: String)
From the plain Java perspective:
final Constructor<Test> constructor = Test.class.getDeclaredConstructor(String.class);
System.out.println(constructor);
System.out.println(Arrays.deepToString(constructor.getParameterAnnotations()));
public Test(java.lang.String)
[[@com.google.gson.annotations.SerializedName(alternate=[], value=##key)]]
I'm not sure why the Scala compiler does not replicate the annotations directly to the fields; in Java, annotating parameters with the @SerializedName annotation would cause a compilation error, since its targets are fields and methods only (the JVM, however, does not treat such a parameter annotation as a failure either).
The field name is actually encoded in the class file.
From the Java perspective:
final Field field = Test.class.getDeclaredField("$at$atkey"); // the real name of the `##key` element
System.out.println(field);
System.out.println(Arrays.deepToString(field.getDeclaredAnnotations()));
private final java.lang.String Test.$at$atkey <- this is how the field can be accessed from Java
[] <- no annotations by default
Scala allows moving annotations to fields, and this makes your code work according to how Gson @SerializedName is designed (which, of course, was designed with no Scala in mind):
import scala.annotation.meta.field
...
case class Test(@(SerializedName @field)("##key") `##key`: String)
Test(value)
If for some reason you must use Gson and can't annotate each field with @SerializedName, then you can implement a custom type adapter, but I'm afraid you would need deep knowledge of how Scala works.
If I understand correctly what Scala does, it annotates every generated class with the @ScalaSignature annotation.
The annotation provides a bytes() method that returns a payload which most likely can be used to detect whether the annotated type is a case class, and probably how its members are declared.
I didn't find such a parser/decoder, but if you find one, you can do the following in Gson:
register a type adapter factory that checks whether it can handle the given type (basically, by analyzing the @ScalaSignature annotation, I believe);
if it can, then create a type adapter that is aware of all case class fields and their names, possibly handling @SerializedName yourself, since you can neither extend Gson's ReflectiveTypeAdapterFactory nor inject a name-remapping strategy;
take transient fields (for good) and other exclusion strategies (for completeness) into account;
read/write each non-excluded field.
Too much work, right? So I see two easy options here: either use a Scala-aware JSON tool, as other people are suggesting, or annotate each field that has such a special name.

Which is the better way of jsonformat in Spray JSON

I have a case class in scala
case class Employee(designation: Int, name: String)
Now I want to define a JSON format for it in Spray.
As I know there are two ways of it.
implicit lazy val employeeProtocol: RootJsonFormat[Employee] =
jsonFormat2(Employee.apply)
or
implicit lazy val employeeProtocol: RootJsonFormat[Employee] =
jsonFormat(Employee, "designation", "name")
Which of the above is the better approach? Is there a difference between them in terms of performance?
There are trade-offs here, of course.
Do you have a schema (either explicit or implied) for your JSON values? If so, are the object keys different from your case class member names? If your answer to both of these questions is "yes", then you're stuck using the more explicit jsonFormat version. If your answer to the first question is "yes" but the second is "no", you might still want to use the more explicit version, just because it's a little less magical.
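When the keys do differ from the member names, the explicit form is the only one of the two that can express the mapping. A short sketch (the snake_case keys below are hypothetical):

```scala
import spray.json._
import DefaultJsonProtocol._

case class Employee(designation: Int, name: String)

// jsonFormat pairs each JSON key with the corresponding constructor
// parameter, in order; the key names here are made up for illustration.
implicit val employeeProtocol: RootJsonFormat[Employee] =
  jsonFormat(Employee.apply, "designation_code", "full_name")
```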
There are good reasons to prefer the jsonFormat2(Employee.apply) version, though. Suppose you've got the other version:
import spray.json._, DefaultJsonProtocol._
case class Employee(designation: Int, name: String)
implicit lazy val employeeProtocol: RootJsonFormat[Employee] =
jsonFormat(Employee, "designation", "name")
…and then someone comes along next month and refactors the case class but doesn't notice the instance:
case class Employee(name: String, designation: Int)
implicit lazy val employeeProtocol: RootJsonFormat[Employee] =
jsonFormat(Employee, "designation", "name")
Congratulations: you have a program that compiles just fine but fails in potentially very confusing ways.
Alternatively, you might just not care what your JSON looks like and don't want to have to maintain member names in two places. In either of these cases, the more concise jsonFormat version is more robust in the face of refactoring.
In terms of performance, the two versions are effectively identical, and in fact jsonFormat2 just calls jsonFormat after using runtime reflection to extract member names from the target type. While this runtime reflection does have a (tiny) cost, this extraction will only happen a single time in the execution of your program (assuming you're using a val or a lazy val to define the instance), and the two will perform exactly the same when it comes to actually decoding JSON.

Parsing JSON based off of schema with recursive fields in Scala

I have a json-schema (https://json-schema.org) with recursive fields, and I would like to programmatically parse json that adheres to the schema in Scala.
One option is to use Argus (https://github.com/aishfenton/Argus), but the only issue is that it uses Scala macros, so a solution that uses this library isn't supported by IntelliJ.
What's the recommended way to perform a task like this in Scala, preferably something that plays well with IntelliJ?
Circe is a great library for working with JSON. The following example uses semi automatic decoding. Circe also has guides for automatic decoding and for using custom codecs.
import io.circe.Decoder
import io.circe.parser.decode
import io.circe.generic.semiauto.deriveDecoder
object Example {
  case class MyClass(name: String, code: Int, sub: MySubClass)
  case class MySubClass(value: Int)

  implicit val myClassDecoder: Decoder[MyClass] = deriveDecoder
  implicit val mySubClassDecoder: Decoder[MySubClass] = deriveDecoder

  def main(args: Array[String]): Unit = {
    val input = """{"name": "Bob", "code": 200, "sub": {"value": 42}}"""
    println(decode[MyClass](input).fold(_ => "parse failed", _.toString))
  }
}
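Since the question specifically asks about recursive fields, it's worth noting that semi-automatic derivation also handles self-referential case classes, provided the decoder is declared lazily. A sketch with a hypothetical Tree class:

```scala
import io.circe.Decoder
import io.circe.parser.decode
import io.circe.generic.semiauto.deriveDecoder

// Hypothetical recursive structure: each node holds a value and a list
// of child nodes of the same shape.
case class Tree(value: Int, children: List[Tree])

// The derived decoder refers to itself through the implicit scope, so a
// lazy val avoids initialization-order problems.
implicit lazy val treeDecoder: Decoder[Tree] = deriveDecoder

val input = """{"value": 1, "children": [{"value": 2, "children": []}]}"""
val parsed = decode[Tree](input) // a Right holding the nested Tree on success
```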
Have you looked at https://github.com/circe/circe? It is pretty good at parsing JSON into typed formats.
I don't know what you mean by recursive fields, but there are lots of different libraries for parsing JSON. You could use lift-json:
https://github.com/lift/framework/tree/master/core/json
which seems popular, at least from what I've seen here on Stack Overflow. But I personally am very comfortable with, and prefer, play.json:
https://www.playframework.com/documentation/2.6.x/ScalaJson#Json
(Also, I use IntelliJ and work in the Play-framework)
If you really don't want to use any special libraries, someone tried to do that here
How to parse JSON in Scala using standard Scala classes?

Parsing json in Kotlin

I'm trying to parse Json in Kotlin. I'm having a lot of trouble, it seems that a lot of people learn Kotlin after Java... Not me, I'm a Python guy. I got a Kotlin Jupyter Notebook running fairly quickly (https://github.com/ligee/kotlin-jupyter), after that I managed to pull information from the bittrex api like so:
import java.net.URL
val result = URL("https://bittrex.com/api/v1.1/public/getmarkets").readText()
It took me a long time to find that I needed to add import java.net.URL; this always seems to be implicit in code examples. Anyway, this gives me a response in JSON (the "result" value):
{"success":true,"message":"","result":[{"MarketCurrency":"LTC","BaseCurrency":"BTC","MarketCurrencyLong":"Litecoin","BaseCurrencyLong":"Bitcoin","MinTradeSize":0.01469482,"MarketName":"BTC-LTC","IsActive":true,"Created":"2014-02-13T00:00:00","Notice":null,"IsSponsored":null,"LogoUrl":"https://bittrexblobstorage.blob.core.windows.net/public/6defbc41-582d-47a6-bb2e-d0fa88663524.png"},{"MarketCurrency":"DOGE","BaseCurrency":"BTC","MarketCurrencyLong":"Dogecoin","BaseCurrencyLong":"Bitcoin","MinTradeSize":274.72527473,"MarketName":"BTC-DOGE","IsActive":true,"Created":"2014-02-13T00:00:00","Notice":null,"IsSponsored":null,"LogoUrl":"https://bittrexblobstorage.blob.core.windows.net/public/a2b8eaee-2905-4478-a7a0-246f212c64c6.png"},{"MarketCurrency ...
Now, in Python I'd just add .json() to the "result" parameter and I can then address the json fields as a dictionary with multiple levels, like
result["success"]
Would give me:
true
Is there something like that for Kotlin? I have tried Klaxon (https://github.com/cbeust/klaxon); again, it took me a lot of time to realize that I had to add import com.beust.klaxon.string, which is not mentioned on the website, for example. So a side question is: how do you know what you need to import when you find code examples? It seems like everybody just knows... But I digress.
My main question is: How can I address the separate fields of the Json and get them into separate variables?
Highest regards.
There are many JSON parsers out there. Your example used a Kotlin-specific one, but that is not mandatory: there are also many plain Java parsers, which you can use just as well from Kotlin.
As for your imports: obviously you need to import the classes you want to use, and IDEs like IntelliJ handle the imports for you automatically. That means you won't have to type any import statements; they are added automatically when you reference those classes.
I think that nowadays some libraries just expect you not to handle the imports yourself, and thus do not assist you in finding the right ones.
My suggestion for a parser is Fuel.
The library is optimized for Kotlin as well. With Fuel's help, your problem would be solved with this simple code snippet:
"https://bittrex.com/api/v1.1/public/getmarkets".httpGet().responseJson { _, response, result ->
    if (response.responseMessage == "OK" && response.statusCode == 200) {
        val yourResult = result.get().obj().getBoolean("success")
    }
}
Something you may or may not know is that Kotlin is 100% compatible with Java, so all the Java JSON parsers work well with Kotlin. I highly recommend Gson: it's small (~200 KB), fast, and pretty simple to use.
If this code is running on a server, Jackson is pretty standard. It's the most performant JSON parser for Java at the moment, but it's very heavy. It will take some more complicated configuration, though, and I think it might require some Kotlin-specific modules.
I haven't tried it yet, as it hasn't officially been released, but Kotlin offers a plugin for generating JSON serialization code. That will probably eventually become the standard way for Kotlin to serialize and deserialize, as it should theoretically be the most performant.
A quick best practice is, instead of manually checking each key, to generate native Kotlin data classes using a tool such as https://json2kotlin.com.
Your API response then turns into the following couple of data classes corresponding to the JSON structure:
data class Json4Kotlin_Base (
    val success : Boolean,
    val message : String,
    val result : List<Result>
)
and
data class Result (
    val marketCurrency : String,
    val baseCurrency : String,
    val marketCurrencyLong : String,
    val baseCurrencyLong : String,
    val minTradeSize : Double,
    val marketName : String,
    val isActive : Boolean,
    val isRestricted : Boolean,
    val created : String,
    val notice : String,
    val isSponsored : String,
    val logoUrl : String
)
When you get the result, you simply map the JSON response to these data classes. The video here shows how to do it step by step.

Which JSON library to use when storing case objects?

I need to serialize Akka events to JSON. Based on
"What JSON library to use in Scala?" I tried several libraries. Since my serializer should know nothing of my concrete events, the events, consisting of case classes and case objects, should be serialized using reflection. json4s seems to match my requirements best.
class Json4sEventAdapter(system: ExtendedActorSystem) extends EventAdapter {
  implicit val formats = Serialization.formats(FullTypeHints(List(classOf[Evt])))

  override def toJournal(event: Any): Any = event match {
    case e: AnyRef =>
      write(e).getBytes(Charsets.UTF_8)
  }

  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case e: Array[Byte] =>
      EventSeq.single(read[Evt](new String(e.map(_.toChar))))
  }
}
The problem with json4s is that, no matter which implementation is used, deserialization of objects produces different instances.
Since we heavily use pattern matching on the case objects, this breaks all our existing code.
So my question is: which JSON library could be used with Scala and Akka Persistence when storing case objects?
Is there even one library that handles deserialization of case objects via reflection correctly? Or does anyone have a good workaround?
I can't comment on json4s, since I've never used it, but I do know that this is a non-issue in play-json. You would do something like:
import play.api.libs.json._
sealed trait MyEventBase
case object MyEvent extends MyEventBase
implicit val myEventBaseFormat: Format[MyEventBase] = Format(
  Reads.StringReads.collect(ValidationError("must be the string `MyEvent`")) {
    case "MyEvent" => MyEvent
  },
  Writes.pure("MyEvent")
)
In this case, the serialization is to a bare string, and so I piggyback on the built-in StringReads to assert that the item should be deserializable to a string, and then use collect to narrow that down to the specific string. But the basic idea is that you provide the specific value you want back from deserialization in your Reads instance. Here, it's the singleton case object. So, whenever you deserialize a MyEventBase resulting in a MyEvent, you'll definitely get that same instance back.
In the real world, MyEventBase probably has other subtypes, and so you structure your Writes instance to create some form of type tag for serialization that your Reads instance can key off of to deserialize to the proper subtype. Like, you might serialize to a JSON object instead of a bare string, and that object would have a type field that identifies the subtype. Or just use something like Play JSON Extensions to automatically synthesize a reasonable Format for your sealed trait.
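The type-tag idea from the last paragraph can be sketched as follows; OtherEvent is a hypothetical second subtype added purely for illustration:

```scala
import play.api.libs.json._

sealed trait MyEventBase
case object MyEvent extends MyEventBase
final case class OtherEvent(payload: String) extends MyEventBase // hypothetical

implicit val myEventBaseFormat: Format[MyEventBase] = Format(
  Reads { js =>
    // The "type" field identifies the subtype to deserialize to.
    (js \ "type").validate[String].flatMap {
      case "MyEvent"    => JsSuccess(MyEvent) // the same singleton instance every time
      case "OtherEvent" => (js \ "payload").validate[String].map(OtherEvent.apply)
      case other        => JsError(s"unknown event type: $other")
    }
  },
  Writes {
    case MyEvent       => Json.obj("type" -> "MyEvent")
    case OtherEvent(p) => Json.obj("type" -> "OtherEvent", "payload" -> p)
  }
)
```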
I highly recommend you have a look at Stamina. It was implemented to solve most of the usual issues you will encounter with akka-persistence.
It provides a JSON serializer (based on spray-json and shapeless) which supports versioning and auto-migration at read time, as well as a testkit to ensure all older versions of persistent events are still readable.