I need to parse an object that contains a property "triggers", which is a List<Trigger>. This list can contain two types of triggers: Custom and Event.
Here are my Trigger classes:
@JsonClass(generateAdapter = true)
open class Trigger(open val type: String,
                   open val source: String,
                   open val tags: Properties? = mutableMapOf())

@JsonClass(generateAdapter = true)
data class CustomTrigger(override val type: String,
                         override val source: String,
                         override val tags: Properties?,
                         // some other fields
) : Trigger(type, source, tags)

@JsonClass(generateAdapter = true)
data class EventTrigger(override val type: String,
                        override val source: String,
                        override val tags: Properties?,
                        // some other fields
) : Trigger(type, source, tags)
The object I receive from the server looks like this:
@JsonClass(generateAdapter = true)
data class Rule(val id: String,
                val triggers: MutableList<Trigger>,
                // some other fields
)
Using the generated adapter, parsing gives me only the Trigger base-class fields for each trigger. I need to implement logic that parses an EventTrigger if type is "event" or a CustomTrigger if type is "custom".
How can I do this with Moshi?
Do I need to write a manual parser for my Rule object?
Any idea is welcome. Thank you
Take a look at the PolymorphicJsonAdapterFactory.
Moshi moshi = new Moshi.Builder()
    .add(PolymorphicJsonAdapterFactory.of(HandOfCards.class, "hand_type")
        .withSubtype(BlackjackHand.class, "blackjack")
        .withSubtype(HoldemHand.class, "holdem"))
    .build();
Note that it needs the optional moshi-adapters dependency.
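Applied to the Trigger classes from the question, a minimal Kotlin sketch could look like the following (the "custom" and "event" labels are assumptions based on the question, and the label key reuses the existing type field):

import com.squareup.moshi.Moshi
import com.squareup.moshi.adapters.PolymorphicJsonAdapterFactory

// Register the polymorphic factory: Moshi reads the "type" label and
// delegates to the matching subtype's generated adapter.
val moshi: Moshi = Moshi.Builder()
    .add(
        PolymorphicJsonAdapterFactory.of(Trigger::class.java, "type")
            .withSubtype(CustomTrigger::class.java, "custom")
            .withSubtype(EventTrigger::class.java, "event")
    )
    .build()

// The generated Rule adapter now parses each trigger as the right subtype.
val rule: Rule? = moshi.adapter(Rule::class.java).fromJson(json)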
This example from Moshi helped me solve the parsing problem:
https://github.com/square/moshi#another-example
I'm trying to develop a system which allows serializing/deserializing JSON for multiple types of classes in Kotlin. For deserialization I'm using Klaxon, but I also want to use it for serialization. I've done some research on that, but didn't get a conclusive answer.
So, can I do that? If so, how can it be done? Or should I use another library for this purpose?
Here's my code:
package com.pineapple.threadio

import com.beust.klaxon.Klaxon
import com.beust.klaxon.TypeAdapter
import com.beust.klaxon.TypeFor
import kotlin.reflect.KClass

// Frame types
@TypeFor(field = "id", adapter = FrameTypeAdapter::class)
open class BasicFrame(val id: String)

class Ping : BasicFrame("0x0000")
class TransactionRequest : BasicFrame("0x0001")
class TransactionAccept : BasicFrame("0x0002")
class TransactionDeny(val deny_reason: String) : BasicFrame("0x0003")

// Frame processing
class Frame(
    @TypeFor(field = "id", adapter = FrameTypeAdapter::class)
    val id: String,
    val frame: BasicFrame
)

class FrameTypeAdapter : TypeAdapter<BasicFrame> {
    override fun classFor(id: Any): KClass<out BasicFrame> = when (id as String) {
        "0x0000" -> Ping::class
        "0x0001" -> TransactionRequest::class
        "0x0002" -> TransactionAccept::class
        "0x0003" -> TransactionDeny::class
        else -> throw IllegalArgumentException("Unknown frame ID: $id")
    }
}

// Actual parsing, straight from Klaxon's docs
val frames = Klaxon().parseArray<Frame>(json)
The @TypeFor(field = "id", adapter = FrameTypeAdapter::class) annotation should be placed on the BasicFrame class. It's redundant in the other places where you put it.
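A sketch of the corrected setup, using the same classes (json is assumed to be the raw payload):

import com.beust.klaxon.Klaxon
import com.beust.klaxon.TypeFor

// The polymorphic annotation lives only on the base class; Klaxon then
// resolves the concrete subclass from the "id" field on its own.
@TypeFor(field = "id", adapter = FrameTypeAdapter::class)
open class BasicFrame(val id: String)

// Deserialization: each element comes back as the matching subclass.
val frames: List<BasicFrame>? = Klaxon().parseArray<BasicFrame>(json)

// Serialization works with the same Klaxon instance.
val out: String = Klaxon().toJsonString(TransactionDeny("some reason"))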
I want to deserialize NASA asteroids that I get from an API call in JSON format. My data class looks like this:
data class Asteroid(
val id: Int,
val name: String = "",
val meanDiameter: Int,
)
class Deserializer : ResponseDeserializable<Asteroid> {
override fun deserialize(content: String) = Gson().fromJson(content, Asteroid::class.java)
}
How can I ignore the top-level items links and page and only deserialize near_earth_objects into my Asteroid data class? And how can I access the nested items inside of near_earth_objects?
You can just ignore them.
data class NearEarthObjects(@SerializedName("near_earth_objects") val nearEarthObjects: List<Objects>)

data class Objects(val id: String, val name: String)
If you then fetch the json you can just do this:
Gson().fromJson(yourJson, NearEarthObjects::class.java)
And you will get a list of all the objects' names and ids.
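For example, accessing the nested items then looks like this (a sketch using the classes above; yourJson is assumed to be the raw response body):

import com.google.gson.Gson

// Deserialize the wrapper, then iterate over the nested objects.
val result = Gson().fromJson(yourJson, NearEarthObjects::class.java)
for (obj in result.nearEarthObjects) {
    println("${obj.id}: ${obj.name}")
}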
How do I configure spray-json's parsing options, similarly to Jackson's parsing features?
For example, I am parsing JSON that is missing a field my case class requires, and it is breaking:
spray.json.DeserializationException: Object is missing required member 'myfield'
UPDATE:
A simple example:
case class MyClass(a: String, b: Long)
and try to parse an incomplete JSON like
val data = """{"a": "hi"}"""
with a spray-json format like:
jsonFormat2(MyClass.apply)
// ...
data.parseJson.convertTo[MyClass]
(simplified code).
But the question goes further: I want to ask about configuration options like those in other parsers. More examples:
Be able to ignore fields that exist in the JSON but not in the case class.
Ways of managing nulls or nonexistent values.
etc.
spray-json allows you to define custom parsers like so:
import spray.json._
import DefaultJsonProtocol._

case class Foo(a: String, b: Int)

implicit object FooJsonFormat extends RootJsonFormat[Foo] {
  override def read(json: JsValue): Foo = {
    json.asJsObject.getFields("name", "id") match {
      case Seq(JsString(name), id) =>
        Foo(name, id.convertTo[Int])
    }
  }

  // Write the fields out explicitly; calling obj.toJson here would
  // recurse back into this same format.
  override def write(obj: Foo): JsValue =
    JsObject("name" -> JsString(obj.a), "id" -> JsNumber(obj.b))
}
This allows you to parse any arbitrary payload and pull out the fields "name" and "id" - other fields are ignored. If those fields are not guaranteed you can add something like:
case Seq(JsString(name), JsNull) =>
Foo(name, 0)
You should look at what's available in JsValue.scala - in particular JsArray may come in handy if you're getting payloads with anonymous arrays (i.e. the root is [{...}] instead of {"field":"value"...})
spray-json doesn't support default parameters, so you cannot have a case class like
case class MyClass(a: String, b: Int = 0)
and then parse json like {"a":"foo"}
However, if you make the second parameter an Option, then it works:
import spray.json._

case class MyClass(a: String, b: Option[Int] = None)

object MyProtocol extends DefaultJsonProtocol {
  implicit val f = jsonFormat2(MyClass)
}

import MyProtocol.f

val mc1 = MyClass("foo", Some(10))
val strJson = mc1.toJson.toString

val strJson2 = """{"a": "foo"}"""
val mc2 = strJson2.parseJson.convertTo[MyClass]
println(mc2)
I am using Json4s classes inside a Spark 2.2.0 closure. The "workaround" for the failure to serialize DefaultFormats is to include their definition inside every closure executed by Spark that needs them. I believe I have done more than I needed to below, but I still get the serialization failure.
Using Spark 2.2.0, Scala 2.11, Json4s 3.2.x (whatever is in Spark) and also tried using Json4s 3.5.3 by pulling it into my job using sbt. In all cases I used the workaround shown below.
Does anyone know what I'm doing wrong?
logger.info(s"Creating an RDD for $actionName")

implicit val formats = DefaultFormats

val itemProps = df.rdd.map[(ItemID, ItemProps)](row => { // <--- error points to this line
  implicit val formats = DefaultFormats
  val itemId = row.getString(0)
  val correlators = row.getSeq[String](1).toList
  (itemId, Map(actionName -> JArray(correlators.map { t =>
    implicit val formats = DefaultFormats
    JsonAST.JString(t)
  })))
})
I have also tried another suggestion, which is to set the DefaultFormats implicit in the class constructor area and not in the closure, but no luck anywhere.
The JVM error trace is from Spark complaining that the task is not serializable and pointing to the line marked above (the last line of my code anyway); the root cause is then explained with:
Serialization stack:
- object not serializable (class: org.json4s.DefaultFormats$, value: org.json4s.DefaultFormats$@7fdd29f3)
- field (class: com.actionml.URAlgorithm, name: formats, type: class org.json4s.DefaultFormats$)
- object (class com.actionml.URAlgorithm, com.actionml.URAlgorithm@2dbfa972)
- field (class: com.actionml.URAlgorithm$$anonfun$udfLLR$1, name: $outer, type: class com.actionml.URAlgorithm)
- object (class com.actionml.URAlgorithm$$anonfun$udfLLR$1, <function3>)
- field (class: org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$4, name: func$4, type: interface scala.Function3)
- object (class org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$4, <function1>)
- field (class: org.apache.spark.sql.catalyst.expressions.ScalaUDF, name: f, type: interface scala.Function1)
- object (class org.apache.spark.sql.catalyst.expressions.ScalaUDF, UDF(input[2, bigint, false], input[3, bigint, false], input[5, bigint, false]))
- element of array (index: 1)
- array (class [Ljava.lang.Object;, size 3)
- field (class: org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10, name: references$1, type: class [Ljava.lang.Object;)
- object (class org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10, <function2>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 128 more
I have another example; you can try it in spark-shell. I hope it helps.
import org.json4s._
import org.json4s.jackson.JsonMethods._
def getValue(x: String): (Int, String) = {
  implicit val formats: DefaultFormats.type = DefaultFormats
  val obj = parse(x).asInstanceOf[JObject]
  val id = (obj \ "id").extract[Int]
  val name = (obj \ "name").extract[String]
  (id, name)
}
val rdd = sc.parallelize(Array("{\"id\":0, \"name\":\"g\"}", "{\"id\":1, \"name\":\"u\"}", "{\"id\":2, \"name\":\"c\"}", "{\"id\":3, \"name\":\"h\"}", "{\"id\":4, \"name\":\"a\"}", "{\"id\":5, \"name\":\"0\"}"))
rdd.map(x => getValue(x)).collect
Interesting. One typical problem is that you run into serialization issues with the implicit val formats, but as you define them inside your closure this should be OK.
I know that this is a bit hacky, but you could try the following:
using @transient implicit val
doing a minimal test of whether JsonAST.JString(t) is serializable
I made a case class to store some of my data. The case class looks like the following:
case class Job(id: Option[Int], title: String, description: Option[String],
               start: Date, end: Option[Date], customerId: Int)
I was using the following formatter to write/read my JSON objects:
implicit val jobFormat = jsonFormat6(Job.apply)
I've got some problems with the write/read because I need to add a field to the JSON (but not to the case class), for example: "test": "test". I tried to write a custom read/write with the following code:
implicit object jobFormat extends RootJsonFormat[Job] {
  override def read(json: JsValue): Job = ???
  override def write(job: Job): JsValue = ???
}
I couldn't get the code working; could somebody help me with this problem?
Thanks in advance!
What jsonFormat6 does is create an autogenerated RootJsonFormat[Job] object for you. You can create your own custom instances by extending RootJsonFormat[Job]. In this case, you need to create a custom instance that decorates the autogenerated one and adds logic to the write method.
The code will look like this:
implicit object JobFormat extends RootJsonFormat[Job] {
  // use the autogenerated jobFormat internally
  val jobFormat = jsonFormat6(Job.apply)

  // leave read as it is
  override def read(json: JsValue): Job =
    jobFormat.read(json)

  // change write to add your custom logic
  override def write(job: Job): JsValue = {
    val json = jobFormat.write(job).asJsObject
    JsObject(json.fields + ("test" -> JsString("test")))
  }
}
PS: I haven't compiled the code; however, the overall implementation will look like this.