I have two different maps in my project and have defined a type alias for each:
type alias1 = Map[String, ScalaObject]
type alias2 = Map[String, String]
I have a match-case situation where I want to differentiate between the two, because a different operation needs to happen for each.
val obj: T = fromJson[T](jacksonMapper, json)
obj match {
case _: alias1 => operation1()
case _: alias2 => operation2()
case _ => obj
}
Any idea how to differentiate the two?
The ADT approach would be to create a sealed trait with a case class for each case: one wrapping a Map[String, ScalaObject] and another wrapping a Map[String, String]. You can then pattern match on the case classes.
So like this:
sealed trait MyType extends Product with Serializable
final case class ObjMap(data: Map[String, ScalaObject]) extends MyType
final case class StrMap(data: Map[String, String]) extends MyType
val obj: MyType = ???
obj match {
case ObjMap(_) => operation1()
case StrMap(_) => operation2()
}
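A runnable end-to-end sketch of the same idea. Note that ScalaObject is stubbed here as a placeholder for whatever object type the question's maps contain, and the operations return strings purely for illustration:

```scala
// Placeholder for the question's object type.
case class ScalaObject(name: String)

sealed trait MyType extends Product with Serializable
final case class ObjMap(data: Map[String, ScalaObject]) extends MyType
final case class StrMap(data: Map[String, String]) extends MyType

// Exhaustive match on the wrapper classes: no erasure problem,
// because ObjMap and StrMap are distinct runtime classes.
def dispatch(obj: MyType): String = obj match {
  case ObjMap(_) => "operation1" // stands in for operation1()
  case StrMap(_) => "operation2" // stands in for operation2()
}

val r1 = dispatch(ObjMap(Map("k" -> ScalaObject("v"))))
val r2 = dispatch(StrMap(Map("k" -> "v")))
```

The key design point is that the type information now lives in the wrapper class itself, which survives to runtime, rather than in the erased type parameters of Map.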
The Typeclass approach may be too complex for this.
The two key concepts for understanding why the two maps cannot be differentiated at runtime are:
type erasure, where the compile-time parameterised type Map[String, String] becomes the runtime class Map
pattern matching, which translates to runtime isInstanceOf and asInstanceOf calls
Consider the following simplified example:
case class Foo[T](i: T)
val fooInt = Foo[Int](42)
val fooStr = Foo[String]("")
fooInt.isInstanceOf[Foo[String]]
// val res0: Boolean = true
Note how isInstanceOf cannot check at runtime what the compile-time type parameter T was, so it cannot differentiate between fooInt and fooStr. Effectively, the best we can do is something like
fooInt.isInstanceOf[Foo[_]]
where the underscore _ serves to communicate the fact of type erasure.
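As a self-contained illustration (Foo is redefined so the snippet runs on its own), matching on the erased class collects both instantiations, regardless of their type parameter:

```scala
case class Foo[T](i: T)

val mixed: List[Any] = List(Foo(42), Foo("hello"), "not a Foo")

// The match can only check the erased class Foo, not the type
// parameter, so both Foo[Int] and Foo[String] are collected.
val foos = mixed.collect { case f: Foo[_] => f }
```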
Next consider how the following pattern match
(fooInt: Any) match { case str: Foo[String] => "oops :(" }
// val res1: String = oops :(
effectively becomes something like
if (fooInt.isInstanceOf[Foo[String]]) "oops :("
which again due to type erasure incorrectly evaluates to "oops :(".
An alternative approach is to try to do as much as possible at compile-time before type arguments are discarded. Typeclasses can be thought of as kind of compile-time pattern matching that happens before type erasure.
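A minimal sketch of that idea, using a typeclass to select the operation at compile time. The names here (Describe, describe) are illustrative, not from the question:

```scala
case class Foo[T](i: T)

// A typeclass whose instances are resolved at compile time,
// while the type parameter is still known.
trait Describe[A] {
  def describe(a: A): String
}

implicit val describeFooInt: Describe[Foo[Int]] =
  (a: Foo[Int]) => s"Foo of Int: ${a.i}"

implicit val describeFooStr: Describe[Foo[String]] =
  (a: Foo[String]) => s"Foo of String: ${a.i}"

def describe[A](a: A)(implicit d: Describe[A]): String = d.describe(a)

// The compiler picks the right instance from the static type,
// so no runtime isInstanceOf is needed.
val r1 = describe(Foo(42))   // uses describeFooInt
val r2 = describe(Foo("hi")) // uses describeFooStr
```

The trade-off is that this dispatch happens on static types: it cannot be used on a value whose static type has already been widened to Any.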
How can I configure parsing options in spray-json, similarly to Jackson's parsing features?
For example, I am parsing JSON that lacks a field my case class has, and it breaks:
spray.json.DeserializationException: Object is missing required member 'myfield'
UPDATE:
A simple example:
case class MyClass(a: String, b: Long)
and try to parse incomplete JSON like
val data = """{"a": "hi"}"""
with a spray-json format like:
jsonFormat2(MyClass.apply)
// ...
data.parseJson.convertTo[MyClass]
(simplified code).
But the question goes further: I want to ask about configuration options like those in other parsers. More examples:
Be able to ignore fields that exist in the JSON but not in the case class.
Ways of managing nulls or nonexistent values.
etc.
Spray-json allows you to define custom formats like so:
case class Foo(a: String, b: Int)
implicit object FooJsonFormat extends RootJsonFormat[Foo] {
override def read(json: JsValue): Foo = {
json.asJsObject.getFields("name", "id") match {
case Seq(JsString(name), id) =>
Foo(name, id.convertTo[Int])
}
}
// Note: obj.toJson here would resolve to this same implicit format and recurse forever
override def write(obj: Foo): JsValue = JsObject("name" -> JsString(obj.a), "id" -> JsNumber(obj.b))
}
This allows you to parse any arbitrary payload and pull out the fields "name" and "id" - other fields are ignored. If those fields are not guaranteed you can add something like:
case Seq(JsString(name), JsNull) =>
Foo(name, 0)
You should look at what's available in JsValue.scala - in particular JsArray may come in handy if you're getting payloads with anonymous arrays (i.e. the root is [{...}] instead of {"field":"value"...})
Spray-json doesn't support default parameters, so you cannot have a case class like
case class MyClass(a: String, b: Int = 0)
and then parse json like {"a":"foo"}
However, if you make the second parameter an Option, then it works.
import spray.json._
case class MyClass(a: String, b: Option[Int] = None)
object MyProtocol extends DefaultJsonProtocol {
implicit val f = jsonFormat2(MyClass)
}
import MyProtocol.f
val mc1 = MyClass("foo", Some(10))
val strJson = mc1.toJson.toString
val strJson2 = """{"a": "foo"}"""
val mc2 = strJson2.parseJson.convertTo[MyClass]
println(mc2)
I have the following model: two enumerations and one case class with two fields of these enum types:
// see later, why objects are implicit
implicit object Fruits extends Enumeration {
val Apple = Value("apple")
val Orange = Value("orange")
}
implicit object Vegetables extends Enumeration {
val Potato = Value("potato")
val Cucumber = Value("cucumber")
val Tomato = Value("tomato")
}
type Fruit = Fruits.Value
type Vegetable = Vegetables.Value
case class Pair(fruit: Fruit, vegetable: Vegetable)
I want to parse/generate JSONs to/from Pairs with spray-json. I don't want to declare separate JsonFormats for fruits and vegetables. So, I'd like to do something like this:
import spray.json._
import spray.json.DefaultJsonProtocol._
// enum is implicit here, that's why we needed implicit objects
implicit def enumFormat[A <: Enumeration](implicit enum: A): RootJsonFormat[enum.Value] =
new RootJsonFormat[enum.Value] {
def read(value: JsValue): enum.Value = value match {
case JsString(s) =>
enum.withName(s)
case x =>
deserializationError("Expected JsString, but got " + x)
}
def write(obj: enum.Value) = JsString(obj.toString)
}
// compilation error: couldn't find implicits for JF[Fruit] and JF[Vegetable]
implicit val pairFormat = jsonFormat2(Pair)
// expected value:
// spray.json.JsValue = {"fruit":"apple","vegetable":"potato"}
// but actually doesn't even compile
Pair(Fruits.Apple, Vegetables.Potato).toJson
Sadly, enumFormat doesn't produce implicit values for jsonFormat2. If I manually write two implicit declarations for the fruit and vegetable formats before pairFormat, then JSON marshalling works:
implicit val fruitFormat: RootJsonFormat[Fruit] = enumFormat(Fruits)
implicit val vegetableFormat: RootJsonFormat[Vegetable] = enumFormat(Vegetables)
implicit val pairFormat = jsonFormat2(Pair)
// {"fruit":"apple","vegetable":"potato"}, as expected
Pair(Fruits.Apple, Vegetables.Potato).toJson
So, two questions:
How to get rid of these fruitFormat and vegetableFormat declarations?
Ideally it would be great to not make enumeration objects implicit, while keeping enumFormat function generic. Is there a way to achieve this? Maybe, using scala.reflect package or something like that.
You just have to replace enum.Value with A#Value.
Looking at spray-json issue #200, you can find an example of a well-defined implicit enumFormat, slightly modified here to retrieve the enu instance implicitly:
implicit def enumFormat[T <: Enumeration](implicit enu: T): RootJsonFormat[T#Value] =
new RootJsonFormat[T#Value] {
def write(obj: T#Value): JsValue = JsString(obj.toString)
def read(json: JsValue): T#Value = {
json match {
case JsString(txt) => enu.withName(txt)
case somethingElse => throw DeserializationException(s"Expected a value from enum $enu instead of $somethingElse")
}
}
}
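The read side of this format relies on Enumeration's built-in name round-trip, which a plain-Scala sketch (independent of spray-json) can illustrate:

```scala
object Fruits extends Enumeration {
  val Apple = Value("apple")
  val Orange = Value("orange")
}

// Value("apple") sets the value's name, so toString and withName
// form the round-trip a string-based JSON format depends on.
val name = Fruits.Apple.toString
val back = Fruits.withName(name)
```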
You can't. Scala doesn't allow implicit chaining, because it would lead to a combinatorial explosion that would make the compiler too slow.
See https://docs.scala-lang.org/tutorials/FAQ/chaining-implicits.html
Scala does not allow two such implicit conversions taking place, however, so one cannot go from A to C using an implicit A to B and another implicit B to C.
You'll have to explicitly produce a JsonFormat[T] for each T you want to use.
I'm trying to wrap my head around Circe.
So, here's the model I've been given:
object Gender extends Enumeration {
type Gender = Value
val Male, Female, Unisex, Unknown = Value
}
case class Product(id: String, gender: Gender.Value)
I want to
a) encode the following example as JSON
val product = Product(id = "1234", gender = Gender.Female)
b) map the resulting JSON back onto the Product case class.
My own attempt didn't get me far:
object JsonProtocol {
implicit val productDecoder: Decoder[Product] = deriveDecoder
implicit val productEncoder: Encoder[Product] = deriveEncoder
}
The result is a compile-time error:
Error:(52, 49) could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[A]
implicit val productDecoder: Decoder[Product] = deriveDecoder
^
I've no idea why this error occurs or what the solution could look like. Maybe it's the usage of the Enumeration type?
Try defining your own encoders and decoders for the enum using:
Decoder.enumDecoder[E <: Enumeration](enum: E)
Encoder.enumEncoder[E <: Enumeration](enum: E)
something like:
object JsonProtocol {
implicit val genderDecoder: Decoder[Gender.Value] = Decoder.enumDecoder(Gender)
implicit val genderEncoder: Encoder[Gender.Value] = Encoder.enumEncoder(Gender)
implicit val productDecoder: Decoder[Product] = deriveDecoder
implicit val productEncoder: Encoder[Product] = deriveEncoder
}
These are needed because the automatic/semiautomatic derivers only work for hierarchies of sealed traits and case classes, as far as I know. The reason you see that error is that the derived codecs for Product implicitly require encoders/decoders for the types of each of its parameters. An encoder/decoder for String is a standard part of circe, but you'll need to create ones for your own enumerations.
The accepted answer uses functions that are deprecated as of circe 0.12.0. Circe now provides these replacements:
Decoder.decodeEnumeration[E <: Enumeration](enum: E)
Encoder.encodeEnumeration[E <: Enumeration](enum: E)
With the example:
implicit val genderDecoder: Decoder[Gender.Value] = Decoder.decodeEnumeration(Gender)
implicit val genderEncoder: Encoder[Gender.Value] = Encoder.encodeEnumeration(Gender)
Have a look at enumeratum if you want to use enumerations with circe. You could then try something like this:
import enumeratum._
sealed trait Gender extends EnumEntry
case object Gender extends CirceEnum[Gender] with Enum[Gender] {
case object Male extends Gender
case object Female extends Gender
case object Unisex extends Gender
case object Unknown extends Gender
val values = findValues
}
Gender.values.foreach { gender =>
assert(gender.asJson == Json.fromString(gender.entryName))
}
This should work with circe's automatic derivation for use with your case classes.
For Scala 3, there is currently no solution within circe itself.
However, there is a library that works nicely: circe-tagged-adt-codec.
Here is an example that works for me (the rest I do with semiautomatic derivation from circe):
enum TestOverrideType derives JsonTaggedAdt.PureEncoder:
case Exists, NotExists, IsEquals, HasSize
I have case classes that only contain Strings or collections of Strings and want to convert them to JSON objects with corresponding field names plus an additional field to denote the type.
sealed trait Item
case class ProductX(a: String, b: String) extends Item
case class ProductY(a: String, b: String) extends Item
case class CollX(els: List[String]) extends Item
case class CollY(els: List[String]) extends Item
Most libs have special support for case classes, but there never seems to be a way to use it in conjunction with an additional type field to disambiguate the isomorphic cases, so I have to fall back to more low-level descriptions.
So far I've tried spray-json and argonaut and I wind up with way more boilerplate than my simple usage scenario would justify:
// spray-json
implicit object CollXJsonFormat extends RootJsonFormat[CollX] {
def write(ss: CollX) = JsObject(
"type" -> JsString("CollX"),
"els" -> JsArray(ss.els.map(JsString(_)): _*)
)
def read(value: JsValue) = {
value.asJsObject.getFields("type", "els") match {
case Seq(JsString("CollX"), JsArray(els)) =>
CollX(els.toList.map({
case JsString(s) => s
case _ => throw new DeserializationException("JsString expected")
}))
case _ => throw new DeserializationException("CollX expected")
}
}
}
For argonaut I couldn't even figure out how to match the type field since DecodeResult has no filter method:
// argonaut
implicit def CollXCodec: CodecJson[CollX] =
CodecJson(
(px: CollX) =>
("type" := "CollX") ->:
("els" := px.els) ->:
jEmptyObject,
c => for {
t <- (c --\ "type").as[String]
els <- (c --\ "els").as[List[String]]
} yield CollX(els))
Is there another lib that can handle this better or is there some feature in one of these libs I've overlooked that would significantly reduce the boilerplate?
I use net.liftweb with very good results.
You can create JSON this way.
For example if you have:
case class ABjsontemplate(a: String, b: String)
You can create the json like this:
net.liftweb.json.Serialization.write(ABjsontemplate("a", "b"))
in SBT:
libraryDependencies += "net.liftweb" %% "lift-json" % "3.0-M1"
I hope it can help.
Is there any way of pattern matching objects that may be Set[Foo] or Set[Bar], when the matched value is typed as any Object?
Given the below code, trying to pattern match on Set[Bar] will result in a match of Set[Foo] because of type erasure.
import play.api.libs.json._
import scala.collection.immutable.HashMap
case class Foo(valOne: Int, valTwo: Double)
object Foo {
implicit val writesFoo = Json.writes[Foo]
}
case class Bar(valOne: String)
object Bar {
implicit val writesBar = Json.writes[Bar]
}
case class TestRequest(params: Map[String, Object])
object TestRequest {
import play.api.libs.json.Json.JsValueWrapper
implicit val writeAnyMapFormat = new Writes[Map[String, Object]] {
def writes(map: Map[String, Object]): JsValue = {
Json.obj(map.map {
case (s, a) => {
val ret: (String, JsValueWrapper) = a match {
case _: String => s -> JsString(a.asInstanceOf[String])
case d: java.util.Date => s -> JsString(d.toString)
case _: Integer => s -> JsString(a.toString)
case _: java.lang.Double => s -> JsString(a.toString)
case None => s -> JsNull
case foo: Set[Foo] => s -> Json.toJson(a.asInstanceOf[Set[Foo]])
case bar: Set[Bar] => s -> Json.toJson(a.asInstanceOf[Set[Bar]])
case str: Set[String] => s -> Json.toJson(a.asInstanceOf[Set[String]])
}
ret
}}.toSeq: _*)
}
}
implicit val writesTestRequest = Json.writes[TestRequest]
}
object MakeTestRequest extends App {
val params = HashMap[String, Object]("name" -> "NAME", "fooSet" -> Set(Foo(1, 2.0)), "barSet" -> Set(Bar("val1")))
val testRequest = new TestRequest(params)
println(Json.toJson(testRequest))
}
Trying to serialise the TestRequest will result in:
Exception in thread "main" java.lang.ClassCastException: Bar cannot be cast to Foo
Delegating the pattern matching of Sets to another method in an attempt to get the TypeTag,
case _ => s -> matchSet(a)
results in the type, unsurprisingly, of Object.
def matchSet[A: TypeTag](set: A): JsValue = typeOf[A] match {
case fooSet: Set[Foo] if typeOf[A] =:= typeOf[Foo] => Json.toJson(set.asInstanceOf[Set[Foo]])
case barSet: Set[Bar] if typeOf[A] =:= typeOf[Bar] => Json.toJson(set.asInstanceOf[Set[Bar]])
}
The runtime error being:
Exception in thread "main" scala.MatchError: java.lang.Object (of class scala.reflect.internal.Types$ClassNoArgsTypeRef)
A workaround could be to check the instance of the first element in the Set but this seems inefficient and ugly. Could also match on the key eg fooSet or barSet but if the keys are the same name eg both called set, then this wouldn't work.
In Scala 2.11, is there any way to get at the type/class the Set was created with?
You could use shapeless's Typeable. Note that this is still not 100% safe (e.g. empty sets of different types cannot be distinguished at runtime, because that information literally doesn't exist); under the hood it's doing things like checking the classes of the elements using reflection, just with a nicer interface on top.
In general it's better to carry the type information around explicitly, e.g. by using a case class (or a shapeless HMap) rather than an untyped Map. Less good, but still better than nothing, is using wrapper case classes for the different kinds of Set that are possible, so that each one is a distinct class at runtime.
(Also, half the point of the pattern match is to avoid the asInstanceOf; you should use e.g. foo rather than a.asInstanceOf[Set[Foo]].)
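A plain-Scala sketch of the wrapper suggestion (FooSet and BarSet are illustrative names, not from the question). Each wrapper is its own runtime class, so the match no longer depends on the erased element type:

```scala
case class Foo(valOne: Int, valTwo: Double)
case class Bar(valOne: String)

// One wrapper per Set type: the wrapper class itself survives erasure.
final case class FooSet(els: Set[Foo])
final case class BarSet(els: Set[Bar])

// Matching on the wrapper classes is erasure-safe and needs no casts.
def describe(a: Any): String = a match {
  case FooSet(els) => s"foos: ${els.size}"
  case BarSet(els) => s"bars: ${els.size}"
  case other       => s"other: $other"
}

val r1 = describe(FooSet(Set(Foo(1, 2.0))))
val r2 = describe(BarSet(Set(Bar("val1"))))
```

In the question's setting, the Map would then be typed as Map[String, MyWrapper] (or the values wrapped before insertion), trading a little construction boilerplate for a safe match.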