Encoding Scala None to JSON value using circe

Suppose I have the following case classes that need to be serialized as JSON objects using circe:
@JsonCodec
case class A(a1: String, a2: Option[String])

@JsonCodec
case class B(b1: Option[A], b2: Option[A], b3: Int)

Now I need to encode val b = B(None, Some(A("a", Some("aa"))), 5) as JSON, but I want to be able to control whether it is output as
{
  "b1": null,
  "b2": {
    "a1": "a",
    "a2": "aa"
  },
  "b3": 5
}
or
{
  "b2": {
    "a1": "a",
    "a2": "aa"
  },
  "b3": 5
}
Using Printer's dropNullKeys config, e.g. Printer.noSpaces.copy(dropNullKeys = true).pretty(b.asJson), would omit None fields from the output, whereas setting it to false would encode them as null (see also this question). But how can one control this setting on a per-field basis?

The best way to do this is probably just to add a post-processing step to a semi-automatically derived encoder for B:
import io.circe.{ Decoder, JsonObject, ObjectEncoder }
import io.circe.generic.JsonCodec
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }

@JsonCodec
case class A(a1: String, a2: Option[String])

case class B(b1: Option[A], b2: Option[A], b3: Int)

object B {
  implicit val decodeB: Decoder[B] = deriveDecoder[B]
  implicit val encodeB: ObjectEncoder[B] = deriveEncoder[B].mapJsonObject(
    _.filter {
      case ("b1", value) => !value.isNull
      case _ => true
    }
  )
}
And then:
scala> import io.circe.syntax._
import io.circe.syntax._
scala> B(None, None, 1).asJson.noSpaces
res0: String = {"b2":null,"b3":1}
You can adjust the argument to the filter to remove whichever null-valued fields you want from the JSON object (here I'm just removing b1 in B).
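If you want to do this for several fields, here is a small sketch of a reusable helper (the dropNullFields name is mine, not a circe API); the encoder in object B above could then be written with it:
import io.circe.ObjectEncoder
import io.circe.generic.semiauto.deriveEncoder

// Illustrative helper (not part of circe): drop null values only for the listed field names.
def dropNullFields[A](fields: Set[String])(encoder: ObjectEncoder[A]): ObjectEncoder[A] =
  encoder.mapJsonObject(_.filter {
    case (key, value) => !(fields.contains(key) && value.isNull)
  })

implicit val encodeB: ObjectEncoder[B] = dropNullFields(Set("b1", "b2"))(deriveEncoder[B])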
It's worth noting that currently you can't combine the @JsonCodec annotation and an explicitly defined instance in the companion object. This isn't an inherent limitation of the annotation: we could check the companion object for "overriding" instances during the macro expansion, but doing so would make the implementation substantially more complicated (right now it's quite simple). The workaround is pretty simple (just use deriveDecoder explicitly), but of course we'd be happy to consider an issue requesting support for mixing and matching @JsonCodec and explicit instances.

Circe has since added a dropNullValues method on Json that does what Travis Brown describes above:
def dropNulls[A](encoder: Encoder[A]): Encoder[A] =
  encoder.mapJson(_.dropNullValues)

implicit val entityEncoder: Encoder[Entity] = dropNulls(deriveEncoder)
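As a quick sketch of the effect (Entity here is a stand-in case class with an optional field, and entityEncoder is the encoder defined just above), every None field is omitted from the output:
import io.circe.syntax._

case class Entity(name: String, nickname: Option[String])

Entity("foo", None).asJson.noSpaces
// {"name":"foo"}   (no "nickname": null)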

Related

Parsing JSON with Scala play framework - handling variable array elements?

I'm trying to parse a json string of search results of the following form:
"""
{
"metadata": [
"blah",
"unimportant"
],
"result": [
{
"type": "one",
"title": "this",
"weird": "attribute unique to this type",
"other": 7
},
{
"type": "two",
"title": "that",
"another_weird": "a different attribute unique to this second type",
"another": "you get the idea"
},
{
"type": "one",
"title": "back to this type, which has the same fields as the other element of this type",
"weird": "yep",
"other": 8
}
...
]
}
"""
There is a known, fixed number of result element types (given by the type field), each of which correspond to a unique schema. For a given search request, there can be any number of each type returned in any order (up to some fixed total).
Writing out the case classes and Reads implicits for each type is easy enough, but my question is around the best way to handle the variability in the result sequence... pattern matching seems the obvious choice, but I'm just not sure where or how with the play framework. Thanks in advance!
EDIT: Per the suggestion in the comments, I gave it a go by attempting to read the result sequence as subtypes of a common base trait, but that didn't quite work. The following compiles, but doesn't parse the example json correctly. Any additional suggestions are welcome.
sealed trait Parent { def title: String }

case class One(`type`: String, title: String, weird: String, other: Int) extends Parent
case class Two(`type`: String, title: String, another_weird: String, another: String) extends Parent
case class SearchResponse(result: Seq[Parent], metadata: Seq[String])

implicit val oneReader = Json.reads[One]
implicit val twoReader = Json.reads[Two]
implicit val parentReader = Json.reads[Parent]
implicit val searchReader = (
  (__ \ "result").read[Seq[Parent]] and (__ \ "metadata").read[Seq[String]]
)(SearchResponse.apply _)

val searchResult = Json.fromJson[SearchResponse](json)
print(searchResult)
Define an implicit JsonConfiguration with a discriminator:
import play.api.libs.json._

sealed trait Parent {
  def title: String
}

case class One(title: String, weird: String, other: Int) extends Parent
case class Two(title: String, another_weird: String, another: String) extends Parent
case class SearchResponse(result: Seq[Parent], metadata: Seq[String])

implicit val cfg = JsonConfiguration(
  discriminator = "type",
  typeNaming = JsonNaming(_.toLowerCase)
)

implicit val oneReader = Json.reads[One]
implicit val twoReader = Json.reads[Two]
implicit val parentReader = Json.reads[Parent]
implicit val searchReader = Json.reads[SearchResponse]

val searchResult = Json.fromJson[SearchResponse](json)
println(searchResult)
// JsSuccess(SearchResponse(List(One(this,attribute unique to this type,7), Two(that,a different attribute unique to this second type,you get the idea), One(back to this type, which has the same fields as the other element of this type,yep,8)),List(blah, unimportant)),)
https://www.playframework.com/documentation/2.8.x/ScalaJsonAutomated#Custom-Naming-Strategies
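A hedged aside, not from the original answer: with the same implicit JsonConfiguration in scope, the macro can derive writers for the sealed family as well (assuming the play-json version from the linked docs), so the "type" discriminator is written back on output and the JSON round-trips:
implicit val oneWrites = Json.writes[One]
implicit val twoWrites = Json.writes[Two]
implicit val parentWrites = Json.writes[Parent]
implicit val searchWrites = Json.writes[SearchResponse]

// Re-serializing the parsed value restores the "type" field on each result element.
println(Json.toJson(searchResult.get))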

Capturing unused fields while decoding a JSON object with circe

Suppose I have a case class like the following, and I want to decode a JSON object into it, with all of the fields that haven't been used ending up in a special member for the leftovers:
import io.circe.Json
case class Foo(a: Int, b: String, leftovers: Json)
What's the best way to do this in Scala with circe?
(Note: I've seen questions like this a few times, so I'm Q-and-A-ing it for posterity.)
There are a couple of ways you could go about this. One fairly straightforward way would be to filter out the keys you've used after decoding:
import io.circe.{ Decoder, Json, JsonObject }

implicit val decodeFoo: Decoder[Foo] =
  Decoder.forProduct2[Int, String, (Int, String)]("a", "b")((_, _)).product(
    Decoder[JsonObject]
  ).map {
    case ((a, b), all) =>
      Foo(a, b, Json.fromJsonObject(all.remove("a").remove("b")))
  }
Which works as you'd expect:
scala> val doc = """{ "something": false, "a": 1, "b": "abc", "0": 0 }"""
doc: String = { "something": false, "a": 1, "b": "abc", "0": 0 }
scala> io.circe.jawn.decode[Foo](doc)
res0: Either[io.circe.Error,Foo] =
Right(Foo(1,abc,{
"something" : false,
"0" : 0
}))
The disadvantage of this approach is that you have to maintain code to remove the keys you've used separately from their use, which can be error-prone. Another approach is to use circe's state-monad-powered decoding tools:
import cats.data.StateT
import cats.instances.either._
import io.circe.{ ACursor, Decoder, Json }

implicit val decodeFoo: Decoder[Foo] = Decoder.fromState(
  for {
    a <- Decoder.state.decodeField[Int]("a")
    b <- Decoder.state.decodeField[String]("b")
    rest <- StateT.inspectF((_: ACursor).as[Json])
  } yield Foo(a, b, rest)
)
Which works the same way as the previous decoder (apart from some small differences in the errors you'll get if decoding fails):
scala> io.circe.jawn.decode[Foo](doc)
res1: Either[io.circe.Error,Foo] =
Right(Foo(1,abc,{
"something" : false,
"0" : 0
}))
This latter approach doesn't require you to change the used fields in multiple places, and it also has the advantage of looking a little more like any other decoder you'd write manually in circe.
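If you also need to write Foo back out, here is a hedged sketch of a matching encoder (my own addition, assuming leftovers always holds a JSON object) that merges the explicit fields back on top of the leftovers with deepMerge:
import io.circe.{ Encoder, Json }
import io.circe.syntax._

implicit val encodeFoo: Encoder[Foo] = Encoder.instance { foo =>
  // deepMerge gives precedence to the argument, so "a" and "b" override any leftover keys.
  foo.leftovers.deepMerge(Json.obj("a" -> foo.a.asJson, "b" -> foo.b.asJson))
}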

Rename JSON fields with circe

I want the fields in my case classes to have different names from the ones in my JSON, so I need a convenient way of renaming them during both encoding and decoding.
Does anyone have a good solution?
You can use Custom key mappings via annotations. The most generic way is the JsonKey annotation from io.circe.generic.extras._. Example from the docs:
import io.circe.generic.extras._, io.circe.syntax._

implicit val config: Configuration = Configuration.default

@ConfiguredJsonCodec case class Bar(@JsonKey("my-int") i: Int, s: String)

Bar(13, "Qux").asJson
// res5: io.circe.Json = JObject(object[my-int -> 13,s -> "Qux"])
This requires the package circe-generic-extras.
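The same annotation also drives decoding, so the renamed key round-trips (a quick sketch):
import io.circe.parser.decode

decode[Bar]("""{ "my-int": 13, "s": "Qux" }""")
// Right(Bar(13,Qux))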
Here's a code sample for a Decoder (a bit verbose, since it doesn't remove the old field):
val pimpedDecoder = deriveDecoder[PimpClass].prepare {
  _.withFocus {
    _.mapObject { x =>
      val value = x("old-field")
      value.map(x.add("new-field", _)).getOrElse(x)
    }
  }
}

implicit val decodeFieldType: Decoder[FieldType] =
  Decoder.forProduct5("nth", "isVLEncoded", "isSerialized", "isSigningField", "type")(FieldType.apply)
This is a simple way if you have lots of different field names.
https://circe.github.io/circe/codecs/custom-codecs.html
You can use the mapJson function on Encoder to derive an encoder from the generic one and remap your field name.
And you can use the prepare function on Decoder to transform the JSON passed to a generic Decoder.
You could also write both from scratch, but that may be a ton of boilerplate; these solutions should each be a handful of lines at most.
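A rough sketch of both directions (Item and the field names here are made up for illustration):
import io.circe.{ Decoder, Encoder }
import io.circe.generic.semiauto.{ deriveDecoder, deriveEncoder }

case class Item(oldName: String)

// Encoding: derive, then rename "oldName" to "new-name" in the produced JSON.
implicit val encodeItem: Encoder[Item] = deriveEncoder[Item].mapJson(
  _.mapObject { obj =>
    obj("oldName").map(v => obj.remove("oldName").add("new-name", v)).getOrElse(obj)
  }
)

// Decoding: copy "new-name" back to "oldName" before the derived decoder runs.
implicit val decodeItem: Decoder[Item] = deriveDecoder[Item].prepare(
  _.withFocus(_.mapObject { obj =>
    obj("new-name").map(obj.add("oldName", _)).getOrElse(obj)
  })
)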
The following function can be used to rename a field in a circe Json value:
import io.circe._

object CirceUtil {
  def renameField(json: Json, fieldToRename: String, newName: String): Json =
    (for {
      value <- json.hcursor.downField(fieldToRename).focus
      newJson <- json.mapObject(_.add(newName, value)).hcursor.downField(fieldToRename).delete.top
    } yield newJson).getOrElse(json)
}
You can use it in an Encoder like so:
implicit val circeEncoder: Encoder[YourCaseClass] = deriveEncoder[YourCaseClass].mapJson(
  CirceUtil.renameField(_, "old_field_name", "new_field_name")
)
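A sketch of the decoding direction (my addition, renaming back before a derived decoder runs):
import io.circe.Decoder
import io.circe.generic.semiauto.deriveDecoder

implicit val circeDecoder: Decoder[YourCaseClass] = deriveDecoder[YourCaseClass].prepare(
  _.withFocus(CirceUtil.renameField(_, "new_field_name", "old_field_name"))
)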
Extra
Unit tests
import io.circe.parser._
import org.specs2.mutable.Specification

class CirceUtilSpec extends Specification {
  "CirceUtil" should {
    "renameField" should {
      "correctly rename field" in {
        val json = parse("""{ "oldFieldName": 1 }""").toOption.get
        val resultJson = CirceUtil.renameField(json, "oldFieldName", "newFieldName")
        resultJson.hcursor.downField("oldFieldName").focus must beNone
        resultJson.hcursor.downField("newFieldName").focus must beSome
      }
      "return unchanged json if field is not found" in {
        val json = parse("""{ "oldFieldName": 1 }""").toOption.get
        val resultJson = CirceUtil.renameField(json, "nonExistentField", "newFieldName")
        resultJson must be equalTo json
      }
    }
  }
}

Parse a json array of object to their appropriate case class

I have a json array of settings like so:
[
  {
    "name": "Company Name",
    "key": "company_name",
    "default": "Foo"
  }, {
    "name": "Deposit Weeks",
    "key": "deposit_weeks",
    "default": 6
  }, {
    "name": "Is VAT registered",
    "key": "vat_registered",
    "default": false
  }
]
I want to parse this into a Seq of Setting objects. I have tried to define my json format by using a trait and defining the different case classes according to the data type in the json object:
sealed trait Setting
case class StringSetting(name: String, key: String, default: String) extends Setting
case class IntSetting(name: String, key: String, default: Int) extends Setting
case class BoolSetting(name: String, key: String, default: Boolean) extends Setting
Now I try to parse the json:
val json = Json.parse(jsonStr)
implicit val jsonFormat: Format[Setting] = Json.format[Setting]
val result = Try(json.as[Seq[Setting]])
Here I get a compile error:
Error:(19, 61) No unapply or unapplySeq function found
implicit val jsonFormat: Format[Setting] = Json.format[Setting]
Is there a way to map each setting to its appropriate case class?
The naive approach would be to provide a Reads[Setting] (if your aim is just to convert JSON to objects) so that the JSON deserializer is able to build the right variant of Setting.
import play.api.libs.json._
import play.api.libs.functional.syntax._

implicit val settingReads: Reads[Setting] =
  Json.reads[StringSetting].map(s => s: Setting) orElse
  Json.reads[IntSetting].map(s => s: Setting) orElse
  Json.reads[BoolSetting].map(s => s: Setting)
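A quick usage sketch with the settings array from the question (jsonStr is the raw JSON string above):
val settings = Json.parse(jsonStr).as[Seq[Setting]]
// List(StringSetting(Company Name,company_name,Foo), IntSetting(Deposit Weeks,deposit_weeks,6), BoolSetting(Is VAT registered,vat_registered,false))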
However, this would not work if two subclasses used the same type for 'default'; in that case the JSON deserializer would be unable to distinguish between them.
Another approach is to use the play-json-variants library.
import julienrf.variants.Variants

sealed trait Setting
case class StringSetting(name: String, key: String, default: String) extends Setting
case class IntSetting(name: String, key: String, default: Int) extends Setting
case class BoolSetting(name: String, key: String, default: Boolean) extends Setting

object Setting {
  implicit val format: Format[Setting] = Variants.format[Setting]
}
Variants.format provides both Reads and Writes for Setting. Make sure that the assignment of 'implicit val format' happens after all possible subclasses have been declared.
For more information, see the play-json-variants library documentation.

Argonaut: Generic method to encode/decode array of objects

I am trying to implement a generic pattern with which to generate marshallers and unmarshallers for an Akka HTTP REST service using Argonaut, handling both entity and collection level requests and responses. I have no issues in implementing the entity level as such:
case class Foo(foo: String)

object Foo {
  implicit val FooJsonCodec = CodecJson.derive[Foo]
  implicit val EntityEncodeJson = FooJsonCodec.Encoder
  implicit val EntityDecodeJson = FooJsonCodec.Decoder
}
I am running into issues attempting to provide encoders and decoders for the following:
[
  { "foo": "1" },
  { "foo": "2" }
]
I have attempted adding the following to my companion:
object Foo {
  implicit val FooCollectionJsonCodec = CodecJson.derive[HashSet[Foo]]
}
However, I am receiving the following error:
Error:(33, 90) value jencode0L is not a member of object argonaut.EncodeJson
I see this method really does not exist, but is there any other generic way to generate my expected result? I'm strongly trying to avoid an additional case class to describe the collection, since I am using reflection heavily in my use case.
At this point I'd even be fine with a manually constructed Encoder and Decoder; however, I've found no documentation on how to construct them with the expected structure.
Argonaut has predefined encoders and decoders for Scala's immutable lists, sets, streams and vectors. If your type is not supported explicitly, as in the case of java.util.HashSet, you can easily add EncodeJson and DecodeJson instances for it:
import argonaut._, Argonaut._
import scala.collection.JavaConverters._

implicit def hashSetEncode[A](
  implicit element: EncodeJson[A]
): EncodeJson[java.util.HashSet[A]] =
  EncodeJson(set => EncodeJson.SetEncodeJson[A].apply(set.asScala.toSet))

implicit def hashSetDecode[A](
  implicit element: DecodeJson[A]
): DecodeJson[java.util.HashSet[A]] =
  DecodeJson(cursor => DecodeJson.SetDecodeJson[A]
    .apply(cursor)
    .map(set => new java.util.HashSet(set.asJava)))

// Usage:
val set = new java.util.HashSet[Int]
set.add(1)
set.add(3)

val jsonSet = set.asJson                 // [1, 3]
jsonSet.jdecode[java.util.HashSet[Int]]  // DecodeResult(Right([1, 3]))

case class A(set: java.util.HashSet[Int])
implicit val codec = CodecJson.derive[A]

val a = A(set)
val jsonA = a.asJson                     // { "set": [1, 3] }
jsonA.jdecode[A]                         // DecodeResult(Right(A([1, 3])))
The sample was checked on Scala 2.12.1 and Argonaut 6.2-RC2, but as far as I know it shouldn't depend on any recent changes.
An approach like this works with any linear or unordered homogeneous data structure that you want to represent as a JSON array. It is also preferable to creating a CodecJson: the latter can be inferred automatically from EncodeJson and DecodeJson, but not vice versa. This way, your set will serialize and deserialize both when used independently and when nested inside another data type, as shown in the example.
I don't use Argonaut (I use spray-json), but I suspect the solution can be similar.
Have you tried something like this?
implicit def HashSetJsonCodec[T : CodecJson] = CodecJson.derive[Set[T]]
If that doesn't work, I'd probably try creating a more verbose implicit function like:
implicit def SetJsonCodec[T](implicit codec: CodecJson[T]): CodecJson[Set[T]] =
  CodecJson(
    set => jArray(set.map(codec.encode).toList),
    c => c.as[List[Json]].flatMap { arr =>
      val items = arr.map(codec.Decoder.decodeJson)
      items.find(_.isError) match {
        case Some(error) => DecodeResult.fail[Set[T]](error.toString(), c.history)
        case None => DecodeResult.ok[Set[T]](items.flatMap(_.toOption).toSet)
      }
    }
  )
PS. I didn't test this, but hopefully it points you in the right direction :)