Rename JSON fields with circe

I want to use different field names in my case classes and in my JSON, so I need a convenient way of renaming during both encoding and decoding.
Does anyone have a good solution?

You can use custom key mappings via annotations. The most generic way is the @JsonKey annotation from io.circe.generic.extras._. Example from the docs:
import io.circe.generic.extras._, io.circe.syntax._
implicit val config: Configuration = Configuration.default
@ConfiguredJsonCodec case class Bar(@JsonKey("my-int") i: Int, s: String)
Bar(13, "Qux").asJson
// res5: io.circe.Json = JObject(object[my-int -> 13,s -> "Qux"])
This requires the package circe-generic-extras.
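A typical sbt entry is sketched below; the version is only an example (check for the latest release), and on Scala 2.13 the @ConfiguredJsonCodec macro annotation additionally needs macro annotations enabled:
// build.sbt -- example coordinates, use the latest circe-generic-extras release
libraryDependencies += "io.circe" %% "circe-generic-extras" % "0.14.3"
// Scala 2.13: enable macro annotations for @ConfiguredJsonCodec
scalacOptions += "-Ymacro-annotations"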

Here's a code sample for a Decoder (a bit verbose since it won't remove the old field):
val pimpedDecoder = deriveDecoder[PimpClass].prepare {
  _.withFocus {
    _.mapObject { x =>
      val value = x("old-field")
      value.map(x.add("new-field", _)).getOrElse(x)
    }
  }
}

implicit val decodeFieldType: Decoder[FieldType] =
  Decoder.forProduct5("nth", "isVLEncoded", "isSerialized", "isSigningField", "type")(FieldType.apply)
This is a simple way if you have lots of different field names.
https://circe.github.io/circe/codecs/custom-codecs.html
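For completeness, encoding can mirror this with Encoder.forProduct5 (a sketch; it assumes FieldType has fields named after the same keys, with the type field written as a backtick-escaped name):
implicit val encodeFieldType: Encoder[FieldType] =
  Encoder.forProduct5("nth", "isVLEncoded", "isSerialized", "isSigningField", "type") {
    (ft: FieldType) => (ft.nth, ft.isVLEncoded, ft.isSerialized, ft.isSigningField, ft.`type`)
  }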

You can use the mapJson function on Encoder to derive an encoder from the generic one and remap your field name.
And you can use the prepare function on Decoder to transform the JSON passed to a generic Decoder.
You could also write both from scratch, but that would be a lot of boilerplate; each of these solutions should be a handful of lines at most, as in the sketch below.
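A minimal sketch of both directions (the case class Item and the key "new-name" are hypothetical): the encoder moves myField to "new-name" after the derived encoder runs, and the decoder copies "new-name" back before the derived decoder runs.
import io.circe._, io.circe.generic.semiauto._

// Hypothetical case class: the Scala field is myField, the JSON key should be "new-name"
case class Item(myField: String)

implicit val itemEncoder: Encoder[Item] =
  deriveEncoder[Item].mapJson(_.mapObject { obj =>
    obj("myField").map(v => obj.remove("myField").add("new-name", v)).getOrElse(obj)
  })

implicit val itemDecoder: Decoder[Item] =
  deriveDecoder[Item].prepare(_.withFocus(_.mapObject { obj =>
    obj("new-name").map(v => obj.add("myField", v)).getOrElse(obj)
  }))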

The following function can be used to rename a field in a circe Json value:
import io.circe._
object CirceUtil {
  def renameField(json: Json, fieldToRename: String, newName: String): Json =
    (for {
      value   <- json.hcursor.downField(fieldToRename).focus
      newJson <- json.mapObject(_.add(newName, value)).hcursor.downField(fieldToRename).delete.top
    } yield newJson).getOrElse(json)
}
You can use it in an Encoder like so:
implicit val circeEncoder: Encoder[YourCaseClass] = deriveEncoder[YourCaseClass].mapJson(
  CirceUtil.renameField(_, "old_field_name", "new_field_name")
)
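The decoding direction can reuse the same helper, renaming in the opposite direction before the derived decoder runs (a sketch):
implicit val circeDecoder: Decoder[YourCaseClass] = deriveDecoder[YourCaseClass].prepare(
  _.withFocus(CirceUtil.renameField(_, "new_field_name", "old_field_name"))
)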
Extra
Unit tests
import io.circe.parser._
import org.specs2.mutable.Specification
class CirceUtilSpec extends Specification {
  "CirceUtil" should {
    "renameField" should {
      "correctly rename field" in {
        val json = parse("""{ "oldFieldName": 1 }""").toOption.get
        val resultJson = CirceUtil.renameField(json, "oldFieldName", "newFieldName")
        resultJson.hcursor.downField("oldFieldName").focus must beNone
        resultJson.hcursor.downField("newFieldName").focus must beSome
      }
      "return unchanged json if field is not found" in {
        val json = parse("""{ "oldFieldName": 1 }""").toOption.get
        val resultJson = CirceUtil.renameField(json, "nonExistentField", "newFieldName")
        resultJson must be equalTo json
      }
    }
  }
}

Related

How to know which field is missing while parsing JSON in Json4s

I have made a generic method which parses JSON into a case class, and it works fine. But if I try to parse a big JSON that has one or two mandatory fields, I am not able to figure out which particular mandatory field is missing; I am only able to handle it with IllegalArgumentException. Is there a way to know which field is missing while parsing JSON using json4s?
Here is my code ->
object JsonHelper {
  implicit val formats: DefaultFormats = DefaultFormats
  def write[T <: AnyRef](value: T): String = jWrite(value)
  def parse(value: String): JValue = jParser(value)
}
And this is the method I am using to parse the JSON and handle the failure case:
def parseJson[M](json: String)(implicit m: Manifest[M]): Either[ErrorResponse, M] = {
  try
    Right(JsonHelper.parse(json).extract[M])
  catch {
    case NonFatal(th) =>
      th.getCause.getCause match {
        case e: java.lang.IllegalArgumentException =>
          error(s"Invalid JSON - $json", e)
          Left(handle(exception = EmptyFieldException(e.getMessage.split(":").last)))
        case _ =>
          error(s"Invalid JSON - $json", th)
          Left(handle(exception = new IllegalArgumentException("Invalid Json", th)))
      }
  }
}
For example, with this JSON:
{
  "name": "Json"
}
And this case class:
case class(name: String, profession: String)
If I try to parse the above JSON into the case class, I currently get Invalid JSON - IllegalArgumentException. But is there a way for the exception to tell which field is missing, like "profession" in the example above?
Is there a way to know which field is missing while parsing JSON using json4s?
Maybe you have a more complicated setting, but for example, for
import org.json4s._
import org.json4s.jackson.JsonMethods._
val str = """{
| "name": "Json"
|}""".stripMargin
val json = parse(str) // JObject(List((name,JString(Json))))
implicit val formats: Formats = DefaultFormats
case class MyClass(name: String, profession: String)
json.extract[MyClass]
it produces
org.json4s.MappingException: No usable value for profession
Did not find value which can be converted into java.lang.String
at org.json4s.reflect.package$.fail(package.scala:56)
at ...
with the name of the missing field. If the class is just case class MyClass(name: String), then this produces MyClass(Json); if the class is case class MyClass(name: String, profession: Option[String]), then it produces MyClass(Json,None).
So normally the IllegalArgumentException should be followed by Caused by: org.json4s.MappingException with the field name. I guess you're now swallowing json4s's MappingException somewhere, maybe in th.getCause.getCause match .... It's hard to say without an MCVE.
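If that's the case, a sketch of walking the cause chain for the MappingException (instead of assuming exactly two levels of getCause) could look like this; EmptyFieldException and handle are the helpers from the question:
import org.json4s.MappingException
import scala.annotation.tailrec

// Sketch: search the cause chain for json4s's MappingException, whose message
// names the missing field ("No usable value for profession").
@tailrec
def findMappingException(th: Throwable): Option[MappingException] = th match {
  case null                => None
  case e: MappingException => Some(e)
  case other               => findMappingException(other.getCause)
}

// Inside the catch block of parseJson:
//   case NonFatal(th) =>
//     findMappingException(th) match {
//       case Some(e) => Left(handle(exception = EmptyFieldException(e.getMessage)))
//       case None    => Left(handle(exception = new IllegalArgumentException("Invalid Json", th)))
//     }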

Parsing nested JSON values with Lift-JSON

Scala 2.12 here trying to use Lift-JSON to parse a config file. I have the following myapp.json config file:
{
  "health" : {
    "checkPeriodSeconds" : 10,
    "metrics" : {
      "stores" : {
        "primary" : "INFLUX_DB",
        "fallback" : "IN_MEMORY"
      }
    }
  }
}
And the following MyAppConfig class:
case class MyAppConfig()
My myapp.json is going to evolve and potentially become very large with lots of nested JSON structures inside of it. I don't want to have to create Scala objects for each JSON object and then inject that in MyAppConfig like so:
case class Stores(primary : String, fallback : String)
case class Metrics(stores : Stores)
case class Health(checkPeriodSeconds : Int, metrics : Metrics)
case class MyAppConfig(health : Health)
etc. The reason for this is I'll end up with "config object sprawl" with dozens upon dozens of case classes that are only in existence to satisfy serialization from JSON into Scala-land.
Instead, I'd like to use Lift-JSON to read the myapp.json config file, and then have MyAppConfig just have helper functions that read/parse values out of the JSON on the fly:
import net.liftweb.json._
// Assume we instantiate MyAppConfig like so:
//
// val json = Source.fromFile(configFilePath)
// val myAppConfig : MyAppConfig = new MyAppConfig(json.mkString)
//
class MyAppConfig(json : String) {
  implicit val formats = DefaultFormats
  def primaryMetricsStore() : String = {
    // Parse "INFLUX_DB" value from health.metrics.stores.primary
  }
  def checkPeriodSeconds() : Int = {
    // Parse 10 value from health.checkPeriodSeconds
  }
}
This way I can cherry-pick which configs I want to expose (make readable) to my application. I'm just not seeing from the Lift API docs how this strategy is possible; they all seem to want me to create tons of case classes. Any ideas?
Case classes are not mandatory for extracting data from JSON. You can query the parsed tree and transform data according to your needs. The values from the example can be extracted as follows:
import net.liftweb.json._
class MyAppConfig(json : String) {
  private implicit val formats = DefaultFormats
  private val parsed = parse(json)
  def primaryMetricsStore() : String = {
    (parsed \ "health" \ "metrics" \ "stores" \ "primary").extract[String]
  }
  def checkPeriodSeconds() : Int = {
    (parsed \ "health" \ "checkPeriodSeconds").extract[Int]
  }
}
The original doc provides all the details.
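A quick usage sketch with the JSON from the question (file reading elided):
val json =
  """{ "health": { "checkPeriodSeconds": 10,
    |  "metrics": { "stores": { "primary": "INFLUX_DB", "fallback": "IN_MEMORY" } } } }""".stripMargin
val config = new MyAppConfig(json)
config.primaryMetricsStore() // "INFLUX_DB"
config.checkPeriodSeconds()  // 10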

Argonaut: Generic method to encode/decode array of objects

I am trying to implement a generic pattern with which to generate marshallers and unmarshallers for an Akka HTTP REST service using Argonaut, handling both entity and collection level requests and responses. I have no issues in implementing the entity level as such:
case class Foo(foo: String)
object Foo {
  implicit val FooJsonCodec = CodecJson.derive[Foo]
  implicit val EntityEncodeJson = FooJsonCodec.Encoder
  implicit val EntityDecodeJson = FooJsonCodec.Decoder
}
I am running into issues attempting to provide encoders and decoders for the following:
[
  { "foo": "1" },
  { "foo": "2" }
]
I have attempted adding the following to my companion:
object Foo {
  implicit val FooCollectionJsonCodec = CodecJson.derive[HashSet[Foo]]
}
However, I am receiving the following error:
Error:(33, 90) value jencode0L is not a member of object argonaut.EncodeJson
I see this method indeed does not exist, but is there any other generic way to generate my expected result? I'm strongly avoiding an additional case class to describe the collection since I rely heavily on reflection in my use case.
At this point, I'd even be fine with a manually constructed Encoder and Decoder, however, I've found no documentation on how to construct it with the expected structure.
Argonaut has predefined encoders and decoders for Scala's immutable lists, sets, streams and vectors. If your type is not supported out of the box, as in the case of java.util.HashSet, you can easily add an EncodeJson and a DecodeJson for the type:
import argonaut._, Argonaut._
import scala.collection.JavaConverters._

implicit def hashSetEncode[A](
  implicit element: EncodeJson[A]
): EncodeJson[java.util.HashSet[A]] =
  EncodeJson(set => EncodeJson.SetEncodeJson[A].apply(set.asScala.toSet))

implicit def hashSetDecode[A](
  implicit element: DecodeJson[A]
): DecodeJson[java.util.HashSet[A]] =
  DecodeJson(cursor => DecodeJson.SetDecodeJson[A]
    .apply(cursor)
    .map(set => new java.util.HashSet(set.asJava)))

// Usage:
val set = new java.util.HashSet[Int]
set.add(1)
set.add(3)

val jsonSet = set.asJson                // [1, 3]
jsonSet.jdecode[java.util.HashSet[Int]] // DecodeResult(Right([1, 3]))

case class A(set: java.util.HashSet[Int])
implicit val codec = CodecJson.derive[A]

val a = A(set)
val jsonA = a.asJson // { "set": [1, 3] }
jsonA.jdecode[A]     // DecodeResult(Right(A([1, 3])))
The sample was checked on Scala 2.12.1 and Argonaut 6.2-RC2, but as far as I know it doesn't depend on any recent changes.
An approach like this works with any linear or unordered homogeneous data structure that you want to represent as a JSON array. It is also preferable to creating a CodecJson: the latter can be inferred automatically from EncodeJson and DecodeJson, but not vice versa. This way, your set will serialize and deserialize both when used independently and when nested within another data type, as shown in the example.
I don't use Argonaut (I use spray-json), but I suspect the solution can be similar.
Have you tried something like this?
implicit def HashSetJsonCodec[T : CodecJson] = CodecJson.derive[Set[T]]
If it doesn't work, I'd probably try creating a more verbose implicit function like:
implicit def SetJsonCodec[T: CodecJson](implicit codec: CodecJson[T]): CodecJson[Set[T]] = {
  CodecJson(
    {
      case value => JArray(value.map(codec.encode).toList)
    },
    c => c.as[JsonArray].flatMap {
      case arr: Json.JsonArray =>
        val items = arr.map(codec.Decoder.decodeJson)
        items.find(_.isError) match {
          case Some(error) => DecodeResult.fail[Set[T]](error.toString(), c.history)
          case None        => DecodeResult.ok[Set[T]](items.flatMap(_.value).toSet[T])
        }
    }
  )
}
PS. I didn't test this, but hopefully it leads you in the right direction :)

Play 2.x Json transform json keys to camelCase from underscore case

I want to transform a json with underscore case keys to camel case keys.
"{\"first_key\": \"first_value\", \"second_key\": {\"second_first_key\":\"second_first_value\"}}"
to
"{\"firstKey\": \"first_value\", \"secondKey\": {\"secondFirstKey\":\"second_first_value\"}}"
This is partial code:
val CamelCaseRegex = new Regex("(_.)")
val jsonTransformer = (__).json.update(
  // converts JSON underscore_case field names to Scala camelCase field names
)
val jsonRet = Json.parse(jsonStr).transform(jsonTransformer)
I have tried several ways in the update method without success.
While it would be nice to do this with just the native Play library, it's a good use-case for Mandubian's Play Json Zipper extension libraries.
Here's a quick go at this (not exhaustively tested). First you need to add the resolver and library to your build:
resolvers += "mandubian maven bintray" at "http://dl.bintray.com/mandubian/maven"
libraryDependencies ++= Seq(
  "com.mandubian" %% "play-json-zipper" % "1.2"
)
Then you could try something like this:
import play.api.libs.json._
import play.api.libs.json.extensions._
// conversion function borrowed from here:
// https://gist.github.com/sidharthkuruvila/3154845
def underscoreToCamel(name: String) = "_([a-z\\d])".r.replaceAllIn(name, { m =>
  m.group(1).toUpperCase
})

// Update the key, otherwise ignore them...
// FIXME: The None case shouldn't happen here so maybe we
// don't need it...
def underscoreToCamelCaseJs(json: JsValue) = json.updateAllKeyNodes {
  case (path, js) => JsPathExtension.hasKey(path) match {
    case Some(key) => underscoreToCamel(key) -> js
    case None      => path.toJsonString -> js
  }
}
Which on this input:
val testJson = Json.obj(
  "some_str" -> JsString("foo_bar"),
  "some_obj" -> Json.obj(
    "some_field" -> Json.arr("foo", "bar")
  ),
  "an_int" -> JsNumber(1)
)
...produces:
{
  "someStr" : "foo_bar",
  "someObj" : {
    "someField" : [ "foo", "bar" ]
  },
  "anInt" : 1
}
For your requirement, you can use the play-json-naming library. It easily converts snake_case (underscore case) JSON to camelCase and vice versa.
https://github.com/tototoshi/play-json-naming
If you're using Play >= 2.6:
final case class Blah(foo: String)
object Blah {
  import play.api.libs.json._
  implicit val config = JsonConfiguration(JsonNaming.SnakeCase)
  implicit val blahFormat: OFormat[Blah] = Json.format[Blah]
}
https://www.playframework.com/documentation/2.6.x/ScalaJsonAutomated#Custom-Naming-Strategies
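A usage sketch with a hypothetical two-word field, so the naming strategy is actually visible:
import play.api.libs.json._

// Hypothetical example class; firstKey <-> "first_key" via SnakeCase
final case class Payload(firstKey: String)
object Payload {
  implicit val config = JsonConfiguration(JsonNaming.SnakeCase)
  implicit val payloadFormat: OFormat[Payload] = Json.format[Payload]
}

Json.parse("""{ "first_key": "first_value" }""").as[Payload] // Payload(first_value)
Json.toJson(Payload("first_value"))                          // {"first_key":"first_value"}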

immutable Map (de)serialization to/from Play JSON

I have the following (simplified) structure:
case class MyKey(key: String)
case class MyValue(value: String)
Let's assume that I have Play JSON formatters for both case classes.
As an example I have:
val myNewMessage = collection.immutable.Map(MyKey("key1") -> MyValue("value1"), MyKey("key2") -> MyValue("value2"))
As a result of the following transformation
play.api.libs.json.Json.toJson(myNewMessage)
I'm expecting something like:
{ "key1": "value1", "key2": "value2" }
I have tried writing the formatter, but somehow I cannot get it to work:
implicit lazy val mapMyKeyMyValueFormat: Format[collection.immutable.Map[MyKey, MyValue]] =
  new Format[collection.immutable.Map[MyKey, MyValue]] {
    override def writes(obj: collection.immutable.Map[MyKey, MyValue]): JsValue = Json.toJson(obj.map {
      case (key, value) ⇒ Json.toJson(key) -> Json.toJson(value)
    })
    override def reads(json: JsValue): JsResult[collection.immutable.Map[MyKey, MyValue]] = ???
  }
I have no idea how to write a proper reads function. Is there a simpler way of doing this? I'm also not satisfied with my writes function.
Thx!
The reason the writes method is not working is because you're transforming the Map[MyKey, MyValue] into a Map[JsValue, JsValue], but you can't serialize that to JSON. The JSON keys need to be strings, so you need some way of transforming MyKey to some unique String value. Otherwise you'd be trying to serialize something like this:
{"key": "keyName"} : {"value": "myValue"}
Which is not valid JSON.
If MyKey is as simple as stated in your question, this can work:
def writes(obj: Map[MyKey, MyValue]): JsValue = Json.toJson(obj.map {
  case (key, value) => key.key -> Json.toJson(value) // key.key must be a String
})
Play will then know how to serialize a Map[String, MyValue], given the appropriate Writes[MyValue].
But I'm not certain that's what you want. Because it produces this:
scala> Json.toJson(myNewMessage)
res0: play.api.libs.json.JsValue = {"key1":{"value":"value1"},"key2":{"value":"value2"}}
If this is the output you want:
{ "key1": "value1", "key2": "value2" }
Then your Writes should look more like this:
def writes(obj: Map[MyKey, MyValue]): JsValue = {
  obj.foldLeft(JsObject(Nil)) { case (js, (key, value)) =>
    js ++ Json.obj(key.key -> value.value)
  }
}
Which produces this:
scala> writes(myNewMessage)
res5: play.api.libs.json.JsValue = {"key1":"value1","key2":"value2"}
Reads is easy so long as MyKey and MyValue stay as simple as stated; otherwise I have no idea what you'd want it to do, since it's very dependent on the actual structure you want. As is, I would suggest leveraging the existing Reads[Map[String, String]] and transforming it to the type you want.
def reads(js: JsValue): JsResult[Map[MyKey, MyValue]] = {
  js.validate[Map[String, String]].map { case kvMap =>
    kvMap.map { case (key, value) => MyKey(key) -> MyValue(value) }
  }
}
It's hard to see much else without knowing the actual structure of the data. In general I stay away from having to serialize and deserialize Maps.
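Putting the two halves together, a sketch of a complete Format (assuming MyKey and MyValue stay as simple as in the question):
import play.api.libs.json._

implicit val myMapFormat: Format[Map[MyKey, MyValue]] = new Format[Map[MyKey, MyValue]] {
  // Flatten to {"key1": "value1", ...} using MyKey.key as the JSON key
  def writes(obj: Map[MyKey, MyValue]): JsValue =
    obj.foldLeft(JsObject(Nil)) { case (js, (key, value)) =>
      js ++ Json.obj(key.key -> value.value)
    }
  // Reuse the built-in Reads[Map[String, String]] and wrap the results back into MyKey/MyValue
  def reads(js: JsValue): JsResult[Map[MyKey, MyValue]] =
    js.validate[Map[String, String]].map {
      _.map { case (key, value) => MyKey(key) -> MyValue(value) }
    }
}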