Using snake case without explicit Configuration dependency in Circe - json

As specified in the documentation, it is possible to map snake_case JSON member names to the camelCase field names that are idiomatic in Scala. I tried it and it worked fine. Here it is:
implicit lazy val configuration: Configuration = Configuration.default.withSnakeCaseMemberNames
@ConfiguredJsonCodec final case class ModelClass(someField1: String, someField2: Int, someField3: String)
I want to keep my model clean, without adding dependencies on external frameworks, so that it comprises only business-specific case classes.
Is it possible to avoid adding the @ConfiguredJsonCodec annotation and bringing implicit lazy val configuration: Configuration into scope? Maybe it could be configured at the Decoder level?

It's perfectly possible. It's a trade-off:
if you have implicits in your companion objects, you don't have to import them
if you don't want your models coupled to libraries, you have to put all implicits into a trait/object and then mix in/import them every single time you need them
If you are developing an application with a fixed stack, with chosen libraries for each task, and so on, then having all implicits in the companion object is just cleaner and easier to maintain.
package com.example
package object domain {
private[domain] implicit lazy val configuration: Configuration = ...
}
package com.example.domain
import io.circe.generic.extras._
@ConfiguredJsonCodec
final case class ModelClass(...)
Many utilities are optimized for this, e.g. enumeratum-circe uses a mixin to add the codec for an enumeration into its companion object.
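For illustration, a minimal sketch of that mixin style, assuming the enumeratum-circe dependency is on the classpath (the Color enum is a made-up example):
import enumeratum._
sealed trait Color extends EnumEntry
// CirceEnum mixes the circe Encoder/Decoder for Color into the companion object,
// so call sites only need the companion in scope, with no extra imports
object Color extends Enum[Color] with CirceEnum[Color] {
  case object Red extends Color
  case object Green extends Color
  val values = findValues
}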
If you don't want to have them there, because e.g. you have your models in one module and it should be dependency-free, then you would have to put these implicits somewhere else. And that requires writing code manually, no way around it:
package com.example.domain
final case class ModelClass(...)
package com.example.domain.circe
import io.circe._
import io.circe.generic.extras.semiauto._
// if I want a mixin:
// class SomeClass extends Codecs { ... }
trait Codecs {
protected implicit lazy val configuration: Configuration = ...
implicit val modelClassDecoder: Decoder[ModelClass] = deriveConfiguredDecoder[ModelClass]
implicit val modelClassEncoder: Encoder[ModelClass] = deriveConfiguredEncoder[ModelClass]
}
// if I want an import:
// import com.example.domain.circe.Codecs._
object Codecs extends Codecs
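With the import-based variant, a call site could then look roughly like this (a hedged sketch; asJson and noSpaces are standard circe syntax, and the field values reuse the fields from the first snippet):
import io.circe.syntax._
import com.example.domain.circe.Codecs._ // brings the ModelClass codecs into scope
val json = ModelClass("a", 1, "b").asJson.noSpaces
// roughly: {"some_field1":"a","some_field2":1,"some_field3":"b"}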
If you pick this way, you are giving up on e.g. enumeratum-circe's ability to provide codecs, and you will have to write them manually.
You have to pick one of these depending on your use case, but you cannot have the benefits of both at once: either you give up on boilerplate-reduction or on dependency-reduction.

Related

How to define format for both Y and List[X] in Play JSON when X extends from Y and Y is trait?

I can't convert a list in a specific case: when the type extends a trait.
When I can convert:
import play.api.libs.functional.syntax._
import play.api.libs.json._
sealed trait Item
case class Id(id: Long) extends Item
case class MyList(list: List[Id])
object MyFormat {
implicit lazy val idFormat = Json.format[Id]
implicit lazy val myListFormat = Json.format[MyList]
}
When I cannot convert:
sealed trait Item
case class Id(id: Long) extends Item
case class MyList(list: List[Id])
object MyFormat {
implicit lazy val itemFormat = new Format[Item] {
override def writes(o: Item): JsValue = o match {
case i: Id => idFormat.writes(i)
}
override def reads(json: JsValue): JsResult[Item] = {
idFormat.reads(json)
}
}
implicit lazy val idFormat = Json.format[Id]
implicit lazy val myListFormat = Json.format[MyList]
}
Error:
Error:(33, 49) No instance of play.api.libs.json.Format is available for scala.collection.immutable.List[Main2.Id] in the implicit scope (Hint: if declared in the same file, make sure it's declared before)
implicit lazy val myListFormat = Json.format[MyList]
Why can't I format in the 2nd case?
If I add a formatter for the list:
implicit lazy val idsFormat = Json.format[List[Id]]
then I get Error:(33, 46) No instance of Reads is available for scala.collection.immutable.Nil in the implicit scope (Hint: if declared in the same file, make sure it's declared before)
implicit lazy val idsFormat = Json.format[List[Id]]
PS:
The only solution that I found:
Define a custom format for List[Id]
When reading or writing, use the format for Id
When reading, use
def flatten[T](xs: Seq[JsResult[T]]): JsResult[List[T]] = {
val (ss: Seq[JsSuccess[T]], fs: Seq[JsError]) = xs.partition(_.isSuccess)
if (fs.isEmpty) JsSuccess(ss.map(_.get).toList) else fs.head
}
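Put together, a minimal sketch of what that custom format could look like (assuming the Id format and the flatten helper above; the JsError message wording is my own):
implicit lazy val idsFormat: Format[List[Id]] = new Format[List[Id]] {
  // write each Id with its own format and wrap the results in a JSON array
  override def writes(ids: List[Id]): JsValue = JsArray(ids.map(idFormat.writes))
  // read each element with the Id format and collapse the per-element results
  override def reads(json: JsValue): JsResult[List[Id]] = json match {
    case JsArray(values) => flatten(values.map(idFormat.reads).toSeq)
    case other => JsError(s"Expected a JSON array but got $other")
  }
}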
Since Play JSON features automatic/semi-automatic derivation of codec instances, it uses implicits to enable that mechanism. This means that codecs for complex things should be introduced after their components.
In your case, it seems like Play JSON tries to derive the codec for List as for a case class, i.e. as for List(a1, List(a2, ..., List(an, Nil))), and when it hits Nil, it doesn't know how to derive a codec for that.
I believe you want your list encoded not as a chain of nested objects but as a JSON array.
Then you should search the Play sources for the default List[T] codec and try to use it by specializing it for Id.
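For instance, a hedged sketch of pinning the built-in list codec to Id (Reads.list and Writes.list are standard Play JSON helpers; the declaration order here is mine):
object MyFormat {
  implicit lazy val idFormat: Format[Id] = Json.format[Id]
  // specialize Play's default list codec for Id so the MyList derivation can find it
  implicit lazy val idListFormat: Format[List[Id]] = Format(Reads.list[Id], Writes.list[Id])
  implicit lazy val myListFormat: Format[MyList] = Json.format[MyList]
}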
A general tool for debugging missing implicits is the compiler option "-Xlog-implicits". It logs all failed implicit searches to the console, and from these messages it is possible to figure out what's missing.
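For example, the flag can be switched on in sbt (a minimal sketch; the rest of the build definition is assumed):
// build.sbt
scalacOptions += "-Xlog-implicits"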
It is also strongly advised to understand how implicits work before working with libraries that use this feature extensively.
And last, but not least:
Have you ever tried circe? It features automatic and semi-automatic JSON derivation for families of sealed traits and standard Scala classes. It even has an integration with Play Framework. Circe derivation removes most of the headache of writing codec code, but requires solid knowledge of implicit precedence to work properly. The way it works is described here and here.
You can also try to create your own derivation for play-json with morphling if there's no adequate standard one.

Extending AutoDerivation in Circe does not work

My question concerns the second solution offered by mixel here: Scala Circe with generics
Note that the trait named Auto in Circe has been renamed to AutoDerivation in the current version of Circe.
I am using the solution mixel provides in his StackOverflow solution but have not been able to get it to work. I have tried things like updating my Circe version to the most recent one and making sure the Macro Paradise plugin is imported, but still no luck.
Here is my code. The first is its own file, called CirceGeneric.
import io.circe._
import io.circe.parser._
import io.circe.generic.extras._
object CirceGeneric {
trait JsonEncoder[T] {
def apply(in: T): Json
}
trait JsonDecoder[T] {
def apply(s: String): Either[Error, T]
}
object CirceEncoderProvider {
def apply[T: Encoder]: JsonEncoder[T] = new JsonEncoder[T] {
def apply(in: T) = Encoder[T].apply(in)
}
}
object CirceDecoderProvider {
def apply[T: Decoder]: JsonDecoder[T] = new JsonDecoder[T] {
def apply(s: String) = decode[T](s)
}
}
}
object Generic extends AutoDerivation {
import CirceGeneric._
implicit def encoder[T: Encoder]: JsonEncoder[T] = CirceEncoderProvider[T]
implicit def decoder[T: Decoder]: JsonDecoder[T] = CirceDecoderProvider[T]
}
The second is a method for unit testing that uses the Akka function responseAs. The method appears in a class called BaseServiceTest.
def responseTo[T]: T = {
def response(s: String)(implicit d: JsonDecoder[T]) = {
d.apply(responseAs[String]) match {
case Right(value) => value
case Left(error) => throw new IllegalArgumentException(error.fillInStackTrace)
}
}
response(responseAs[String])
}
The idea is to convert the result of responseAs[String] (which returns a string) into a decoded case class.
The code is not behaving as expected. IntelliJ does not detect any missing implicits, but when compilation comes around, I get problems. I should mention that the BaseServiceTest file contains imports for CirceGeneric._ and Generic._, so a missing import statement is not the problem.
[error] [...]/BaseServiceTest.scala:59: could not find implicit value for parameter d: [...]CirceGeneric.JsonDecoder[T]
[error] response(responseAs[String])
Either the implicit conversion from Decoder[T] to JsonDecoder[T] is not happening, or the Decoder[T] instance is not being created. Either way, something is wrong.
You still need a Decoder or JsonDecoder context bound on responseTo.
def responseTo[T : Decoder]: T = ...
This is because all your code, and indeed mixel's code in the linked answer, is about abstracting from a Decoder out to a JsonDecoder trait which can be used for cross-library support. But you still don't have any way of constructing one without an underlying Decoder instance.
Now, there are some ways of automatically generating Decoders for (for instance) case classes, contained in io.circe.generic.auto, but at this point in your code
def responseTo[T]: T = {
def response(s: String)(implicit d: JsonDecoder[T]) = ...
...
}
you're asking the compiler to be able to provide an implicit JsonDecoder (i.e., in your setup, Decoder) instance for any arbitrary type. As the accepted answer to the linked question explains, that's not possible.
You need to delay the implicit resolution to the point where you know what type you're dealing with - in particular, that you can provide a Decoder[T] instance for it.
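Concretely, a minimal sketch of the corrected helper, reusing the question's CirceGeneric definitions (Person and the test setup are made-up assumptions; responseAs comes from Akka HTTP's route test DSL):
import io.circe.Decoder
import io.circe.generic.auto._ // derives Decoder[Person] once the concrete type is known
import CirceGeneric._
final case class Person(name: String, id: Int) // illustrative payload type
// inside BaseServiceTest:
def responseTo[T: Decoder]: T =
  CirceDecoderProvider[T].apply(responseAs[String]) match {
    case Right(value) => value
    case Left(error) => throw new IllegalArgumentException(error)
  }
// at the call site the concrete type drives implicit resolution:
// val person = responseTo[Person]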
EDIT: In response to your comment regarding what the point is if you can't create JsonDecoders for all types...
My understanding of the linked question is that they're trying to abstract away the circe library in order to allow swapping out the JSON library implementation. This is done as follows:
add the JsonDecoder type class
have a package json which contains implicits (using Circe) for constructing them automatically via the package object extending AutoDerivation
have external code only refer to JsonDecoder and import the implicits in the json package
Then all the JSON serialization and implicit resolution works out without ever needing the calling code to reference io.circe, and it's easy to switch over the json/JsonDecoder to another JSON library if desired. But you're still going to have to use the JsonDecoder context bound, and be restricted to working with types where such an implicit can be constructed. Which is not every type.
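A hedged sketch of that layout (the package names and the CirceGeneric path are assumptions, not part of the original answer):
// file: com/example/json/package.scala
package com.example
import io.circe.generic.extras.AutoDerivation
import com.example.CirceGeneric._ // the JsonEncoder/JsonDecoder abstraction from the question (path assumed)
// importing com.example.json._ gives callers auto-derivation plus the bridge implicits,
// without any direct reference to io.circe at the call site
package object json extends AutoDerivation {
  implicit def jsonEncoder[T: io.circe.Encoder]: JsonEncoder[T] = CirceEncoderProvider[T]
  implicit def jsonDecoder[T: io.circe.Decoder]: JsonDecoder[T] = CirceDecoderProvider[T]
}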

Deserializing to java objects with Scala and json4s

I have an HTTP client written in Scala that uses json4s/jackson to serialize and deserialize HTTP payloads. Until now I was using only Scala case classes as the model and everything was working fine, but now I have to communicate with a third-party service. They provided me with their own model, but it is written in Java, so now I need to deserialize JSON into Java classes as well. It seems to work fine with simple classes, but when a class contains collections like Lists or Maps, json4s has problems and sets all such fields to null.
Is there any way to handle such cases? Maybe I should use different formats (I'm using DefaultFormats + a few custom ones). Example of the problem, with a test:
import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization.read
import org.scalatest.{FlatSpec, Matchers}
class JavaListTest extends FlatSpec with Matchers{
implicit val formats = DefaultFormats
"Java List" should "be deserialized properly" in {
val input = """{"list":["a", "b", "c"]}"""
val output = read[ObjectWithList](input)
output.list.size() shouldBe 3
}
}
And sample Java class:
import java.util.List;
public class ObjectWithList {
List<String> list;
}
I have also noticed that when I try to deserialize into a Scala case class that contains a field of type java.util.List[String], I get an exception of type: org.json4s.package$MappingException: Expected collection but got List[String]
The key to solving your issue is composition of formatters. Basically, you want to define the JList formatter as the list formatter composed with a toJList function.
Unfortunately, json4s Formats are extremely difficult to compose, so I used the Readers to give you an idea. I also simplified the example to have only a Java list:
import org.json4s._
import org.json4s.DefaultReaders._
import org.json4s.jackson.JsonMethods.parse
import scala.collection.JavaConverters._
implicit def javaListReader[A: Reader]: Reader[java.util.List[A]] = new Reader[java.util.List[A]] {
  // read the value as a Scala List first, then convert it to a java.util.List
  override def read(value: JValue) = DefaultReaders.traversableReader[List, A].read(value).asJava
}
val input = """["a", "b", "c"]"""
val output = javaListReader[String].read(parse(input))
To my knowledge, json4s Readers will not work with Java classes out of the box, so you might either need to implement a Serializer[JList[_]] the same way, or mirror your Java classes with case classes and use them inside your domain.
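As an illustration of the mirroring option, a hedged sketch (ObjectWithListMirror and the setter call are my own; ObjectWithList is the Java class from the question):
import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization.read
import scala.collection.JavaConverters._
// Scala mirror of the third-party Java ObjectWithList, used only at the HTTP boundary
final case class ObjectWithListMirror(list: List[String])
implicit val formats = DefaultFormats
val mirror = read[ObjectWithListMirror]("""{"list":["a","b","c"]}""")
// convert to the Java model afterwards, e.g. via its setters/constructor:
// javaObject.setList(mirror.list.asJava) // hypothetical setter on the Java class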
P.S.
I highly recommend switching to circe or argonaut; then you will forget about most problems with JSON.

How do you convert a case class to a class?

I have a case class Person(name: String, id: Int). For serialization reasons (MessagePack), I need to convert this to a class. Is there an easy/elegant way to do this which does not require too much boilerplate or creation?
case class Person(name:String, id:Int) -> class Person(name:String, id:Int)
A case class can be serialized like a "normal" class, as it is de facto a normal class.
As it makes all constructor arguments accessible by default, it's an even better fit for serialization than a normal class.
The case class just tells the compiler to automatically add some methods, such as equals and hashCode, to the class. At the same time a companion object with an apply method is added, but this doesn't affect the normal class at all.
So if problems arise with serialization, the chance is quite high that the source of the problem lies elsewhere.
You could try json4s https://github.com/json4s/json4s to convert your case class to JSON and then convert it to the MessagePack format.
import org.json4s._
import org.json4s.native.Serialization
import org.json4s.native.Serialization.{read, write}
implicit val formats = Serialization.formats(ShortTypeHints(List(classOf[Person])))
val jsonString : String = write(Person("Andi", 42))
// convert with MessagePack as you would do with normal JSON

Serializing and unserializing case classes with lift-json

I'm attempting basic serialization/hydration with lift-json, but without success. As near as I can tell from the package readme, this should work. Help?
I'm using Scala 2.8.0 and Lift 2.2 cross-built for 2.8 with sbt ("net.liftweb" %% "lift-json" % "2.2").
import net.liftweb.json._
import net.liftweb.json.Serialization.{read, write}
implicit val formats = Serialization.formats(NoTypeHints)
case class Route(title: String)
val rt = new Route("x277a1")
val ser = write(rt)
// ser: String = {} ...
val deser = read[Route]("""{"title":"Some Title"}""")
// net.liftweb.json.MappingException: Parsed JSON values do not match with class constructor
Lift JSON's serialization does not work for case classes defined in the REPL (paranamer can't find the bytecode to read the type metadata). Compile Route with scalac, and then the above example works.
The same problem arises whenever the (de)serialized class is not on the classpath: paranamer can't read the parameter names. In that case it is necessary to provide a custom ParameterNameReader.
This problem arises, for example, in:
the REPL (as mentioned), unless you define the class outside the REPL and add it via the classpath
Play Framework, unless you provide a simple custom ParameterNameReader (see below) or load the (de)serialized class as a Maven/Play/... dependency
Feel free to add another situation (you can edit this post).
The PlayParameterNameReader:
import net.liftweb.json.ParameterNameReader
import java.lang.reflect.Constructor
import play.classloading.enhancers.LocalvariablesNamesEnhancer
import scala.collection.JavaConversions._
object PlayParameterReader extends ParameterNameReader{
def lookupParameterNames(constructor: Constructor[_]) = LocalvariablesNamesEnhancer.lookupParameterNames(constructor)
}
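To use it, the custom reader has to be plugged into the Formats in scope; a minimal sketch, assuming lift-json's Formats exposes an overridable parameterNameReader member:
import net.liftweb.json._
implicit val formats: Formats = new DefaultFormats {
  // route constructor-parameter lookups through Play's enhancer instead of the bytecode reader
  override val parameterNameReader = PlayParameterReader
}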