How to handle MongoDB ObjectIds in Play Framework using ReactiveMongo?

I have a basic model with a case class
case class Record(
  id: Option[String],
  data: Double,
  user: String
)
object RecordJsonFormats {
  import play.api.libs.json.Json
  implicit val recordFormat = Json.format[Record]
}
The user field is actually an ObjectId referencing another model, and id is an ObjectId too. But when I try to change their type from String to BSONObjectID, the macros in play.api.libs.json.Json break... so both user and id end up saved as String rather than ObjectId.
What is the optimal way to work with ObjectIds in the Play Framework?
Maybe I should extend play.api.libs.json.Json to support BSONObjectID?
Or maybe there is a way to link models so that IDs are tracked automatically, without having to declare them in the model?

You can override the default type of _id. You just need to specify the type you want in the case class.
import java.util.UUID
import play.api.libs.json._

case class Record(_id: UUID = UUID.randomUUID())

object Record {
  implicit val entityFormat = Json.format[Record]
}
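For reference, a round trip with that UUID-based model might look like this (a small sketch assuming the definitions above are in scope; not part of the original answer):
import play.api.libs.json.Json

val record = Record()              // _id defaults to a random UUID
val js     = Json.toJson(record)   // {"_id":"<some-uuid>"}
val parsed = js.validate[Record]   // JsSuccess(Record(<some-uuid>))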

MongoDB has a default _id field of type ObjectId, which uniquely identifies a document in a given collection. However, this _id typically has no semantic meaning in the context of the application domain. Therefore, a good practice is to introduce an additional id field as the index of your documents. This id can simply be a Long number, nothing more or less.
Then you can search documents by id easily and not care much about the ObjectId.
This, https://github.com/luongbalinh/play-mongo/, is a sample project using Play 2.4.x and ReactiveMongo. Hopefully, it helps you.
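A minimal sketch of that approach (the field names here are assumptions, not taken from the linked project):
import play.api.libs.json._

// Keep Mongo's ObjectId _id out of the domain model and expose a plain Long id instead.
case class Record(id: Long, data: Double, user: String)

object Record {
  implicit val recordFormat: OFormat[Record] = Json.format[Record]
}

// Queries can then select on "id", e.g. Json.obj("id" -> 42L),
// without ever touching the ObjectId stored in _id.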

For those using the official MongoDB Scala driver and Play Framework 2.6+, here's my solution: https://gist.github.com/ntbrock/556a1add78dc287b0cf7e0ce45c743c1
import org.mongodb.scala.bson.ObjectId
import play.api.libs.json._

import scala.util.Try

object ObjectIdFormatJsonMacro extends Format[ObjectId] {

  def writes(objectId: ObjectId): JsValue = JsString(objectId.toString)

  def reads(json: JsValue): JsResult[ObjectId] = json match {
    case JsString(x) =>
      val maybeOID: Try[ObjectId] = Try(new ObjectId(x))
      if (maybeOID.isSuccess) JsSuccess(maybeOID.get)
      else JsError("Expected ObjectId as JsString")
    case _ => JsError("Expected ObjectId as JsString")
  }
}
Use it like this in your business objects:
case class BusinessTime(_id: ObjectId = new ObjectId(), payRate: Double)

object BusinessTime {
  implicit val objectIdFormat = ObjectIdFormatJsonMacro
  implicit val businessTimeFormat = Json.format[BusinessTime]
}
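A quick round trip to sanity-check the format (a sketch, not part of the original gist; assumes the definitions above are in scope):
import play.api.libs.json.Json

val bt   = BusinessTime(payRate = 42.5)
val json = Json.toJson(bt)              // {"_id":"<24-char hex string>","payRate":42.5}
val back = json.validate[BusinessTime]  // JsSuccess(BusinessTime(...))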

Related

Circe asJson not encoding properties from abstract base class

Suppose I have the following abstract base class:
package Models

import reactivemongo.bson.BSONObjectID

abstract class RecordObject {
  val _id: String = BSONObjectID.generate().stringify
}
Which is extended by the following concrete case class:
package Models
case class PersonRecord(name: String) extends RecordObject
I then try to get a JSON string using some code like the following:
import io.circe.syntax._
import io.circe.generic.auto._
import org.http4s.circe._
// ...
val person = new PersonRecord(name = "Bob")
println(person._id, person.name) // prints some UUID and "Bob"
println(person.asJson) // {"name": "Bob"} -- what happened to "_id"?
As you can see, the property _id: String inherited from RecordObject is missing. I would expect that the built-in Encoder should function just fine for this use case. Do I really need to build my own?
Let's see what happens in encoder generation. Circe uses shapeless to derive its codecs, so it's enough to check what shapeless resolves to in order to answer your question. So in Ammonite:
@ abstract class RecordObject {
    val _id: String = java.util.UUID.randomUUID.toString
  }
defined class RecordObject

@ case class PersonRecord(name: String) extends RecordObject
defined class PersonRecord

@ import $ivy.`com.chuusai::shapeless:2.3.3`, shapeless._
import $ivy.$, shapeless._

@ Generic[PersonRecord]
res3: Generic[PersonRecord]{type Repr = String :: shapeless.HNil} = ammonite.$sess.cmd3$anon$macro$2$1@1123d461
OK, so it's String :: HNil. Fair enough: what shapeless does is extract all the fields available through the constructor when converting one way, and put all the fields back through the constructor when converting the other way.
Basically, all type class derivation works this way, so you should make it possible to pass _id as a constructor parameter:
abstract class RecordObject {
  val _id: String
}

case class PersonRecord(
  name: String,
  _id: String = BSONObjectID.generate().stringify
) extends RecordObject
That would let type class derivation do its work. If you cannot change what PersonRecord looks like... then yes, you have to write your own codec. Though I doubt it would be easy: you made _id immutable and impossible to set from outside through a constructor, so it would also be hard to implement any other way.
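If you do go down the hand-written route, the encoding half could look roughly like this (a minimal sketch, not from the original answer; decoding would still run into the constructor problem described above):
import io.circe.{Encoder, Json}
import io.circe.syntax._

// Sketch: manually include the inherited _id alongside the declared fields.
implicit val personRecordEncoder: Encoder[PersonRecord] = Encoder.instance { p =>
  Json.obj(
    "_id"  -> p._id.asJson,
    "name" -> p.name.asJson
  )
}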

Configure spray-json for non-strict parsing/deserialization

How do you configure spray-json's parsing options, similarly to Jackson Parsing Features?
For example, I am parsing JSON whose fields do not exactly match my case class, and it breaks:
spray.json.DeserializationException: Object is missing required member 'myfield'
UPDATE:
A simple example:
case class MyClass(a: String, b: Long)
and try to parse an incomplete JSON document like
val data = """{"a": "hi"}"""
with a spray-json format like:
jsonFormat2(MyClass.apply)
// ...
data.parseJson.convertTo[MyClass]
(simplified code).
But the question goes further: I want to ask about configuration options like those in other parsers. More examples:
Be able to ignore fields that exist in the JSON but not in the case class.
Ways of managing nulls or nonexistent values.
etc.
spray-json allows you to define custom formats like so:
case class Foo(a: String, b: Int)

implicit object FooJsonFormat extends RootJsonFormat[Foo] {
  override def read(json: JsValue): Foo =
    json.asJsObject.getFields("name", "id") match {
      case Seq(JsString(name), id) =>
        Foo(name, id.convertTo[Int])
    }

  // Build the JsObject by hand; calling obj.toJson here would resolve back to
  // this same format and recurse forever.
  override def write(obj: Foo): JsValue =
    JsObject("name" -> JsString(obj.a), "id" -> JsNumber(obj.b))
}
This allows you to parse any arbitrary payload and pull out the fields "name" and "id" - other fields are ignored. If those fields are not guaranteed you can add something like:
case Seq(JsString(name), JsNull) =>
Foo(name, 0)
You should look at what's available in JsValue.scala - in particular JsArray may come in handy if you're getting payloads with anonymous arrays (i.e. the root is [{...}] instead of {"field":"value"...})
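For instance, handling an anonymous-array root could look something like this (a sketch that assumes the FooJsonFormat above is in scope; not part of the original answer):
import spray.json._

val payload = """[{"name": "a", "id": 1}, {"name": "b", "id": 2}]"""

payload.parseJson match {
  case JsArray(elements) => elements.map(_.convertTo[Foo])   // Vector[Foo]
  case other             => deserializationError(s"Expected a JSON array, got $other")
}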
spray-json doesn't honor default parameters, so you cannot have a case class like
case class MyClass(a: String, b: Int = 0)
and then parse JSON like {"a":"foo"}.
However, if you make the second parameter an Option, it works:
import spray.json._

case class MyClass(a: String, b: Option[Int] = None)

object MyProtocol extends DefaultJsonProtocol {
  implicit val f = jsonFormat2(MyClass.apply)
}

import MyProtocol.f

val mc1 = MyClass("foo", Some(10))
val strJson = mc1.toJson.toString

val strJson2 = """{"a": "foo"}"""
val mc2 = strJson2.parseJson.convertTo[MyClass]
println(mc2)

How can I define a dynamic base json object?

I would like to design a base trait/class in Scala that can produce the JSON shown below:
trait GenericResource {
  val singularName: String
  val pluralName: String
}
I would inherit this trait in a case class:
case class Product(name: String) extends GenericResource {
  override val singularName = "product"
  override val pluralName = "products"
}
val car = Product("car")
val jsonString = serialize(car)
the output should look like: {"product":{"name":"car"}}
A Seq[Product] should produce {"products":[{"name":"car"},{"name":"truck"}]} etc...
I'm struggling with the proper abstractions to accomplish this. I am open to solutions using any JSON library (available in Scala).
Here's about the simplest way I can think of to do the singular part generically with circe:
import io.circe.{ Decoder, Encoder, Json }
import io.circe.generic.encoding.DerivedObjectEncoder

trait GenericResource {
  val singularName: String
  val pluralName: String
}

object GenericResource {
  implicit def encodeResource[A <: GenericResource](implicit
    derived: DerivedObjectEncoder[A]
  ): Encoder[A] = Encoder.instance { a =>
    Json.obj(a.singularName -> derived(a))
  }
}
And then if you have some case class extending GenericResource like this:
case class Product(name: String) extends GenericResource {
  val singularName = "product"
  val pluralName = "products"
}
You can do this (assuming all the members of the case class are encodeable):
scala> import io.circe.syntax._
import io.circe.syntax._
scala> Product("car").asJson.noSpaces
res0: String = {"product":{"name":"car"}}
No boilerplate, no extra imports, etc.
The Seq case is a little trickier, since circe automatically provides a Seq[A] encoder for any A that has an Encoder, but it doesn't do what you want—it just encodes the items and sticks them in a JSON array. You can write something like this:
implicit def encodeResources[A <: GenericResource](implicit
  derived: DerivedObjectEncoder[A]
): Encoder[Seq[A]] = Encoder.instance {
  case values @ (head +: _) =>
    Json.obj(head.pluralName -> Encoder.encodeList(derived)(values.toList))
  case Nil => Json.obj()
}
And use it like this:
scala> Seq(Product("car"), Product("truck")).asJson.noSpaces
res1: String = {"products":[{"name":"car"},{"name":"truck"}]}
But you can't just stick it in the companion object and expect everything to work—you have to put it somewhere and import it when you need it (otherwise it has the same priority as the default Seq[A] instances).
Another issue with this encodeResources implementation is that it just returns an empty object if the Seq is empty:
scala> Seq.empty[Product].asJson.noSpaces
res2: String = {}
This is because the plural name is attached to the resource at the instance level, and if you don't have an instance there's no way to get it (short of reflection). You could of course conjure up a fake instance by passing nulls to the constructor or whatever, but that seems out of the scope of this question.
This issue (the resource names being attached to instances) is also going to be trouble if you need to decode this JSON you've encoded. If that is the case, I'd suggest considering a slightly different approach where you have something like a GenericResourceCompanion trait that you mix into the companion object for the specific resource type, and to indicate the names there. If that's not an option, you're probably stuck with reflection or fake instances, or both (but again, probably not in scope for this question).
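To make that concrete, one possible shape for the companion-based approach (entirely a sketch of the suggestion above; the trait name and wiring are assumptions) is:
import io.circe.{Encoder, Json}
import io.circe.syntax._

// Sketch: the resource names live on the companion rather than on instances,
// so encoding an empty Seq still knows which key to use.
trait GenericResourceCompanion[A] {
  def singularName: String
  def pluralName: String
}

case class Product(name: String)

object Product extends GenericResourceCompanion[Product] {
  val singularName = "product"
  val pluralName = "products"
  implicit val names: GenericResourceCompanion[Product] = this
}

implicit def encodeResources[A](implicit
  names: GenericResourceCompanion[A],
  encoder: Encoder[A]
): Encoder[Seq[A]] = Encoder.instance { values =>
  Json.obj(names.pluralName -> Json.fromValues(values.map(_.asJson)))
}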

How to set up spray-json to set null when a JSON element is not present?

Here is the spray-json example. Here is the NullOptions trait.
The problem is when I declare a case class say
object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val some: RootJsonFormat[Some] = jsonFormat2(Some)
}

case class Some(
  name: String,
  age: Int
)
and the JSON does not contain a field, for example:
{
  "name": "John"
}
I get: java.util.NoSuchElementException: key not found: age
So I have to add an Option and the NullOptions trait, like this:
object MyJsonProtocol extends DefaultJsonProtocol with NullOptions {
  implicit val some: RootJsonFormat[Some] = jsonFormat2(Some)
}

case class Some(
  name: String,
  age: Option[Int]
)
Everything works. But I do not want to have case classes where every member is an Option. Is there a way to configure spray-json unmarshalling to just set nulls without the additional Option type?
P.S.
I understand that in general Option is better than a null check, but in my case it is just monkey code.
Also, a complete example of marshalling during response processing is here.
The only way I can think of is to implement your own protocol via read/write, which might be cumbersome. Below is a simplified example. Note that I changed age to be an Integer instead of an Int, since Int is an AnyVal, which is not nullable by default. Furthermore, I only consider the age field to be nullable, so you might need to adapt as necessary. Hope it helps.
case class Foo(name: String, age: Integer)

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit object FooJsonFormat extends RootJsonFormat[Foo] {
    def write(foo: Foo) =
      JsObject(
        "name" -> JsString(foo.name),
        "age" -> Option(foo.age).map(JsNumber(_)).getOrElse(JsNull)
      )

    def read(value: JsValue) = value match {
      case JsObject(fields) =>
        val ageOpt: Option[Integer] = fields.get("age").map(_.toString().toInt) // implicit conversion from Int to Integer
        val age: Integer = ageOpt.orNull[Integer]
        Foo(fields.get("name").get.toString(), age)
      case _ => deserializationError("Foo expected")
    }
  }
}
import MyJsonProtocol._
import spray.json._
val json = """{ "name": "Meh" }""".parseJson
println(json.convertTo[Foo]) // prints Foo("Meh",null)
It seems you're out of luck
From the doc you linked:
spray-json will always read missing optional members as well as null optional members as None
You can customize the json writing, but not the reading.

Convert polymorphic case classes to json and back

I am trying to use spray-json in Scala to recognize the choice between Ec2Provider and OpenstackProvider when converting to JSON and back.
I would like to be able to give choices in "Provider", and if those choices don't fit the available ones, then it should not validate.
My attempt at this can be seen in the following code:
import spray.json._
import DefaultJsonProtocol._

case class Credentials(username: String, password: String)

abstract class Provider
case class Ec2Provider(endpoint: String, credentials: Credentials) extends Provider
case class OpenstackProvider(credentials: Credentials) extends Provider

case class Infrastructure(name: String, provider: Provider, availableInstanceTypes: List[String])
case class InfrastructuresList(infrastructures: List[Infrastructure])

object Infrastructures extends App with DefaultJsonProtocol {
  implicit val credFormat = jsonFormat2(Credentials)
  implicit val ec2Provider = jsonFormat2(Ec2Provider)
  implicit val novaProvider = jsonFormat1(OpenstackProvider)
  implicit val infraFormat = jsonFormat3(Infrastructure)
  implicit val infrasFormat = jsonFormat1(InfrastructuresList)

  println(
    InfrastructuresList(
      List(
        Infrastructure("test", Ec2Provider("nova", Credentials("user", "pass")), List("1", "2"))
      )
    ).toJson
  )
}
Unfortunately, it fails because it cannot find a formatter for the Provider abstract class.
test.scala:19: could not find implicit value for evidence parameter of type Infrastructures.JF[Provider]
Anyone have any solution for this?
What you want to do is not available out of the box (i.e. via something like type hints that allow the deserializer to know what concrete class to instantiate), but it's certainly possible with a little leg work. First, the example, using a simplified version of the code you posted above:
case class Credentials(user: String, password: String)

abstract class Provider
case class Ec2Provider(endpoint: String, creds: Credentials) extends Provider
case class OpenstackProvider(creds: Credentials) extends Provider

case class Infrastructure(name: String, provider: Provider)

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit object ProviderJsonFormat extends RootJsonFormat[Provider] {
    def write(p: Provider) = p match {
      case ec2: Ec2Provider => ec2.toJson
      case os: OpenstackProvider => os.toJson
    }
    def read(value: JsValue) = value match {
      case obj: JsObject if (obj.fields.size == 2) => value.convertTo[Ec2Provider]
      case obj: JsObject => value.convertTo[OpenstackProvider]
    }
  }
  implicit val credFmt = jsonFormat2(Credentials)
  implicit val ec2Fmt = jsonFormat2(Ec2Provider)
  implicit val openStackFmt = jsonFormat1(OpenstackProvider)
  implicit val infraFmt = jsonFormat2(Infrastructure)
}

object PolyTest {
  import MyJsonProtocol._

  def main(args: Array[String]) {
    val infra = List(
      Infrastructure("ec2", Ec2Provider("foo", Credentials("me", "pass"))),
      Infrastructure("openstack", OpenstackProvider(Credentials("me2", "pass2")))
    )
    val json = infra.toJson.toString
    val infra2 = JsonParser(json).convertTo[List[Infrastructure]]
    println(infra == infra2)
  }
}
In order to be able to serialize/deserialize instances of the abstract class Provider, I've created a custom formatter where I am supplying operations for reading and writing Provider instances. All I'm doing in these functions though is checking a simple condition (binary here as there are only 2 impls of Provider) to see what type it is and then delegating to logic to handle that type.
For writing, I just need to know which instance type it is, which is easy. Reading is a little trickier, though. For reading, I'm checking how many properties the object has, and since the two impls have different numbers of props, I can differentiate which is which this way. The check I'm making here is very rudimentary, but it shows the point: if you can look at the JSON AST and differentiate the types, you can then pick which one to deserialize to. Your actual check can be as simple or as complicated as you like, as long as it is deterministic in differentiating the types.
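If counting properties feels too fragile, a common alternative (sketched here as a suggestion, not part of the answer above) is to write an explicit type-hint field on the way out and dispatch on it on the way in:
import spray.json._

// Sketch: discriminate Provider impls with a "type" field instead of counting
// properties. Reuses the per-impl formats from MyJsonProtocol above.
object ProviderTypeHintProtocol {
  import MyJsonProtocol.{ec2Fmt, openStackFmt}

  implicit object ProviderTypeHintFormat extends RootJsonFormat[Provider] {
    def write(p: Provider): JsValue = p match {
      case ec2: Ec2Provider =>
        JsObject(ec2.toJson.asJsObject.fields + ("type" -> JsString("ec2")))
      case os: OpenstackProvider =>
        JsObject(os.toJson.asJsObject.fields + ("type" -> JsString("openstack")))
    }

    def read(value: JsValue): Provider =
      value.asJsObject.fields.get("type") match {
        case Some(JsString("ec2")) => value.convertTo[Ec2Provider]
        case Some(JsString("openstack")) => value.convertTo[OpenstackProvider]
        case other => deserializationError(s"Unknown provider type: $other")
      }
  }
}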