I am new to Play/Scala and have started porting a Spring Boot REST API to Play 2 as a learning exercise.
In Java/Spring REST, it's simply a matter of annotating POJOs, and the JSON library handles serialization/deserialization automatically.
According to every Play 2/Scala tutorial I have read, I have to write a Writes/Reads for each model/case class, as follows:
implicit val writesItem = Writes[ClusterStatus] {
  case ClusterStatus(gpuFreeMemory, gpuTotalMemory, labelsLoaded, status) =>
    Json.obj(
      "gpuFreeMemory" -> gpuFreeMemory,
      "gpuTotalMemory" -> gpuTotalMemory,
      "labelsLoaded" -> labelsLoaded,
      "status" -> status)
}
// HTTP method
def status() = Action { request =>
  val status: ClusterStatus = clusterService.status()
  Ok(Json.toJson(status))
}
Does this mean that if I have a large domain model/response model, I have to write a lot of Writes/Reads for serialization/deserialization?
Is there a simpler way to handle this?
You can give "com.typesafe.play" %% "play-json" % "2.7.2" a try. To use it you just need to follow the steps below:
1) Add the dependencies below (use versions according to your project):
"com.typesafe.play" %% "play-json" % "2.7.2",
"net.liftweb" % "lift-json_2.11" % "2.6.2"
2) Define the formats:
implicit val formats = DefaultFormats
implicit val yourCaseClassFormat = Json.format[YourCaseClass]
This format defines both the Reads and the Writes for your case class.
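For example, here is a minimal sketch of how Json.format would replace the hand-written Writes for the ClusterStatus case class from the question (the field types are assumptions, since the question doesn't show them):

import play.api.libs.json._

case class ClusterStatus(
  gpuFreeMemory: Long,   // assumed type
  gpuTotalMemory: Long,  // assumed type
  labelsLoaded: Boolean, // assumed type
  status: String)        // assumed type

object ClusterStatus {
  // A single line derives both Reads and Writes from the case class fields
  implicit val format: OFormat[ClusterStatus] = Json.format[ClusterStatus]
}

// Serialization and deserialization now work without hand-written Writes/Reads:
val json = Json.toJson(ClusterStatus(1024L, 8192L, true, "OK"))
val back = json.validate[ClusterStatus] // JsSuccess(ClusterStatus(1024,8192,true,OK),)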
I am trying to generate encoders and decoders for two case classes:
object EventBusCases {
  case class ValuationRequest(function: RequestValue = ALL_DAY_VALS, interval: RequestValue = IntraDayIntervals.MIN_5)

  implicit val requestDecoder: Decoder[ValuationRequest] = deriveDecoder[ValuationRequest]
  implicit val requestEncoder: Encoder[ValuationRequest] = deriveEncoder[ValuationRequest]

  case class ValuationResponse(values: List[Valuation], function: RequestValue)

  implicit val responseDecoder: Decoder[ValuationResponse] = deriveDecoder[ValuationResponse]
  implicit val responseEncoder: Encoder[ValuationResponse] = deriveEncoder[ValuationResponse]
}
I keep getting errors like this one, but for both cases:
could not find Lazy implicit value of type io.circe.generic.encoding.DerivedAsObjectEncoder[eventbus.eventBusCases.ValuationResponse]
I also tried deriving encoders and decoders for the custom classes inside them, such as Valuation, but I just get the same error for those.
I am using Circe 0.12.3 and Scala 2.12.8, and these are my Circe-related Scala dependencies:
"com.beachape" %% "enumeratum" % "1.5.14",
"com.beachape" %% "enumeratum-circe" % "1.5.22",
"io.circe" %% "circe-core" % "0.12.3",
"io.circe" %% "circe-generic" % "0.12.3",
"io.circe" %% "circe-parser" % "0.12.3"
So, the way I found to make this work was to implement Encoders and Decoders for both ValuationRequest and ValuationResponse, as well as for all custom types contained in them.
For ValuationRequest and ValuationResponse, I basically added this to the same file that contains both case classes:
import io.circe.syntax._     // for .asJson
import cats.syntax.functor._ // for .widen

object derivation {
  implicit val encodeResponse: Encoder[ValuationResponse] = Encoder.instance {
    case response @ ValuationResponse(_, _) => response.asJson
  }

  implicit val decodeResponse: Decoder[ValuationResponse] =
    List[Decoder[ValuationResponse]](
      Decoder[ValuationResponse].widen
    ).reduceLeft(_ or _)

  implicit val encodeRequest: Encoder[ValuationRequest] = Encoder.instance {
    case request @ ValuationRequest(_, _) => request.asJson
  }

  implicit val decodeRequest: Decoder[ValuationRequest] =
    List[Decoder[ValuationRequest]](
      Decoder[ValuationRequest].widen
    ).reduceLeft(_ or _)
}
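For completeness: the usual cause of a "could not find Lazy implicit ... DerivedAsObjectEncoder" error is that one of the member types (here RequestValue or Valuation) has no implicit Encoder/Decoder in scope. Since enumeratum-circe is already among the dependencies, here is a sketch of how RequestValue could expose Circe instances, assuming it is an enumeratum enum (the member name is taken from the question; the real definition may differ):

import enumeratum._

// Mixing in CirceEnum (from enumeratum-circe) places implicit
// Encoder[RequestValue] and Decoder[RequestValue] instances in the
// companion object, where derivation of the outer case classes can find them.
sealed trait RequestValue extends EnumEntry

object RequestValue extends Enum[RequestValue] with CirceEnum[RequestValue] {
  case object ALL_DAY_VALS extends RequestValue
  val values = findValues
}

Valuation would similarly need its own instances (for example via deriveEncoder and deriveDecoder) before the outer derivation can succeed.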
I'm trying to get the maximum value of a MetricId field from a JSON string. However, I'm getting java.lang.UnsupportedOperationException: empty.max for the string below:
[{"MetricName":"name1","DateParsed":"2019-11-20 05:39:00","MetricId":"7855","isValid":"true"},
{"MetricName":"name2","DateParsed":"2019-05-22 17:45:00","MetricId":"1295","isValid":"false"}]
Here is how I've implemented a method for finding the Max value:
val metricIdRegex = """"MetricId"\s*:\s*(\d+)""".r

def maxMetricId(jsonString: String): String = {
  metricIdRegex.findAllIn(jsonString).map({
    case metricIdRegex(id) => id.toInt
  }).max.toString
}
val maxId: String = maxMetricId(metricsString)
I'm expecting to get "7855" as the max MetricId.
What could be wrong with the method? I suspect it could be a problem with the regex.
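One thing worth checking: in the sample JSON the MetricId values are quoted strings ("MetricId":"7855"), so a pattern that expects digits right after the colon never matches, and an empty iterator is exactly what makes max throw empty.max. A minimal sketch of an adjusted regex (same approach, just allowing the opening quote):

// Allow an optional quote before the digits, since the sample JSON
// stores MetricId as a quoted string: "MetricId":"7855"
val metricIdRegex = """"MetricId"\s*:\s*"?(\d+)""".r

def maxMetricId(jsonString: String): String =
  metricIdRegex.findAllMatchIn(jsonString).map(_.group(1).toInt).max.toString

// maxMetricId(metricsString) now returns "7855"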
You could also use json4s, which is quite popular and used by many other Scala libraries:
import org.json4s._
import org.json4s.jackson.JsonMethods._
val data = """[{"MetricName":"name1","DateParsed":"2019-11-20 05:39:00","MetricId":"7855","isValid":"true"},
{"MetricName":"name2","DateParsed":"2019-05-22 17:45:00","MetricId":"1295","isValid":"false"}]"""
// parse data into a JValue
val parsed = parse(data)
// extract every MetricId value as a string, convert each one to Int, then take the max
val maxMetricId = (parsed \ "MetricId" \\ classOf[JString]).map(_.toInt).max
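If you prefer a typed model, json4s can also extract the repeating part of the array into a case class; a minimal sketch, assuming only MetricId matters (the Metric case class is illustrative, not from the question):

implicit val formats: Formats = DefaultFormats

// Only the field we care about; json4s ignores the other fields during extraction
case class Metric(MetricId: String)

val maxId = parse(data).extract[List[Metric]].map(_.MetricId.toInt).max // 7855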
Let me show an example of how this can be done efficiently with a JSON parser, without holding the whole JSON input and parsed data in memory.
Add dependencies to your build.sbt:
libraryDependencies ++= Seq(
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-core" % "2.0.2" % Compile,
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-macros" % "2.0.2" % Provided // required only at compile time
)
Add the imports, define a data structure for the repeating part of your JSON array that should be parsed out, and derive a codec for it. Then open an input stream and scan it with the provided handler function, which reduces all parsed metrics to the maximum value:
import com.github.plokhotnyuk.jsoniter_scala.macros._
import com.github.plokhotnyuk.jsoniter_scala.core._
import java.io.ByteArrayInputStream
import java.io.InputStream
case class Metric(@stringified MetricId: Int)
implicit val codec: JsonValueCodec[Metric] = JsonCodecMaker.make(CodecMakerConfig)
val in: InputStream = new ByteArrayInputStream( // <- replace it with a FileInputStream for real files
  """[{"MetricName":"name1","DateParsed":"2019-11-20 05:39:00","MetricId":"7855","isValid":"true"},
{"MetricName":"name2","DateParsed":"2019-05-22 17:45:00","MetricId":"1295","isValid":"false"}]""".getBytes("UTF-8"))
try {
  var max = -1
  scanJsonArrayFromStream[Metric](in) { m: Metric =>
    max = Math.max(max, m.MetricId)
    true // keep scanning until the end of the array
  }
  println(max)
} finally in.close()
And this code should print 7855.
I am using the MongoDB Spark Connector to get a collection. The aim is to return all the documents that are present in the collection, as an array of JSON documents.
I am able to get the collection, but I am not sure how to convert the customRdd object, which contains the list of documents, to JSON format. I can convert the first document, as you can see in the code, but how do I convert all the documents that are read from the collection, combine them into one JSON message, and send it?
Expected Output:
This can be the array of documents.
{
  "objects":[
    {
      ...
    },
    {
      ....
    }
  ]
}
Existing Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession
import com.mongodb.spark.config._
import com.mongodb.spark._
import org.json4s.native.JsonMethods._
import org.json4s.JsonDSL.WithDouble._
var conf = new SparkConf()
conf.setAppName("MongoSparkConnectorIntro")
  .setMaster("local")
  .set("spark.hadoop.validateOutputSpecs", "false")
  .set("spark.mongodb.input.uri", "mongodb://127.0.0.1/mystore.mycollection?readPreference=primaryPreferred")
  .set("spark.mongodb.output.uri", "mongodb://127.0.0.1/mystore.mycollection?readPreference=primaryPreferred")
val sc = new SparkContext(conf)
val spark = SparkSession.builder()
  .master("spark://192.168.137.103:7077")
  .appName("MongoSparkConnectorIntro")
  .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/mystore.mycollection?readPreference=primaryPreferred")
  .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/mystore.mycollection?readPreference=primaryPreferred")
  .getOrCreate()
//val readConfig = ReadConfig(Map("collection" -> "metadata_collection", "readPreference.name" -> "secondaryPreferred"), Some(ReadConfig(sc)))
val readConfig = ReadConfig(Map("uri" -> "mongodb://127.0.0.1/mystore.mycollection?readPreference=primaryPreferred"))
val customRdd = MongoSpark.load(sc, readConfig)
//println("Before Printing the value" + customRdd.toString())
println("The Count: " + customRdd.count)
println("The First Document: " + customRdd.first.toString())
val resultJSOn = "MetaDataFinalResponse" -> customRdd.collect().toList
val stringResponse = customRdd.first().toJson()
println("Final Response: " + stringResponse)
return stringResponse
Note:
I don't want to map the JSON documents into another model. I want them as they are; I just want to aggregate them into one JSON message.
Spark Version: 2.4.0
SBT File:
name := "Test"
version := "0.1"
scalaVersion := "2.12.8"
libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.7.0"
libraryDependencies += "org.mongodb.spark" %% "mongo-spark-connector" % "2.4.0"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0"
This answer generates the JSON string without escape characters and is much more efficient, but you need to collect the RDD to do it (you can remove the code from my previous answer):
// We will create a new Document with the documents that are fetched from MongoDB
import scala.collection.JavaConverters._
import org.bson.Document
// Collect customRdd and convert it to a Java collection
// (a new Document can only be created from Java collections)
val documents = customRdd.collect().toSeq.asJava
// Create new document with the field name you want
val stringResponse = new Document().append("objects", documents).toJson()
My test currently expects to match the JSON string produced by the method under test, so I have constructed an expected string to perform the match.
val input = Foobar("bar", "foo")
val body = Foobar("bar !!", "foo!!")
val responseHeaders = Map[String, String]("Content-Type" -> "application/json")
val statusCode = "200"
val responseEvent = ResponseEvent(input, body, responseHeaders, statusCode)
val expected ="{\"input\":{\"foo\":\"bar\",\"bar\":\"foo\"},\"body\":{\"foo\":\"bar !!\",\"bar\":\"foo!!\"},\"headers\":{\"Content-Type\":\"application/json\"},\"statusCode\":\"200\"}"
val result = Main.stringifyResponse(responseEvent)
result should be(expected)
The string matching is extremely sensitive: it fails on any whitespace, and an expected string written across multiple lines is never accepted, because stringifying with the json4s library produces a single line.
Is there a better way to match JSON output in ScalaTest without doing a full-blown string comparison?
Is there a better approach to creating this test?
Check out https://github.com/stephennancekivell/scalatest-json
libraryDependencies += "com.stephenn" %% "scalatest-json-jsonassert" % "0.0.3"
libraryDependencies += "com.stephenn" %% "scalatest-json4s" % "0.0.2"
libraryDependencies += "com.stephenn" %% "scalatest-play-json" % "0.0.1"
libraryDependencies += "com.stephenn" %% "scalatest-circe" % "0.0.1"
It lets you write tests without caring about the whitespace, since it's JSON.
it("should pass matching json with different spacing and order") {
val input = """
|{
| "some": "valid json",
| "with": ["json", "content"]
|}
""".stripMargin
val expected = """
|{
| "with": ["json", "content"],
| "some": "valid json"
|}
""".stripMargin
input should matchJson(expected)
}
You have two options!
Use a library like Play JSON, with which you can parse your raw JSON string into a JsValue and do the check with ScalaTest; if you already use a JSON library, see whether you can leverage it (see the sketch below).
Or parse your JSON into a case class and check for equality!
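A minimal sketch of the first option, assuming Play JSON is available (responseEvent, expected, and stringifyResponse come from the question); it works because JsValue equality ignores whitespace and field order:

import play.api.libs.json.Json

val result = Main.stringifyResponse(responseEvent)

// Comparing parsed JsValues instead of raw strings makes the test
// insensitive to whitespace and key order
Json.parse(result) should be(Json.parse(expected))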
The methods in RegistrationMap are used to save and retrieve the document from MongoDB; the ObjectId is created during initialization. I have to do the same to load a Registration from the JSON in a POST body, so I thought I could just add a RegistrationProtocol object to do that. It fails with a compilation error. Any idea how to fix this, or a better way to do it?
package model
import spray.json._
import DefaultJsonProtocol._
import com.mongodb.casbah.Imports._
import org.bson.types.ObjectId
import com.mongodb.DBObject
import com.mongodb.casbah.commons.{MongoDBList, MongoDBObject}
case class Registration(
  system: String,
  identity: String,
  id: ObjectId = new ObjectId())

object RegistrationProtocol extends DefaultJsonProtocol {
  implicit val registrationFormat = jsonFormat2(Registration)
}

object RegistrationMap {
  def toBson(registration: Registration): DBObject = {
    MongoDBObject(
      "system" -> registration.system,
      "identity" -> registration.identity,
      "_id" -> registration.id
    )
  }

  def fromBson(o: DBObject): Registration = {
    Registration(
      system = o.as[String]("system"),
      identity = o.as[String]("identity"),
      id = o.as[ObjectId]("_id")
    )
  }
}
Compilation Error:
[error] /model/Registration.scala:20: type mismatch;
[error] found : model.Registration.type
[error] required: (?, ?) => ?
[error] Note: implicit value registrationFormat is not applicable here because it comes after the application point and it lacks an explicit result type
[error] implicit val registrationFormat = jsonFormat2(Registration)
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
I updated ObjectId to String and jsonFormat2 to jsonFormat3 to fix the compilation error:
case class Registration(
  system: String,
  identity: String,
  id: String = new ObjectId().toString)

object RegistrationProtocol extends DefaultJsonProtocol {
  implicit val registrationFormat = jsonFormat3(Registration)
}
Now I'm getting a runtime error when converting the body of a POST request to the Registration object. Any idea?
val route: Route = {
  pathPrefix("registrations") {
    pathEnd {
      post {
        entity(as[Registration]) { registration =>
Here is what is in build.sbt:
scalaVersion := "2.10.4"
scalacOptions ++= Seq("-feature")
val akkaVersion = "2.3.8"
val sprayVersion = "1.3.1"
resolvers += "spray" at "http://repo.spray.io/"
resolvers += "Sonatype releases" at "https://oss.sonatype.org/content/repositories/releases"
// Main dependencies
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % akkaVersion,
  "com.typesafe.akka" %% "akka-slf4j" % akkaVersion,
  "com.typesafe.akka" %% "akka-camel" % akkaVersion,
  "io.spray" % "spray-can" % sprayVersion,
  "io.spray" % "spray-routing" % sprayVersion,
  "io.spray" % "spray-client" % sprayVersion,
  "io.spray" %% "spray-json" % sprayVersion,
  "com.typesafe" % "config" % "1.2.1",
  "org.apache.activemq" % "activemq-camel" % "5.8.0",
  "ch.qos.logback" % "logback-classic" % "1.1.2",
  "org.mongodb" %% "casbah" % "2.7.4"
)
Error:
12:33:03.477 [admcore-microservice-system-akka.actor.default-dispatcher-3] DEBUG s.can.server.HttpServerConnection - Dispatching POST request to http://localhost:8878/api/v1/adsregistrations to handler Actor[akka://admcore-microservice-system/system/IO-TCP/selectors/$a/1#-1156351415]
Uncaught error from thread [admcore-microservice-system-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[admcore-microservice-system]
java.lang.NoSuchMethodError: spray.json.JsonParser$.apply(Ljava/lang/String;)Lspray/json/JsValue;
at spray.httpx.SprayJsonSupport$$anonfun$sprayJsonUnmarshaller$1.applyOrElse(SprayJsonSupport.scala:36)
at spray.httpx.SprayJsonSupport$$anonfun$sprayJsonUnmarshaller$1.applyOrElse(SprayJsonSupport.scala:34)
To avoid any issues, I would define the Registration class (which seems to be a data model) as follows:
case class Registration(system: String, identity: String, id: String)
That's because it makes more sense to me to have the id field as a String rather than as a BSON ObjectId (I'm used to data models that don't depend on third-party libraries).
Therefore, the right spray-json protocol makes use of jsonFormat3 rather than jsonFormat2:
object RegistrationProtocol extends DefaultJsonProtocol {
  implicit val registrationFormat = jsonFormat3(Registration)
}
And that would solve any kind of JSON serialization issue.
Finally, your toBson and fromBson converters would be:
def toBson(r: Registration): DBObject = {
  MongoDBObject(
    "system" -> r.system,
    "identity" -> r.identity,
    "_id" -> new ObjectId(r.id)
  )
}
and
def fromBson(o: DBObject): Registration = {
  Registration(
    system = o.as[String]("system"),
    identity = o.as[String]("identity"),
    id = o.as[ObjectId]("_id").toString
  )
}
And that's where the BSON ObjectId is used: much closer to the MongoDB-dependent logic.
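To round it off, a minimal sketch of the full round trip under these definitions (the field values and the ObjectId hex string are made up for illustration):

import spray.json._
import RegistrationProtocol._

// POST body -> case class (all three fields are required by jsonFormat3)
val registration =
  """{"system":"ads","identity":"user-1","id":"507f1f77bcf86cd799439011"}"""
    .parseJson.convertTo[Registration]

// Case class -> JSON string, and case class -> BSON for MongoDB
val json = registration.toJson.compactPrint
val bson = RegistrationMap.toBson(registration)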