I have a Scala Play 2.2.2 application, and as part of my Specification tests I would like to insert some fixture data for testing, preferably in JSON format. For the tests I use the usual in-memory H2 database. How can I accomplish this? I have searched all the documentation but there is no mention of this anywhere.
Note that I would prefer not to build my own flavor of fixture implementation via the Global object. There should be a non-hacky way to do this, right?
AFAIK there is no built-in way to do this, à la Rails, and it's hard to imagine what the devs could do without making Play Scala much more opinionated about how persistence should be handled (which I'd personally consider a negative).
I also use H2 for testing and employ plain SQL fixtures in a resource file and load them before tests using a couple of (fairly simple) helpers:
package object helpers {

  import java.io.File
  import java.sql.CallableStatement

  import org.specs2.execute.{AsResult, Result}
  import org.specs2.mutable.Around
  import org.specs2.specification.Scope

  import play.api.db.DB
  import play.api.test.FakeApplication
  import play.api.test.Helpers._

  /**
   * Load a file containing SQL statements into the DB.
   */
  private def loadSqlResource(resource: String)(implicit app: FakeApplication) = DB.withConnection { conn =>
    val file = new File(getClass.getClassLoader.getResource(resource).toURI)
    val path = file.getAbsolutePath
    // H2's RUNSCRIPT command executes every statement in the given file
    val statement: CallableStatement = conn.prepareCall(s"RUNSCRIPT FROM '$path'")
    statement.execute()
    conn.commit()
  }

  /**
   * Run a spec after loading the given resource name as SQL fixtures.
   */
  abstract class WithSqlFixtures(val resource: String, val app: FakeApplication = FakeApplication()) extends Around with Scope {
    implicit def implicitApp: FakeApplication = app
    override def around[T: AsResult](t: => T): Result = {
      running(app) {
        loadSqlResource(resource)
        AsResult.effectively(t)
      }
    }
  }
}
Then, in your actual spec, you can do something like this:
package models

import helpers.WithSqlFixtures
import play.api.test.PlaySpecification

class MyModelSpec extends PlaySpecification {

  "My model" should {
    "locate items correctly" in new WithSqlFixtures("model-fixtures.sql") {
      MyModel.findAll().size must beGreaterThan(0)
    }
  }
}
Note: this specs2 stuff could probably be better.
Obviously, if you really need JSON you'll have to add extra machinery to deserialise your models and persist them in the database (often you'll be doing these things in your app anyway, in which case that might be relatively trivial).
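For example, a minimal sketch of that machinery (loadJsonResource, MyModel.create, and the implicit Reads[MyModel] it relies on are assumptions for illustration, not part of the helpers above):

import play.api.libs.json.Json
import scala.io.Source

// Hypothetical JSON counterpart to loadSqlResource: parse a resource file
// containing a JSON array of models and persist each one through the model
// layer. Requires an implicit Reads[MyModel] in scope.
private def loadJsonResource(resource: String)(implicit app: FakeApplication) = {
  val raw = Source.fromURL(getClass.getClassLoader.getResource(resource)).mkString
  Json.parse(raw).as[Seq[MyModel]].foreach(MyModel.create)
}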
You'll also need:
Some evolutions to establish your DB schema in conf/evolutions/default
The evolution plugin enabled, which will build your schema when the FakeApplication starts up
The appropriate H2 DB config
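For that last point, one sketch (inMemoryDatabase comes from play.api.test.Helpers and returns the config map for an in-memory H2 datasource; the fixture name is the one from the spec above):

import play.api.test.FakeApplication
import play.api.test.Helpers._

// An in-memory H2 database for the default datasource; with the evolution
// plugin enabled, evolutions from conf/evolutions/default run when the
// FakeApplication starts.
new WithSqlFixtures(
  "model-fixtures.sql",
  FakeApplication(additionalConfiguration = inMemoryDatabase())
)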
As specified in the documentation, it is possible to map snake_case JSON member names to the camelCase that is idiomatic in Scala. I tried it and it worked fine. Here it is:
implicit lazy val configuration: Configuration = Configuration.default.withSnakeCaseMemberNames

@ConfiguredJsonCodec final case class ModelClass(someField1: String, someField2: Int, someField3: String)
I want to keep my model clean, without adding dependencies on external frameworks, so that it comprises only business-specific case classes.
Is it possible to avoid adding the @ConfiguredJsonCodec annotation and bringing implicit lazy val configuration: Configuration into scope? Maybe it could be configured at the Decoder level?
It's perfectly possible. It's a trade-off:
if you have implicits in your companion objects, you don't have to import them
if you don't want to have coupling with libraries in your models, you have to put all the implicits in a trait/object and then mix them in or import them every single time you need them
If you are developing an application with a fixed stack and chosen libraries for each task, having all the implicits in the companion object is just cleaner and easier to maintain.
package com.example

import io.circe.generic.extras.Configuration

package object domain {
  private[domain] implicit lazy val configuration: Configuration = ...
}

package com.example.domain

import io.circe.generic.extras._

@ConfiguredJsonCodec
final case class ModelClass(...)
Many utilities are optimized for this, e.g. enumeratum-circe uses a mixin to add the codec for an enumeration into its companion object.
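For instance, a sketch with enumeratum-circe (the Status enum is made up for illustration):

import enumeratum._

// Mixing in CirceEnum places Encoder[Status] and Decoder[Status] in the
// companion object, so call sites need no extra imports.
sealed trait Status extends EnumEntry
object Status extends Enum[Status] with CirceEnum[Status] {
  val values = findValues
  case object Active extends Status
  case object Inactive extends Status
}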
If you don't want to have these implicits in your companion objects, because e.g. you have your models in one module and it should be dependency-free, then you have to put them somewhere else. And that requires writing code manually; there is no way around it:
package com.example.domain

final case class ModelClass(...)

package com.example.domain.circe

import io.circe._
import io.circe.generic.extras.Configuration
import io.circe.generic.extras.semiauto._

import com.example.domain.ModelClass

// if I want a mixin:
// class SomeClass extends Codecs { ... }
trait Codecs {
  protected implicit lazy val configuration: Configuration = ...
  implicit val modelClassDecoder: Decoder[ModelClass] = deriveConfiguredDecoder[ModelClass]
  implicit val modelClassEncoder: Encoder[ModelClass] = deriveConfiguredEncoder[ModelClass]
}

// if I want an import:
// import com.example.domain.circe.Codecs._
object Codecs extends Codecs
If you pick this way, you are giving up on e.g. enumeratum-circe's ability to provide codecs, and you will have to write them manually.
You have to pick one of these depending on your use case, but you cannot have the benefits of both at once: either you give up on boilerplate reduction or on dependency reduction.
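To illustrate the import-based variant in use (the field names are taken from the ModelClass definition earlier; the values are made up):

import com.example.domain.ModelClass
import com.example.domain.circe.Codecs._ // bring the hand-wired codecs into scope
import io.circe.syntax._

// Encodes using the Configuration baked into the Codecs trait.
val json = ModelClass(someField1 = "a", someField2 = 1, someField3 = "b").asJson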
I have some model definitions inside an XSD file and I need to reference these models from an OpenAPI definition. Manually remodeling is not an option since the file is too large, and I need to put it into a build system so that, if the XSD is changed, I can regenerate the models/schemas for OpenAPI.
What I tried, and what nearly worked, is using xsd2json and then converting the result with the node module json-schema-to-openapi. However, xsd2json drops some of the complexElement models. For example, "$ref": "#/definitions/tns:ContentNode" is used inside one model as the child type, but there is no definition for ContentNode in the schema, whereas when I look into the XSD there is a complexElement definition for ContentNode.
Another approach, which I haven't tried yet but seems a bit excessive to me, is using JAXB's xjc to generate Java models from the XSD and then using Jackson's JSON Schema module to generate the JSON schema.
Is there any established library or way to use XSD in OpenAPI?
I ended up implementing the second approach, using JAXB to convert the XSD to Java models and then using Jackson to write the schemas to files.
Gradle:
plugins {
    id 'java'
    id 'application'
}

group 'foo'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
    compile group: 'com.fasterxml.jackson.module', name: 'jackson-module-jsonSchema', version: '2.9.8'
}

configurations {
    jaxb
}

dependencies {
    jaxb (
        'com.sun.xml.bind:jaxb-xjc:2.2.7',
        'com.sun.xml.bind:jaxb-impl:2.2.7'
    )
}

application {
    mainClassName = 'foo.bar.Main'
}

task runConverter(type: JavaExec, group: 'application') {
    classpath = sourceSets.main.runtimeClasspath
    main = 'foo.bar.Main'
}

task jaxb {
    System.setProperty('javax.xml.accessExternalSchema', 'all')
    def jaxbTargetDir = file("src/main/java")
    doLast {
        jaxbTargetDir.mkdirs()
        ant.taskdef(
            name: 'xjc',
            classname: 'com.sun.tools.xjc.XJCTask',
            classpath: configurations.jaxb.asPath
        )
        ant.jaxbTargetDir = jaxbTargetDir
        ant.xjc(
            destdir: jaxbTargetDir, // note: single-quoted '${jaxbTargetDir}' would not interpolate in Groovy
            package: 'foo.bar.model',
            schema: 'src/main/resources/crs.xsd'
        )
    }
}

compileJava.dependsOn jaxb
With a converter main class that does something along these lines:
package foo.bar;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.jsonSchema.JsonSchema;
import com.fasterxml.jackson.module.jsonSchema.JsonSchemaGenerator;
import foo.bar.model.Documents;

public class Main {

    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();
        JsonSchemaGenerator schemaGen = new JsonSchemaGenerator(mapper);
        try {
            JsonSchema schema = schemaGen.generateSchema(Documents.class);
            System.out.print(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema));
        } catch (JsonMappingException e) {
            e.printStackTrace();
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }
}
It is still not perfect, though: this would need to iterate over all the model classes and generate a file with each schema. Also, it doesn't use references; if a class has a member of another class, the schema is printed inline instead of being referenced. That requires a bit more customization with the SchemaFactoryWrapper, but it can be done.
The problem you have is that you are applying inference tooling over a multi-step conversion. As you have found, inference tooling is inherently fussy and will not work in all situations. It's kind of like playing Chinese whispers - every step of the chain is potentially lossy, so what you get out the other end may be garbled.
Based on the alternative approach you suggest, I would suggest a similar solution:
OpenAPI is, rather obviously, an API definition standard. It should be possible for you to take a code-first approach, composing your API operations in code and exposing the types generated by xjc. Then you can use Apiee and its annotations to generate the OpenAPI definition. This assumes you are using JAX-RS for your API.
This is still a two-step process, but one with a higher chance of success. The benefit here is that your first step, inferring your XSD types into Java types, will hopefully have very little (if any) impact on the code which defines your API operations. Although there will still be a manual step (updating the models), the OpenAPI definition will update automatically once the code has been rebuilt.
So I'm writing a very large integration library and, for no particular reason, I decided to do it in Scala. But I encountered the Int = null issue and have yet to see a successful solution. Here's the situation:
class Dexter(val shouldILetLive: Boolean, val stabNumTimes: Int)
Now, instantiate it:
me = new Dexter(true, null)
Run it through Jackson and you get:
me: {
  shouldILetLive: true
}
Or:
me = new Dexter(false, 10)
Run it through Jackson and you get:
me: {
  shouldILetLive: false,
  stabNumTimes: 10
}
Awesome. Here's the problem: in my world, data means something, AND lack of data means something.
me = new Dexter(null, null)
me: {
}
means I haven't answered the question yet. So when dealing with a language that forces you to add values to those data elements, all of a sudden I get:
me: {
  shouldILetLive: false,
  stabNumTimes: 0
}
Which means my company may have a director garroted.
Is there a solution I haven't come across yet? Am I not seeing something, or am I using the wrong language for messaging-based systems?
thanks.
Scala has a better way than Java to deal with missing data - the Maybe monad - which is implemented using the Option type.
The example below serializes to JSON and deserializes back (it uses Circe, but you can probably do something similar with the Jackson Scala module) - you can see that all the options are handled correctly:
import io.circe.generic.auto._
import io.circe.parser._
import io.circe.syntax._
import io.circe.Printer

case class Dexter(shouldILetLive: Option[Boolean], stabNumTimes: Option[Int])

def serde(x: Dexter) = {
  // dropNullKeys omits fields that are None from the printed JSON
  val json = Printer.noSpaces.copy(dropNullKeys = true).pretty(x.asJson)
  println(json)
  // fall back to an empty Dexter on any parsing/decoding failure
  decode[Dexter](json) match {
    case Left(_) => Dexter(None, None)
    case Right(d) => d
  }
}

serde(Dexter(None, None))
serde(Dexter(Some(true), None))
serde(Dexter(Some(false), Some(10)))
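For reference, the three calls above should print the following (dropNullKeys omits the None fields):

// {}
// {"shouldILetLive":true}
// {"shouldILetLive":false,"stabNumTimes":10}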
I am testing out Grails static compilation, specifically GrailsCompileStatic. The documentation is limited in explaining which Grails dynamic features aren't supported. My test controller is very simple, but I'm running into problems already.
@GrailsCompileStatic
class UserController {

    UserService userService

    def list() {
        def model = request.JSON
        withFormat {
            json {
                render(model as JSON)
            }
        }
    }
}
When compiling the application I get two compile-time errors: the first about a missing JSON property on the request object, and the second about a missing json method in the withFormat closure.
Seems to me I'm either doing something wrong or GrailsCompileStatic doesn't work with these features?
About request.JSON
The request object's getJSON() method is added via the ConvertersPluginSupport class. The exact lines are:
private static void enhanceRequest() {
    // Methods for Reading JSON/XML from Requests
    def getXMLMethod = { -> XML.parse((HttpServletRequest) delegate) }
    def getJSONMethod = { -> JSON.parse((HttpServletRequest) delegate) }
    def requestMc = GrailsMetaClassUtils.getExpandoMetaClass(HttpServletRequest)
    requestMc.getXML = getXMLMethod
    requestMc.getJSON = getJSONMethod
}
As you can see, it uses the dynamic dispatch mechanism, but fortunately it's not such a big deal: you can simply replicate it by executing JSON.parse(request) anywhere in your controller.
Pay attention though! JSON.parse(HttpServletRequest) returns an Object, which is either a JSONObject or a JSONArray, so if you plan on using the result explicitly and you are compiling statically, you will have to cast it.
You might create a common base class for your controllers:
import org.codehaus.groovy.grails.web.json.JSONArray
import org.codehaus.groovy.grails.web.json.JSONObject
import grails.converters.JSON

@GrailsCompileStatic
class BaseController {

    protected JSONObject getJSONObject() {
        (JSONObject) JSON.parse(request)
    }

    protected JSONArray getJSONArray() {
        (JSONArray) JSON.parse(request)
    }
}
Then in your controller you can simply invoke getJSONObject() or getJSONArray(). It's a bit of a workaround, but it results in statically compilable code.
About withFormat
This is a bit more complicated. The withFormat construct is really a method which takes a Closure as its first parameter. The internal implementation then figures out, based on the current request or response content type, which part of the argument closure is to be used.
If you want to figure out how to do this statically, take a look at the source code.
You could extend this class and then use its protected methods, but I don't know if it's worth all the hassle; you would lose much of Grails' conciseness. But if you really want to do it, you can. Don't you just love open source projects? :)
I'm using the Slick 2 code generator in my Scala/Play application to generate table classes from my PostgreSQL database. Some of the fields are of JSON type, though, and they are turned into String by the generator. I was wondering if I can somehow use the slick-pg plugin to make the generator recognize the Postgres JSON type?
I've tried to extend slick.driver.PostgresDriver directly in Build.scala:
import slick.driver.PostgresDriver
import com.github.tminglei.slickpg._

trait MyPostgresDriver extends PostgresDriver
  with PgArraySupport
  with PgDateSupport
  with PgRangeSupport
  with PgHStoreSupport
  with PgPlayJsonSupport
  with PgSearchSupport
  with PgPostGISSupport {

  override val Implicit = new ImplicitsPlus {}
  override val simple = new SimpleQLPlus {}

  trait ImplicitsPlus extends Implicits
    with ArrayImplicits
    with DateTimeImplicits
    with RangeImplicits
    with HStoreImplicits
    with JsonImplicits
    with SearchImplicits
    with PostGISImplicits

  trait SimpleQLPlus extends SimpleQL
    with ImplicitsPlus
    with SearchAssistants
    with PostGISAssistants
}

object MyPostgresDriver extends MyPostgresDriver
But I don't know how to use it with the code generator routine instead of the standard driver:
SourceCodeGenerator.main(
  Array(
    "scala.slick.driver.PostgresDriver", // how do I use MyPostgresDriver here?
    "org.postgresql.Driver",
    "jdbc:postgresql://localhost:5432/db?user=root&password=root",
    "app",
    "db"
  )
)
You can't make the generator pick up the type this way. It (or rather the Slick code that reverse-engineers the model from the database schema) currently only detects a handful of types and simply assumes String for all the others. This will be improved in the future. In order to make it use a different type for a column, you have to customize the generator. The corresponding Slick documentation example actually shows how to customize the type:
http://slick.typesafe.com/doc/2.0.0/code-generation.html#customization
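Adapting that documentation example to this case could look roughly like the sketch below. The DBType check and the JsValue mapping are assumptions: the column's database type is matched via scala.slick.ast.ColumnOption.DBType, and the generated code will only compile with the implicits from MyPostgresDriver (with PgPlayJsonSupport) in scope.

import scala.slick.ast.ColumnOption.DBType
import scala.slick.jdbc.JdbcBackend.Database
import scala.slick.model.codegen.SourceCodeGenerator

// Fetch the database model using the customized driver...
val db = Database.forURL(
  "jdbc:postgresql://localhost:5432/db?user=root&password=root",
  driver = "org.postgresql.Driver")
val model = db.withSession { implicit session => MyPostgresDriver.createModel }

// ...and map columns whose database type is json to Play's JsValue.
val codegen = new SourceCodeGenerator(model) {
  override def Table = new Table(_) {
    override def Column = new Column(_) {
      override def rawType =
        if (model.options.exists { case DBType("json") => true; case _ => false })
          "play.api.libs.json.JsValue"
        else super.rawType
    }
  }
}
codegen.writeToFile("MyPostgresDriver", "app", "db", "Tables", "Tables.scala")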
You can also define a function to access the JSON tree directly:
val jsonSQL =
  SimpleFunction.binary[Option[String], String, Option[String]]("json_extract_path_text")
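Hypothetical usage inside a query (assumes a table whose data column is Option[String], and the implicit lifting of the literal key provided by the driver's simple._ import):

// SELECT json_extract_path_text("data", 'name') FROM ...
val names = myTable.map(row => jsonSQL(row.data, "name"))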