Maven's surefire plugin doesn't run integration tests (by convention they are named with an "IT" suffix), but sbt runs both unit and integration tests. How can I prevent this behaviour? Is there a common way to distinguish integration tests from unit tests for ScalaTest (e.g. don't run FeatureSpec tests by default)?
How to do that is documented in the sbt manual at http://www.scala-sbt.org/release/docs/Detailed-Topics/Testing#additional-test-configurations-with-shared-sources :
//Build.scala
import sbt._
import Keys._

object B extends Build {
  lazy val root =
    Project("root", file("."))
      .configs(FunTest)
      .settings(inConfig(FunTest)(Defaults.testTasks): _*)
      .settings(
        libraryDependencies += specs,
        // plain `test` runs only the unit tests, `fun:test` runs the integration tests
        testOptions in Test := Seq(Tests.Filter(unitFilter)),
        testOptions in FunTest := Seq(Tests.Filter(itFilter))
      )

  def itFilter(name: String): Boolean = name endsWith "ITest"
  def unitFilter(name: String): Boolean = (name endsWith "Test") && !itFilter(name)

  lazy val FunTest = config("fun") extend(Test)

  lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
}
Call sbt test for the unit tests, sbt fun:test for the integration tests, and sbt test fun:test for both.
The simplest way with more recent sbt versions is to apply the built-in IntegrationTest configuration and its corresponding settings as described here, and to put your integration tests in the src/it/scala directory of your project.
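For reference, a minimal build.sbt sketch of that approach could look like the following (the ScalaTest dependency and its version are only an example here, not something the original answer specifies):

lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(
    Defaults.itSettings,
    // unit tests stay in src/test/scala, integration tests go in src/it/scala
    libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.15" % "it,test"
  )

With this in place, sbt test runs only the unit tests and sbt it:test runs the integration tests.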
Related
I am new to Play/Scala and have started porting a Spring Boot REST API to Play 2 as a learning exercise.
In Java/Spring REST, it's simply a matter of annotating POJOs, and the JSON library handles serialization/deserialization automatically.
According to every Play 2/Scala tutorial I have read, I have to write a Writes/Reads for each model/case class, as follows:
implicit val writesItem = Writes[ClusterStatus] {
  case ClusterStatus(gpuFreeMemory, gpuTotalMemory, labelsLoaded, status) =>
    Json.obj(
      "gpuFreeMemory" -> gpuFreeMemory,
      "gpuTotalMemory" -> gpuTotalMemory,
      "labelsLoaded" -> labelsLoaded,
      "status" -> status)
}

//HTTP method
def status() = Action { request =>
  val status: ClusterStatus = clusterService.status()
  Ok(Json.toJson(status))
}
This means that if I have a large domain/response model, I have to write a lot of Writes/Reads for serialization/deserialization.
Is there any simpler way to handle this?
You can give "com.typesafe.play" %% "play-json" % "2.7.2" a try. To use it you just need to follow the steps below:
1) Add the dependencies below (use versions matching your project):
"com.typesafe.play" %% "play-json" % "2.7.2",
"net.liftweb" % "lift-json_2.11" % "2.6.2"
2) Define formats:
implicit val formats = DefaultFormats
implicit val yourCaseClassFormat = Json.format[YourCaseClass]
This format defines both Reads and Writes for your case class.
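For illustration, here is a minimal sketch of how the macro-generated format could replace the hand-written Writes from the ClusterStatus example above (the field types are assumptions, since the question doesn't show them):

import play.api.libs.json._

// Field types are assumed purely for illustration.
case class ClusterStatus(gpuFreeMemory: Long, gpuTotalMemory: Long,
                         labelsLoaded: Boolean, status: String)

object ClusterStatus {
  // Json.format derives both Reads and Writes from the case class fields at compile time.
  implicit val clusterStatusFormat: OFormat[ClusterStatus] = Json.format[ClusterStatus]
}

In the controller action from the question, Ok(Json.toJson(clusterService.status())) then works unchanged, with no hand-written Writes required.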
How can I generate a FIRRTL file from Chisel code? I have installed sbt, firrtl and verilator according to the GitHub wiki, and created Chisel code for a simple adder. I want to generate the FIRRTL and convert it to Verilog. My problem is how to get the FIRRTL file from the Chisel code.
Thanks.
Source file: MyQueueTest/src/main/scala/example/MyQueueDriver.scala
package example

import chisel3._
import chisel3.util._

class MyQueue extends Module {
  val io = IO(new Bundle {
    val a = Flipped(Decoupled(UInt(32.W)))
    val b = Flipped(Decoupled(UInt(32.W)))
    val z = Decoupled(UInt(32.W))
  })
  val qa = Queue(io.a)
  val qb = Queue(io.b)
  qa.nodeq()
  qb.nodeq()
  when (qa.valid && qb.valid && io.z.ready) {
    io.z.enq(qa.deq() + qb.deq())
  }
}

object MyQueueDriver extends App {
  chisel3.Driver.execute(args, () => new MyQueue)
}
I asked a similar question here.
The solution could be to use the full template provided here, or you can simply do the following.
Add these lines at the end of your Scala source:
object YourModuleDriver extends App {
  chisel3.Driver.execute(args, () => new YourModule)
}
Replacing "YourModule" by the name of your module.
Then add a build.sbt file in the same directory as your sources with these lines:
scalaVersion := "2.11.8"
resolvers ++= Seq(
  Resolver.sonatypeRepo("snapshots"),
  Resolver.sonatypeRepo("releases")
)
libraryDependencies += "edu.berkeley.cs" %% "chisel3" % "3.0-SNAPSHOT"
To generate the FIRRTL and Verilog you will just have to run:
$ sbt "run-main YourModuleDriver"
and the FIRRTL (yourmodule.fir) and Verilog (yourmodule.v) sources will be in the generated directory.
I am trying to create a JSON String from a Scala Object as described here.
I have the following code:
import scala.collection.mutable._
import net.liftweb.json._
import net.liftweb.json.Serialization.write

case class Person(name: String, address: Address)
case class Address(city: String, state: String)

object LiftJsonTest extends App {
  val p = Person("Alvin Alexander", Address("Talkeetna", "AK"))

  // create a JSON string from the Person, then print it
  implicit val formats = DefaultFormats
  val jsonString = write(p)
  println(jsonString)
}
My build.sbt file contains the following:
libraryDependencies += "net.liftweb" %% "lift-json" % "2.5+"
When I build with sbt package, it succeeds.
However, when I try to run it as a Spark job, like this:
spark-submit \
--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0,net.liftweb:lift-json:2.5+ \
--class "com.foo.MyClass" \
--master local[4] \
target/scala-2.10/my-app_2.10-0.0.1.jar
I get this error:
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: net.liftweb#lift-json;2.5+: not found]
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1068)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:287)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:154)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
What am I doing wrong here? Is net.liftweb:lift-json:2.5+ in my packages argument incorrect? Do I need to add a resolver in build.sbt?
Users may also include any other dependencies by supplying a comma-delimited list of maven coordinates with --packages.
2.5+ in your build.sbt is Ivy version-matcher syntax, not an actual artifact version, which is what Maven coordinates need. spark-submit apparently doesn't use Ivy for resolution (and I think it would be surprising if it did; your application could suddenly stop working because a new dependency version was published). So you need to find out what version 2.5+ resolves to in your case, e.g. using https://github.com/jrudolph/sbt-dependency-graph (or by looking for it in show dependencyClasspath).
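For example, assuming the range resolves to 2.6.2 (only an illustration; check what your own build actually resolves to), you could pin the version explicitly in build.sbt:

// build.sbt: pin an explicit version instead of the Ivy range "2.5+"
libraryDependencies += "net.liftweb" %% "lift-json" % "2.6.2"

The --packages coordinate would then be something like net.liftweb:lift-json_2.10:2.6.2; note that the plain Maven artifact name carries the Scala binary version suffix, matching the scala-2.10 build in the question.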
I'm writing a very simple Scala script to connect to MySQL using Slick 3.
My build.sbt looks like this:
name := "slick_sandbox"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
  "com.typesafe.slick" %% "slick" % "3.0.3",
  "org.slf4j" % "slf4j-nop" % "1.6.4",
  "mysql" % "mysql-connector-java" % "5.1.6"
)
application.conf:
Note: "Drivder" is an intentional typo; also, I did not provide a DB username or password!
mysqldb = {
  url = "jdbc:mysql://localhost/slickdb"
  driver = com.mysql.jdbc.Drivder
  connectionPool = disabled
  keepAliveConnection = true
}
Main.scala
import slick.driver.MySQLDriver.api._
import scala.concurrent.ExecutionContext.Implicits.global

object Main {
  def main(args: Array[String]) {
    // test to see this function is being run; it IS
    println("foobar")

    // I expected an error here due to the intentional
    // mistake I've inserted into application.conf.
    // I made sure the conf file is getting read; if I change mysqldb
    // to some other string, I get correctly warned it is not a valid key.
    val db = Database.forConfig("mysqldb")
    val q = sql"select u.name from users ".as[String]
    db.run(q).map { res =>
      println(res)
    }
  }
}
It compiles OK. Now this is the result I see when I run sbt run on the terminal:
felipe#felipe-XPS-8300:~/slick_sandbox$ sbt run
[info] Loading project definition from /home/felipe/slick_sandbox/project
[info] Set current project to slick_sandbox (in build file:/home/felipe/slick_sandbox/)
[info] Compiling 1 Scala source to /home/felipe/slick_sandbox/target/scala-2.11/classes...
[info] Running Main
foobar
[success] Total time: 5 s, completed Sep 17, 2015 3:29:39 AM
Everything looks deceptively OK: even though I explicitly ran a query against a database that doesn't exist, Slick went ahead as if nothing had happened.
What am I missing here?
Slick runs queries asynchronously, so your program simply didn't give the query enough time to execute (or fail) before main returned. In your case you have to wait for the result:
import scala.concurrent.Await
import scala.concurrent.duration.Duration

object Main {
  def main(args: Array[String]) {
    println("foobar")
    val db = Database.forConfig("mysqldb")
    val q = sql"select u.name from users ".as[String]
    // block until the query's Future completes instead of returning immediately
    Await.result(
      db.run(q).map { res =>
        println(res)
      }, Duration.Inf)
  }
}
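As a side note, once you wait on the Future, the misconfiguration surfaces as a failure of that Future. A small sketch (reusing the same db and q values as above) that prints the error instead of letting Await.result rethrow it:

// in addition to the imports already present in Main.scala
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.util.{Failure, Success}

val result = db.run(q)
Await.ready(result, Duration.Inf)   // wait for completion, but don't rethrow on failure
result.value.get match {
  case Success(rows) => println(rows)
  case Failure(e)    => println(s"Query failed: ${e.getMessage}") // e.g. the bad driver class from application.conf
}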
I am rather new to Chisel.
Is it possible for a Chisel testbench to receive an argument passed in at runtime?
For example: sbt run --backend c --compile --test --genHarness --dut1
--dut1 is meant to be received by the testbench as an argument. It will be used to determine which DUT to instantiate.
Yes, I believe that would work:
sbt "project myproject" "run my_arg --backend c --targetDir my_target_dir"
You can catch that in your own main, strip out your own arguments, and pass Chisel its arguments. Something like this:
import scala.collection.mutable.ArrayBuffer

object top_main {
  def main(args: Array[String]): Unit = {
    val my_arg = args(0)                                   // your own argument, e.g. which DUT to build
    val chiselArgs = ArrayBuffer[String](args.drop(1): _*) // the remaining arguments are for Chisel
    // YourDut is a placeholder: construct whichever module my_arg selects
    chiselMain(chiselArgs.toArray, () => Module(new YourDut(my_arg)))
  }
}
Check out (Chisel runtime error in test harness) for an example main that invokes Chisel.