Play framework: Unable to Inject Database object - mysql

I am trying to connect to MySQL using Play Framework. I am new to Play and unable to figure out the exact problem. Any help will be highly appreciated.
The configuration in conf/application.conf is as follows:
play.db {
  config = "db"
  default = "default"
}
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost/ng_play"
db.default.username=root
db.default.password="****"
ebean.default = ["models.*"]
build.sbt
name := """play-scala-tutorial-one"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  "mysql" % "mysql-connector-java" % "5.1.36",
  "org.scalatestplus.play" %% "scalatestplus-play" % "1.5.1" % Test
)
resolvers += "scalaz-bintray" at "http://dl.bintray.com/scalaz/releases"

The MySQL server version and the mysql-connector-java version were mismatched. Also, adding db.default.hikaricp.connectionTestQuery="SELECT TRUE" to application.conf helped mitigate one issue.
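For reference, a sketch of what the relevant application.conf lines end up looking like with that workaround in place (driver, URL, and credentials are the ones from the question):
db.default.driver = com.mysql.jdbc.Driver
db.default.url = "jdbc:mysql://localhost/ng_play"
db.default.username = root
db.default.password = "****"
# Explicit test query, so HikariCP does not depend on driver support
# for its default connection aliveness check
db.default.hikaricp.connectionTestQuery = "SELECT TRUE"
Also make sure the mysql-connector-java version in build.sbt actually matches the MySQL server you are running.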
Thanks @silentprogrammer and @salem for the help.

Related

Class io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializer could not be found

I'm new to Confluent Cloud and am deploying a free-trial Confluent Kafka cluster.
I am trying to produce/consume JSON messages on a Kafka topic using the authenticated Schema Registry, which maps the schema.
I am referring to the official documentation https://docs.confluent.io/platform/current/schema-registry/serdes-develop/serdes-json.html
but the JSON serializer mentioned there is not working.
I am getting the below error in my Scala code:
Invalid value io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializer for configuration value.serializer: Class io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializer could not be found.
Could you please let us know whether this has been deprecated or is no longer valid?
Can someone please advise?
This is my exact build.sbt:
name := "JsonProducer"
version := "0.1"
scalaVersion := "2.12.5"
resolvers += "confluent" at "https://packages.confluent.io/maven/" // https://mvnrepository.com/artifact/io.confluent/kafka-json-schema-serializer
libraryDependencies += "io.confluent" % "kafka-json-schema-serializer" % "6.2.0"
Solved this by adding these dependencies to build.sbt:
resolvers += "jitpack" at "https://jitpack.io"
libraryDependencies += "org.everit.json" % "org.everit.json.schema" % "1.5.1"
libraryDependencies += "io.confluent" % "kafka-json-schema-serializer" % "6.2.0"
You need this dependency:
"io.confluent" % "kafka-json-schema-serializer" % "6.2.0"

How to fix error 'Unable to create injector' in Play Framework 2.7 when trying to connect to MySql db

I am working on a Play 2.7 project for the first time. After adding some db config for a MySQL database, I get an error at startup saying: "CreationException: Unable to create injector, see the following errors:"
(The rest of the error was posted as two screenshots of the stack trace, followed by many more lines.)
build.sbt:
name := """playreview"""
organization := "asis"
version := "1.0-SNAPSHOT"
lazy val playreview = (project in file("."))
.enablePlugins(PlayJava, PlayEbean)
scalaVersion := "2.13.0"
libraryDependencies ++= Seq(guice, evolutions, javaJdbc)
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.41"
application.conf:
play.db {
  config = "db"
  default = "default"
}
db {
  default.driver=com.mysql.jdbc.Driver
  default.url="jdbc:mysql://localhost:3306/playreviewdb?useSSL=false"
  default.username=root
  default.password="*****"
}
# db connections = ((physical_core_count * 2) + effective_spindle_count)
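# e.g. a 4-core machine with a single disk: (4 * 2) + 1 = 9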
fixedConnectionPool = 9
database.dispatcher {
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = ${fixedConnectionPool}
  }
}
ebean.default = ["models.*"]

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.csv.CSVParser.parse

I get this error when running the program:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.csv.CSVParser.parse
This is my SBT assembly file:
name := "mytest"
version := "1.0"
scalaVersion := "2.10.6"
organization := "org.test"
val sparkVersion = "1.6.1"
val mahoutVersion = "0.12.1"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion,
  // Mahout's Spark libs
  "org.apache.mahout" %% "mahout-math-scala" % mahoutVersion,
  "org.apache.mahout" %% "mahout-spark" % mahoutVersion
    exclude("org.apache.spark", "spark-core_2.10"),
  "org.apache.mahout" % "mahout-math" % mahoutVersion,
  "org.apache.mahout" % "mahout-hdfs" % mahoutVersion
    exclude("com.thoughtworks.xstream", "xstream")
    exclude("org.apache.hadoop", "hadoop-client"),
  // other external libs
  "com.databricks" % "spark-csv_2.10" % "1.3.2",
  "com.github.nscala-time" %% "nscala-time" % "2.16.0"
    exclude("org.apache.commons", "commons-csv"),
  "org.elasticsearch" % "elasticsearch" % "2.3.0",
  "org.elasticsearch" % "elasticsearch-spark_2.10" % "2.3.0"
    exclude("org.apache.spark", "spark-catalyst_2.10")
    exclude("org.apache.spark", "spark-sql_2.10"))
resolvers += "typesafe repo" at " http://repo.typesafe.com/typesafe/releases/"
resolvers += Resolver.mavenLocal
assemblyMergeStrategy in assembly := {
  case "plugin.properties" => MergeStrategy.discard
  case PathList("org", "joda", "time", "base", "BaseDateTime.class") => MergeStrategy.first
  case PathList("org", "apache", "commons", "csv", "CSVParser.class") => MergeStrategy.first
  case PathList("org", "apache", "commons", "csv", "CSVPrinter.class") => MergeStrategy.first
  case PathList("org", "apache", "commons", "csv", "ExtendedBufferedReader.class") => MergeStrategy.last
  case PathList(ps @ _*) if ps.last endsWith "package-info.class" =>
    MergeStrategy.first
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
I also tested "com.databricks" % "spark-csv_2.10" % "1.5.0" and "com.databricks" % "spark-csv_2.10" % "1.4.0", but the same error appears every time. I know it has something to do with dependencies. Do I need to add another library?
This looks like a classpath problem.
I would avoid using assemblyMergeStrategy to fix the classpath like this. It works fine for configuration-file conflicts, like log4j, but for this kind of mess it's really not the right tool for the job.
Suggested solution:
Use exclude("org.apache.commons", "commons-csv") in all the dependencies that pull in commons-csv, and leave only the one you actually need (in this case the one from Spark).
Overall, I would try to fix the classpath with exclusion rules rather than with assemblyMergeStrategy.
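If it is not obvious which artifacts drag commons-csv in transitively, a dependency-graph plugin will tell you. A sketch, assuming an sbt 0.13-era build (the plugin coordinates below are for sbt-dependency-graph; elasticsearch is only an illustration of where the exclusion goes):

// project/plugins.sbt
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.8.2")

// `sbt dependencyTree` then shows every path that pulls in commons-csv;
// add an exclusion to each offender except the one you want to keep, e.g.:
"org.elasticsearch" % "elasticsearch" % "2.3.0" exclude("org.apache.commons", "commons-csv")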
For posterity:
If you have a dependency on apache-solr, you probably have a conflict with the dependency solr-commons-csv.jar, which embeds a class with the same name (org.apache.commons.csv.CSVParser).

play framework 2.3 classnotfound using play-hikaricp

I keep getting the following stack trace when I try to configure my Play 2.3 application to use HikariCP:
java.lang.RuntimeException: java.lang.ClassNotFoundException: "com.mysql.jdbc.jdbc2.optional.MysqlDataSource"
com.zaxxer.hikari.util.PoolUtilities.createInstance(PoolUtilities.java:105)
com.zaxxer.hikari.pool.HikariPool.initializeDataSource(HikariPool.java:518)
com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:137)
com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:102)
com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:80)
com.edulify.play.hikaricp.HirakiCPDBApi$$anonfun$1.apply(HirakiCPDBApi.scala:36)
com.edulify.play.hikaricp.HirakiCPDBApi$$anonfun$1.apply(HirakiCPDBApi.scala:32)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
scala.collection.SetLike$class.map(SetLike.scala:92)
scala.collection.AbstractSet.map(Set.scala:47)
My build.sbt includes the mysql-connector, so it should be on the classpath:
name := """myapp"""
version := "2.3-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayJava)
scalaVersion := "2.11.1"
resolvers += Resolver.url("Edulify Repository", url("http://edulify.github.io/modules/releases/"))(Resolver.ivyStylePatterns)
libraryDependencies ++= Seq(
  javaCore,
  "mysql" % "mysql-connector-java" % "5.1.18",
  javaJdbc,
  "org.codehaus.jackson" % "jackson-mapper-asl" % "1.9.13",
  "com.lowagie" % "itext" % "2.1.7",
  "net.sf.jasperreports" % "jasperreports" % "5.2.0",
  "org.mindrot" % "jbcrypt" % "0.3m",
  javaEbean,
  cache,
  javaWs,
  "com.edulify" %% "play-hikaricp" % "1.4.1"
)
Also, here is my conf/play.plugins file:
200:com.edulify.play.hikaricp.HikariCPPlugin
Any help would be appreciated.
~Nick
The same problem happened to me. I think the issue is in the hikaricp.prod.properties file you provide: I had the value quoted,
jdbcUrl = "JDBC_CONNECTION"
and removing the quotes solved the problem:
jdbcUrl = JDBC_CONNECTION
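For comparison, a sketch of a hikaricp.prod.properties without the quotes (URL and credentials are placeholders; the exact keys depend on whether you configure via dataSourceClassName or jdbcUrl in your plugin version):

# .properties values are taken literally, quotes included -- which is why the
# stack trace above shows the class name with quotes around it
dataSourceClassName=com.mysql.jdbc.jdbc2.optional.MysqlDataSource
dataSource.url=jdbc:mysql://localhost:3306/myapp
dataSource.user=root
dataSource.password=secret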

How to setup username and password with Slick's source code generator?

Following the directions on this page: http://slick.typesafe.com/doc/2.0.0/code-generation.html
we see that something like the following segment of code is required to generate models for MySQL tables:
val url = "jdbc:mysql://127.0.0.1/SOME_DB_SCHEMA?characterEncoding=UTF-8&useUnicode=true"
val slickDriver = "scala.slick.driver.MySQLDriver"
val jdbcDriver = "com.mysql.jdbc.Driver"
val outputFolder = "/some/path"
val pkg = "com.pligor.server"
scala.slick.model.codegen.SourceCodeGenerator.main(
Array(slickDriver, jdbcDriver, url, outputFolder, pkg)
)
These parameters are enough for an H2 database, as in the example at that link.
How do I include a username and password for the MySQL database?
From several links found on the internet, and also based on cvogt's answer, this is the minimum you need to do.
Note that this is a general solution for sbt. If you are dealing with Play Framework, you might find it easier to perform this task with the relevant plugin.
First of all, you need a new sbt project, because of all the library dependencies that have to be referenced for the Slick source code generator to run.
Create the new sbt project using this tutorial: http://scalatutorials.com/beginner/2013/07/18/getting-started-with-sbt/
Preferably use the "Setup using giter8" method.
If you happen to work with IntelliJ, create the file project/plugins.sbt and insert this line inside: addSbtPlugin("com.hanhuy.sbt" % "sbt-idea" % "1.6.0").
Then execute gen-idea in sbt to generate an IntelliJ project.
With giter8 you get an auto-generated file ProjectNameBuild.scala inside the project folder. Open it and include at least these library dependencies:
libraryDependencies ++= List(
  "mysql" % "mysql-connector-java" % "5.1.27",
  "com.typesafe.slick" %% "slick" % "2.0.0",
  "org.slf4j" % "slf4j-nop" % "1.6.4",
  "org.scala-lang" % "scala-reflect" % scala_version
)
where scala_version is the variable private val scala_version = "2.10.3".
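A sketch of how the whole giter8-generated build file might look once those dependencies are in place (the object and project names below are placeholders):

import sbt._
import Keys._

object ProjectNameBuild extends Build {
  private val scala_version = "2.10.3"

  lazy val root = Project(id = "project-name", base = file(".")).settings(
    scalaVersion := scala_version,
    libraryDependencies ++= List(
      "mysql" % "mysql-connector-java" % "5.1.27",
      "com.typesafe.slick" %% "slick" % "2.0.0",
      "org.slf4j" % "slf4j-nop" % "1.6.4",
      "org.scala-lang" % "scala-reflect" % scala_version
    )
  )
}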
Now create the custom source code generator, which looks like this:
import scala.slick.model.codegen.SourceCodeGenerator

object CustomSourceCodeGenerator {

  import scala.slick.driver.JdbcProfile
  import scala.reflect.runtime.currentMirror

  def execute(url: String,
              jdbcDriver: String,
              user: String,
              password: String,
              slickDriver: String,
              outputFolder: String,
              pkg: String) = {
    // Reflectively load the Slick driver object named by slickDriver,
    // e.g. scala.slick.driver.MySQLDriver
    val driver: JdbcProfile = currentMirror.reflectModule(
      currentMirror.staticModule(slickDriver)
    ).instance.asInstanceOf[JdbcProfile]
    // Open a session with the credentials that SourceCodeGenerator.main
    // does not accept, then generate code from the database model
    driver.simple.Database.forURL(
      url,
      driver = jdbcDriver,
      user = user,
      password = password
    ).withSession { implicit session =>
      new SourceCodeGenerator(driver.createModel).writeToFile(slickDriver, outputFolder, pkg)
    }
  }
}
Finally, you need to call this execute method inside the main project object. Find the file ProjectName.scala that was auto-generated by giter8.
Inside it you will find a println call, since this is merely a "hello world" application. Above the println call, add something like this:
CustomSourceCodeGenerator.execute(
  url = "jdbc:mysql://127.0.0.1/SOME_DB_SCHEMA?characterEncoding=UTF-8&useUnicode=true",
  slickDriver = "scala.slick.driver.MySQLDriver",
  jdbcDriver = "com.mysql.jdbc.Driver",
  outputFolder = "/some/path",
  pkg = "com.pligor.server",
  user = "root",
  password = "xxxxxyourpasswordxxxxx"
)
This way, every time you execute sbt run, the table classes required by Slick are generated automatically.
Note that at least as of 2.0.1 this is fixed: just add the username and password to the end of the Array as Strings.
This has been asked and answered here: https://groups.google.com/forum/#!msg/scalaquery/UcS4_wyrJq0/obLHheIWIXEJ. Currently you need to customize the code generator; a PR for 2.0.1 is in the queue.
My solution is nearly the same as George's answer, but I'll add mine anyway. This is the entire file I use to generate code for my MySQL database in an sbt project.
SlickAutoGen.scala
package mypackage

import scala.slick.model.codegen.SourceCodeGenerator

object CodeGen {
  def main(args: Array[String]) {
    SourceCodeGenerator.main(
      Array(
        "scala.slick.driver.MySQLDriver",
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost:3306/mydb",
        "src/main/scala/",
        "mypackage",
        "root",
        "" // I don't use a password on localhost
      )
    )
  }
}
build.sbt
// build.sbt --- Scala build tool settings
libraryDependencies ++= List(
  "com.typesafe.slick" %% "slick" % "2.0.1",
  "mysql" % "mysql-connector-java" % "5.1.24",
  ...
)
To use this, just modify the settings, save the file in the project root directory, and run it as follows:
$ sbt
> runMain mypackage.CodeGen