Play Framework 2.3 on OpenShift database token substitution not working

I am trying to deploy a Play Framework 2.3 application to OpenShift.
I am following this example: https://github.com/JamesSullivan/play2-openshift-quickstart
Building and deploying the application works (by that I mean the push to the Git repository succeeds and the build completes), but during startup I see this error in play.log:
AbstractConnectionHook -
Failed to obtain initial connection Sleeping for 0ms and trying again.
Attempts left: 0. Exception: null.
Message:No suitable driver found for jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}
Oops, cannot start the server.
Configuration error: Configuration error[Cannot connect to database [default]]
at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:94)
at play.api.Configuration.reportError(Configuration.scala:743)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:247)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:238)
at scala.collection.immutable.List.map(List.scala:272)
at play.api.db.BoneCPPlugin.onStart(DB.scala:238)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:91)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:91)
at scala.collection.immutable.List.foreach(List.scala:381)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:91)
at play.api.Play$$anonfun$start$1.apply(Play.scala:91)
at play.api.Play$$anonfun$start$1.apply(Play.scala:91)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.api.Play$.start(Play.scala:90)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:55)
at play.core.server.NettyServer$.createServer(NettyServer.scala:244)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:280)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:275)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:275)
at play.core.server.NettyServer.main(NettyServer.scala)
Caused by: java.sql.SQLException: No suitable driver found for jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}
at java.sql.DriverManager.getConnection(DriverManager.java:596)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:363)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:240)
... 18 more
So it looks like the ${OPENSHIFT_POSTGRESQL_DB_URL} environment variable token-substitution is not working.
If I log in to my application, I see this via env (obviously I replaced the username, password, IP and port for the purposes of posting here):
OPENSHIFT_POSTGRESQL_DB_URL=postgresql://xxxx:yyyy@ip:port
I have also tried using the other environment variables, like OPENSHIFT_POSTGRESQL_DB_HOST but those too do not get substituted.
The relevant part of my openshift.conf looks like this:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}"
db.default.user=myappuser
db.default.password=myapppassword
From the linked quickstart project, the following command is used to start the Play server (again, I replaced server-ip for the purposes of this post):
/app-root/runtime/repo/target/universal/stage/bin/myapp
"-DapplyEvolutions.default=true"
-Dhttp.port=8080 -Dhttp.address=server-ip
-Dconfig.resource=openshift.conf
You can see the openshift.conf file being referenced.

I tried a lot of things; eventually I found something that worked:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://"${OPENSHIFT_POSTGRESQL_DB_HOST}":"${OPENSHIFT_POSTGRESQL_DB_PORT}/mydb
db.default.user=(((db-user)))
db.default.password=(((db-password)))
The upshot, it seems, is that you need to watch very carefully for the correct placement of the quotation characters.
It looks "wrong" at first glance, since the last quotation character closes the string just before the ${OPENSHIFT_POSTGRESQL_DB_PORT} variable.
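For context, this is standard HOCON behaviour (the Typesafe Config format Play reads): a ${...} substitution is only resolved when it appears outside quoted strings, and adjacent values are then concatenated. A minimal sketch of the difference (the database name is illustrative):
# Inside quotes the token is literal text, so no substitution happens:
db.default.url="jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}"
# Outside quotes each ${...} is resolved from the environment and
# concatenated with the quoted fragments around it:
db.default.url="jdbc:postgresql://"${OPENSHIFT_POSTGRESQL_DB_HOST}":"${OPENSHIFT_POSTGRESQL_DB_PORT}/mydb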

Related

Dropped rows in Spark when modifying database in MySQL

I've been following the 5-minute how-to for setting up an HTAP database with tidb_tispark, and everything works until I get to the section Launch TiSpark. My first issue occurs when executing the line:
docker-compose exec tispark-master /opt/spark-2.1.1-bin-hadoop2.7/bin/spark-shell
But I got around that by changing the Spark version to the one I found inside the container:
docker-compose exec tispark-master /opt/spark-2.3.3-bin-hadoop2.7/bin/spark-shell
My second issue occurs when executing the three line block:
import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)
ti.tidbMapDatabase("TPCH_001")
When I run the last statement, I get the following output:
scala> ti.tidbMapDatabase("TPCH_001")
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar."
2019-07-11 16:14:36 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
This doesn't prevent me from running the query:
spark.sql("select * from nation").show(30);
But when I follow the further steps of the tutorial to modify the db from MySQL, the changes are not reflected immediately in Spark. Furthermore, at some point in the future (I believe > 5 minutes later), the row that was modified stops showing up in Spark SQL queries.
I'm rather new to this kind of setup and don't really know how to debug this issue. Searches for the warnings I received weren't illuminating.
I don't know if it's helpful, but when I connect to MySQL this is the server version I get:
Server version: 5.7.25-TiDB-v3.0.0-rc.1-309-g8c20289c7 MySQL Community Server (Apache License 2.0)
I'm one of the main developers of TiSpark. Sorry for your bad experience with it.
Due to a Docker problem on my side, I cannot directly reproduce your issue, but it seems you hit one of the bugs fixed recently:
https://github.com/pingcap/tispark/pull/862/files
The tutorial document is not quite up to date and points to an older version; that's why it didn't work with Spark 2.1.1 as in the tutorial. We will update it ASAP.
Newer versions of TiSpark don't use tidbMapDatabase anymore but hook into the catalog directly instead; the method remains only for backward compatibility. Unfortunately, tidbMapDatabase had a bug (introduced when we ported it from an older version): it retrieves the timestamp for queries only once, when you call the function. That causes TiSpark to always use that old timestamp for snapshot reads, so newer data is never seen.
In newer versions of TiSpark (TiSpark 2.0+ with Spark 2.3+), databases and tables are hooked directly into the catalog services, and you can simply call:
spark.sql("use TPCH_001").show
spark.sql("select * from nation").show
This should give you fresh data.
So restart your Spark driver, try just the two lines of code above, and see if it works.
Let me know if this fixes your problem. On our side, we will check our Docker image to make sure it already contains the fix.
If things still go wrong, please run the code below and let us know the version of TiSpark:
spark.sql("select ti_version()").show
Again, sorry for causing you trouble and thanks for trying.
EDIT
To address your comment:
The warning appears because Spark itself first tries to locate the database in its native catalog, which produces the Failed to get database warning. The failover process then delegates the search to TiSpark, which behaves correctly, so the warning can be ignored. It's recommended to add the line below to the log4j.properties in the conf folder of your Spark installation:
log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR
We will polish the Docker tutorial image soon. Thank you so much for trying.

Apache Drill on cluster start error

I installed Apache Drill on a cluster with 3 nodes.
When I use the following command to start it, it does not actually run:
bin/drillbit.sh start
I don't know how to solve this and would like your help. ZooKeeper itself is running without problems.
Then I checked the log, and it shows the following:
Exception in thread "main" org.apache.drill.exec.exception.DrillbitStartupException: Failure while initializing values in Drillbit.
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:287)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:271)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:267)
Caused by: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path
I checked java.library.path; it is the following:
/home/hadoop/bigdata/hadoop-2.7.2/lib/native/::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
So I added the following setting:
declare -x DRILL_JAVA_LIB_PATH="/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib"
However, it does not work, and the same problem occurs as before.
The declare -x DRILL_JAVA_LIB_PATH snippet you provided will not point Drill to the PAM library. Please follow all the instructions in the Drill docs here: https://drill.apache.org/docs/using-jpam-as-the-pam-authenticator/
Note: you will have to perform those steps on all 3 nodes of your cluster.
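In outline, those docs have you install the native JPAM library on every node and point the Drillbit JVM at its directory via drill-env.sh; a rough sketch of the idea, assuming /opt/pam as the chosen directory and that you already have libjpam.so:
# On every node: place the native JPAM library in a local directory
mkdir -p /opt/pam
cp libjpam.so /opt/pam/
# In <drill_home>/conf/drill-env.sh, hand that directory to the Drillbit JVM
export DRILLBIT_JAVA_OPTS="$DRILLBIT_JAVA_OPTS -Djava.library.path=/opt/pam/"
Then restart the Drillbits (bin/drillbit.sh restart) on all nodes so the option takes effect.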

Play TypeSafe Activator fails to start - IllegalArgumentException "Failed to download new template catalog properties"

Moving from Play 2.2.x to the latest Activator last night, I downloaded the minimal 1.2.10 package, extracted it in Program Files (x86)\Typesafe... and put the directory into the system path variable. I cloned my repository, and when I executed activator run it downloaded the required modules and my app was up and running. All great so far. run works!
Then I tried to create a new app, and activator fails, with the following trace:
Checking for a newer version of Activator (current version 1.2.10)...
... our current version 1.2.10 looks like the latest.
Found previous process id: 9632
FOUND REPO = activator-local # file:////C:/Program%20Files%20(x86)/Typesafe/activator-1.2.10-minimal/repository
Play server process ID is 9760
[info] play - Application started (Prod)
[info] play - Listening for HTTP on /127.0.0.1:8888
[info] a.e.s.Slf4jLogger - Slf4jLogger started
[WARN] [10/30/2014 10:47:13.972] [default-akka.actor.default-dispatcher-2] [ActorSystem(default)] Failed to download new template catalog properties: java.lang.IllegalArgumentException: requirement failed: Source file 'C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp' is a directory.
[ERROR] [10/30/2014 10:47:13.972] [default-akka.actor.default-dispatcher-2] [akka://default/user/template-cache] Could not find a template catalog. (activator.templates.repository.RepositoryException: We don't have C:\Users\admin\.activator\1.2.10\templates\cache.properties with an index hash in it, even though we should have downloaded one
activator.templates.repository.RepositoryException: We don't have C:\Users\admin\.activator\1.2.10\templates\cache.properties with an index hash in it, even though we should have downloaded one
at activator.cache.TemplateCacheActor.preStart(TemplateCacheActor.scala:184)
at akka.actor.Actor$class.aroundPreStart(Actor.scala:470)
at activator.cache.TemplateCacheActor.aroundPreStart(TemplateCacheActor.scala:25)
at akka.actor.ActorCell.create(ActorCell.scala:580)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
I've taken a look at several similar issues on SO and elsewhere. I've deleted the .activator directory and retried; I've tried this process from behind a proxy and not, as well as offline (surely offline should work!), but it consistently gives the above error. activator ui gives the same error. I'm stuck, and any suggestions would be appreciated. (Edit: tried with the full Activator download, rather than minimal, and I get the same error.)
Look for reasons it might be impossible to create or access 'C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp' ... maybe a permissions issue?
The failed check is for "is a directory", but that check also fails if the file simply doesn't exist or can't be accessed.
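One quick check along those lines, assuming the path from the error message (a sketch; adjust the user name and the hash suffix to whatever your error shows):
:: See what that path actually is on disk:
dir "C:\Users\admin\.activator\1.2.10\templates"
:: If index.db_6e0565f0c8826b17.tmp really is a directory, remove it so
:: Activator can re-download the template catalog on the next run:
rmdir /s /q "C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp"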

Connecting a MySQL database to Glassfish: classpath is not set or classname is wrong

I'm swapping out a Derby database for a MySQL one. I had everything working before, but after what I thought was the proper configuration I'm getting the error:
Caused by: javax.resource.ResourceException: Class name is wrong or classpath is not set for : com.mysql.jdbc.jdbc2.optional.MysqlDataSource
Full error output from console:
Caused by: javax.resource.ResourceException: Class name is wrong or classpath is not set for : com.mysql.jdbc.jdbc2.optional.MysqlDataSource
at com.sun.gjc.common.DataSourceObjectBuilder.getDataSourceObject(DataSourceObjectBuilder.java:292)
at com.sun.gjc.common.DataSourceObjectBuilder.constructDataSourceObject(DataSourceObjectBuilder.java:114)
at com.sun.gjc.spi.ManagedConnectionFactory.getDataSource(ManagedConnectionFactory.java:1292)
at com.sun.gjc.spi.DSManagedConnectionFactory.getDataSource(DSManagedConnectionFactory.java:148)
at com.sun.gjc.spi.DSManagedConnectionFactory.createManagedConnection(DSManagedConnectionFactory.java:101)
at com.sun.enterprise.resource.allocator.LocalTxConnectorAllocator.createResource(LocalTxConnectorAllocator.java:87)
I've double-checked some of the names, the connection pool, and other resources. I've also added the MySQL driver .jars to the library of GlassFish in both projects. The database was definitely working correctly through Eclipse, because I was able to view tables and display the resources inside the database context of Eclipse. So I know that at least THOSE drivers are working correctly. Also, the persistence.xml file looks good. It references the jdbc/mydatabase JNDI reference like it should, and default JTA is selected as the management type.
Does anyone have another suggestion? Thank you
I've also added the MySQL driver .jars to the library of glassfish in both projects.
It was apparently not done right. The JAR has to go in the /glassfish/domains/[domainname]/lib/ext folder of the GlassFish installation, where [domainname] usually defaults to domain1. You cannot and should not configure it from the Eclipse side.
It looks like I am replying very late; however, people referring to this thread may find the following information useful, so I am posting it here:
Download the connector JAR from http://dev.mysql.com/downloads/connector/j/5.0.html
Unzip the pack and copy mysql-connector-java-<version>-bin.jar
Paste it into the [GlassFish installation directory]/domains/[domain name]/lib folder
Restart your domain and ping to check your connection under JDBC Connection Pools
There you go. If your MySQL is running, it will ping the DB successfully.
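For reference, the same pool can be created and verified from the command line with asadmin; a minimal sketch (pool name, JNDI name, credentials, and database name are illustrative and must match your setup):
# Create a pool backed by the MySQL DataSource class named in the error
asadmin create-jdbc-connection-pool \
  --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
  --restype javax.sql.DataSource \
  --property user=dbuser:password=dbpass:serverName=localhost:portNumber=3306:databaseName=mydatabase \
  mysql-pool
# This ping fails with the same "classpath is not set" error while the driver JAR is missing
asadmin ping-connection-pool mysql-pool
# Expose the pool under the JNDI name your persistence.xml references
asadmin create-jdbc-resource --connectionpoolid mysql-pool jdbc/mydatabase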
I copied the JAR file to $glassfish_install_folder\glassfish\lib; after that it worked. I use GlassFish 4.0.
Check this link from Oracle.
I encountered this issue in 2018 and would like to note that if you are using GlassFish 4 (current is 5), it seems you have to use the Connector/J 5.1.47 version for it to work. If you use the current version (Connector/J 8.0.13), the exception mentioned in the original question keeps appearing, no matter where you place the .jar file.
With Connector/J 5.1.47 it works perfectly.
The solution is:
> asadmin add-library /path/to/mysql-connector-java-bin.jar
Check this link:
https://blog.payara.fish/using-mysql-with-payara
I encountered this issue in 2019 and would like to note that if you are using the Docker image payara/server-full (I am on 5.194 so far), as I do, the location to place the driver JAR is:
/opt/payara/appserver/glassfish/domains/production/lib/
In the end I am doing something like this in the Dockerfile of the Payara server:
RUN wget http://central.maven.org/maven2/org/mariadb/jdbc/mariadb-java-client/2.2.0/mariadb-java-client-2.2.0.jar \
-O /opt/payara/appserver/glassfish/domains/production/lib/mariadb-java-client-2.2.0.jar

Directly upload video captured from webcam

I'm working on a site that needs a feature where users capture their opinions about a specific topic with their webcam and upload the videos to the server. After a short round of research, and after viewing similar questions like this one, I have run out of possibilities:
I cannot use Flash Media Server (out of budget)
I cannot use Red5, as the server where the app will be hosted is not able to build or install it (our server guy told me that Red5 was last updated in 2007, and he got some errors, which I include below in case someone has an idea how to fix them).
Are there any other possibilities?
red5 install errors:
xxyyzz@zzyyxx ~/red5 $ sh red5.sh
Exception in thread "main" java.lang.NoClassDefFoundError: org/red5/server/Standalone
Caused by: java.lang.ClassNotFoundException: org.red5.server.Standalone
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: org.red5.server.Standalone. Program will exit.
xxyyzz@zzyyxx ~/red5 $ ant prepare
Buildfile: /home/xxyyzz/red5/build.xml
[echo] java.home is /usr/lib64/icedtea6/jre and the target version is 1.6
prepare:
[mkdir] Created dir: /home/xxyyzz/red5/bin
[mkdir] Created dir: /home/xxyyzz/red5/dist
[mkdir] Created dir: /home/xxyyzz/red5/bin/testcases
[mkdir] Created dir: /home/xxyyzz/red5/bin/testcases/testreports
BUILD SUCCESSFUL
Total time: 1 second
xxyyzz@zzyyxx ~/red5 $ ant build
Buildfile: /home/xxyyzz/red5/build.xml
[echo] java.home is /usr/lib64/icedtea6/jre and the target version is 1.6
BUILD FAILED
Target "build" does not exist in the project "RED5".
Total time: 1 second
If you're looking for "production ready" solutions, you're pretty much looking at Flash Media Server. Other than that, try googling around for PHP FLV encoding, but I don't think you're going to be able to stream to PHP and encode on the fly. I don't know; you'd have to research and ask a PHP developer. Maybe post a modified version of this question with PHP tags.
Take a look at haxevideo; I've never used it, but it could be what you're after: http://code.google.com/p/haxevideo/
Teo! Have you tried looking for an online service (like Nimbb)? I used Nimbb in one of my projects and it worked very well. I didn't need to spend time on programming, and the technology they offer was very usable for the end users.
Please let me know if my information helps you.