I am trying to deploy a Dancer-based application on OpenShift. I am unable to work around the following issues.
How do I get Dancer to use the OpenShift environment variables, e.g. OPENSHIFT_MYSQL_DB_HOST or OPENSHIFT_DATA_DIR? Putting them in the config.yml file is not working; I tried $OPENSHIFT_DATA_DIR and $ENV{OPENSHIFT_DATA_DIR}. Overriding them in the application code is not working either...
Does OpenShift store the console log somewhere? rhc tail does not provide the complete output...
Is it possible to run the app on the server from an SSH shell? I tried it but am getting a permission-denied error.
Dancer is a Perl-based web framework; see https://metacpan.org/pod/Dancer::Cookbook
I am not sure what a Dancer application is, but for a Java application with a MySQL database on OpenShift, you access it with the following code.
Import the following:
import java.sql.Connection;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;
Then use the following to obtain a connection:
InitialContext ic = new InitialContext();
Context initialContext = (Context) ic.lookup("java:comp/env");
DataSource dataSource = (DataSource) initialContext.lookup("jdbc/MysqlDS"); // JNDI name of the MySQL DataSource configured for the app
Connection connection = dataSource.getConnection();
I hope this helps.
Currently I have some new modules using Spring Boot and an H2 embedded database for functional testing.
The legacy module works with a lot of Liquibase scripts to construct the whole database.
I was looking to use Wix Embedded MySQL to make the test database more production-like. After reading the docs I did not find anything specific about how to handle scripts using tools like Liquibase or Flyway.
Is it possible to execute a Liquibase goal on this embedded database after its startup?
After a few days of research, yes, there is a way of running Liquibase over Wix Embedded MySQL.
Here is the step by step:
Configuring Wix Embedded Database
The configuration around Wix is pretty straightforward, as described on their GitHub:
MysqldConfig config = aMysqldConfig(v5_7_latest)
.withCharset(UTF8)
.withPort(3060)
.withUser("myuser", "mypassword")
.withTimeZone("America/Sao_Paulo")
.build();
EmbeddedMysql mysqld = anEmbeddedMysql(config)
.addSchema("myschema")
.start();
Liquibase configuration
I have added the Liquibase Maven dependency to my project, so we have access to the Liquibase code programmatically; the API can be found here.
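For reference, the dependency looks roughly like this (the coordinates are the standard Liquibase ones; the version is only an example, not necessarily the one used in the original answer):
<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
    <version>3.5.3</version> <!-- example version; use the release you are on -->
</dependency>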
First we have to build a DataSource and pass its connection to Liquibase so it can find the correct implementation for our database; with the result we can then use the Liquibase object to execute the goals:
DataSourceBuilder<?> dataSourceBuilder = DataSourceBuilder.create();
dataSourceBuilder.username(mysqld.getConfig().getUsername());
dataSourceBuilder.password(mysqld.getConfig().getPassword());
dataSourceBuilder.driverClassName(com.mysql.jdbc.Driver.class.getName());
dataSourceBuilder.url("jdbc:mysql://localhost:3060/myschema");
DataSource dataSource = dataSourceBuilder.build();
Database database = DatabaseFactory
.getInstance()
.findCorrectDatabaseImplementation(
new JdbcConnection(dataSource.getConnection()) // Fetch MySQL database implementation
);
Liquibase liquibase = new Liquibase("liquibase/mychanges.xml", // Path to the Liquibase changelog
new ClassLoaderResourceAccessor(),
database);
liquibase.update(new Contexts()); // This executes liquibase:update on the embedded database
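To give a sense of where this fits in a test, here is a minimal sketch (the class and method names are mine, not from the original answer) that starts the embedded database once per test class, runs the Liquibase update, and stops the instance afterwards:
import com.wix.mysql.EmbeddedMysql;
import com.wix.mysql.config.MysqldConfig;
import org.junit.AfterClass;
import org.junit.BeforeClass;

import static com.wix.mysql.EmbeddedMysql.anEmbeddedMysql;
import static com.wix.mysql.config.MysqldConfig.aMysqldConfig;
import static com.wix.mysql.distribution.Version.v5_7_latest;

public class EmbeddedMysqlLiquibaseTest {

    private static EmbeddedMysql mysqld;

    @BeforeClass
    public static void startDatabase() throws Exception {
        MysqldConfig config = aMysqldConfig(v5_7_latest)
                .withPort(3060)
                .withUser("myuser", "mypassword")
                .build();
        mysqld = anEmbeddedMysql(config)
                .addSchema("myschema")
                .start();
        // ...build the DataSource and call liquibase.update(...) exactly as shown above...
    }

    @AfterClass
    public static void stopDatabase() {
        if (mysqld != null) {
            mysqld.stop(); // shut the embedded instance down once the tests are done
        }
    }
}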
I have the following code that I would like to execute. I have tried requiring mysql and node-mysql and they both give me the same error:
Code:
var AWS = require("aws-sdk");
var mysql = require("mysql");
exports.handler = (event, context, callback) => {
try {
console.log("GOOD");
}
catch (error) {
context.fail(`Exception: ${error}`)
}
};
Error:
{
"errorMessage": "Cannot find module 'mysql'",
"errorType": "Error",
"stackTrace": [
"Function.Module._load (module.js:417:25)",
"Module.require (module.js:497:17)",
"require (internal/module.js:20:19)",
"Object.<anonymous> (/var/task/index.js:2:13)",
"Module._compile (module.js:570:32)",
"Object.Module._extensions..js (module.js:579:10)",
"Module.load (module.js:487:32)",
"tryModuleLoad (module.js:446:12)",
"Function.Module._load (module.js:438:3)"
]
}
How do I import mysql into Node on Lambda, or otherwise get this to work?
OK, so this is expected to happen.
The problem is that AWS Lambda runs on a different machine, and there is no way for you to configure that particular machine to run in a custom environment. You can, however, package the mysql (or node-mysql) Node module in a zip and upload it to AWS Lambda. The steps are:
npm install mysql --save
Zip your folder, including your node_modules directory
Upload this zip file as your code in AWS Lambda (see the example commands below).
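A rough command-line sketch of those steps (the function name myLambda and the zip file name are placeholders, not taken from the question):
npm install mysql --save
zip -r function.zip index.js node_modules
# upload via the Lambda console, or with the AWS CLI:
aws lambda update-function-code --function-name myLambda --zip-file fileb://function.zip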
You can also take a better approach by using the Serverless Framework. More info here. In this approach, you write a YAML file which contains all the details and configuration you want to deploy your Lambda with. Under your Lambda configuration, specify the path to your node modules (say, node_modules/**) under the package -> include section. This will package your required modules along with your code. Later, using the command line, you can deploy this Lambda. It uses the AWS CloudFormation service and is one of the most preferred ways of deploying resources.
More information on packaging using Serverless Framework can be found here.
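A minimal serverless.yml sketch of that idea (the service name, handler, and runtime are assumptions, not taken from the post):
service: my-mysql-lambda
provider:
  name: aws
  runtime: nodejs6.10      # example; use whichever Node runtime your function targets
functions:
  handler:
    handler: index.handler # matches exports.handler in index.js
package:
  include:
    - node_modules/**      # bundle the installed mysql module with the code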
Note: To use the Serverless Framework there are a couple of setup steps, like getting API keys for your user, setting the right permissions in IAM, etc. These are just initial setup and won't be needed later; do perform them prior to deploying with the Serverless Framework.
Hope this helps!
In case anybody needs an alternative:
You can use the Cloud9 IDE (which is free) to open the Lambda function and run npm init from the terminal window in the Lambda function's folder. This will produce the Node package file, which can then be used to install dependencies.
If using package.json, simply add the below and run "npm install":
{
"dependencies": {
"mysql": "2.12.0"
}
}
I experienced this when using knex, although I had mysql in my package.json.
I had to require('mysql') in my lambda (or a file it references) so that Serverless packages it during deployment.
I want to create a database within a pipeline script to be used by the deployed app. But first I started testing the connection. I got this problem:
java.sql.SQLException: No suitable driver found for jdbc:mysql://mysql:3306/test_db
I have the database plugin and the MySQL database plugin installed.
How do I get the JDBC driver?
import groovy.sql.Sql
node{
def sql = Sql.newInstance("jdbc:mysql://mysql:3306/test_db", "user","passwd", "com.mysql.jdbc.Driver")
def rows = sql.execute "select count(*) from test_table;"
echo rows.dump()
}
Update after albciff's answer:
My versions:
Jenkins = 2.19.1
Database plugin = 1.5
Mysql database plugin = 1.1
The latest test script:
import groovy.sql.Sql
Class.forName("com.mysql.jdbc.Driver")
Which throws:
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
From the MySQL Database Plugin documentation you can see that the JDBC driver for MySQL is included:
Note that MySQL JDBC driver is under GPLv2 with FOSS exception. This
plugin by itself qualifies under the FOSS exception, but if you are
redistributing this plugin, please do check the license terms.
Drizzle(+MySQL) Database Plugin is available as an alternative to this
plugin, and that one is under the BSD license.
More concretely, the current latest version (1.1) of this plugin contains connector version 5.1.38:
Version 1.1 (May 21, 2016) mysql-connector version 5.1.38
So, in order to have the driver available, you probably have to force the driver to be registered.
To do so, use Class.forName("com.mysql.jdbc.Driver") before instantiating the connection in your code:
import groovy.sql.Sql
node{
Class.forName("com.mysql.jdbc.Driver")
def sql = Sql.newInstance("jdbc:mysql://mysql:3306/test_db", "user","passwd", "com.mysql.jdbc.Driver")
def rows = sql.execute "select count(*) from test_table;"
echo rows.dump()
}
UPDATE:
In order to have the JDBC connector classes available in Jenkins pipeline Groovy scripts, you need to update the Database plugin to the latest current version:
Version 1.5 (May 30, 2016) Pipeline Support
You can simply add the Java connector to the Java classpath.
If Jenkins is running on Java < 9, you will probably find the right place to be something like:
<java_home>/jre/lib/ext
If Jenkins is running on Java >= 9, you will probably find the right place to be something like:
/usr/share/jenkins/jenkins.war
To find your paths you can check:
http://your.jenkins.host/systemInfo (or navigate to the system info page in the GUI) and search for java.ext.dirs or java.class.path
http://your.jenkins.host/script (run a console script such as System.getProperty("java.ext.dirs") or System.getProperty("java.class.path"))
This snippet can help you with the jenkins.war approach when running inside Docker:
# Add extra jars to the default Jenkins java classpath (/usr/share/jenkins/jenkins.war)
RUN sudo mkdir -p /usr/share/jenkins/WEB-INF/lib/
RUN whereis jar # just to find the full path of the jar command to use with sudo
COPY ./jar-ext/groovy/mysql-connector-java-8.0.21.jar /usr/share/jenkins/WEB-INF/lib/
RUN cd /usr/share/jenkins && sudo /opt/java/openjdk/bin/jar -uvf jenkins.war ./WEB-INF/lib/mysql-connector-java-8.0.21.jar
For Jenkins running on Java >= 9 add the jdbc drivers under ${JENKINS_HOME}/war/WEB-INF/lib and under the --webroot directory.
Here's yet another question about the JDBC MySQL driver. Considering the number of search results I got when I googled, I'm pretty bummed that nothing I found in them worked for me.
The error:
hostname# java -cp /usr/share/java/mysql-connector.jar:/home/user JDBCTest
java.sql.SQLException: No suitable driver found for jdbc:mysql://<db ip>:3306/dbname
at java.sql.DriverManager.getConnection(DriverManager.java:596)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at JDBCTest.main(sqltest.java:14)
The code (pulled from a short how-to):
import java.sql.Connection;
import java.sql.DriverManager;
class JDBCTest {
private static final String url = "jdbc:mysql://dbipaddress:3306/dbname";
private static final String user = "username";
private static final String password = "password";
public static void main(String args[]) {
try {
Connection con = DriverManager.getConnection(url, user, password);
System.out.println("Success");
} catch (Exception e) {
e.printStackTrace();
}
}
}
I'm 90% certain /usr/share/java/mysql-connector-java.jar is the correct path for the class. That's what I've found both online and using locate.
I've tried setting the environment classpath to CLASSPATH=$CLASSPATH:/usr/share/java/mysql-connector-java.jar in /etc/environment. As you can see, I've tried the -cp flag as well.
I can connect to the MySQL server and database with the credentials I have in the JDBCTest class using the command-line mysql client. So it is not an error with the DB server or my user/password.
As far as I can tell, my jdbc url is correct. That was one of the more common problems I found when searching...
I'm using Ubuntu 12.04 64bit on my servers.
libmysql-java is installed, as is openjdk-7-jre-headless.
I'm running this completely outside of Tomcat, so all the answers saying to copy the driver into Tomcat's directory shouldn't apply.
So, I'm stumped. I would think using the -cp flag would just force it to work. Is there something in my java install missing? Something that got left out of openjdk-7-jre-headless?
How do I fix this?
Note: This class is just a quick test to help me diagnose why a larger (proprietary) app will not connect to my db. The larger app throws the same error. I'm hoping that fixing this small class will fix the larger app.
You are probably using a version of the MySQL JDBC driver that is not JDBC 4 compliant, so it is not automatically loaded by DriverManager. In that case you need to explicitly load it using:
Class.forName("com.mysql.jdbc.Driver");
The other option is to use a version of the library that is JDBC 4 compliant and will be automatically loaded.
Try adding the following on the first line of your main method:
Class.forName("com.mysql.jdbc.Driver");
If it throws an exception, then the JVM cannot access /usr/share/java/mysql-connector.jar. If that is the case, then check file permissions using:
ls -lah /usr/share/java/mysql-connector.jar
You should have at least read access to this file, and obviously the file should exist.
I generated a .war file for my SpringMVC + Maven + Hibernate + MySQL app, which was working perfectly fine on localhost with a local MySQL database. The way I configure the database is through a WebAppConfig.java file which looks at an application.properties file and retrieves the appropriate information.
Then I created an OpenShift account and deployed that .war file. I added MySQL and PHPMyAdmin cartridges so I can maintain a database. When I try to retrieve information or push to the database through my application I receive this error.
HTTP Status 500 - Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection
message Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection
exception org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:948)
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:838)
javax.servlet.http.HttpServlet.service(HttpServlet.java:641)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:812)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
I already added the appropriate information for my database in my properties file so I don't think that is the issue.
application.properties
#DB
db.driver = com.mysql.jdbc.Driver
db.url = jdbc:mysql://{OPENSHIFT_MYSQL_DB_HOST}:{OPENSHIFT_MYSQL_DB_PORT}/springmvc
db.username = {OPENSHIFT_MYSQL_DB_USERNAME}
db.password = {OPENSHIFT_MYSQL_DB_PASSWORD}
#Hibernate
hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
hibernate.show_sql = true
entitymanager.packages.to.scan = org.example.app.model
hibernate.cache.provider_class = org.hibernate.cache.NoCacheProvider
Note: In my actual code I have the actual OPENSHIFT_MYSQL_DB_HOST and OPENSHIFT_MYSQL_DB_PORT values not those placeholders!
I forgot to actually answer this question.
I just want to clarify once again that using the 'OPENSHIFT' variables rather than putting the ACTUAL values in application.properties fixed the issue.
db.url = jdbc:mysql://${OPENSHIFT_DB_HOST}:${OPENSHIFT_DB_PORT}/${OPENSHIFT_APP_NAME}
db.username = ${OPENSHIFT_MYSQL_DB_USERNAME}
db.password = ${OPENSHIFT_MYSQL_DB_PASSWORD}
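If the placeholders are not being resolved in your setup, another option (just a sketch; the bean wiring is illustrative and not taken from the asker's WebAppConfig) is to read the OpenShift environment variables directly when building the DataSource:
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// inside a @Configuration class such as WebAppConfig
@Bean
public DriverManagerDataSource dataSource() {
    String host = System.getenv("OPENSHIFT_MYSQL_DB_HOST");
    String port = System.getenv("OPENSHIFT_MYSQL_DB_PORT");
    String app  = System.getenv("OPENSHIFT_APP_NAME");

    DriverManagerDataSource ds = new DriverManagerDataSource();
    ds.setDriverClassName("com.mysql.jdbc.Driver");
    ds.setUrl("jdbc:mysql://" + host + ":" + port + "/" + app);
    ds.setUsername(System.getenv("OPENSHIFT_MYSQL_DB_USERNAME"));
    ds.setPassword(System.getenv("OPENSHIFT_MYSQL_DB_PASSWORD"));
    return ds;
}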
Make sure the MySQL cartridge is up and running; if need be, try restarting it. Otherwise, please post your properties file. Also, please read the following threads; they may be of help:
https://www.openshift.com/forums/openshift/hibernate-mysql-connection-failing
https://www.openshift.com/forums/openshift/mysql-db-stops-responding-after-some-time
Thanks for posting to our forums as well:
https://www.openshift.com/forums/openshift/openshift-app-cant-connect-to-mysql-jdbcconnectionexception-could-not-open
Looks like you'll want to use:
db.username = {OPENSHIFT_MYSQL_DB_USERNAME}
db.password = {OPENSHIFT_MYSQL_DB_PASSWORD}
instead of:
db.username = root
db.password = pass
You're missing the $ in the variable names. You can also run it locally very easily to make sure it's just the MySQL variables and not a coding error.
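For local testing, the same properties can simply point at a local database with real values, for example (using the database name from the question and the example credentials above):
db.url = jdbc:mysql://localhost:3306/springmvc
db.username = root
db.password = pass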
Have you checked phpMyAdmin to make sure MySQL is up, has the database and tables you expect, and to validate all your SQL?
Does WebAppConfig have the proper Spring annotations? Does it build fully with no errors? Do your unit tests work? Do you have all the Maven dependencies and versions established?
This has worked for me on OpenShift on all their available Java server types.
I don't understand why OpenShift forces us to use their environment variables instead of letting us use "localhost:3306" and the actual values for username/password. This is very inconvenient. Also, adding this line of code jdbc:mysql://${OPENSHIFT_DB_HOST}:${OPENSHIFT_DB_PORT}/${OPENSHIFT_APP_NAME}
in my application-context.xml gets a compilation error since Spring doesn't recognize these values.