I have an application currently working on my local Dev machine. It uses Wildfly 10, MySQL 5.7 and Hibernate. My application looks for the 'AppDS' datasource from within Wildfly.
I've created a Wildfly 10 container and a MySQL container on OpenShift V3. Typically, I would log into Wildfly and configure a datasource, but all that configuration is lost when a container restarts. I thought it would be a matter of finding my connection environment settings, and using the pre-configured database connections, but I can't find what the variables should be set to, and the default connections don't work without them.
I downloaded and read OpenShift for Developers, but they side-step the issue by creating a direct database connection, rather than going through a datasource.
Exporting the environment variables failed with 'no matches for apps.openshift.io/, Kind=DeploymentConfig'. Is the book out of date? Is it no longer a DeploymentConfig that stores the environment variables?
I would appreciate it greatly if someone could point me in the right direction.
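In case it helps the next reader: the export presumably failed because the book targets a DeploymentConfig, while newer clusters create plain Deployments. A hedged sketch of the equivalent command, assuming the Wildfly workload is a Deployment named wildfly (the name and values are illustrative; check oc get deployments for the real name):

oc set env deployment/wildfly MYSQL_DATABASE=testdb MYSQL_USER=test MYSQL_PASSWORD=test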
I have a project running locally on my machine that uses Wildfly 10, MySQL 5.7 and Hibernate. I found the documentation to be incomplete. After a few days of working with it, I have figured out how to deploy a simple J2EE project with this stack.
I am updating my question with the step-by-step I wish I'd had. I hope this saves someone some time in the future.
Create a new OpenShift user.
Create project dbtest.
Add MySQL to the dbtest project:
The following service(s) have been created in your project: mysql:
Username: test
Password: test
Database Name: testdb
Connection URL: mysql://mysql:3306/
Add Wildfly to the project:
oc login https://api.starter-us-west-1.openshift.com
oc project dbtest
oc status
Scale the current Wildfly pod to 0 (you won't have enough CPU to run three pods, and a redeploy tries to start a new one and hot-swap them).
From the left menu: Applications -> Deployments -> (dbtest) Wildfly10 -> Environment (tab) -> add:
MYSQL_DATABASE=testdb
MYSQL_DB_ENABLED=true
MYSQL_USER=test
MYSQL_PASSWORD=test
Scale the Wildfly pod back to 1.
Use the terminal in the Wildfly pod to run ./add-user.sh and create a management user.
oc port-forward wildfly10-6-rkr58 :9990 (replace wildfly10-6-rkr58 with your pod name, found by clicking on the running pod [the circle with a 1 in it] and noting the pod name in the upper left corner)
Log in to the Wildfly console at 127.0.0.1:<the local port printed by port-forward> and test the MySQLDS datasource. It should now connect.
Go through the environment variables mentioned here to get a better understanding.
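For reference, with those MYSQL_* variables set, the OpenShift Wildfly image builds the MySQLDS datasource for you, so the application should look it up by that JNDI name rather than a hand-configured 'AppDS'. A sketch of roughly what the generated standalone.xml entry looks like (an assumption from memory; MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT are the variables Kubernetes injects for the mysql service, and the exact names can vary by image version):

<datasource jndi-name="java:jboss/datasources/MySQLDS" pool-name="MySQLDS">
    <connection-url>jdbc:mysql://${env.MYSQL_SERVICE_HOST}:${env.MYSQL_SERVICE_PORT}/${env.MYSQL_DATABASE}</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>${env.MYSQL_USER}</user-name>
        <password>${env.MYSQL_PASSWORD}</password>
    </security>
</datasource>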
I'm trying to deploy an instance of SonarQube on a Kubernetes cluster which uses a MySQL instance hosted on Amazon Relational Database Service (RDS).
A stock SonarQube deployment with built-in H2 DB has already been successfully stood up within my Kubernetes cluster with an ELB. No problems, other than the fact that this is not intended for production.
The MySQL instance has been successfully stood up, and I've test-queried it with SQL commands using the username and password that the SonarQube Kubernetes Pod will use. This is using the AWS publicly-exposed host, and port 3306.
To redirect SonarQube to use MySQL instead of the default H2, I've added the following environment variable key-value pair in my deployment configuration (YAML).
spec:
  containers:
    - name: sonarqube2
      image: sonarqube:latest
      env:
        - name: SONARQUBE_JDBC_URL
          value: "jdbc:mysql://MyEndpoint.rds.amazonaws.com:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true"
      ports:
        - containerPort: 9000
For test purposes, I'm using the default "sonar/sonar" username and password, so no need to redefine at this time.
The inclusion of the environment variable causes "CrashLoopBackOff"; otherwise, the default SonarQube deployment works fine. The official Docker Hub page for SonarQube says to use env vars to point at a different database. I'm trying to do the same, just Kubernetes-style. What am I doing wrong?
==== Update: 1/9 ====
The issue has been resolved. See comments below. SonarQube 7.9 and higher does not support MySQL. See full log below.
End of Life of MySQL Support : SonarQube 7.9 and future versions do not support MySQL.
Please migrate to a supported database. Get more details at
https://community.sonarsource.com/t/end-of-life-of-mysql-support
and https://github.com/SonarSource/mysql-migrator
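So the fix is either to pin an image tag older than 7.9 or to point SonarQube at a supported database. A sketch of the latter with PostgreSQL (the endpoint is hypothetical, and note that newer images renamed the variables to SONAR_JDBC_*, so check the docs for the tag you run):

spec:
  containers:
    - name: sonarqube2
      image: sonarqube:latest
      env:
        - name: SONAR_JDBC_URL
          value: "jdbc:postgresql://MyEndpoint.rds.amazonaws.com:5432/sonar"
        - name: SONAR_JDBC_USERNAME
          value: "sonar"
        - name: SONAR_JDBC_PASSWORD
          value: "sonar"
      ports:
        - containerPort: 9000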
I am trying out a small POC (learning experiment) on Docker. I have 3 Docker images, one each for a storefront, a search engine and a database engine, called storefront, solr and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2 and it works fine.
In the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-optimized AMI. I have installed and started an ecs-agent on it. I have created 3 services, with one task for each of the 3 images configured as containers within the task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file and it always fails with "Name or service not known". What should my connection string be? When the service is created I see 2 entries in Route 53, one SRV record and one A record. The A record has as its name <task-id>.docmysql.local. If I use this in the database connection string it works, but it is obviously not the right thing to do with the hardcoded task ID. I have read about AWS Cloud Map (service discovery) but am still not very clear how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? And why did I not have issues when I ran it as a Docker swarm?
I know I can use RDS instead of standing up my own database, and I will try that, but for now I need this working as I have started with it. Thanks for any help.
Well, a few points worth raising before my own solution to the problem:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it with the EC2 launch type? If not, use Fargate; it is simpler to manage.
Now, I've faced this issue on Fargate and discovered that, depending on your container/task definitions, the database can run inside the same task for testing purposes, in which case 127.0.0.1 is the answer.
Across different tasks you need to work with the awsvpc network mode, so you will have this:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. (FROM AWS)
My suggestion is to create a Lambda Function to discover your network interface dynamically.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
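To make the Lambda idea concrete, here is a minimal sketch of the discovery logic in Java with the AWS SDK v2 (the cluster name poc-cluster is hypothetical and error handling is omitted). It resolves the private IP of the single docmysql task so the JDBC URL can be built at startup instead of hardcoding a task ID:

import java.util.List;
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DescribeTasksRequest;
import software.amazon.awssdk.services.ecs.model.KeyValuePair;
import software.amazon.awssdk.services.ecs.model.ListTasksRequest;
import software.amazon.awssdk.services.ecs.model.Task;

public class DbEndpointLookup {
    public static void main(String[] args) {
        try (EcsClient ecs = EcsClient.create()) {
            // Find the one running task behind the docmysql service.
            List<String> arns = ecs.listTasks(ListTasksRequest.builder()
                    .cluster("poc-cluster")          // hypothetical cluster name
                    .serviceName("docmysql")
                    .build()).taskArns();
            Task task = ecs.describeTasks(DescribeTasksRequest.builder()
                    .cluster("poc-cluster")
                    .tasks(arns.get(0))
                    .build()).tasks().get(0);
            // awsvpc tasks carry their ENI details on a task attachment.
            String ip = task.attachments().stream()
                    .filter(a -> "ElasticNetworkInterface".equals(a.type()))
                    .flatMap(a -> a.details().stream())
                    .filter(d -> "privateIPv4Address".equals(d.name()))
                    .map(KeyValuePair::value)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("no ENI found"));
            System.out.println("jdbc:mysql://" + ip
                    + "/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false");
        }
    }
}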
I am trying to understand PCF concepts. Once I am done creating a MySQL service in PCF, how can I manage that database, creating and maintaining tables just as we do in our traditional environment using SQL Developer? I came across one tool, PivotalMySQLWeb, and tried it, but didn't like it much. So if I can somehow get the connection details of the MySQL service, I can use them to connect with SQL Developer.
The links #khalid mentioned are definitely good.
http://docs.pivotal.io/p-mysql/2-0/use.html
https://github.com/andreasf/cf-mysql-plugin#usage
More generally, you can use an SSH tunnel to access any service, not just MySQL. This also allows you to use whatever tool you would like to access the service.
This is documented here, but if for some reason that goes away here are the steps.
Create your target service instance, if you don't have one already.
Push an app, any app. It really doesn't matter, it can be a hello world app. The app doesn't even need to use the service. We just need something to connect to.
Either bind the service from #1 to the app in #2, or create a service key using the service from #1. If you bind to the app, run cf env <app>; if you use a service key, run cf service-key MY-DB EXTERNAL-ACCESS-KEY. Either one will give you your service credentials.
Run cf ssh -L 63306:us-cdbr-iron-east-01.p-mysql.net:3306 YOUR-HOST-APP, where 63306 is the local port you'll connect to on your machine and us-cdbr-iron-east-01.p-mysql.net:3306 are the host and port from the credentials in step #3.
The tunnel is now up, use whatever client you'd like to connect to your service. For example: mysql -u b5136e448be920 -h localhost -p -D ad_b2fca6t49704585d -P 63306, where b5136e448be920 and ad_b2fca6t49704585d are the username and database name from step #3 and 63306 is the local port you picked from step #4.
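If you'd rather verify from code, the same tunnel works for JDBC. A minimal sketch, assuming MySQL Connector/J is on the classpath and reusing the illustrative credentials from step #3 (replace the password placeholder with yours):

import java.sql.Connection;
import java.sql.DriverManager;

public class TunnelTest {
    public static void main(String[] args) throws Exception {
        // Connects through the local end of the cf ssh tunnel from step #4.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mysql://localhost:63306/ad_b2fca6t49704585d",
                "b5136e448be920", "<password from step #3>")) {
            System.out.println("Connected to " + c.getMetaData().getDatabaseProductVersion());
        }
    }
}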
Additionally, if you want to connect to aws-rds-mysql (instantiated from Pivotal Cloud Foundry) from IntelliJ, you can use the DB Navigator plugin (https://plugins.jetbrains.com/plugin/1800-database-navigator) inside IntelliJ, through which database manipulation can be performed.
After creating the ssh tunnel $ cf ssh -L 63306:<DB_HOSTNAME>:3306 YOUR-HOST-APP (as also mentioned in https://docs.pivotal.io/pivotalcf/2-4/devguide/deploy-apps/ssh-services.html),
Go to the DB Navigator plugin and click on Custom under New Connection.
Enter the URL as: jdbc:mysql://<username>:<password>@localhost:63306/<database_name>
The following thread might be helpful for you as well: How do I connect to my MySQL service on Pivotal Cloud Foundry (PCF) via MySQL Workbench or CLI or MySQLWeb Database Management App?
I'm new to Beanstalk. I've created a Rails application and set the production database configuration to use the environment variables that AWS will hopefully provide. I'm using MySQL (mysql2 gem), and want to use RDS and Passenger (I have no preference there).
On my development environment I can run the Rails application with my local MySQL (it is just a basic application I've created for experimentation).
I have added the passenger gem to the Gemfile and bundled, but I'm still using WEBrick in development.
The only thing I did not do by the book is that I did not use 'eb' but rather tried from the console. My application/environment failed to run: during "rake db:migrate" it still thinks I want it to connect to the local MySQL (I guess from the logs that it is not aware of RACK_ENV and hence uses 'development').
Any tips? I can of course try 'eb' next, yet I would prefer to work with the console.
Regards,
Oren
In Elastic Beanstalk (both the web console and the CLI), you can pass environment variables. If you pass the RACK_ENV variable, you will change your environment.
After that you still need to pass your database parameters (DB password, name, ...), which should not be hardcoded into the code.
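For example, when an RDS instance is attached to the Beanstalk environment, AWS injects RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME and RDS_PASSWORD, so a database.yml along these lines (a sketch, not your exact file) avoids hardcoding anything:

production:
  adapter: mysql2
  database: <%= ENV['RDS_DB_NAME'] %>
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: <%= ENV['RDS_PORT'] %>

Anything else can be set from the CLI, e.g. eb setenv RACK_ENV=production.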
Have you tried running
bin/rake db:migrate RAILS_ENV=development
?
I got the same issue and that worked for me.
I recommend you enter the EC2 instance through the command "eb ssh" (the first time you need to specify your .pem file; if you don't have one you can create one in the IAM service) and check your logs for more information about your errors.
If you have problems when you are uploading your code (eb deploy), the log is in this file: "/var/log/eb-activity.log" (remember this file is on your EC2 instance).
If you have problems with your app, you can read the logs in these files: "/var/app/support/logs/production.log" or "/var/app/support/logs/passenger.log".
Another recommendation is to install EB CLI version 3 to manage your EB instance:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html
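With EB CLI 3 you can also pull the recent log bundle without SSH-ing in, which covers most of the files above:

eb logs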
I believed that Elastic Beanstalk would run 'rake db:migrate' by itself. Indeed it seems to try, but that is failing. I gave my bounty to 'Yahs Hef', even though I will only try this evening (UK). My disorientation with AWS caused me to forget the easy solution of running the migration myself. If this does not work by itself, I'll simplify the database configuration as much as possible.
I created my scaled application on the OpenShift server with the following command:
rhc app create MyApp jbossews-2.0 -s
Then added MySQL:
rhc cartridge add mysql-5.5 -a MyApp
My application uses Struts2, Spring & Hibernate. I configured the datasource as follows:
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/MysqlDS"/>
</bean>
The JNDI name "MysqlDS" is defined in .openshift/config/context.xml with the connection URL:
url="jdbc:mysql://5344d4de4382ec43c9000090-myapp.rhcloud.com:37941/mydb"
The problem is my scaled app cannot establish a connection to MySQL, failing with this error:
org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not create connection to database server. Attempted reconnect
3 times. Giving up.)
I'm sure the username & password to access the database are correct. It seems MySQL on the OpenShift server doesn't open its port. When I tried an external database on freemysqlhosting.net (with an open host & port) the application ran well. But I just want to use the MySQL DB on OpenShift. Anyone who has experience with this, please give me some suggestions. Thanks.
Make sure that you restarted your application after adding the MySQL cartridge; sometimes the environment variables don't show up correctly until you restart. Also try to SSH into your gear and see if you can use the "mysql" command to connect to your MySQL database directly.
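For example, from inside the gear (a sketch; the OPENSHIFT_MYSQL_DB_* variables are set by the MySQL cartridge):

rhc ssh MyApp
mysql -h "$OPENSHIFT_MYSQL_DB_HOST" -P "$OPENSHIFT_MYSQL_DB_PORT" -u "$OPENSHIFT_MYSQL_DB_USERNAME" -p"$OPENSHIFT_MYSQL_DB_PASSWORD"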
If you left the .openshift/config/context.xml unchanged, the JNDI name for the MySQL datasource is actually jdbc/MySQLDS and not jdbc/MysqlDS.
This was changed some time ago.
The documentation here http://openshift.github.io/documentation/oo_cartridge_guide.html#tomcat-cartridge-integrations is unfortunately not correct.
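With the corrected name, the bean from the question would become (only the JNDI name changes):

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/MySQLDS"/>
</bean>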