SonarQube on Kubernetes with MySQL via AWS RDS

I'm trying to deploy an instance of SonarQube on a Kubernetes cluster which uses a MySQL instance hosted on Amazon Relational Database Service (RDS).
A stock SonarQube deployment with built-in H2 DB has already been successfully stood up within my Kubernetes cluster with an ELB. No problems, other than the fact that this is not intended for production.
The MySQL instance has been successfully stood up, and I've test-queried it with SQL commands using the username and password that the SonarQube Kubernetes Pod will use. This is using the AWS publicly-exposed host, and port 3306.
To redirect SonarQube to use MySQL instead of the default H2, I've added the following environment variable key-value pair in my deployment configuration (YAML).
spec:
  containers:
    - name: sonarqube2
      image: sonarqube:latest
      env:
        - name: SONARQUBE_JDBC_URL
          value: "jdbc:mysql://MyEndpoint.rds.amazonaws.com:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true"
      ports:
        - containerPort: 9000
For test purposes, I'm using the default "sonar/sonar" username and password, so no need to redefine at this time.
The inclusion of the environment variable causes "CrashLoopBackOff". Otherwise, the default SonarQube deployment works fine. The official Docker Hub page for SonarQube says to use env vars to point to a different database. I'm trying to do the same, just Kubernetes-style. What am I doing wrong?
==== Update: 1/9 ====
The issue has been resolved. See comments below. SonarQube 7.9 and higher do not support MySQL. See the full log below.
End of Life of MySQL Support : SonarQube 7.9 and future versions do not support MySQL.
Please migrate to a supported database. Get more details at
https://community.sonarsource.com/t/end-of-life-of-mysql-support
and https://github.com/SonarSource/mysql-migrator
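For anyone who lands here with the same setup: the env-var approach itself is fine, the database just has to be a supported engine. Below is a minimal sketch, assuming a PostgreSQL RDS instance (the endpoint and credentials are placeholders, and newer SonarQube images have since renamed these variables to SONAR_JDBC_*), of pointing the same deployment at PostgreSQL instead:
spec:
  containers:
    - name: sonarqube2
      image: sonarqube:latest
      env:
        - name: SONARQUBE_JDBC_URL
          value: "jdbc:postgresql://<postgres-endpoint>.rds.amazonaws.com:5432/sonar"
        - name: SONARQUBE_JDBC_USERNAME
          value: "sonar"
        - name: SONARQUBE_JDBC_PASSWORD
          value: "sonar"
      ports:
        - containerPort: 9000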

Related

How to Connect Golang application to mysql statefulset in Kubernetes

I followed the official walkthrough on how to deploy MySQL as a statefulset here https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
I have it up and running well but the guide says:
The Client Service, called mysql-read, is a normal Service with its own cluster IP that distributes connections across all MySQL Pods that report being Ready. The set of potential endpoints includes the primary MySQL server and all replicas.
Note that only read queries can use the load-balanced Client Service. Because there is only one primary MySQL server, clients should connect directly to the primary MySQL Pod (through its DNS entry within the Headless Service) to execute writes.
this is my connection code:
import (
    "fmt"
    "log"
    "os"
    "time"

    _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
    "github.com/jmoiron/sqlx"
)

// username, password, host and schema are package-level configuration values (not shown here).
func NewMysqlClient() *sqlx.DB {
    // username:password@protocol(address)/dbname?param=value
    dataSourceName := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true&multiStatements=true",
        username, password, host, schema,
    )
    log.Println(dataSourceName)
    var mysqlClient *sqlx.DB
    var err error
    connected := false
    log.Println("trying to connect to db")
    for i := 0; i < 7; i++ {
        mysqlClient, err = sqlx.Connect("mysql", dataSourceName)
        if err == nil {
            connected = true
            break
        } else {
            log.Println(err)
            log.Println("failed will try again in 30 secs!")
            time.Sleep(30 * time.Second)
        }
    }
    if !connected {
        log.Println(err)
        log.Println("Couldn't connect to db will exit")
        os.Exit(1)
    }
    log.Println("database successfully configured")
    return mysqlClient
}
when I connect the app to the headless MySQL service, I get:
Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement"
I am guessing it is connecting to one of the slave replicas. When I connect to the mysql-0.mysql host, everything works fine, which is expected as this is the master node.
My question is: how will my application be able to read from the replica nodes if we only connect to the master, given that the application also needs to write data?
I tried using mysql-0.mysql,mysql-1.mysql,mysql-2.mysql but then I get:
dial tcp: lookup mysql-0.mysql;mysql-1.mysql,mysql-2.mysql: no such host
So I want to know if there is any way to connect to the three replicas together, so that we write to the master and read from any of them, as with other databases like Mongo etc.
If there is no way to connect to all the replicas, how would you suggest that I read from the replicas and write to the master?
Thank you!
You have to use the Service name when connecting to MySQL from the Go application.
So your traffic flows like this: the Go application Pod, running inside the same Kubernetes cluster, sends a request to the MySQL Service -> the MySQL Service forwards the traffic to the MySQL StatefulSet Pods (in other words, the replicas).
So if you have created the Service, in your case the hostname will be the Service name: mysql.
For example you can refer this : https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Notice how WordPress is connecting to MySQL:
containers:
  - image: wordpress:4.8-apache
    name: wordpress
    env:
      - name: WORDPRESS_DB_HOST
        value: wordpress-mysql
It's using the MySQL Service name wordpress-mysql as the hostname to connect.
If you just want to connect to a read replica, you can use the Service name mysql-read.
OR
You can also try connecting with:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h mysql-0.mysql
Option 2
If you just want to connect to a specific Pod (for example, the writer), you can use
<pod-name>.mysql
The Headless Service provides a home for the DNS entries that the
StatefulSet controller creates for each Pod that's part of the set.
Because the Headless Service is named mysql, the Pods are accessible
by resolving <pod-name>.mysql from within any other Pod in the same
Kubernetes cluster and namespace.
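Putting the two hostnames together, here is a rough sketch of wiring the application Deployment to both endpoints (the DB_WRITE_HOST/DB_READ_HOST variable names and the image are made up for illustration; the Go code would need to read them and open one connection per host):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
        - name: go-app
          image: my-go-app:latest      # placeholder image
          env:
            - name: DB_WRITE_HOST
              value: mysql-0.mysql     # headless DNS entry of the primary Pod (writes)
            - name: DB_READ_HOST
              value: mysql-read        # load-balanced Service (read-only queries)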
Another approach is for your application code to ignore the master/replica distinction entirely and operate as if it were connected to a single instance, with read/write query splitting abstracted away in a capable proxy. That proxy is then responsible for routing write queries to the master instance and read queries to the replica instances.
Example proxy - https://proxysql.com/
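As a very rough sketch of that layout (the image name, labels and ports are assumptions, and ProxySQL still needs its own proxysql.cnf defining mysql-0.mysql as the writer, the replicas as readers, and the read/write query rules, which is not shown here), the application would then connect to a single mysql-proxy Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-proxy
  template:
    metadata:
      labels:
        app: mysql-proxy
    spec:
      containers:
        - name: proxysql
          image: proxysql/proxysql     # assumed image name
          ports:
            - containerPort: 6033      # ProxySQL's default MySQL protocol port
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-proxy
spec:
  selector:
    app: mysql-proxy
  ports:
    - port: 3306                       # the Go app connects to mysql-proxy:3306
      targetPort: 6033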

How to upgrade Aurora serverless MySQL cluster from 5.6 to 5.7 when using CloudFormation

I have Aurora serverless MySQL cluster running engine version 5.6. It is set up using CloudFormation.
What is the best way to upgrade the cluster to support MySQL 5.7?
I tried changing EngineVersion from 5.6 to 5.7 and Engine from aurora to aurora-mysql, as well as specifying a new parameter group for 5.7.
Updating the stack with these changes returns an error:
In-place upgrade of the engine to a new major version isn't supported on serverless engine mode. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidDBClusterStateFault;
I don't trust this error, as this shouldn't be a major version change, and what documentation I can find suggests this should be possible.
Below is the CloudFormation code snippet, excluding irrelevant properties:
RDSDBClusterParameterGroup:
  Type: 'AWS::RDS::DBClusterParameterGroup'
  Properties:
    Description: Aurora Cluster Parameter Group for aurora-mysql5.7
    Family: aurora-mysql5.7
    Parameters:
      general_log: '0'
RDSCluster:
  Type: 'AWS::RDS::DBCluster'
  DependsOn:
    - RDSDBClusterParameterGroup
  Properties:
    DBClusterParameterGroupName:
      Ref: RDSDBClusterParameterGroup
    Engine: aurora-mysql
    EngineMode: serverless
    EngineVersion: 5.7
    [..]
I wasn't able to perform an in-place upgrade. It counts as a major version upgrade, since we are going from Aurora MySQL v1 (MySQL 5.6 compatible) to v2 (MySQL 5.7 compatible).
It was a bit complicated to find the best solution since I had to use CFN.
Resolved it like this:
using console created snapshot from existing 5.6 cluster
restored snapshot to a new 5.7 supported cluster (Aurora serverless v2)
imported the new cluster resource in the existing CFN stack
updated the template in my pipeline and run deployment again(no changes since it is already imported)
verified everything works in the new cluster and all data is present.
deleted the old v1 (5.6) cluster from the template and stack.
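For the import step, here is a rough sketch (the logical ID, cluster identifier and exact engine version are placeholders, and the properties have to match the cluster you actually restored) of how the restored 5.7 cluster can be declared in the template; CloudFormation resource import requires an explicit DeletionPolicy on the imported resource:
RDSClusterV2:
  Type: 'AWS::RDS::DBCluster'
  DeletionPolicy: Retain                          # required when importing an existing resource
  Properties:
    DBClusterIdentifier: my-restored-57-cluster   # identifier of the cluster restored from the snapshot
    Engine: aurora-mysql
    EngineMode: serverless
    EngineVersion: '5.7.mysql_aurora.2.07.1'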
It is now possible to do in-place upgrades as described in https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.MajorVersionUpgrade.html#AuroraMySQL.Updates.MajorVersionUpgrade.1to2
I was able to upgrade using CloudFormation by changing:
DBCluster > Engine to 'aurora-mysql'
DBCluster > EngineVersion to '5.7.mysql_aurora.2.07.1' [1]
DBClusterParameterGroup > Family to 'aurora-mysql5.7'
Because a DBClusterParameterGroup already existed, I had to change the logical ID and chose RDSDBClusterParameterGroup57.
Using your example:
RDSDBClusterParameterGroup57:
  Type: 'AWS::RDS::DBClusterParameterGroup'
  Properties:
    Description: Aurora Cluster Parameter Group for aurora-mysql5.7
    Family: aurora-mysql5.7
    Parameters:
      general_log: '0'
RDSCluster:
  Type: 'AWS::RDS::DBCluster'
  DependsOn:
    - RDSDBClusterParameterGroup57
  Properties:
    DBClusterParameterGroupName:
      Ref: RDSDBClusterParameterGroup57
    Engine: aurora-mysql
    EngineMode: serverless
    EngineVersion: '5.7.mysql_aurora.2.07.1'
    [..]
[1] You can check available versions by running:
aws rds describe-db-engine-versions --engine aurora-mysql --query 'DBEngineVersions[?contains(SupportedEngineModes,`serverless`)]'

connecting jenkins hosted on kubernetes to MySQL on Google Cloud Platform

I have hosted Jenkins on a Kubernetes cluster which is hosted on Google Cloud Platform. I am trying to run a Python script through Jenkins. The script needs to read a few values from MySQL. The MySQL instance runs separately on one of the instances. I have been facing issues connecting from Kubernetes to the MySQL instance. I am getting the following error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '35.199.154.36' (timed out)
This is the document I came across
According to the document, I tried connecting via Private IP address.
I generated a Secret which has the MySQL username and password, and included the host IP address in the following format, taking this document as reference:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  hostname: <MySQL external ip address>
kubectl create secret generic literal-token --from-literal user=<username> --from-literal password=<password>
This is the raw yaml file for the pod that I am trying to insert in the Jenkins Pod template.
Any help regarding how I can overcome this SQL connection problem would be appreciated.
You can't create a Secret in the pod template field. You need to either create the Secret before running the jobs and mount it from your pod template, or just refer to your user/password in your pod templates as environment variables, depending on your security requirements.
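As an illustration, here is a minimal sketch (the pod and container names and image are placeholders; also note that values under a Secret's data field must be base64-encoded, or go under stringData instead) of exposing the credentials from the Secrets above as environment variables in a pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: python-db-job
spec:
  containers:
    - name: python-job
      image: python:3.9
      env:
        - name: DB_HOST
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: hostname
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: literal-token
              key: user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: literal-token
              key: password
  restartPolicy: Never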

Configure a Wildfly 10 MySQL datasource in OpenShift v3

I have an application currently working on my local Dev machine. It uses Wildfly 10, MySQL 5.7 and Hibernate. My application looks for the 'AppDS' datasource from within Wildfly.
I've created a Wildfly 10 container and a MySQL container on OpenShift V3. Typically, I would log into Wildfly and configure a datasource, but all that configuration is lost when a container restarts. I thought it would be a matter of finding my connection environment settings, and using the pre-configured database connections, but I can't find what the variables should be set to, and the default connections don't work without them.
I downloaded and read OpenShift for Developers, but they side-step the issue by creating a direct database connection, rather than going through a datasource.
Exporting the environment variables failed because of 'no matches for apps.openshift.io/, Kind=DeploymentConfig'. Is the book out of date? Are they not using DeploymentConfig to store environment variables?
I would appreciate it greatly if someone could point me in the right direction.
I have a project running locally on my machine that uses Wildfly 10, MySQL 5.7 and Hibernate. I found the documentation to be incomplete.
I am updating my question with the step-by-step I wish I'd had. I hope this saves someone some time in the future.
create new openshift user
create project dbtest
add MySQL to dbtest project:
The following service(s) have been created in your project: mysql:
Username: test
Password: test
Database Name: testdb
Connection URL: mysql://mysql:3306/
add Wildfly to the project:
oc login https://api.starter-us-west-1.openshift.com
oc project dbtest
oc status
scale current wildfly pod to 0. (you won't have enough CPU to run 3 pods, and redeploy tries to start a new one and hot swap them)
From the left menu: Applications -> Deployments -> (dbtest) wildfly10 -> Environment (tab) -> add the following (a YAML equivalent is sketched at the end of this answer):
MYSQL_DATABASE=testdb
MYSQL_DB_ENABLED=true
MYSQL_USER=test
MYSQL_PASSWORD=test
scale the wildfly pod back up to 1.
use terminal in Wildfly to run ./add-user.sh
oc port-forward wildfly10-6-rkr58 :9990 (replace wildfly10-6-rkr58 with your pod name, found by clicking on the running pod [circle with a 1 in it] and noting the pod name in the upper left corner])
log in to Wildfly at 127.0.0.1:<forwarded local port> and test the MySQLDS datasource. It should now connect.
Go through the environment variables mentioned here to get a better understanding.
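For reference, the same environment variables from the console step above expressed as YAML in the deployment spec (a sketch only; the container name depends on how the Wildfly deployment was generated):
spec:
  template:
    spec:
      containers:
        - name: wildfly10
          env:
            - name: MYSQL_DATABASE
              value: testdb
            - name: MYSQL_DB_ENABLED
              value: "true"
            - name: MYSQL_USER
              value: test
            - name: MYSQL_PASSWORD
              value: test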

How to deploy reportportal into production environment

Based on the deployment instructions, we need to deploy ReportPortal to the production environment.
The instructions mention the following:
For production usage we recommend to:
deploy MongoDB database at separate environment, and connect App to this server. MongoDB is mandatory part.
choose only required Bug Tracking System integration service. Exclude the rest
Our question is:
how do we connect the first VM, running dockerized ReportPortal, to the second VM hosting the database?
Maybe there is an environment variable which points the app to the database?
There are a couple of connection settings that should be applied to the services which use the database. Here is the list:
- rp.mongo.host=XXX
- rp.mongo.port=27017
- rp.mongo.dbName=reportportal
- rp.mongo.user=XXX
- rp.mongo.password=XXX
MongoDB is used by the following services: UAT (authorization), API, JIRA, RALLY. There is an example docker-compose YAML which contains all the mentioned properties.
As I understand it, the MongoDB container should be removed from the docker-compose config, and hence we should create a second config with the DB (mongo) container:
image: mongo:3.2
## Uncomment if needed
# ports:
# - "27017:27017"
volumes:
- reportportal-data:/data/db
restart: always
## Consider disabling smallfiles for production usage
command: --smallfiles
And then set the DB connection settings in the first docker-compose.yml file?
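For illustration, a minimal sketch (the service name and image are assumptions; the host placeholder is the second VM running MongoDB) of passing those rp.mongo.* settings to one of the ReportPortal services in the first VM's docker-compose.yml:
api:
  image: reportportal/service-api        # image name assumed
  environment:
    - rp.mongo.host=<second-vm-hostname-or-ip>
    - rp.mongo.port=27017
    - rp.mongo.dbName=reportportal
    - rp.mongo.user=<user>
    - rp.mongo.password=<password>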