I have hosted Jenkins on a Kubernetes cluster, which is itself hosted on Google Cloud Platform. I am trying to run a Python script through Jenkins. The script needs to read a few values from MySQL. The MySQL instance runs separately on one of the instances. I have been facing issues connecting from Kubernetes to the MySQL instance, and I am getting the following error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '35.199.154.36' (timed out)")
This is the document I came across
According to the document, I tried connecting via the private IP address.
I generated a secret holding the MySQL username and password, and included the host IP address in the following format, taking this document as reference:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  hostname: <base64-encoded MySQL IP address>

kubectl create secret generic literal-token --from-literal user=<username> --from-literal password=<password>
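Note that values under data: in a Secret must be base64-encoded. A simpler sketch that sidesteps hand-encoding is to let kubectl do it for you, reusing the IP address from the error above:

kubectl create secret generic db-credentials --from-literal hostname=35.199.154.36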
This is the raw YAML file for the pod that I am trying to insert into the Jenkins pod template.
Any help regarding how I can overcome this SQL connection problem would be appreciated.
You can't create a secret in the pod template field. You need to either create the secret before running the jobs and mount it from your pod template, or simply refer to your user/password in your pod templates as environment variables, depending on your security requirements.
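For example, here is a minimal pod sketch that pulls the values from the secrets above (db-credentials and literal-token) in as environment variables; the pod name, container name, and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
  - name: python
    image: python:3.9
    env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: hostname
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: literal-token
          key: user
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: literal-token
          key: password

The Python script can then read DB_HOST, DB_USER, and DB_PASSWORD from os.environ when building its SQLAlchemy connection URL.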
I followed the official walkthrough on how to deploy MySQL as a StatefulSet here: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
I have it up and running well, but the guide says:
The Client Service, called mysql-read, is a normal Service with its own cluster IP that distributes connections across all MySQL Pods that report being Ready. The set of potential endpoints includes the primary MySQL server and all replicas.
Note that only read queries can use the load-balanced Client Service. Because there is only one primary MySQL server, clients should connect directly to the primary MySQL Pod (through its DNS entry within the Headless Service) to execute writes.
This is my connection code:
import (
    "fmt"
    "log"
    "os"
    "time"

    _ "github.com/go-sql-driver/mysql" // MySQL driver registration
    "github.com/jmoiron/sqlx"
)

func NewMysqlClient() *sqlx.DB {
    // username:password@protocol(address)/dbname?param=value
    dataSourceName := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true&multiStatements=true",
        username, password, host, schema,
    )
    log.Println(dataSourceName)

    var mysqlClient *sqlx.DB
    var err error
    connected := false
    log.Println("trying to connect to db")
    // retry up to 7 times, 30 seconds apart
    for i := 0; i < 7; i++ {
        mysqlClient, err = sqlx.Connect("mysql", dataSourceName)
        if err == nil {
            connected = true
            break
        }
        log.Println(err)
        log.Println("failed, will try again in 30 secs!")
        time.Sleep(30 * time.Second)
    }
    if !connected {
        log.Println(err)
        log.Println("couldn't connect to db, will exit")
        os.Exit(1)
    }
    log.Println("database successfully configured")
    return mysqlClient
}
When I connect the app to the headless MySQL service, I get:
Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement
I am guessing it is connecting to one of the slave replicas; when I connect to the mysql-0.mysql host, everything works fine, which is expected as this is the master node.
My question is: how will my application be able to read from the slave nodes when we are only connecting to the master, given that the application also needs to be able to write data?
I tried using mysql-0.mysql,mysql-1.mysql,mysql-2.mysql but then I get:
dial tcp: lookup mysql-0.mysql;mysql-1.mysql,mysql-2.mysql: no such host
So I want to know if there is any way to connect to the three replicas together, so that we write to the master and read from any of them, as with other databases like Mongo etc.
If there is no way to connect to all the replicas, how would you suggest that I read from the slaves and write to the master?
Thank you!
You have to use the service name to connect to MySQL from the Go application.
Your traffic then flows like this:
the Go application pod, running inside the same Kubernetes cluster, sends a request to the MySQL service -> the MySQL service forwards the traffic to the MySQL StatefulSet pods (in other words, the replicas).
So if you have created the service, in your case the hostname will be the service name: mysql.
For example, you can refer to this: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Notice how WordPress connects to MySQL:
containers:
- image: wordpress:4.8-apache
  name: wordpress
  env:
  - name: WORDPRESS_DB_HOST
    value: wordpress-mysql
It's using the MySQL service name wordpress-mysql as the hostname to connect to.
If you just want to connect to the read replicas, you can use the service name mysql-read.
OR
you can also try connecting with:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h mysql-0.mysql
Option 2
If you just want to connect to a specific pod (or the write replica), you can use:
<pod-name>.mysql
The Headless Service provides a home for the DNS entries that the
StatefulSet controller creates for each Pod that's part of the set.
Because the Headless Service is named mysql, the Pods are accessible
by resolving <pod-name>.mysql from within any other Pod in the same
Kubernetes cluster and namespace.
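Putting the two endpoints together in Go with the sqlx pattern from the question, a minimal sketch (credentials and database name are placeholders) keeps one handle on the primary's stable DNS name for writes and one on the mysql-read service for reads:

// writes must go to the primary pod via its stable DNS entry
writeDB, err := sqlx.Connect("mysql", "user:pass@tcp(mysql-0.mysql:3306)/mydb?parseTime=true")
if err != nil {
    log.Fatal(err)
}
// reads can go through mysql-read, which load-balances across all Ready pods
readDB, err := sqlx.Connect("mysql", "user:pass@tcp(mysql-read:3306)/mydb?parseTime=true")
if err != nil {
    log.Fatal(err)
}
// use writeDB for INSERT/UPDATE/DELETE and readDB for SELECT
_, err = writeDB.Exec("INSERT INTO t (v) VALUES (?)", 42)
var n int
err = readDB.Get(&n, "SELECT COUNT(*) FROM t")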
Another appropriate approach could be for your application code to ignore the master/replica distinction and operate as if it were connected to a single master instance, with read/write query splitting abstracted away in a capable proxy. That proxy is then responsible for routing write queries to the master instance and read queries to the replica instances.
Example proxy - https://proxysql.com/
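As a rough sketch of how that split is configured in ProxySQL (these statements go to its admin interface; the hostgroup numbers are arbitrary and the hostnames reuse the StatefulSet DNS names from above):

-- hostgroup 0 = primary (writes), hostgroup 1 = replicas (reads)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, 'mysql-0.mysql', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, 'mysql-1.mysql', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, 'mysql-2.mysql', 3306);
-- route SELECTs to the replica hostgroup; everything else defaults to the primary
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT', 1, 1);
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;

The application then connects only to ProxySQL as if it were a single MySQL server (production rules usually also pin SELECT ... FOR UPDATE to the primary).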
I'm trying to deploy an instance of SonarQube on a Kubernetes cluster which uses a MySQL instance hosted on Amazon Relational Database Service (RDS).
A stock SonarQube deployment with built-in H2 DB has already been successfully stood up within my Kubernetes cluster with an ELB. No problems, other than the fact that this is not intended for production.
The MySQL instance has been successfully stood up, and I've test-queried it with SQL commands using the username and password that the SonarQube Kubernetes Pod will use. This is using the AWS publicly-exposed host, and port 3306.
To redirect SonarQube to use MySQL instead of the default H2, I've added the following environment variable key-value pair in my deployment configuration (YAML).
spec:
  containers:
  - name: sonarqube2
    image: sonarqube:latest
    env:
    - name: SONARQUBE_JDBC_URL
      value: "jdbc:mysql://MyEndpoint.rds.amazonaws.com:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true"
    ports:
    - containerPort: 9000
For test purposes, I'm using the default "sonar/sonar" username and password, so no need to redefine at this time.
The inclusion of the environment variable causes "CrashLoopBackOff"; otherwise, the default SonarQube deployment works fine. The official Docker Hub page for SonarQube states that env vars should be used to point to a different database. I am trying to do the same, just Kubernetes-style. What am I doing wrong?
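As a general debugging step, the quickest way to see why a pod is in CrashLoopBackOff is to pull the logs of the crashed container; the pod name below is a placeholder:

kubectl logs <sonarqube-pod-name> --previous   # logs from the last crashed container
kubectl describe pod <sonarqube-pod-name>      # events and restart count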
==== Update: 1/9 ====
The issue has been resolved. See comments below. SonarQube 7.9 and higher does not support MySQL. See full log below.
End of Life of MySQL Support : SonarQube 7.9 and future versions do not support MySQL.
Please migrate to a supported database. Get more details at
https://community.sonarsource.com/t/end-of-life-of-mysql-support
and https://github.com/SonarSource/mysql-migrator
When trying to deploy my application, I recently got the following error:
ERROR: Service:AmazonCloudFormation, Message:Stack named
'awseb-e-123-stack' aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS'
Reason: The following resource(s) failed to update: [AWSEBRDSDatabase].
ERROR: Updating RDS database named: abcdefg12345 failed
Reason: DB Security Groups can no longer be associated
with this DB Instance. Use VPC Security Groups instead.
ERROR: Failed to deploy application.
How do you switch over a DB Security Group to a VPC Security Group? Steps for using the Elastic Beanstalk Console would be greatly appreciated.
For anyone arriving via Google, here's how you do it via CloudFormation:
The official docs contain an example at the very bottom: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.DeleteDBVPCGroups
SecurityGroup:
  Type: "AWS::EC2::SecurityGroup"
  Properties:
    VpcId: <vpc_id>
    GroupDescription: Explain your SG
    SecurityGroupIngress:
      - Description: Ingress description
        CidrIp: 10.214.0.0/16
        IpProtocol: tcp
        FromPort: 3306
        ToPort: 3306
RDSDb:
  Type: 'AWS::RDS::DBInstance'
  Properties:
    VPCSecurityGroups:
      - Fn::GetAtt:
          - SecurityGroup
          - GroupId
Had the same issue but was able to fix it by doing the following:
1. Created an RDS DB instance from the RDS console.
2. Created a snapshot of the instance.
3. From the Elastic Beanstalk console, under configuration/database, created the RDS DB using the snapshot.
4. Once the new RDS DB instance was created by EBS, added the DB environment properties under configuration/software.
I hope it helps you resolve this issue.
I am receiving an error when trying to load up my webpage
Failed to connect to MySQL: (2005) Unknown MySQL server host ':/cloudsql/testsite:europe-west1:testdatabase' (2)
I have a Google Compute Engine VM set up with a LAMP stack (Apache/2.4.10 (Debian) / database client version: libmysql 5.5.55 / PHP extension: mysqli).
I also have set up an instance on Google Cloud SQL with user credentials for the aforementioned VM (I have set up both First Gen and Second Gen).
I can access both a local MySQL database on the VM and the Google Cloud SQL databases via phpMyAdmin installed locally.
HOWEVER, I appear to have an issue with the DB_HOST credentials in my config.php file (path = /var/www/html/includes/config.php): when I run the script, I get the error above.
Usually for local MySQL databases I use:
// The MySQL credentials
$CONF['host'] = 'localhost';
$CONF['user'] = 'YOURDBUSER';
$CONF['pass'] = 'YOURDBPASS';
$CONF['name'] = 'YOURDBNAME';
The documentation (and GitHub links) recommend the path
:/cloudsql/project-id:region:sql-db-instance-name
which is what I have done (see above), but I keep getting the error message.
Am I typing the host description incorrectly? Or have I missed a configuration step?
Thanks in advance
It seems I have erred: the credentials format I stated earlier is for Google App Engine.
If you are on Google Compute Engine, you have two options:
1. Connect to the public IP address of your Cloud SQL instance. This requires you to whitelist your GCE instance on the ACL for the Cloud SQL instance.
2. Use the Cloud SQL Proxy. This is an extra daemon you run on your GCE instance that allows you to connect via TCP on localhost or via a socket.
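For option 2, a sketch of running the (v1) proxy on the GCE VM, reusing the instance connection name from the question:

wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=testsite:europe-west1:testdatabase=tcp:3306 &

The config.php then points at the proxy rather than a socket path:

$CONF['host'] = '127.0.0.1';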
I'm trying to connect my server code, running as a Docker container in our Kubernetes cluster (hosted on Google Container Engine), to a Google Cloud SQL managed MySQL 5.7 instance. The issue I'm running into is that every connection is being rejected by the database server with Access denied for user 'USER'@'IP' (using password: YES). The database credentials (username, password, database name, and SSL certificates) are all correct and work when connecting via other MySQL clients or the same application running as a container on a local instance.
I've verified that all credentials are the same on the local and the server-hosted versions of the app and that the user I'm connecting with has the wildcard % host specified. Not really sure what to check next here, to be honest...
An edited version of the connection code is below:
const fs = require('fs');
const MySQL = require('mysql2'); // the app connects via the mysql2 library

let connectionCreds = {
    host: Config.SQL.HOST,
    user: Config.SQL.USER,
    password: Config.SQL.PASSWORD,
    database: Config.SQL.DATABASE,
    charset: 'utf8mb4',
};
if (Config.SQL.SSL_ENABLE) {
    connectionCreds['ssl'] = {
        key: fs.readFileSync(Config.SQL.SSL_CLIENT_KEY_PATH),
        cert: fs.readFileSync(Config.SQL.SSL_CLIENT_CERT_PATH),
        ca: fs.readFileSync(Config.SQL.SSL_SERVER_CA_PATH)
    };
}
this.connection = MySQL.createConnection(connectionCreds);
Additional information: the server application is built in Node using the mysql2 library to connect to the database. There are no special firewall rules in place that are causing network issues, and that's confirmed by the fact that the library IS connecting, but failing to authenticate.
After setting up the Cloud SQL Proxy I managed to figure out what the actual error was: somewhere between the secret and the pod configuration, an extra newline was being added to the database name, causing every connection attempt to fail. With the proxy set up this became clear, because there was an actual error message to that effect.
(Notably, all of my logging around the credentials never explicitly displayed the newline: the console added its own line breaks to wrap the output, and the wrap happened to line up exactly with where the database name ended, disguising the problem.)
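For reference, one common way this happens (the question doesn't show how the secret was built, so this is illustrative): base64-encoding values with echo appends a newline unless -n is passed:

echo "mydb" | base64      # bXlkYgo=  <- the trailing newline gets encoded too
echo -n "mydb" | base64   # bXlkYg==  <- correct

An existing secret can be checked by decoding the value and piping it through something that makes the newline visible, e.g. kubectl get secret my-secret -o jsonpath='{.data.database}' | base64 -d | xxd (secret and key names here are placeholders).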
Have you read the documentation on https://cloud.google.com/sql/docs/mysql/connect-container-engine ?
In Container Engine, you need to set up a Cloud SQL Proxy container alongside your application pod and talk to it. The Cloud SQL Proxy will then make the actual call to the Cloud SQL service.
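The documented pattern is roughly the following sidecar container in the application's pod spec; the image tag, instance connection name, and secret name are placeholders, and the app then connects to 127.0.0.1:3306:

- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: ["/cloud_sql_proxy",
            "-instances=<project>:<region>:<instance>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true

with a matching volume that mounts the service account key from a secret:

volumes:
- name: cloudsql-instance-credentials
  secret:
    secretName: cloudsql-instance-credentials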
If the container worked locally, I assume you have Application Default Credentials set on your development machine. It could be failing because those credentials are not present in your container as a service account file. Try configuring a service account file, or create your GKE cluster with a --scopes argument that gives your instances access to Cloud SQL.
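For the --scopes route, a sketch (the cluster name is a placeholder; sqlservice.admin is the scope the proxy needs to call the Cloud SQL API):

gcloud container clusters create my-cluster \
    --scopes=https://www.googleapis.com/auth/sqlservice.admin

Note that scopes are fixed when the nodes are created, so for an existing cluster this means adding a new node pool (or using the service account file approach above).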