How to connect a Golang application to a MySQL StatefulSet in Kubernetes

I followed the official walkthrough on how to deploy MySQL as a StatefulSet here: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
I have it up and running fine, but the guide says:
The Client Service, called mysql-read, is a normal Service with its own cluster IP that distributes connections across all MySQL Pods that report being Ready. The set of potential endpoints includes the primary MySQL server and all replicas.
Note that only read queries can use the load-balanced Client Service. Because there is only one primary MySQL server, clients should connect directly to the primary MySQL Pod (through its DNS entry within the Headless Service) to execute writes.
This is my connection code:
// NewMysqlClient assumes imports of fmt, log, os, time,
// github.com/jmoiron/sqlx and the driver _ "github.com/go-sql-driver/mysql".
func NewMysqlClient() *sqlx.DB {
	// DSN format: username:password@protocol(address)/dbname?param=value
	dataSourceName := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true&multiStatements=true",
		username, password, host, schema,
	)
	log.Println(dataSourceName)

	var mysqlClient *sqlx.DB
	var err error
	connected := false
	log.Println("trying to connect to db")
	for i := 0; i < 7; i++ {
		mysqlClient, err = sqlx.Connect("mysql", dataSourceName)
		if err == nil {
			connected = true
			break
		} else {
			log.Println(err)
			log.Println("failed will try again in 30 secs!")
			time.Sleep(30 * time.Second)
		}
	}
	if !connected {
		log.Println(err)
		log.Println("Couldn't connect to db will exit")
		os.Exit(1)
	}
	log.Println("database successfully configured")
	return mysqlClient
}
When I connect the app to the headless MySQL Service, I get:
Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement
I am guessing it is connecting to one of the slave replicas. When I connect to the mysql-0.mysql host, everything works fine, which is expected as this is the master node.
My question is: how will my application be able to read from the slave nodes when we only connect to the master, given that the application needs to be able to write data?
I tried using mysql-0.mysql,mysql-1.mysql,mysql-2.mysql but then I get:
dial tcp: lookup mysql-0.mysql;mysql-1.mysql,mysql-2.mysql: no such host
So I want to know if there is any way to connect to the three replicas together, so that we write to the master and read from any of them, as with other databases like MongoDB etc.
If there is no way to connect to all the replicas, how would you suggest that I read from the slaves and write to the master?
Thank you!

You have to use the Service name when connecting to MySQL from the Go application.
So your traffic flows like this:
the Go application Pod, running inside the same K8s cluster, sends a request to the MySQL Service -> the MySQL Service forwards the traffic to the MySQL StatefulSet Pods (in other words, the replicas).
So if you have created the Service, in your case the hostname will be the Service name: mysql
For example, you can refer to this: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Notice how WordPress connects to MySQL:
containers:
- image: wordpress:4.8-apache
  name: wordpress
  env:
  - name: WORDPRESS_DB_HOST
    value: wordpress-mysql
It uses the MySQL Service name wordpress-mysql as the hostname to connect.
If you just want to connect to the read replicas, you can use the Service name mysql-read.
OR
you can also try connecting with:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql
Option 2
If you just want to connect to a specific Pod (for example the write replica), you can use:
<pod-name>.mysql
The Headless Service provides a home for the DNS entries that the StatefulSet controller creates for each Pod that's part of the set. Because the Headless Service is named mysql, the Pods are accessible by resolving <pod-name>.mysql from within any other Pod in the same Kubernetes cluster and namespace.
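Putting the two together in your Go code, a minimal sketch could keep two connection pools: one pinned to the primary's Pod DNS name for writes and one on the load-balanced mysql-read Service for reads. This assumes the default names from the StatefulSet tutorial (headless Service mysql, client Service mysql-read, primary Pod mysql-0) and that the app runs in the same namespace:

package db

import (
	"fmt"

	_ "github.com/go-sql-driver/mysql"
	"github.com/jmoiron/sqlx"
)

// Clients holds one pool for writes (primary only) and one for reads
// (load-balanced across all Ready Pods via the mysql-read Service).
type Clients struct {
	Write *sqlx.DB
	Read  *sqlx.DB
}

func NewClients(username, password, schema string) (*Clients, error) {
	writeDSN := fmt.Sprintf("%s:%s@tcp(mysql-0.mysql:3306)/%s?parseTime=true", username, password, schema)
	readDSN := fmt.Sprintf("%s:%s@tcp(mysql-read:3306)/%s?parseTime=true", username, password, schema)

	writeDB, err := sqlx.Connect("mysql", writeDSN)
	if err != nil {
		return nil, err
	}
	readDB, err := sqlx.Connect("mysql", readDSN)
	if err != nil {
		return nil, err
	}
	return &Clients{Write: writeDB, Read: readDB}, nil
}

Your application then decides per query which pool to use; there is no single DSN that makes the driver split reads and writes for you.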

Another appropriate approach could be to have your application code ignore the master/replica distinction entirely and operate as if it were connected to a single master instance, with read/write query splitting abstracted away in a capable proxy. That proxy is then responsible for routing write queries to the master instance and read queries to the replica instances.
Example proxy - https://proxysql.com/
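For example, assuming a ProxySQL deployment exposed behind a Service named proxysql in the same namespace and listening on ProxySQL's default MySQL-facing port 6033 (both the deployment and the Service name are assumptions, not something already in your cluster), the application would keep a single pool and let the proxy do the routing:

// Sketch only: "proxysql" is an assumed Service name; ProxySQL itself must be
// configured to send writes to mysql-0.mysql and reads to the replicas.
// Same imports as in the question's code.
dsn := fmt.Sprintf("%s:%s@tcp(proxysql:6033)/%s?parseTime=true", username, password, schema)
db, err := sqlx.Connect("mysql", dsn)
if err != nil {
	log.Fatal(err)
}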

Related

SQLAlchemy connecting to Digital Ocean database on port 25060?

I'm currently a little stuck as to why I am unable to connect to a DB from a Kubernetes cluster hosting my FastAPI app.
I've gone through the steps of ensuring that my DB has the Kubernetes cluster and pool whitelisted for incoming connections.
I have also ensured I have the correct environment variables when attempting to connect, so the following all exist and are correct within my environment (redacted for security):
POSTGRES_USER: "***************"
POSTGRES_PASSWORD: "***************"
POSTGRES_SERVER: "***************"
POSTGRES_PORT: "25060"
POSTGRES_DB: "***************"
I am also constructing the database URL within my FastAPI app as follows:
from typing import Any, Dict, List, Optional, Union

from pydantic import AnyHttpUrl, BaseSettings, HttpUrl, PostgresDsn, validator


class Settings(BaseSettings):
    ...
    POSTGRES_SERVER: str
    POSTGRES_USER: str
    POSTGRES_PASSWORD: str
    POSTGRES_DB: str
    POSTGRES_PORT: str
    SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None

    @validator("SQLALCHEMY_DATABASE_URI", pre=True)
    def assemble_db_connection(cls, v: Optional[str], values: Dict[str, Any]) -> Any:
        if isinstance(v, str):
            return v
        return PostgresDsn.build(
            scheme="postgresql",
            user=values.get("POSTGRES_USER"),
            password=values.get("POSTGRES_PASSWORD"),
            host=values.get("POSTGRES_SERVER"),
            port=values.get("POSTGRES_PORT"),
            path=f"/{values.get('POSTGRES_DB') or ''}",
        )
Everything looks correct from what I can see, and I am able to connect to the DB with those credentials outside of SQLAlchemy and the FastAPI app.
However, when I run alembic upgrade head to run migrations within the container on the cluster (or pod), I am seeing the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out
Is the server running on host "*******" () and accepting
TCP/IP connections on port 5432?
I totally understand that 5432 is not accepting incoming connections, as I have specified a different port value ...
Is there something I have done wrong, are there some extra steps I need to take, or is there something more subtle going wrong with SQLAlchemy?
Can anyone advise any steps to take to try and understand why it is attempting to make a connection on 5432 when I specified a different port value in the env?

Why can't I connect to AWS RDS?

I'm trying to connect to the new AWS RDS instance I just created.
I followed the "Setting up for RDS" (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SettingUp.html), then the "Tutorial: Create an Amazon VPC for Use with a DB Instance" (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateVPC.html), then the "Creating a MySQL DB Instance and Connecting to a Database on a MySQL DB Instance" (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html) but I'm not able to connect to my DB from my computer or my dedicated server on the web.
Following the previous docs, I have this config (screenshots omitted): the DB instance, the VPC, the subnetworks with an example of one subnetwork's details, the first security group, and the second security group which references the first.
For the first security group, I put both my private IP and the IP of my dedicated server, and their ports.
I even tried to put 0.0.0.0/0 for SSH and TCP; it didn't work either.
For the DB instance, I tried to add the two security groups instead of only db-securitygroup; it didn't work.
I tried to use a different port for the DB instance; it didn't work.
With MySQL Workbench or with PDO on my dedicated server, I'm unable to connect to the DB: "SQLSTATE[HY000] [2003] Can't connect to MySQL server on [...]"
I think your security groups are incorrect. If the RDS instance is the only thing you currently have running in the VPC, then you should only have one security group, which is assigned to the RDS server, and that security group should have a rule for port 3306 that allows ingress from your personal IP address, and your dedicated server's IP address.
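A quick way to verify the security group change, independent of MySQL credentials, is a plain TCP dial against the RDS endpoint on port 3306. A minimal sketch in Go (the endpoint below is a placeholder for your instance's endpoint):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder endpoint; replace with your RDS instance's endpoint.
	addr := "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com:3306"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A timeout here usually means the security group or routing still blocks the traffic.
		fmt.Println("TCP connection failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connection succeeded; port 3306 is reachable")
}

If this times out, the problem is at the network level (security group, routing, public accessibility); if it connects but MySQL clients still fail, the problem is on the MySQL side.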
Take a look at these instructions, paying attention to steps 3, 4 and 5. They are for Elasticsearch, but I think the steps are similar in your case.

Cloud Run: <Cloud SQL instance IP address>:3306: connect: connection timed out

I want to connect to Cloud SQL from a Cloud Run application. I'm using Golang. This is the code around the SQL connection settings:
func getEnv(key, def string) string {
	v := os.Getenv(key)
	if v == "" {
		return def
	}
	return v
}

DB: DB{
	User:     getEnv("DB_USER", "<user name>"),
	Pass:     getEnv("DB_PASS", "<password>"),
	Host:     getEnv("DB_HOST", "0.0.0.0"),
	Port:     getEnv("DB_PORT", "3306"),
	Database: getEnv("DB_DATABASE", "<database name>"),
},

dsn := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s?charset=utf8&parseTime=true",
	config.DB.User, config.DB.Pass, config.DB.Host, config.DB.Port, config.DB.Database)
db, err := gorm.Open("mysql", dsn)
I set the environment variables in the Cloud Run settings console. After deploying the application, the Cloud Run console displays Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. and dial tcp <Cloud SQL Private IP>:3306: connect: connection timed out. I wonder if the SQL connection is wrong...
You have not mentioned the word "VPC" in your question, so I'm assuming you don't use one.
Cloud Run cannot directly connect to the private IP of a Cloud SQL instance. You need to configure a Serverless VPC Access connector and specify it while deploying your Cloud Run app.
Cloud Run containers are not part of a VPC by default, so unless you do this, they will not have access to private networks.
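Once such a connector is attached to the service, the only change on the Go side is that DB_HOST must point at the Cloud SQL instance's private IP rather than the 0.0.0.0 default in your config. A rough sketch reusing the question's getEnv helper and gorm setup (the private IP is a placeholder):

// Works only after a Serverless VPC Access connector is attached to the
// Cloud Run service; <private-ip> stands for the Cloud SQL private IP.
dsn := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s?charset=utf8&parseTime=true",
	getEnv("DB_USER", "<user name>"), getEnv("DB_PASS", "<password>"),
	getEnv("DB_HOST", "<private-ip>"), getEnv("DB_PORT", "3306"),
	getEnv("DB_DATABASE", "<database name>"))
db, err := gorm.Open("mysql", dsn)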
There are several ways to connect your Cloud SQL database to Cloud Run. If it's MySQL, the easiest way is to follow the official documentation (see the sketch below).
If you want to use the IP with a TCP connection, firstly, you can't use 0.0.0.0 as the IP. Either:
Use the Cloud SQL public IP (for this you have to authorize the 0.0.0.0/0 network range on your Cloud SQL instance, which is absolutely not recommended), or
Plug your Cloud SQL instance into your VPC and, as described by Ahmet, use a Serverless VPC Access connector to link Cloud Run with your VPC. Then put the private IP of your Cloud SQL instance in your code.
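With the documented approach, Cloud Run mounts a unix socket for the instance under /cloudsql when you add it to the service (for example with gcloud run deploy ... --add-cloudsql-instances <project>:<region>:<instance>), and the DSN changes shape accordingly. A minimal sketch with the same go-sql-driver/mysql driver and gorm setup as in the question (the INSTANCE_CONNECTION_NAME variable and its value are placeholders):

// Sketch only: requires the Cloud SQL instance to be attached to the Cloud Run
// service so that the unix socket is mounted under /cloudsql.
socket := "/cloudsql/" + getEnv("INSTANCE_CONNECTION_NAME", "<project>:<region>:<instance>")
dsn := fmt.Sprintf("%s:%s@unix(%s)/%s?charset=utf8&parseTime=true",
	config.DB.User, config.DB.Pass, socket, config.DB.Database)
db, err := gorm.Open("mysql", dsn)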

connecting jenkins hosted on kubernetes to MySQL on Google Cloud Platform

I have hosted Jenkins on a Kubernetes cluster which is hosted on Google Cloud Platform. I am trying to run a Python script through Jenkins. The script needs to read a few values from MySQL. The MySQL instance is run separately on one of the instances. I have been facing issues connecting from Kubernetes to the MySQL instance. I am getting the following error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '35.199.154.36' (timed out)
This is the document I came across.
According to the document, I tried connecting via the private IP address.
I generated a secret which has the MySQL username and password, and included the host IP address in the following format, taking this document as reference:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  hostname: <MySQL external ip address>

kubectl create secret generic literal-token --from-literal user=<username> --from-literal password=<password>
This is the raw yaml file for the pod that I am trying to insert in the Jenkins Pod template.
Any help regarding how I can overcome this SQL connection problem would be appreciated.
You can't create a secret in the pod template field. You need to either create the secret before running the jobs and mount it from your pod template, or just refer to your user/password in your pod templates as environment variables, depending on your security requirements.

Containerized server application failing to connect to MySQL databases

I'm trying to connect my server code, running as a Docker container in our Kubernetes cluster (hosted on Google Container Engine), to a Google Cloud SQL managed MySQL 5.7 instance. The issue I'm running into is that every connection is rejected by the database server with Access denied for user 'USER'@'IP' (using password: YES). The database credentials (username, password, database name, and SSL certificates) are all correct and work when connecting via other MySQL clients or the same application running as a container on a local instance.
I've verified that all credentials are the same in the local and server-hosted versions of the app, and that the user I'm connecting with has the wildcard % host specified. Not really sure what to check next here, to be honest...
An edited version of the connection code is below:
let connectionCreds = {
    host: Config.SQL.HOST,
    user: Config.SQL.USER,
    password: Config.SQL.PASSWORD,
    database: Config.SQL.DATABASE,
    charset: 'utf8mb4',
};

if (Config.SQL.SSL_ENABLE) {
    connectionCreds['ssl'] = {
        key: fs.readFileSync(Config.SQL.SSL_CLIENT_KEY_PATH),
        cert: fs.readFileSync(Config.SQL.SSL_CLIENT_CERT_PATH),
        ca: fs.readFileSync(Config.SQL.SSL_SERVER_CA_PATH)
    }
}

this.connection = MySQL.createConnection(connectionCreds);
Additional information: the server application is built in Node using the mysql2 library to connect to the database. There are no special firewall rules in place that are causing network issues, and that's confirmed by the fact that the library IS connecting, but failing to authenticate.
After setting up the Cloud SQL Proxy I managed to figure out what the actual error was: somewhere between the secret and the pod configuration an extra newline was being added to the database name, causing every connection attempt to fail. With the proxy set up this became clear because there was an actual error message to that effect.
(Notably, all of my logging around the credentials, which I used to validate that they were accurate, didn't explicitly display the newline; it was disguised by the fact that the console wrapped the output with line breaks, and they happened to line up exactly with where the database name ended.)
Have you read the documentation on https://cloud.google.com/sql/docs/mysql/connect-container-engine ?
In Container Engine, you need to set up a Cloud SQL Proxy container alongside your application pod and talk to it. The Cloud SQL Proxy will then make the actual call to the Cloud SQL service.
If the container worked locally, I assume you have Application Default Credentials set on your development machine. It could be failing because those credentials are not on your container as a Service Account file. Try configuring a Service Account file, or create your GKE cluster with --scopes argument that gives your instances access to Cloud SQL.