SQLAlchemy connecting to Digital Ocean database on port 25060?

I'm currently a little stuck as to why I am unable to connect to a DB from a Kubernetes cluster hosting my FastAPI app.
I've gone through the steps of ensuring that my DB has the Kubernetes cluster and pool whitelisted for incoming connections.
I have also ensured I have the correct environment variables when attempting to connect, so the following all exist and are correct within my environment (redacted for security):
POSTGRES_USER: "***************"
POSTGRES_PASSWORD: "***************"
POSTGRES_SERVER: "***************"
POSTGRES_PORT: "25060"
POSTGRES_DB: "***************"
I am also constructing the database URL within my FastAPI app as follows:
from typing import Any, Dict, List, Optional, Union
from pydantic import AnyHttpUrl, BaseSettings, HttpUrl, PostgresDsn, validator

class Settings(BaseSettings):
    ...
    POSTGRES_SERVER: str
    POSTGRES_USER: str
    POSTGRES_PASSWORD: str
    POSTGRES_DB: str
    POSTGRES_PORT: str
    SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None

    @validator("SQLALCHEMY_DATABASE_URI", pre=True)
    def assemble_db_connection(cls, v: Optional[str], values: Dict[str, Any]) -> Any:
        if isinstance(v, str):
            return v
        return PostgresDsn.build(
            scheme="postgresql",
            user=values.get("POSTGRES_USER"),
            password=values.get("POSTGRES_PASSWORD"),
            host=values.get("POSTGRES_SERVER"),
            port=values.get("POSTGRES_PORT"),
            path=f"/{values.get('POSTGRES_DB') or ''}",
        )
Everything looks correct as far as I can see, and I am able to connect to the DB with those credentials outside of SQLAlchemy and the FastAPI app.
However, when I run alembic upgrade head to run migrations within the container on the cluster (or pod), I am seeing the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out
Is the server running on host "*******" () and accepting
TCP/IP connections on port 5432?
I fully understand why 5432 is not accepting incoming connections, as I have specified a different port value ...
Is there something I have done wrong ... or some extra steps I need to take, or is there something more subtle going wrong with SQLAlchemy?
Can anyone advise on steps to take to understand why it is attempting to connect on 5432 when I specified a different port value in the env?
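For anyone diagnosing this, one thing worth checking is which URL Alembic itself ends up with: if alembic/env.py still reads sqlalchemy.url from alembic.ini, the ini value (often a default 5432 URL) wins over the settings object. A minimal sketch, assuming the Settings class above lives at app.core.config (a hypothetical path; adjust the import to your project layout):

# alembic/env.py (relevant part only)
from alembic import context

from app.core.config import Settings  # hypothetical module path

settings = Settings()
config = context.config

# Print the assembled DSN once to confirm the 25060 port survived.
print("alembic will connect with:", settings.SQLALCHEMY_DATABASE_URI)

# Override whatever sqlalchemy.url is set in alembic.ini; without this line,
# Alembic falls back to the ini value.
config.set_main_option("sqlalchemy.url", str(settings.SQLALCHEMY_DATABASE_URI))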

Related

How do I connect to a MySQL database server running on PlanetScale with SSL from node.js on localhost?

I'm trying to connect to the MySQL server on PlanetScale, but can't as it requires SSL.
Here's their doc for that, but I find it unclear:
https://planetscale.com/docs/concepts/secure-connections
Here's the connection URL: DATABASE_URL='mysql://co30rXXXXXXX:pscale_pw_XXXXXXX@hoqx01444p30.us-east-4.psdb.cloud/restaurant?ssl={"rejectUnauthorized":true}'
Here's what I see from my terminal when I run yarn run migration-run
yarn run v1.22.18
$ npx prisma migrate dev
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": MySQL database "restaurant" at "hoqx0XXXXX.us-east-4.psdb.cloud:3306"
Error: Migration engine error: unknown error: Code: UNAVAILABLE server does not allow insecure connections, client must use SSL/TLS
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Has anyone tried to connect to a PlanetScale DB from Node.js on localhost? I have tried some other suggestions from Stack Overflow, but they don't seem to work.
?ssl={"rejectUnauthorized":false}&sslcert=/etc/ssl/certs/ca-certificates.crt
Adding these params at the end of the connection URL fixed the issue. :)
SSL ISSUE ON WINDOWS
If you're working on a Windows machine and using a .env file for your connection string, here is what worked for me to run locally (Windows does not have a default /etc/ssl/certs/ path, as noted in the answer above).
You get your connection string from the PlanetScale console, via "overview" > "connect"
This will look something like:
DATABASE_URL='mysql://xxxxxx:*****@aws-eu-west-1.connect.psdb.cloud/dbName?ssl={"rejectUnauthorized":true}'
When using this as-is you will most likely get the following error message (as the question states):
Code: UNAVAILABLE server does not allow insecure connections, client must use SSL/TLS
You therefore need to provide a local cert; one can be downloaded from the following trusted location:
https://curl.se/docs/caextract.html
Next, you need to save this file to a logical location on disk that can be referenced in your connection string, for example c:/temp/cacert.pem
Once saved, you can then append the following to your connection string:
&sslcert=C:\\temp\\cacert.pem
Restart your server and you should be all set! 🎉
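Putting it together, the full connection string in your .env would then look something like this (same placeholders as above):

DATABASE_URL='mysql://xxxxxx:*****@aws-eu-west-1.connect.psdb.cloud/dbName?ssl={"rejectUnauthorized":true}&sslcert=C:\\temp\\cacert.pem'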
The equivalent SSL cert update in Node.js would look as follows:
// assuming the mysql2 package; the classic mysql package exposes the same createConnection API
const fs = require('fs');
const mysql = require('mysql2');

const connection = mysql.createConnection({
  host: 'hostNameHere',
  user: 'userNameHere',
  password: 'passwordHere',
  database: 'dbHere',
  ssl: {
    ca: fs.readFileSync('C:\\temp\\cacert.pem')
  }
});

How to Connect Golang application to mysql statefulset in Kubernetes

I followed the official walkthrough on how to deploy MySQL as a StatefulSet here: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
I have it up and running well but the guide says:
The Client Service, called mysql-read, is a normal Service with its own cluster IP that distributes connections across all MySQL Pods that report being Ready. The set of potential endpoints includes the primary MySQL server and all replicas.
Note that only read queries can use the load-balanced Client Service. Because there is only one primary MySQL server, clients should connect directly to the primary MySQL Pod (through its DNS entry within the Headless Service) to execute writes.
This is my connection code:
// assumed imports for this snippet (username, password, host, schema are defined elsewhere in the package):
import (
	"fmt"
	"log"
	"os"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
	"github.com/jmoiron/sqlx"
)

func NewMysqlClient() *sqlx.DB {
	// username:password@protocol(address)/dbname?param=value
	dataSourceName := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true&multiStatements=true",
		username, password, host, schema,
	)
	log.Println(dataSourceName)

	var mysqlClient *sqlx.DB
	var err error
	connected := false
	log.Println("trying to connect to db")
	for i := 0; i < 7; i++ {
		mysqlClient, err = sqlx.Connect("mysql", dataSourceName)
		if err == nil {
			connected = true
			break
		} else {
			log.Println(err)
			log.Println("failed will try again in 30 secs!")
			time.Sleep(30 * time.Second)
		}
	}
	if !connected {
		log.Println(err)
		log.Println("Couldn't connect to db will exit")
		os.Exit(1)
	}
	log.Println("database successfully configured")
	return mysqlClient
}
when I connect the app to the headless MySQL service, I get:
Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement"
I am guessing it is connecting to one of the slave replicas. When I connect to the mysql-0.mysql host, everything works fine, which is expected as this is the master node.
My question is: how will my application be able to read from the slave nodes when we are only connecting to the master? The application needs to be able to write data.
I tried using mysql-0.mysql,mysql-1.mysql,mysql-2.mysql but then I get:
dial tcp: lookup mysql-0.mysql;mysql-1.mysql,mysql-2.mysql: no such host
So I want to know if there is any way to connect to the three replicas together, so that we write to the master and read from any, as with other databases like Mongo etc.
If there is no way to connect to all the replicas, how would you suggest I read from the slaves and write to the master?
Thank you!
You have to use the Service name to connect to MySQL from the Go application.
Your traffic flows like this: the Go application Pod, running inside the same K8s cluster, sends a request to the MySQL Service, and the MySQL Service forwards the traffic to the MySQL StatefulSet Pods (in other words, the replicas).
So if you have created the Service, in your case the hostname will be the Service name: mysql
For example, you can refer to this: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Notice how WordPress connects to MySQL:
containers:
  - image: wordpress:4.8-apache
    name: wordpress
    env:
      - name: WORDPRESS_DB_HOST
        value: wordpress-mysql
It's using the MySQL Service name wordpress-mysql as the hostname to connect.
If you just want to connect to the read replicas, you can use the Service name mysql-read.
OR
You can also try connecting with:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
    mysql -h mysql-0.mysql
Option 2
If you just want to connect to a specific Pod, or to the write replica, you can use:
<pod-name>.mysql
The Headless Service provides a home for the DNS entries that the StatefulSet controller creates for each Pod that's part of the set. Because the Headless Service is named mysql, the Pods are accessible by resolving <pod-name>.mysql from within any other Pod in the same Kubernetes cluster and namespace.
Another appropriate approach could be for your application code to ignore the master/replica distinction and operate as if it were connected to a single master instance, with read/write query splitting abstracted away in a capable proxy. That proxy is then responsible for routing write queries to the master instance and read queries to the replica instances.
Example proxy - https://proxysql.com/
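To make the application-side split concrete, here is a minimal sketch (not the asker's code; the hostnames assume the mysql-0.mysql Pod DNS name and the mysql-read Service from the walkthrough): keep one handle pinned to the primary for writes and one on the load-balanced read Service for reads.

package db

import (
	"fmt"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
	"github.com/jmoiron/sqlx"
)

// NewClients returns one handle pinned to the primary Pod for writes and one
// on the load-balanced read Service for reads. There is no single DSN that
// targets all three Pods at once; the split has to live in the application
// (or in a proxy such as ProxySQL, as mentioned above).
func NewClients(username, password, schema string) (writeDB, readDB *sqlx.DB, err error) {
	writeDSN := fmt.Sprintf("%s:%s@tcp(mysql-0.mysql:3306)/%s?parseTime=true", username, password, schema)
	readDSN := fmt.Sprintf("%s:%s@tcp(mysql-read:3306)/%s?parseTime=true", username, password, schema)

	if writeDB, err = sqlx.Connect("mysql", writeDSN); err != nil {
		return nil, nil, err
	}
	if readDB, err = sqlx.Connect("mysql", readDSN); err != nil {
		return nil, nil, err
	}
	return writeDB, readDB, nil
}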

Google Cloud Function with Cloud MySQL Authorisation fails ([Errno 111] Connection refused)

I'm trying to connect to my Google Cloud MySQL database through a Google Cloud Function to read some data. The function build succeeds, but when executed only this is displayed:
Error: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)") (Background on this error at: http://sqlalche.me/e/e3q8)
Here is my connection code:
import sqlalchemy

# Depending on which database you are using, you'll set some variables differently.
# In this code we are inserting only one field with one value.
# Feel free to change the insert statement as needed for your own table's requirements.

# Uncomment and set the following variables depending on your specific instance and database:
connection_name = "single-router-309308:europe-west4:supermarkt-database"
db_name = "supermarkt-database"
db_user = "hidden"
db_password = "hidden"

# If your database is MySQL, uncomment the following two lines:
driver_name = 'mysql+pymysql'
query_string = dict({"unix_socket": "/cloudsql/{}".format(connection_name)})

# If the type of your table_field value is a string, surround it with double quotes.
# < SO note: I didn't really understand this line. Is this the problem?

def insert(request):
    request_json = request.get_json()
    stmt = sqlalchemy.text('INSERT INTO products VALUES ("Testid", "testname", "storename", "testbrand", "4.20", "1kg", "super lekker super mooi", "none")')

    db = sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername=driver_name,
            username=db_user,
            password=db_password,
            database=db_name,
            query=query_string,
        ),
        pool_size=5,
        max_overflow=2,
        pool_timeout=30,
        pool_recycle=1800
    )

    try:
        with db.connect() as conn:
            conn.execute(stmt)
    except Exception as e:
        return 'Error: {}'.format(str(e))
    return 'ok'
I got it mostly from following this tutorial: https://codelabs.developers.google.com/codelabs/connecting-to-cloud-sql-with-cloud-functions#0 . I'm also using Python 3.7, as used in the tutorial.
SQLAlchemy's docs describe this error as one that is not necessarily under the control of the programmer.
For context, the account used to connect has the Cloud SQL Admin role, and the Cloud SQL Admin API is enabled. Thanks in advance for the help!
PS: I did find this answer: Connecting to Cloud SQL from Google Cloud Function using Python and SQLAlchemy, but I have no idea where the firewall settings for SQL can be found. I didn't find them under SQL > Connection / Overview or Firewall.
Alright so I figured it out! In Edit Function > Runtime, Build and Connection Settings, head over to Connection Settings and make sure "Only route requests to private IPs through the VPC connector" is enabled. The VPC connector requires different authorization.
Also, apparently I needed my TABLE name, not my DATABASE name, as the DB_NAME variable. Thanks @guillaume blaquiere for your assistance!
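If you run into the same "Can't connect to MySQL server on 'localhost'" error, one quick sanity check (a sketch, not part of the tutorial) is to render the URL the engine will use and confirm the unix_socket query string survived; when pymysql gets no socket or host, it falls back to a plain TCP connection to localhost:

import sqlalchemy

connection_name = "single-router-309308:europe-west4:supermarkt-database"
url = sqlalchemy.engine.url.URL(  # on SQLAlchemy 1.4+, use sqlalchemy.engine.url.URL.create(...) instead
    drivername="mysql+pymysql",
    username="hidden",
    password="hidden",
    database="supermarkt-database",
    query={"unix_socket": "/cloudsql/{}".format(connection_name)},
)
# Expect the socket path to appear in the output, e.g.:
# mysql+pymysql://hidden:hidden@/supermarkt-database?unix_socket=/cloudsql/single-router-309308:europe-west4:supermarkt-database
print(str(url))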

After connecting to MySQL database, I'm getting "Error: Got packets out of order"

Currently I am trying to set up a simple REST API using Deno and MySQL. After successfully creating the database and table and inserting some values into it, I'm failing to get those values from the Deno side. Here is my code:
import { Client } from "https://deno.land/x/mysql/mod.ts";

const client = await new Client().connect({
  hostname: "127.0.0.1",
  username: "root",
  port: 3306,
  db: "testDatabase",
  password: "",
});

await client.execute('use Ponys');
await client.query('SELECT * FROM Students');
After execute/query I always get these messages:
INFO connecting 127.0.0.1:3306
INFO connected to 127.0.0.1
Error: Got packets out of order
I'm running the app with this command:
deno run --allow-all index.ts
My local SQL server is running all the time.
Can you help me find out why I cannot get the values? Thanks!
According to the developer, it's a bug.
https://github.com/manyuanrong/deno_mysql/issues/16
More specifically...
https://github.com/manyuanrong/deno_mysql/issues/16#issuecomment-639344637
Prepare for the current crushing reality of Deno not yet possessing a functional MySQL driver. It doesn't support passwords!
But when it does, man... but when it does, it will soon be a one-stop shop of awesomeness.
Just imagine... beastly NGINX as the SSL proxy, single-file Deno as the runtime gateway and MySQL as the relational database, running spectacularly in a $5.00 Digital Ocean Droplet.
I simply cannot wait.
If you make SQL requests after a long idle period, you can hit the same issue.
A possible fix is to tune these values in the MySQL service config:
interactive_timeout
wait_timeout
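For example, in the server's my.cnf (a sketch; 28800 seconds is MySQL's default, pick values to suit your workload):

[mysqld]
# Seconds an idle interactive / non-interactive connection is kept open
# before the server drops it.
interactive_timeout = 28800
wait_timeout = 28800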

create_engine problems mysql 5.7 and sqlalchemy

I've inherited an application making use of Python & SQLAlchemy to interact with a MySQL database. When I issue:
mysql_engine = sqlalchemy.create_engine(
    'mysql://uname:pwd@192.168.xx.xx:3306/testdb',
    connect_args={'use_unicode': True, 'charset': 'utf8', 'init_command': 'SET NAMES UTF8'},
    poolclass=NullPool,
)
at startup, an exception is thrown by the following:
cmd = unicode("USE testdb")
with mysql_engine.begin() as conn:
    conn.execute(cmd)
sqlalchemy.exc.OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '192.168.xx.xx' (101)") None None
However, using IDLE I can do:
>>> import MySQLdb
>>> Con = MySQLdb.Connect(host="192.168.xx.xx", port=3306, user="uname", passwd="pwd", db="testdb")
>>> Cursor = Con.cursor()
>>> sql = "USE testdb"
>>> Cursor.execute(sql)
The application at this point defaults to using an onboard sqlite database. After this I can quite happily switch to the MySQL database using the create_engine statement above. However, on reboot the MySQL database connection will fail again, defaulting to the onboard sqlite db, etc, etc.
Has anyone got any suggestions as to how this could be happening?
Just thought I would update this - the problem still occurs exactly as described above. I've updated the app so that the user can manually connect to the MySQL db by selecting a menu option. This calls the identical code which raises an exception when the app is starting, but works just fine once the app is up and running.
The MySQL instance is completely separate from the app and running throughout, so it should be available to receive connections at all times.
I guess the fundamental question I'm grappling with is: how can the same connect code work when the app is up and running, but throw an exception when it is starting?
Is there any artifact of SQLAlchemy that can cause it to fail to create usable connections that isn't dependent on the connection parameters or the remote database?
Ahhh, it all seems so obvious now...
The reason for the exception on startup was that the network interface hadn't finished configuring when the application made its first request to the remote database. (Which is why the same thing would succeed when attempted at a later time.)
As communication with the remote database is a prerequisite for the application, I now do something like this:
if grep -Fxq "mysql" /path/to/my/db/config.config
then
    while ! ip a | grep "inet.*wlan0" ; do sleep 1; echo "waiting for network..."; done
fi
... in the startup script for my application - ensuring that the network interface has finished configuring before the application can run.
Of course, the application will never run if the interface doesn't configure, so it still needs some finessing to allow it to time out and default to using a local database...
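For example, the wait loop could be given a cap along these lines (a sketch; the 60-second limit is arbitrary):

tries=0
while ! ip a | grep -q "inet.*wlan0"; do
    sleep 1
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
        echo "network never came up; falling back to local database"
        break
    fi
done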