Why do SET NAMES queries get slower after a new pod starts - MySQL

I am running a web app on Kubernetes and facing an issue after a new pod comes up via autoscaling.
The first requests to the web app are slower. I am already using shared caches and tmp for the web tier. But when I look at the database logs, I see this:
2020-06-01T10:00:32.605088Z 3069191 Connect database@ip-of-node on database using TCP/IP
2020-06-01T10:00:32.673244Z 3069191 Query SET NAMES 'utf8mb4'
2020-06-01T10:00:34.272685Z 3069193 Connect database@ip-of-node on database using TCP/IP
When I have only one pod, there is practically no time between the first and second Connect.
But when a new pod comes up, there is a "lag" when I try to browse our web.
This lag is visible on the database side: the time between Query SET NAMES and the second Connect ... TCP/IP
takes longer than normal. The second time I enter something on the web, it runs without the lag.
Thanks for any advice.
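What the asker is describing is each new pod paying the full connect cost (TCP handshake, auth, SET NAMES) on the first user requests. A common fix, whatever the app stack is (the question does not say whether it is PHP, Java, etc.), is to warm up the database connections during pod startup, before the readiness probe admits traffic. A minimal sketch in Python; connect(), WarmPool, and the pool size are illustrative stand-ins, not from the question:

```python
import queue
import time

def connect():
    """Stub for a real driver call such as a MySQL connect();
    the real call is what costs the visible lag on a cold pod."""
    time.sleep(0.01)  # stand-in for TCP + auth + SET NAMES round-trips
    return object()

class WarmPool:
    """Pre-opens connections at startup so the first user request
    does not pay the connect cost."""
    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):  # warm-up happens here, before traffic arrives
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Run this during pod startup (e.g. before the readiness probe passes).
pool = WarmPool(size=5)
conn = pool.acquire()  # first request: connection already open
pool.release(conn)
```

Most real pools (HikariCP's minimumIdle, PHP persistent connections, etc.) offer an equivalent knob; the point is to pay the connect cost at startup, not on the first request.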

Related

Initial database connection error at the start of the day

Context: a telephony system (Asterisk) using the MySQL C API to connect to the database to look up the routing for a call as it comes in. The lookup involves connecting to the database, executing a query, then closing the connection.
Sometimes the very first call in the morning generates the following error:
Access denied for user 'asterisk'@'127.0.0.1' (using password: YES)
Normally this would mean the password was wrong, but that's obviously not the case here, since it uses the same user and password all the time for all the calls. It's as if the system has somehow "gone to sleep" or perhaps a file handle has become stale somewhere, so that the first attempt to connect to the database fails, but the rest work fine. Also it only happens occasionally, so I'm unable to replicate it - very strange!
I'm using Asterisk 1.8.32 with MySQL 5.5 on Debian 8.7.
It's a bit of a headscratcher, so I would be grateful for any suggestions!
First of all, it is a very bad idea to use the 1.8.* tree at this point because of security issues.
Moving to the 11.* tree fixes this issue.
You can also set the following in my.cnf:
interactive_timeout=
Set it to any value longer than 4 days (to cover a weekend).
Another option is to reload the MySQL module via crontab every 3 hours.
The best option (other than upgrading) is to move from res_config_mysql to res_odbc, which has a keepalive option. res_config_mysql is considered deprecated, so any new system should use ODBC.
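For the my.cnf route, a minimal fragment; 432000 s (5 days) is one illustrative "longer than 4 days" value, not something prescribed in the answer. Note that non-interactive clients (like the MySQL C API used here) are actually governed by wait_timeout, so it is worth setting both:

```
[mysqld]
# Must survive an idle weekend (> 4 days = 345600 s); 5 days shown here
interactive_timeout = 432000
wait_timeout = 432000
```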

After Aurora Cluster DB failover, unable to write to DB

Right now I am connecting to a cluster endpoint that I have set up for an Aurora DB-MySQL compatible cluster, and after I do a "failover" from the AWS console, my web application is unable to properly connect to the DB that should be writable.
My setup is like this:
Java web app (Tomcat 8) with HikariCP as the connection pool and Connector/J as the MySQL driver. I am evaluating Aurora-MySQL to see if it will satisfy some of the application's needs. The web app sits in an EC2 instance that is in the same VPC and security group as the Aurora-MySQL cluster. I am connecting through the cluster endpoint to get to the database.
After a failover, I would expect HikariCP to break connections (it does), and then attempt to reconnect (it does), however, the application must be connecting to the wrong server, because anytime a write is hit to the database, a SQL Exception is thrown that says:
The MySQL server is running with the --read-only option so it cannot execute this statement
What is the solution here? Should I rework my code to flush DNS after all connections go down, or after I start receiving this error, and then try to re-initiate connections after that? That doesn't seem right...
I don't know why I keep asking questions if I just answer them (I should really be more patient), but here's an answer in case anyone stumbles upon this in a Google search:
RDS uses DNS changes when working with the cluster endpoint to make failover look "seamless". Since the IP behind the hostname can change, if there is any sort of DNS caching going on, you can see pretty quickly how a change won't be reflected. Here's a page from AWS' docs that goes into it a bit more: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
To resolve my issue, I went into the JVM's security file and changed the DNS TTL to 0 just to verify that this was what was happening. It was. Now I just need to figure out how to do it properly...
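The "security file" change described above is the JVM's networkaddress.cache.ttl property, which is what the linked AWS page covers. A sketch of the edit (the file path varies by JDK version; the answer used 0 for verification, while the AWS docs suggest a short positive TTL such as 60 s for production):

```
# $JAVA_HOME/jre/lib/security/java.security (path differs on newer JDKs)
# 0 = never cache DNS lookups; AWS suggests ~60 s as a safer production value
networkaddress.cache.ttl=0
```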

Issue with connecting to database on Wordpress

I am trying to get Wordpress up and running but I get the "Error establishing a database connection" page pop up.
Here is the setup and what I've done:
I have a server running Wordpress fine. I took a snapshot from the AWS volume that had the wp-config.php information from the running server and spawned a new server with a volume that is snapshotted. I've checked all my settings and it all looks fine.
On the SQL server side (MySQL), I added the new IP with the correct usernames/passwords so the database server will allow it to connect. I also put print statements in while WordPress tries to load the database. The values returned are all correct. Based on some threads I read, I also deleted my wp-config file and re-copied it from the original server.
I also made sure the permissions are correct. Any other suggestions on what I could be missing?
I was able to fix this. The issue had nothing to do with WordPress or the server, but with MySQL. The database has schema-level privileges, and I hadn't granted them for my new IP address. Adding the privileges fixed the issue.
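A sketch of the missing grant; the database name, user, and IP (wordpress, wpuser, 203.0.113.10) are hypothetical placeholders, and the IDENTIFIED BY clause inside GRANT applies to the pre-8.0 MySQL versions typical of this setup:

```
-- Grant schema-level privileges for the new web server's IP
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'203.0.113.10'
    IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```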

MySQL connections limit in Micro CloudFoundry

I'm running my application on Micro CloudFoundry, but I'm having trouble connecting to MySQL: 'User 'usGh0jJk8EoZn' has exceeded the 'max_user_connections' resource'. How can I change this value?
I'm not quite sure you can change that value.
Before going down that road though, you may want to make sure that you are not leaking connections. Is your application running correctly when deployed locally (i.e. not using regular CloudFoundry nor Micro CF)? How are you connecting to the database? It may seem strange that you hit a connection limit if you're actually the sole user of your app, which I assume you are if using micro.
As ebottard said, it's well worth making sure your code isn't leaking connections. But if you want to change the MySQL setup for the instance running on Micro CloudFoundry, you can SSH into the VM using the 'vcap' user.
Once connected, you will find the mysql configuration file at /var/vcap/jobs/mysql_node/config/my.cnf
For maximum connections you will also have to change the max_user_conns value in /var/vcap/jobs/mysql_node/config/mysql_node.yml
Please also take a look at:
http://docs.cloudfoundry.com/infrastructure/micro/using-mcf.html#logging-in-to-micro-cloud-foundry
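Sketch of the two edits the answer describes, using the paths given above; the values are illustrative, and max_connections is the standard MySQL variable that the my.cnf side of the change would normally target:

```
# /var/vcap/jobs/mysql_node/config/my.cnf
[mysqld]
max_connections = 200        # illustrative value

# /var/vcap/jobs/mysql_node/config/mysql_node.yml
max_user_conns: 200          # keep in step with the my.cnf change
```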

Find the number of active connections per database in MySQL

I have a Drupal application that uses a database named db1. Each time a Drupal request is sent, a new connection to the database is established. So after a certain number of connections is reached, the site goes offline showing the following error:
User db1 already has more than 'max_user_connections' active connections.
My aim is to close the database connections and thereby avoid the offline error. For this I need to find the number of active connections made to my database. I have tried the SHOW PROCESSLIST command, but it does not give me the number of connections.
Is there any method to find the number of connections made to a MySQL database?
Thanks,
SHOW STATUS LIKE 'threads_connected';
Although if your Drupal (or the web server, to be precise) is not closing connections automatically after each request, something is probably not configured correctly.
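Note that threads_connected is a server-wide total. To get the count per database, as the question asks, you can aggregate information_schema.PROCESSLIST, which exposes the same data as SHOW PROCESSLIST in queryable form:

```
SELECT db, COUNT(*) AS active_connections
FROM information_schema.PROCESSLIST
GROUP BY db
ORDER BY active_connections DESC;
```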