MySQL auto-start on connection attempt

Is there a way to keep a MySQL server stopped and have it start only when a connection is attempted?
I have a development MySQL server on my Ubuntu machine, and I would like to keep it stopped unless it's needed, without having to remember to start it manually.
It's fine if this can be solved with something like Vagrant or Docker.
Bonus points if the server can also be shut down after a configurable amount of idle time.
I remember OpenShift did something like this: if no connections were made, the VM would be shut down and started again only when a connection was attempted.

Any MySQL client will attempt to connect to port 3306 (by default) to establish a MySQL connection. If the MySQL server is not running, there's nothing listening on port 3306, so the connection will simply fail.
To do what you're describing, there would need to be something listening on port 3306. Perhaps a proxy of some kind, like HAProxy or ProxySQL. Perhaps one of these can be configured to start a service on demand, but I've never seen anyone attempt to do that.
The reason is that it takes some time to start up a MySQL server process. At least a few seconds, but it could be much longer if the server needs to perform crash recovery because it wasn't shut down cleanly the last time it stopped. The proxy that starts it would have to keep re-trying the connection until it responds.
There's also a possibility that a downed MySQL server cannot be started, for example if there's a configuration problem that prevents it from starting, or a corrupted database. Then your client would try repeatedly to connect, with a delay each time, and never be able to start the service.
I wonder if what you really need is not MySQL, but an embedded database like SQLite.
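For what it's worth, here is a rough sketch of what such an on-demand starter could look like: a tiny proxy that owns port 3306, starts the service when a client shows up, retries until mysqld answers, and then just shuttles bytes. Everything here is an assumption for illustration (a systemd-managed "mysql" unit, the real server moved to port 3307), not a tested setup.

```typescript
// Illustrative sketch only: a tiny TCP listener that starts MySQL on demand and
// then forwards traffic to it. Assumes a systemd-managed "mysql" service and that
// the real mysqld has been moved to port 3307 so this proxy can own 3306.
import * as net from "net";
import { execFile } from "child_process";

const LISTEN_PORT = 3306;   // where clients think MySQL lives
const BACKEND_PORT = 3307;  // where mysqld actually listens (assumption)

function startMysql(): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile("systemctl", ["start", "mysql"], (err) => (err ? reject(err) : resolve()));
  });
}

// Keep retrying until mysqld accepts connections (startup or crash recovery can take a while).
function connectWithRetry(attempts: number): Promise<net.Socket> {
  return new Promise((resolve, reject) => {
    const tryOnce = (left: number) => {
      const backend = net.connect(BACKEND_PORT, "127.0.0.1");
      backend.once("connect", () => resolve(backend));
      backend.once("error", () => {
        if (left <= 1) return reject(new Error("mysqld never came up"));
        setTimeout(() => tryOnce(left - 1), 1000);
      });
    };
    tryOnce(attempts);
  });
}

net.createServer(async (client) => {
  client.on("error", () => client.destroy());
  try {
    await startMysql();                        // no-op if the service is already running
    const backend = await connectWithRetry(30);
    backend.on("error", () => client.destroy());
    client.pipe(backend).pipe(client);         // shuttle bytes in both directions
  } catch {
    client.destroy();                          // give up; the client just sees a failed connection
  }
}).listen(LISTEN_PORT);
```

The same loop could also track when the last client disconnects and stop the service after a configurable idle period, which would cover the bonus point in the question.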

Related

MySQL No. of connections count

I have a medium-sized database (7 GB) with around 200 concurrent users. I am seeing some database lag issues: my node-mysql client suddenly freezes during selects and inserts.
As part of troubleshooting I checked SHOW STATUS on the DB. Everything seemed to be okay, except that the Connections counter is at 262,050.
I want to understand whether this number is okay or whether the figure is exorbitant.
Exorbitant. Definitely.
Find a machine running your node app, and watch its logs and/or error outputs while you stop and restart the MySQL server.
Hopefully the node app will chatter away telling you that lots of connections were closed as you stop the MySQL server.
This looks like a connection leak. You probably have a problem with your node connection pooling, or with releasing connections after your node app uses them.
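If it is a leak, the usual culprit is a code path that checks a connection out of the pool and never gives it back. A minimal sketch of the correct pattern with the node `mysql` package (table and credentials are made up for illustration):

```typescript
// Every pool.getConnection() must be matched by a release(), including on error paths;
// a missing release() is exactly the kind of leak that makes pools run dry. (Illustrative sketch.)
import * as mysql from "mysql";

const pool = mysql.createPool({
  host: "localhost",
  user: "app",          // hypothetical credentials
  password: "secret",
  database: "appdb",
  connectionLimit: 10,
});

function countUsers(cb: (err: Error | null, count?: number) => void): void {
  pool.getConnection((err, conn) => {
    if (err) return cb(err);
    conn.query("SELECT COUNT(*) AS n FROM users", (qErr, rows) => {
      conn.release();   // give the connection back no matter how the query went
      if (qErr) return cb(qErr);
      cb(null, rows[0].n);
    });
  });
}
```

Alternatively, pool.query() checks a connection out and releases it automatically, which removes this class of leak for simple one-off queries.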

moving from localhost socket to TCP/IP increased execution time

Recently we revoked the localhost grants on MySQL.
MySQL and the scripts run on the same server, and we used to connect through the local socket (localhost); a data-intensive script used to take 1 hour to run.
But since we started using TCP/IP to connect to MySQL, the script's execution time has increased drastically; it now takes 3.5 hours to complete.
Can somebody suggest what the possible reason could be?
I understand that moving from a local socket connection to TCP/IP would add some overhead, but I can't figure out why the jump would be this large.
Please help me.
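One way to narrow this down is to time the exact same query over both transports against the same server. A minimal sketch with the node `mysql` package (illustrative only, since the language of the original scripts isn't stated; the socket path is the usual Ubuntu default and is an assumption):

```typescript
// Time one query over the Unix socket and over TCP/IP to see how much of the
// slowdown is really the transport. Illustrative sketch; names are made up.
import * as mysql from "mysql";

// Connects over the Unix domain socket (what "localhost" used to mean here).
const viaSocket = mysql.createConnection({
  socketPath: "/var/run/mysqld/mysqld.sock", // assumed default path
  user: "app",
  password: "secret",
  database: "appdb",
});

// Connects over TCP/IP to the same server.
const viaTcp = mysql.createConnection({
  host: "127.0.0.1",
  port: 3306,
  user: "app",
  password: "secret",
  database: "appdb",
});

for (const [label, conn] of [["socket", viaSocket], ["tcp", viaTcp]] as const) {
  const start = Date.now();
  conn.query("SELECT COUNT(*) FROM some_large_table", () => {
    console.log(label, Date.now() - start, "ms");
    conn.end();
  });
}
```

If the per-query difference is small, the slowdown is more likely in how many round trips the script makes (many small queries amplify any per-connection or per-packet overhead) than in the transport itself.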

MySQL 5.0 on Windows 2003 starts but port not available after Windows update

My Windows 2003 server installed updates this morning, and since then MySQL starts but doesn't listen on port 3306. I have searched for a solution but so far nothing has helped.
I get the error: Can't connect to MySQL server on 'localhost' (10061)
I have my firewall switched off. I can see the mysqld process running, and I've checked the MySQL error log and Event Viewer. I've used netstat and can see that MySQL is not listening on port 3306.
From various posts about the issues I have tried:
Trying to connect by IP, e.g. 127.0.0.1.
Backing up my my.ini file and re-configuring using the MySQL config wizard. I have since restored the original file.
Backing up then removing the ibdata1, ib_logfile0, ib_logfile1 files. I have since restored the original files.
Ensuring there is a firewall rule even though my firewall is off.
Working through my my.ini file to make sure that port 3306 is set and that TCP/IP connections are enabled (I've always used localhost in the past without a problem).
Running mysqladmin to make sure that port 3306 is specified.
Rebooting several times.
Starting my web, MySQL and other services in various orders in case one service or another was trying to reserve the port for itself.
I'm getting no errors in the MySQL error log and just a warning in Event Viewer: Changed limits: max_open_files: 2048 max_connections: 1024 table_cache: 507 - which I think is fine.
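For anyone debugging the same symptom, a bare TCP probe confirms whether anything at all is accepting connections on 3306, independent of the MySQL client's own settings. A small illustrative sketch:

```typescript
// Minimal TCP reachability probe for port 3306 (illustrative sketch).
import * as net from "net";

const socket = net.connect({ host: "127.0.0.1", port: 3306 });
socket.setTimeout(3000);

socket.on("connect", () => {
  console.log("Something is accepting connections on 3306");
  socket.end();
});
socket.on("timeout", () => {
  console.log("Connection attempt timed out");
  socket.destroy();
});
socket.on("error", (err) => {
  // A "connection refused" here means nothing is listening, which matches
  // the MySQL client's error 10061 on Windows.
  console.log("Connection failed:", err.message);
});
```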
Thank you for those who took the time to comment.
Miraculously, the MySQL service has now started and is running fine, at least until the next reboot.
I have no idea why it wasn't working or what changed in the environment to make it work, as I changed nothing since my original question, nor did I restart the service. I suspect another service was stalling or interfering with it. If I do get to the bottom of it, though, I will post the answer here.

Amazon RDS (Mysql2::Error 110)

I've had a Rails application running in production for the past 6 months, with weekly deployments, without any issue.
Now, I've been having a recurring issue for about 3 weeks, and it seems to get worse every week.
When my app boots and reaches the point where it tries to connect to the DB, I get this error:
Can't connect to MySQL server on '***.amazonaws.com' (110) (Mysql2::Error)
AFAIK, this error tells me that I've reached MySQL's max connections limit.
From the configs, I should be able to open 296 connections. My app is set to run 7 instances, each with a database connection pool of 5, so it can't really exceed 70 connections even when deploying a new instance (when old and new instances briefly overlap).
I've never seen the connection count go above 20 in either the AWS RDS Console or the SHOW PROCESSLIST command.
I don't think it has anything to do with either Rails or my application server (Puma), since I can't connect through the MySQL Command-Line Tool when the issue occurs.
Has anyone had a similar issue with MySQL on RDS or MySQL itself?
The database pool isn't per application, it's per process. If each instance is threaded or runs multiple processes, you could be using more connections than that. Have you tried restarting MySQL? It sounds like you have some hanging connections for whatever reason.
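To make that concrete, the pool multiplies by every worker process, not by every deployed instance. A back-of-the-envelope sketch, where the number of Puma workers per instance is an assumption:

```typescript
// Worst-case connection count; workersPerInstance is hypothetical, not from the question.
const instances = 7;
const workersPerInstance = 2;  // e.g. Puma running in clustered mode
const poolSize = 5;            // per-process ActiveRecord pool

const steadyState = instances * workersPerInstance * poolSize; // 70
const duringDeploy = steadyState * 2;                          // old and new instances overlap: 140
console.log({ steadyState, duringDeploy });
```

Even the doubled figure sits well under 296, so a hard max_connections ceiling alone wouldn't explain it; error (110) is the OS-level connection timeout (ETIMEDOUT), whereas hitting max_connections normally produces a "Too many connections" error instead.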
I've been getting these issues recently. Could it be related to the pending-restart parameter group change on my RDS instance? I sure hope not. As I understand it, a pending change should have no effect on current performance.

How to delay ActiveRecord MySQL reconnect during a failover

We have a Rails 3.1.3 app, connecting to MySQL via the mysql2 gem. Standard config. We also have a handful of Resque workers performing background jobs. The DB hostname we point to (in database.yml) is actually a Virtual IP (VIP) that points to either node1 or node2.
Behind the scenes, the two MySQL servers (nodes) are setup in a High Availability configuration. The data folders are replicated via DRBD, with mysqld only running on the "active" node. When the cluster detects that node1 is not available, it starts mysqld on node2 and points the VIP to it.
If you want more details on the specific setup, it's very similar to this MySQL HA cookbook.
Here's the issue: When a failover happens, it takes approx 30-60 seconds to complete, during which there is no MySQL server available. Any Resque jobs that are currently running fail badly.
Here's the question(s): How can we tell ActiveRecord to reconnect after a delay? Maybe attempt several reconnects with a backoff timer? Or is there a better way of dealing with this?
Your HA setup is going to cause you infinite amounts of pain in the future. Use database-layer replication instead of block-device-layer replication; MySQL Proxy was designed to do this.
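Whichever replication layer is used, the workers still have to ride out the 30-60 second window; the usual pattern is to rescue the connection error and retry the operation with an increasing delay before giving up (for example by re-enqueueing the Resque job, or re-establishing the connection before the retry). A minimal sketch of that retry shape, in TypeScript purely for illustration since the project is Ruby:

```typescript
// Generic retry-with-exponential-backoff wrapper (illustrative sketch; the same
// shape applies to a Resque job rescuing a connection error).
async function withRetries<T>(
  work: () => Promise<T>,
  attempts = 6,
  baseDelayMs = 2000,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await work();
    } catch (err) {
      if (i === attempts - 1) throw err;   // out of attempts: surface the error
      const delay = baseDelayMs * 2 ** i;  // 2s, 4s, 8s, ... comfortably spans a 30-60s failover
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw new Error("unreachable");
}
```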