I have a web app running on Node.js + MySQL. Initially the web app works fine, but all of a sudden the MySQL connection gets refused, with the following error being thrown:
ECONNREFUSED 127.0.0.1:3306
Simply restarting the server with pm2 reload solves the issue temporarily, but after a long span of time the above error creeps back in.
The Node.js configuration for making the MySQL connection is as follows:
"sqlconn": {
"connectionLimit": 10,
"host": "127.0.0.1",
"user": "root",
"password": "XYZ",
"database": "test",
"port": 3306,
"multipleStatements": true
}
Any idea on how to resolve this issue?
NOTE: I am using a DigitalOcean droplet with 512 MB of RAM.
To check what was going wrong, I opened the MySQL log file:
/var/log/mysql/error.log
The logs read something like the following:
InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
InnoDB: Cannot allocate memory for the buffer pool
InnoDB: Plugin initialization aborted with error Generic error
Plugin 'InnoDB' init function returned error.
Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
[ERROR] Failed to initialize plugins.
So the reason was that MySQL couldn't restart because it was out of memory.
To resolve the memory issue, this answer can be followed: https://stackoverflow.com/a/32932601/3994271
The possible solutions, as discussed in the link above, are:
increasing the RAM size, or
configuring a swap file.
I decided to go with configuring a swapfile.
The following link provides details on configuring a swap file:
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04
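For reference, the steps in that tutorial boil down to roughly the following (1G is an illustrative size; adjust to your droplet):
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab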
You are probably not closing the MySQL connections after running your queries. If connections are left open, the server eventually gives up and starts refusing new ones.
There is also a known bug in node-mysql around stale connections; you can avoid it by using a mysql pool, as in the sketch below.
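A minimal sketch with the mysql package, reusing the pool settings from the question:
const mysql = require('mysql');

// Pool built from the same settings as the question's "sqlconn" block.
const pool = mysql.createPool({
    connectionLimit: 10,
    host: '127.0.0.1',
    user: 'root',
    password: 'XYZ',
    database: 'test',
    port: 3306,
    multipleStatements: true
});

// pool.query() checks a connection out, runs the query, and releases
// the connection back to the pool automatically, so nothing stays open.
pool.query('SELECT 1 + 1 AS solution', (err, rows) => {
    if (err) throw err;
    console.log(rows[0].solution); // 2
});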
I am using the mysql (v8.0.21) image with Docker Desktop for Mac (v4.2.0, Docker Engine v20.10.10).
As soon as the service is up:
entrypoints ready
innoDB initialization done
ready for connection
But as soon as I try to run a direct script (query), it crashes, refuses connections (also from phpMyAdmin), and restarts again.
In the logs we can see this error:
[ERROR] [MY-011947] [InnoDB] Cannot open '/var/lib/mysql/ib_buffer_pool' for reading: No such file or directory
The error we see in the log is not the real issue; it has already been fixed in InnoDB. Here is the reference:
https://jira.mariadb.org/browse/MDEV-11840
Note: we are pretty sure there is no error in the docker-compose file, as the same file works fine on Windows as well as Ubuntu; the issue occurs only on macOS.
Thanks @NicoHaase and @Patrick for going through the question and for the suggestions.
Found the reason for the refused connections and crashing; posting the answer so that it may be helpful for others.
It was actually due to the Docker Desktop macOS client: by default 2 GB of memory is allocated as a resource, and our scenario required more than that.
We allocated more memory according to our requirements, and it started working perfectly fine.
To change the resource allocation:
open Docker Desktop preferences
Resources > Advanced
I deployed a Docker container with Ghost inside to Google Cloud Run.
The Cloud Run service has a service account with Cloud SQL Client role.
I've added the SQL instance into the connections of the Cloud Run Service.
Ghost's configuration file has the property below:
"database": {
    "client": "mysql",
    "connection": {
        "socketPath": "/cloudsql/xxxxxxx",
        "user": "xxxxxxx",
        "password": "xxxxxxx",
        "database": "ghost1"
    }
},
I have a Google Cloud SQL (MySQL) instance up and running. I can connect to it through the public IP using the same credentials.
After I deploy the container, I get a "We'll be right back" page from Ghost.
When I look into the logs on each side, I see some errors whose root cause I do not understand.
Examples of the logs at Google Cloud SQL:
2021-11-14T06:40:37.183921Z 6971 [Warning] User 'mysql.session'@'localhost' was assigned access 0x8000 but was allowed to have only 0x0.
2021-11-14T06:49:09.008652Z 7002 [Note] Aborted connection 7002 to db: 'ghost1' user: 'xxxxxxx' host: 'cloudsqlproxy~107.178.207.100' (Got an error reading communication packets)
2021-11-14T06:50:29.721121Z 7471 [Note] Got timeout reading communication packets
Examples of the logs at Google Cloud Run:
DatabaseError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at DatabaseError.KnexMigrateError
I have tried a lot of combinations like using VPC connector and Private IP but I keep getting the same network timeout errors all the time. I suspect that the Ghost mysql adaptor library (knex) is doing something wrong but I am not sure whether that is true and if there is something I can do about it.
Thanks for your help
For what it's worth, we have an example app that connects over Unix sockets.
As long as you're connecting to the public IP, you won't need a Serverless VPC Access connector or a private IP.
Also, you might double check that your socket path is correct. It should look like this: /cloudsql/project-name:region:instance-name.
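With the placeholders spelled out, the "database" block from the question would look like this:
"database": {
    "client": "mysql",
    "connection": {
        "socketPath": "/cloudsql/project-name:region:instance-name",
        "user": "xxxxxxx",
        "password": "xxxxxxx",
        "database": "ghost1"
    }
}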
There's a similar question here that might help:
How do I run Ghost with MySQL on GCP?
Make sure you allow SSL connections only and create a new certificate chain for your instance.
"ssl": {
"cert": "<database_cert>",
"ca": "<database_ca>",
"key": "<database_key>"
}
Activating SSL connections only and adding the certificate chain did not work for me.
I am using the Unix socketPath to connect from Ghost to a Cloud SQL database, as mentioned above: /cloudsql/project-name:region:instance-name.
When running Ghost version 5.x.x, the DB connection stopped working after some random amount of time. The Cloud Run logs showed:
Error: connect ETIMEDOUT
and
Error: Connection lost: The server closed the connection.
I tested my configuration by running Ghost on localhost, using the cloudsql-proxy for authentication, and it worked without problems.
I finally got it working on Cloud Run by patching node_modules/knex/lib/client.js to re-initialise the connection pool and reconnect to the database when the pool has been destroyed.
The approach is also explained here in more detail: https://stackoverflow.com/a/69191743/2546954
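For illustration, the patch is along these lines (a sketch rather than the exact diff; it assumes the knex Client keeps its settings in this.config and exposes initializePool(), as recent knex versions do):
// node_modules/knex/lib/client.js (sketch)
async acquireConnection() {
    if (!this.pool) {
        // The stock code throws 'Unable to acquire a connection' here.
        // Instead, rebuild the destroyed pool from the stored config
        // so the query can retry against a fresh connection.
        this.initializePool(this.config);
    }
    // ...rest of the original method unchanged
}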
I'm getting this generic error when trying to log in to WordPress, logging in to Django, or trying to reimport a table (to debug the error):
1030 - Got error -1 from storage engine
MySQL.com says this about it:
Error: 1030 SQLSTATE: HY000 (ER_GET_ERRNO)
Message: Got error %d from storage engine
Check the %d value to see what the OS error means. For example, 28
indicates that you have run out of disk space.
But that still seems ambiguous to me. How do I find the OS error? I have not found any help through Google searches, mysqltuner, restarting the services, or repairing through phpMyAdmin.
The problem was that my.cnf had innodb_force_recovery set to 1. What I don't understand is why it was working perfectly fine for months.
Also, I found this in one of the error logs (somehow there were two):
InnoDB: A new raw disk partition was initialized or
InnoDB: innodb_force_recovery is on: we do not allow
InnoDB: database modifications by the user. Shut down
InnoDB: mysqld and edit my.cnf so that newraw is replaced
InnoDB: with raw, and innodb_force_... is removed.
Found the solution, which is to remove that setting or set it to 0, here:
MySQL 'Got error -1 from storage engine' error
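In other words, in my.cnf either delete that line or set:
[mysqld]
innodb_force_recovery = 0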
I have 2 virtual machines on Azure in the same Virtual Network.
One virtual machine runs a Node.js process which is responsible for MySQL operations.
The other virtual machine runs a MySQL instance. I can connect to it from the first VM and from the Node.js process fine.
Sometimes it fails and throws an error about a connection timeout when acquiring a connection from the pool.
My connection string uses a local IP address from within the Virtual Network to access the database, so there should not be enough delay to exceed a 10-second timeout. When it works it's rapid, I mean really fast! But sometimes it just breaks and randomly starts working again. Has anyone ever come across this?
If it helps, this is a MySQL instance running on Ubuntu Server 15.10.
Exception:
{
"error": {
"name": "Error",
"status": 500,
"message": "connect ETIMEDOUT",
"errorno": "ETIMEDOUT",
"code": "ETIMEDOUT",
"syscall": "connect",
"fatal": true,
"stack": "Error: connect ETIMEDOUT
at PoolConnection.Connection._handleConnectTimeout (projectdir/node_modules/loopback-connector-mysql/node_modules/mysql/lib/Connection.js:375:13)
at Socket.g (events.js:180:16)
at Socket.EventEmitter.emit (events.js:92:17)
at Socket._onTimeout (net.js:327:8)
at Timer.unrefTimeout [as ontimeout] (timers.js:412:13)
--------------------
at Protocol._enqueue (projectdir/node_modules/loopback-connector-mysql/node_modules/mysql/lib/protocol/Protocol.js:135:48)
at Protocol.handshake (projectdir/node_modules/loopback-connector-mysql/node_modules/mysql/lib/protocol/Protocol.js:52:41)
at PoolConnection.connect (projectdir/node_modules/loopback-connector-mysql/node_modules/mysql/lib/Connection.js:123:18)
at Pool.getConnection (projectdir/node_modules/loopback-connector-mysql/node_modules/mysql/lib/Pool.js:45:23)
at MySQL.executeSQL (projectdir/node_modules/loopback-connector-mysql/lib/mysql.js:200:12)
at projectdir/node_modules/loopback-connector-mysql/node_modules/loopback-connector/lib/sql.js:408:10
at projectdir/node_modules/loopback-datasource-juggler/lib/observer.js:175:9
at doNotify (projectdir/node_modules/loopback-datasource-juggler/lib/observer.js:93:49)
at MySQL.ObserverMixin._notifyBaseObservers (projectdir/node_modules/loopback-datasource-juggler/lib/observer.js:116:5)
at MySQL.ObserverMixin.notifyObserversOf (projectdir/node_modules/loopback-datasource-juggler/lib/observer.js:91:8)"
}
}
In my experience, there are two situations that may cause your issue.
The number of connections reaches the max_connections limit of the MySQL server, and no connections are available for a new client. In this situation, check your code to see whether you release connections after your MySQL operations; you can verify the counts as shown below.
On the other hand, when you get the timeout exception, you can log in to the Azure management portal and check, on your VM's monitoring page, whether the metrics show the VM hitting a bottleneck, which can also cause this issue. In that case, you can scale up the VM to get more hardware resources.
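For the first situation, you can compare the server limit with the current connection count from the mysql client:
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';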
The Node.js mysql module has a few options.
One of these options is 'connectTimeout', which defaults to 10000 ms (10 seconds).
If the connection cannot be established within those 10 seconds, it is closed automatically.
A possible solution to your problem is to use pooled connections.
With this, you create a connection pool. Every time a query needs to be executed, a connection is taken from the pool and used; when the query finishes, the connection automatically returns to the pool, ready to be reused, so no more connection timeout errors.
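For example, with the mysql module (host and credentials are placeholders):
const mysql = require('mysql');

const pool = mysql.createPool({
    host: 'your-db-host',      // placeholder
    user: 'your-user',         // placeholder
    password: 'your-password', // placeholder
    database: 'your-db',       // placeholder
    connectTimeout: 20000      // raise the 10000 ms default if needed
});

// Check a connection out of the pool, use it, and release it
// so the next query can reuse it.
pool.getConnection((err, connection) => {
    if (err) return console.error(err);
    connection.query('SELECT 1', (queryErr, rows) => {
        connection.release(); // hand the connection back to the pool
        if (queryErr) return console.error(queryErr);
        console.log(rows);
    });
});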
I have a process running all the time, and when it idles for a while the connections in the pool go into a sleeping state. Eventually MySQL purges those connections based on the wait_timeout setting in my.cnf. Once this happens and I try to use a connection, it fails, because the module assumes the connection is still live and tries to use it, only to get a timeout or connection exception.
To prevent this, you can either patch the mysql module code to support a "connection lifetime" in the pool, or stop using the pool and manage your own connections.
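A third option, not mentioned above but commonly used, is a keep-alive query that fires more often than wait_timeout, so MySQL never considers the pooled connections idle (the interval is illustrative):
// Assuming `pool` was created with mysql.createPool() as shown earlier.
setInterval(() => {
    pool.query('SELECT 1', (err) => {
        if (err) console.error('keep-alive ping failed:', err);
    });
}, 5 * 60 * 1000); // every 5 minutes; keep this below wait_timeout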
I have a Spring MVC application running on a GlassFish server with a MySQL DB connection, in which the pool idle time is set to 300 seconds. I am continuously getting the warnings below every 5 minutes, even if there is no idle session, and even when the application is up on the server but no one is using it:
Unexpected exception while destroying resource from pool MediaTrackPool. Exception message: WEB9031: WebappClassLoader unable to load resource [com.mysql.jdbc.ProfilerEventHandlerFactory], because it has not yet been started, or was already stopped
Error while Resizing pool MediaTrackPool. Exception : WEB9031: WebappClassLoader unable to load resource [com.mysql.jdbc.SQLError], because it has not yet been started, or was already stopped
Could someone help me get rid of these warnings, or restrict them to when an actual idle session is encountered? Getting the warnings every 5 minutes even when no one is using the application does not help real log analysis.
Settings for connection pool are as below:
General Settings
Pool Name: MediaTrackPool
Resource Type: javax.sql.DataSource
Datasource Classname: com.mysql.jdbc.jdbc2.optional.MysqlDataSource
Pool Settings
Initial and Minimum Pool Size: 8
Maximum Pool Size: 32
Pool Resize Quantity: 2
Idle Timeout: 300
Max Wait Time: 60000
I believe there is a mismatch between the connection pool properties and the actual timeouts on the MySQL server.
Can you check what the values of connect_timeout, interactive_timeout and wait_timeout are?
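You can read them from the mysql client like this:
SHOW VARIABLES LIKE 'connect_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';
SHOW VARIABLES LIKE 'wait_timeout';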
More info on setting these timeouts is here.