MySQL is not responding in my Rails App - mysql

I am trying to load a file into the database using Ruby. It is a large file, about 15 MB. The script copies records properly for a while, but after copying a few records it stops inserting into the database, without raising any error. When I then connect to the MySQL prompt in a separate console, I get an error:
mysql> desc testdb2.test_descriptions;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 52
After this I am able to connect to the MySQL database again, and the script resumes writing records to the database.
Is there any way to maintain the connection with the database while the app is running?
I am not sure if this is some kind of timeout issue or something else. Please correct me if I'm wrong.
def simulate_datasets
  Log.initialize
  data_folders = ["Jun_06_2013"]
  data_folders.each do |data_folder|
    add(data_folder)
  end
  render :text => Log.dump
end
def add(data_folder)
  # dataset = Dataset.initialize
  # dataset.created_at = Date.new(2013, 6, 6)
  # dataset.save
  current_root = "script/datasets/" + data_folder + "/"
  strip_string = "/development/A/"
  population_time = {}
  total_time = 0
  clusters = Cluster.find(:all, :order => "created_at DESC")
  if clusters.empty?
    Log.info "No Clusters found"
    Cluster.initialize
    clusters = Cluster.find(:all, :order => "created_at DESC")
  end
  clusters.each do |cluster|
    cluster_path = cluster.path
    root = current_root + cluster.name + '/'
    total_time += populate_file_or_folder(root + "fileListWithMLintMetrics.txt", cluster_path)
  end
end
I am using the populate_file_or_folder method below to populate the database.
mysql> show variables like '%time%';
+----------------------------+-------------------+
| Variable_name | Value |
+----------------------------+-------------------+
| connect_timeout | 10 |
| datetime_format | %Y-%m-%d %H:%i:%s |
| delayed_insert_timeout | 300 |
| flush_time | 0 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lc_time_names | en_US |
| long_query_time | 10.000000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| slow_launch_time | 2 |
| system_time_zone | EDT |
| table_lock_wait_timeout | 50 |
| time_format | %H:%i:%s |
| time_zone | SYSTEM |
| timed_mutexes | OFF |
| timestamp | 1372869659 |
| wait_timeout | 28800 |
+----------------------------+-------------------+
20 rows in set (0.00 sec)
def self.populate_file_or_folder(fileName, cluster_path)
  counter = 0
  # Reading directly from the CSV library
  CSV.foreach(fileName) do |line|
    counter = counter + 1
    completePath = cluster_path + '/' + line[0]
    newStructure = FileOrFolder.new
    newStructure.fullpath = completePath # was `path`, an undefined variable
    pathbits = completePath.split('/')
    newStructure.name = pathbits.last
    newStructure.save
  end
end
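One common way to keep a long-running import alive is to let the adapter reconnect automatically after ERROR 2006. A sketch for config/database.yml, assuming the mysql2 adapter and hypothetical connection details:

```yaml
# config/database.yml -- development shown; mirror in other environments.
development:
  adapter: mysql2
  database: testdb2
  username: root
  reconnect: true   # re-open the connection if the server has gone away
```

Another frequent cause of ERROR 2006 during bulk loads is a single INSERT exceeding the server's max_allowed_packet, so raising that value on the server may help as well.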

Related

How to debug "Lock wait timeout exceeded" in mysql (aurora)

I'm getting a "Lock wait timeout exceeded" error when inserting a record into a MySQL table (Amazon Aurora, to be precise). We're inserting hundreds of records in one transaction, with a savepoint every few inserts.
We're using the InnoDB engine and have innodb_lock_wait_timeout set to 50 seconds. We're getting this error only in production, so my debugging ability is limited.
I've tried to pull info about locked transactions while the insert statements run, and here is what I got:
MySQL [(none)]> SELECT w.* FROM information_schema.innodb_lock_waits w INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id INNER JOIN information_schema.processlist p on b.trx_mysql_thread_id = p.ID LIMIT 10;
+-------------------+--------------------------+-----------------+--------------------------+
| requesting_trx_id | requested_lock_id | blocking_trx_id | blocking_lock_id |
+-------------------+--------------------------+-----------------+--------------------------+
| 1177444193 | 1177444193:5437:2161:286 | 1177444168 | 1177444168:5437:2161:286 |
+-------------------+--------------------------+-----------------+--------------------------+
1 row in set (0.01 sec)
MySQL [(none)]> select * from information_schema.innodb_trx where trx_id=1177444168 limit 10\G
*************************** 1. row ***************************
                    trx_id: 1177444168
                 trx_state: RUNNING
               trx_started: 2022-09-26 12:08:49
     trx_requested_lock_id: NULL
          trx_wait_started: NULL
                trx_weight: 1844
       trx_mysql_thread_id: 64308215
                 trx_query: NULL
       trx_operation_state: NULL
         trx_tables_in_use: 0
         trx_tables_locked: 8
          trx_lock_structs: 908
     trx_lock_memory_bytes: 1136
           trx_rows_locked: 2579
         trx_rows_modified: 936
   trx_concurrency_tickets: 0
       trx_isolation_level: READ COMMITTED
         trx_unique_checks: 1
    trx_foreign_key_checks: 1
trx_last_foreign_key_error: NULL
 trx_adaptive_hash_latched: 0
 trx_adaptive_hash_timeout: 0
          trx_is_read_only: 0
trx_autocommit_non_locking: 0
1 row in set (0.01 sec)
As far as I can see, the blocking trx isn't a single query but a transaction holding a lot of record locks, because trx_query is NULL.
So my question is: how can I get more info about this blocking transaction, to see what exactly is blocking my insert for 50 seconds until the timeout?
P.S. We didn't have such problems on standalone MySQL, but started seeing lock wait errors after migrating to Aurora.
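Since trx_query is NULL while the blocker idles between statements, one way to recover what it executed is the Performance Schema statement history. This is a sketch; it assumes performance_schema is enabled and the events_statements_history consumer is on (not always the case on Aurora), and uses the trx_mysql_thread_id (64308215) from the output above:

```sql
-- Map the blocking transaction's connection id (trx_mysql_thread_id)
-- to a performance_schema thread, then pull its recent statements.
SELECT t.thread_id, h.event_id, h.sql_text
FROM performance_schema.threads t
JOIN performance_schema.events_statements_history h
  ON h.thread_id = t.thread_id
WHERE t.processlist_id = 64308215
ORDER BY h.event_id;
```

The history table only keeps the last few statements per thread (performance_schema_events_statements_history_size, 10 by default), so it has to be queried while the blocker is still connected.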

Google Cloud functions + SQL Broken Pipe error

I have various Google Cloud Functions which write to and read from a Cloud SQL database (MySQL). The processes work, but when the functions happen to run at the same time I get a broken pipe error. I am using SQLAlchemy with Python; the processes are Cloud Functions and the DB is a Google Cloud SQL database. I have seen suggested solutions that involve raising the timeout values. I was wondering if this is a good approach or if there is a better one. Thanks for your help in advance.
Here's the SQL broken pipe error:
(pymysql.err.OperationalError) (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))")
(Background on this error at: http://sqlalche.me/e/13/e3q8)
Here are the MySQL timeout values:
show variables like '%timeout%';
+-------------------------------------------+----------+
| Variable_name | Value |
+-------------------------------------------+----------+
| connect_timeout | 10 |
| delayed_insert_timeout | 300 |
| have_statement_timeout | YES |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| rpl_semi_sync_master_async_notify_timeout | 5000000 |
| rpl_semi_sync_master_timeout | 3000 |
| rpl_stop_slave_timeout | 31536000 |
| slave_net_timeout | 30 |
| wait_timeout | 28800 |
+-------------------------------------------+----------+
15 rows in set (0.01 sec)
If you cache your connection for performance, it's normal to lose the connection after a while. To prevent this, you have to handle disconnection.
In addition, because you are working with Cloud Functions, only one request can be handled at a time on one instance (if you have 2 concurrent requests, you will have 2 instances). Thus, set your pool size to 1 to save resources on your database side (in case of heavy parallelization).
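With SQLAlchemy this usually means enabling pre-ping on the pool, so stale connections are detected and replaced instead of surfacing as BrokenPipeError. A sketch; the mysql+pymysql URL in the comment is a placeholder, and sqlite:// is used here only so the snippet runs without a live server:

```python
from sqlalchemy import create_engine, text

def make_engine(url):
    """Engine with pool settings that survive dropped Cloud SQL connections."""
    return create_engine(
        url,
        pool_size=1,         # one connection per Cloud Function instance
        pool_pre_ping=True,  # validate (and replace) stale connections on checkout
        pool_recycle=1800,   # retire connections before the server's wait_timeout
    )

# In the Cloud Function the URL would be something like
# "mysql+pymysql://user:pw@/dbname?unix_socket=/cloudsql/project:region:instance"
engine = make_engine("sqlite://")
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
print(value)  # prints 1
```

Note that pool_recycle should stay below the server's wait_timeout (28800 s here), so the pool retires connections before MySQL does.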

Stop MariaDB from locking processes

We run a CentOS DirectAdmin install with MariaDB 10.2.14 where on Magento is installed.
Currently our DB locks very often when a process runs, so all other processes wait until the current process finishes. This is quite a problem because, for example, the add-to-cart process is also kept waiting, and people cannot order.
How can we prevent the DB from being locked so long and solve this issue?
Server:
6x Intel Xeon
32GB RAM
500GB SSD
My.cnf:
[mysqld]
bind-address = 127.0.0.1
local-infile=0
innodb_file_per_table=1
innodb_file_format=barracuda
slow_query_log = 1
slow_query_log_file=/var/log/mysql-log-slow-queries.log
key_buffer = 250M
key_buffer_size = 250M
max_allowed_packet = 128M
table_cache = 512
sort_buffer_size = 7M
read_buffer_size = 7M
read_rnd_buffer_size = 7M
myisam_sort_buffer_size = 64M
tmp_table_size = 190M
query_cache_type = 1
query_cache_size = 220M
query_cache_limit = 512M
thread_cache_size = 150
max_connections = 225
wait_timeout = 300
innodb_buffer_pool_size = 7G
max_heap_table_size =180M
innodb_log_buffer_size = 36M
join_buffer_size = 32M
innodb_buffer_pool_instances = 7
long_query_time = 15
table_definition_cache = 4K
open_files_limit = 60K
table_open_cache = 50767
innodb_log_file_size= 128M
innodb_lock_wait_timeout = 700
Suggestions to consider for your my.cnf [mysqld] section
Lead the following with # to disable them, or remove them entirely, to allow the defaults to serve your requirements.
Some of these are already mentioned by Rick James in earlier comment.
. key_buffer
. key_buffer_size
. table_cache
. sort_buffer_size
. read_buffer_size
. read_rnd_buffer_size
. myisam_sort_buffer_size
. join_buffer_size
. long_query_time
. innodb_lock_wait_timeout
Make these changes, or add these lines, to your my.cnf:
query_cache_type=0 # from 1 to turn OFF QC and conserve CPU cycles
query_cache_size=0 # from 220M to conserve RAM for more useful work
query_cache_limit=0 # from 512M to conserve RAM for more useful work
thread_cache_size=100 # from 150 V8 refman suggested CAP to avoid OOM
innodb_lru_scan_depth=100 # from 1024 to minimum to conserve CPU every SECOND
innodb_flush_neighbors=0 # from 1 no need to waste CPU cycles when using SSD
innodb_io_capacity_max=10000 # from 2000 since you have SSD
innodb_io_capacity=5000 # from 200 to use more of your SSD capability
MySQL will wait a certain amount of time for the lock to be removed before it gives up and throws that error. If you are able to track when you are seeing these error messages down to any consistent time of the day, you should look at what else the server is doing at that time - for instance is a database backup running. By doing this you should be able to narrow down possibilities for what processes could be creating the lock although it's not always that straight forward to do - likely to be a bit of trial and error.
Sometimes deadlock issues can be caused on the database side. The reason behind this is running a lot of custom scripts and killing them before the database connection gets a chance to close.
If you log in to MySQL from the CLI and run the following command
SHOW PROCESSLIST;
you will get output like the following:
+---------+---------+-------------------+---------+---------+------+-------+------------------+-----------+---------------+-----------+
| Id      | User    | Host              | db      | Command | Time | State | Info             | Rows_sent | Rows_examined | Rows_read |
+---------+---------+-------------------+---------+---------+------+-------+------------------+-----------+---------------+-----------+
| 6794372 | db_user | 111.11.0.65:21532 | db_name | Sleep   | 3800 |       | NULL             |         0 |             0 |         0 |
| 6794475 | db_user | 111.11.0.65:27488 | db_name | Sleep   | 3757 |       | NULL             |         0 |             0 |         0 |
| 6794550 | db_user | 111.11.0.65:32670 | db_name | Sleep   | 3731 |       | NULL             |         0 |             0 |         0 |
| 6794797 | db_user | 111.11.0.65:47424 | db_name | Sleep   | 3639 |       | NULL             |         0 |             0 |         0 |
| 6794909 | db_user | 111.11.0.65:56029 | db_name | Sleep   | 3591 |       | NULL             |         0 |             0 |         0 |
| 6794981 | db_user | 111.11.0.65:59201 | db_name | Sleep   | 3567 |       | NULL             |         0 |             0 |         0 |
| 6795096 | db_user | 111.11.0.65:2390  | db_name | Sleep   | 3529 |       | NULL             |         0 |             0 |         0 |
| 6795270 | db_user | 111.11.0.65:10125 | db_name | Sleep   | 3473 |       | NULL             |         0 |             0 |         0 |
| 6795402 | db_user | 111.11.0.65:18407 | db_name | Sleep   | 3424 |       | NULL             |         0 |             0 |         0 |
| 6795701 | db_user | 111.11.0.65:35679 | db_name | Sleep   | 3330 |       | NULL             |         0 |             0 |         0 |
| 6800436 | db_user | 111.11.0.65:57815 | db_name | Sleep   | 1860 |       | NULL             |         0 |             0 |         0 |
| 6806227 | db_user | 111.11.0.67:20650 | db_name | Sleep   |  188 |       | NULL             |         1 |             0 |         0 |
| 6806589 | db_user | 111.11.0.65:36618 | db_name | Query   |    0 | NULL  | SHOW PROCESSLIST |         0 |             0 |         0 |
| 6806742 | db_user | 111.11.0.75:38717 | db_name | Sleep   |    0 |       | NULL             |         0 |             0 |         0 |
| 6806744 | db_user | 111.11.0.75:38819 | db_name | Sleep   |    0 |       | NULL             |        61 |            61 |        61 |
+---------+---------+-------------------+---------+---------+------+-------+------------------+-----------+---------------+-----------+
15 rows in set (0.00 sec)
You can see, as an example, that process 6794372 has the command Sleep and a time of 3800 seconds. This is preventing other operations.
These processes should be killed one by one using the KILL command:
KILL 6794372;
Once you have killed all the sleeping connections, things should start working normally again.
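Rather than copying ids by hand, the KILL statements can be generated from information_schema (a sketch; review the generated list before executing it):

```sql
-- Build a KILL statement for every connection sleeping for more than an hour;
-- paste the output back into the client to run it.
SELECT CONCAT('KILL ', id, ';') AS kill_command
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 3600;
```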
These are deprecated; their names have changed. Remove them:
key_buffer = 250M
table_cache = 512
These are higher than they should be:
key_buffer_size = 250M
query_cache_size = 220M
thread_cache_size = 150
long_query_time = 15
table_definition_cache = 4K
table_open_cache = 50767
innodb_lock_wait_timeout = 700
The last one may be the villain. It implies that you have some loooong transactions. This is a design flaw in your code. Find a way to make the transactions shorter. If you need help, describe what you are doing to us.
I feel that 5 seconds is plenty long for a transaction.
Do you sometimes get this?
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
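To see whether such long transactions are open right now, InnoDB's live transaction list can be inspected (a sketch against information_schema.innodb_trx; the 5-second threshold is illustrative):

```sql
-- List InnoDB transactions open for more than 5 seconds, with the
-- connection id (trx_mysql_thread_id) to investigate or KILL.
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_rows_modified
FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 5 SECOND
ORDER BY trx_started;
```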

How can I set MySQL wait_timeout to unlimited

I have this config
mysql> SHOW VARIABLES where Variable_name like '%timeout';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| connect_timeout | 5 |
| delayed_insert_timeout | 300 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 7200 |
| table_lock_wait_timeout | 50 |
| wait_timeout | 28800 |
+----------------------------+-------+
10 rows in set (0.01 sec)
mysql>
I need a long-lived connection and want an unlimited timeout.
Look at my PHP source:
<?php
$link = @mysql_connect("localhost","root",$pw);
...
mysql_query($query,$link);
...
// A long time flows (maybe 28,800sec)
mysql_query($query,$link); // error !!
?>
Please advise.
The answer is no. You cannot set wait_timeout to unlimited.
You can refer to MYSQL wait_timeout.
However, if you want to change it, you can try:
SET GLOBAL wait_timeout = ...;
There is a limit for wait_timeout. This configuration value can also be put in the configuration file: my.cnf (Unix) / my.ini (Windows).
Type: Integer
Default Value: 28800
Minimum Value: 1
Maximum Value (Other): 31536000
Maximum Value (Windows): 2147483
Assign wait_timeout within the above range in the configuration file and restart the MySQL server.
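If sessions genuinely must idle longer than the default 28800 seconds, the closest thing to "unlimited" is the documented maximum. A my.cnf sketch (the values are the documented maxima, not a recommendation):

```ini
[mysqld]
# one year, the documented maximum on non-Windows platforms
wait_timeout        = 31536000
# interactive clients (such as the mysql CLI) use this value instead
interactive_timeout = 31536000
```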

Setting correct innodb_log_file_size in mysql

We ran an ALTER TABLE today that took down the DB. We failed over to the slave, and in the post-mortem we discovered this in the MySQL error.log:
InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
This error rings true because we were working on a very large table that contains BLOB data types.
The best answer we found online said
To solve it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust the innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL performance blog might be instructive.
and in the comments
Yes, the database server will effectively hang for any updates to InnoDB tables when the log fills up. It can cripple a site.
which is, I guess, what happened, based on our current (default) innodb_log_file_size of 48 MB?
SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name | Value |
+-----------------------------+----------+
| innodb_log_buffer_size | 8388608 |
| innodb_log_compressed_pages | ON |
| innodb_log_file_size | 50331648 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
+-----------------------------+----------+
So, this leads me to two pointed questions and one open-ended one:
How do we determine the largest row so we can set our innodb_log_file_size to be bigger than that?
What is the consequence of the action in step 1? I'd read about long recovery times with bigger logs.
Is there anything else I should worry about regarding migrations, considering that we have a large table (650k rows, 6169.8GB) with unrestrained, variable length BLOB fields.
We're running mysql 5.6 and here's our my.cnf.
[mysqld]
#defaults
basedir = /opt/mysql/server-5.6
datadir = /var/lib/mysql
port = 3306
socket = /var/run/mysqld/mysqld.sock
tmpdir = /tmp
bind-address = 0.0.0.0
#logs
log_error = /var/log/mysql/error.log
expire_logs_days = 4
slow_query_log = on
long_query_time = 1
innodb_buffer_pool_size = 11G
#http://stackoverflow.com/a/10866836/182484
collation-server = utf8_bin
init-connect ='SET NAMES utf8'
init_connect ='SET collation_connection = utf8_bin'
character-set-server = utf8
max_allowed_packet = 64M
skip-character-set-client-handshake
#cache
query_cache_size = 268435456
query_cache_type = 1
query_cache_limit = 1048576
As a follow-up to the suggestions listed below, I began investigation into the file size of the table in question. I ran a script that wrote the combined byte size of the three BLOB fields to a table called pen_sizes. Here's the result of getting the largest byte size:
select pen_size as bytes,
       pen_size / 1024 / 1024 as mb,
       pen_id
from pen_sizes
group by pen_id
order by bytes desc
limit 40;
+---------+------------+--------+
| bytes | mb | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 | 84816 |
| 3379107 | 3.22256756 | 74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 | 93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 | 46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 | 76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 | 74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 | 76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 | 76188 |
| 1500157 | 1.43066120 | 77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 | 77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 | 77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 | 75256 |
| 1343621 | 1.28137684 | 75257 |
| 1334071 | 1.27226925 | 77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 | 75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+
This makes me believe that the average row size is closer to 1 MB than the 10 MB suggested. Maybe the table size I listed earlier includes the indexes, too?
I ran
SELECT table_name AS "Tables",
       round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'codepen';
+-------------------+------------+
| Tables | Size in MB |
+-------------------+------------+
...snip
| pens | 6287.89 |
...snip
0. Preliminary information
Your settings:
innodb_log_file_size = 50331648
innodb_log_files_in_group = 2
Therefore your "log group capacity" = 2 x 50331648 = 96 MB
1. How to determine the largest row
There is no direct method. But one can easily calculate the size of one given row based on these tables (compression should not matter to us here, if, as I assume, rows are not compressed in the log files).
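Another practical approach, rather than sizing from the largest row alone, is to measure how fast InnoDB writes redo: sample the Log sequence number twice, a minute apart, and size the combined log files to hold roughly an hour of writes (a common rule of thumb). A sketch of a mysql-client session; pager is a Unix-only client command:

```sql
-- Show only the "Log sequence number" lines, take two readings 60 seconds
-- apart; the difference is redo bytes written per minute.
pager grep sequence
SHOW ENGINE INNODB STATUS\G
SELECT SLEEP(60);
SHOW ENGINE INNODB STATUS\G
nopager
```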
2. Impact of innodb_log_file_size
Reference manual:
The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower, although improvements to recovery performance in MySQL 5.5 and higher make the log file size less of a consideration.
3. Anything else to worry about
6169.8 GB / 650k rows = about 10 MB per row on average
This is a serious problem per se if you intend to use your database in a transactional, multi-user situation. Consider storing your BLOBs as files outside of the database. Or, at least, store them in a separate MyISAM (non-transactional) table.