I run a WordPress site. When we send multiple concurrent requests using JMeter, or when search engine bots crawl the site, the MySQL server goes down.
I have changed the following configuration on my server:
key_buffer = 25M
max_allowed_packet = 1M
thread_stack = 128K
table_cache = 25
innodb_buffer_pool_size = 512M   # increased from 64M
max_connections = 200
How can I fix this? The issue only happens under high traffic, around 200 to 500 requests at a time.
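For anyone hitting this, a hedged first check (the log paths below are assumptions for a Debian/Ubuntu layout) is whether the kernel's OOM killer is terminating mysqld under load, which is the most common reason a MySQL server "goes down" exactly when traffic spikes:
# look for the OOM killer in the kernel log, then read mysqld's own last words
grep -i 'killed process' /var/log/syslog
tail -n 100 /var/log/mysql/error.log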
I want to import a large dump file (about 4 GB) into MySQL. I used the source command and tried several times. Every time, at first everything goes right:
Query OK, 1710 rows affected (0.27 sec)
Records: 1710 Duplicates: 0 Warnings: 0
But after about ten minutes I got the following error message, with garbage characters:
ERROR 2005 (HY000): Unknown MySQL server host '--' (0)
It then says:
No connection. Trying to reconnect...
ERROR 2005 (HY000): Unknown MySQL server host 'rnrn' (0)
ERROR:
Can't connect to the server
With some garbage characters like:
ERROR 2005 (HY000): Unknown MySQL server host '2銆佹嫑鍟嗛?鍝佹姇鏀鹃〉闈㈡惌寤鸿惀閿?椿鍔ㄤ竴浣撳寲娴佺▼鎼?缓rn' (0)
ERROR:
Can't connect to the server
Could anyone help?
I tried to change the parameters in mysql.ini
innodb_buffer_pool_size = 1024M
; Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 512M
innodb_log_buffer_size = 64M
and then changed the numbers to:
innodb_buffer_pool_size = 512M
; Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_log_buffer_size = 32M
I also changed max_allowed_packet from 1M to 1024M, but it still doesn't work.
If possible, split the database dump into smaller chunks and insert them one by one. The problem may be caused by a time limit the server imposes on a single database connection, or by an unstable connection.
How exactly to split the file depends on your database structure. One approach is to create the structure for all tables in one file and insert the data in others. If foreign keys exist, disable them during the insert; otherwise you may end up adding rows whose dependencies are not yet defined.
Alternatively, you may try uploading the dump to the server and importing it there, either with a local mysql client or with LOAD DATA INFILE. But that requires an SSH connection to your server, and not all providers allow it.
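For a plain SQL dump, a minimal shell sketch of that splitting idea (the file names and the 500M chunk size are illustrative; it assumes each statement sits on one line, as mysqldump writes by default):
# split on line boundaries so no statement is cut in half
split --line-bytes=500M dump.sql chunk_
# stream the pieces in order, with foreign key checks off for the whole run
( echo "SET foreign_key_checks = 0;"; cat chunk_*; echo "SET foreign_key_checks = 1;" ) | mysql -u user -p database_name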
Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:50; busy:50; idle:0; lastwait:10000]
Whenever we connect to the web app over a socket, it throws this error and the socket gets disconnected.
Even after doing the following things, the problem still persists:
Scaled up the AWS EC2 instance from micro to large
In /etc/my.cnf
wait_timeout = 28800
interactive_timeout = 28800
Added the following configuration under both the development and production environments:
maxActive = 50
minIdle = 5
maxIdle = 25
maxWait = 10000
maxAge = 10 * 60000
Has anyone faced this problem?
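For what it's worth, busy:50 with idle:0 at the moment of the timeout usually points to connections that are borrowed and never returned (a leak), rather than a pool that is too small. Assuming the Tomcat JDBC pool is in use (these are its documented properties), a sketch that makes leaks visible:
removeAbandoned = true
removeAbandonedTimeout = 60
logAbandoned = true
With these set, a connection held longer than 60 seconds is reclaimed and the stack trace of the borrowing code is logged; if that log fires, the fix is closing connections in a finally block, not enlarging the pool.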
I recently moved a script so that it runs a query over the same data on a different server/DB. Both servers load the same data, but one is in the process of being decommissioned from real-time loading of the current day's data. The previous server was running MySQL 5.1.73; the new one is running MariaDB 10.1. The script is trying to run the following query with only the date changed (I've obfuscated some columns and data filters, but kept col_X consistent in the query).
SELECT
    count(*) as num,
    sec_to_time(floor(timestamp/1000000)) as true_time,
    col_A, col_B, col_C, col_D, col_E, col_F, col_G, id,
    sum(if(col_H = 3, 1, 0)) as num_A,
    sum(if(col_H = 4, 1, 0)) as num_B
FROM `some_table`
WHERE `some_table`.`date` = 20170622
    AND col_I not in ('VAL_A', 'VAL_B', 'VAL_C')
GROUP BY col_A, col_D, col_E, col_F, col_B, col_C,
    sec_to_time(floor(timestamp/1000000))
HAVING count(*) >= if(col_G = 'A', 50, if(col_G = 'B', 50, 150))
ORDER BY sec_to_time(floor(timestamp/1000000));
On the new server, after the query runs for a while, I'm getting this message:
ERROR 1114 (HY000): The table '/home/mysql/tmp/#sql_61c5_0' is full
In that directory, while the query is running, I see two files that grow to around 1.2 GB combined before this message happens. I've gone through many variables and found none that differ between the old server and the new one. The first ones usually mentioned are tmp_table_size and max_heap_table_size, both of which default to 16 MB on the old and new servers, but I've tried raising them anyway.
The disk is not full, although these are smaller partitions:
Filesystem Size Used Avail Use% Mounted on
/dev/md125 400G 247G 154G 62% /home
/dev/nvme0n1p1 373G 214G 160G 58% /mnt/nvme
(note /home/mysql is a symlink to /mnt/nvme/mysql, where mysql tables & tmp dir are located).
These are the only mysql variables set in /etc/my.cnf:
[mysqld]
#Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_buffer_pool_size = 25G
innodb_log_file_size = 768M
max_allowed_packet = 104857600
innodb_file_per_table = 1
max_heap_table_size = 134217728
tmp_table_size = 134217728
For the record, the new server is running CentOS 7, but I cannot find any OS limit that could be causing this either. Any hints as to why this could be happening would be greatly appreciated.
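One difference worth ruling out, hedged since the question doesn't show these settings: MariaDB 10.x uses the Aria engine for on-disk internal temporary tables, so Aria-side limits can stop a large GROUP BY spill even when tmpdir has free space. A quick comparison to run on both servers:
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';
SHOW VARIABLES LIKE 'aria%';             -- MariaDB only; the old 5.1 server returns nothing
SHOW GLOBAL STATUS LIKE 'Created_tmp%';  -- how often temp tables spill to disk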
I am using Moodle 3.2, MySQL 5.5.49-0ubuntu0.12.04.1, and Apache 2, on a machine with 48 GB RAM and a 1 TB HDD running Ubuntu 14.04 LTS (command-line only).
I have 500 users, but when 200 users take an exam at the same time the server becomes very slow; it takes two or more minutes to open a question.
Please help me: how can I run an exam for 500 users at a time?
Details of the MySQL, Apache 2, and PHP configuration:
my.cnf for MySQL:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 500
#table_cache = 64
#thread_concurrency = 10
query_cache_limit = 1M
query_cache_size = 16M
quick
quote-names
max_allowed_packet = 16M
key_buffer = 16M
apache2.conf for Apache 2:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 400
MaxClients 350
MaxRequestsPerChild 5
and the memory details (output of the free command):
                    total       used       free     shared    buffers     cached
Mem:             49748700     895628   48859072          0     164848     488188
-/+ buffers/cache:           242592   49506108
Swap:           134217724          0  134217724
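A quick sizing note on the Apache settings above (illustrative arithmetic, assuming roughly 50 MB per prefork child once PHP is loaded): MaxClients 350 × 50 MB ≈ 17.5 GB, which a 48 GB box can absorb. So the bottleneck is more likely MySQL, whose my.cnf above sets no InnoDB buffer pool at all and therefore runs on the 5.5 default of 128 MB.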
Here are my best practices; they will make your Moodle much faster:
For Apache performance, use Nginx/PHP-FPM instead; see https://docs.moodle.org/dev/Install_Moodle_On_Ubuntu_with_Nginx/PHP-fpm
For MySQL, use MariaDB 10, and use mysqltuner.pl (http://mysqltuner.pl) for tuning.
Which caching option do you use? Memcached? Redis?
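As a rough illustration of where such tuning tends to land for a dedicated 48 GB database host (the values below are assumptions to validate with mysqltuner.pl, not measurements):
[mysqld]
innodb_buffer_pool_size = 24G          # commonly ~50% of RAM on a dedicated DB host
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 2     # relaxes flushing slightly for much better write throughput
query_cache_size = 0                   # the query cache serializes writes under concurrency
max_connections = 500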
I have a MySQL dump from a database that I am trying to move to a new DB server. When I try to import the dump, I receive the following error:
MySQL Error 2006 (HY000) at line 406: MySQL server has gone away
I googled the problem, and most people fixed it by changing the value of wait_timeout. However, my current value is set to 28800 (8 hours), and the error appears in less than 8 seconds when I run the import.
I also tried setting the value of max_allowed_packet to 1073741824 but that also did not fix the problem.
Looking through the mysql dump, there are quite a few blob columns in the dump, but the overall file size is only 6 MB.
Does anyone have any ideas about what else might be the problem?
Adding this answer for the benefit of future searchers, as it explains why increasing the packet size fixed the problem:
The situation is that if a client sends an SQL statement longer than the server's max_allowed_packet setting, the server will simply disconnect the client. The next query from the same client instance will then find that the 'MySQL server has gone away'.
... But it would of course be much preferable to have the 'got packet bigger' error [Error: 2020 (CR_NET_PACKET_TOO_LARGE)] returned if that is the problem.
Excerpted from, with thanks to, peter_laursen's blog post.
On OSX 10.7 (Lion), I created a file, /etc/my.cnf with the following contents:
[mysqld]
max_allowed_packet = 12000000
And then stopped the mysql server:
/usr/local/bin/mysql.server stop
When it automatically restarted I was able to execute my inserts.
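Note that the mysql client enforces a max_allowed_packet limit of its own; when feeding a dump through it, the limit can be raised for just that run (a sketch, assuming the standard mysql CLI and a dump file named dump.sql):
mysql --max_allowed_packet=512M -u user -p database_name < dump.sql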
Increasing max_allowed_packet to 12 MB (12000000) solved the problem for me when trying to import a 130 MB file.
Change it in the ini file, or under Options File / Networking in MySQL Workbench (a MySQL restart is required).
If you still get the error, try increasing even more (100 MB). Just remember to decrease it when you're done.
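The value can also be raised on a running server without a restart (this requires the SUPER privilege, applies to new connections only, and reverts at restart unless my.cnf is updated as well):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 1073741824;   -- 1 GB, the documented maximum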
1) Change the MySQL config file, /etc/mysql/my.cnf, in the [mysqld] section:
[mysqld]
key_buffer = 32M
max_allowed_packet = 32M
thread_stack = 512K
thread_cache_size = 64
2) Restart the MySQL daemon:
/etc/init.d/mysql restart
This should resolve your issues.
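After the restart, a quick way to confirm the new value took effect (standard mysql CLI):
mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet';"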