I recently moved a script to run a query against the same data on a different server/DB. Both servers load the same data, but the old one is in the process of being decommissioned from real-time loading of the current day's data. The previous server was running MySQL 5.1.73; the new one is running MariaDB 10.1. The script runs the following query with only the date changed (I've obfuscated some columns and data filters, but kept col_X consistent in the query).
SELECT
    count(*) AS num,
    sec_to_time(floor(timestamp/1000000)) AS true_time,
    col_A, col_B, col_C, col_D, col_E, col_F, col_G, id,
    sum(if(col_H = 3, 1, 0)) AS num_A,
    sum(if(col_H = 4, 1, 0)) AS num_B
FROM `some_table`
WHERE `some_table`.`date` = 20170622
  AND (col_I NOT IN ('VAL_A','VAL_B','VAL_C'))
GROUP BY col_A, col_D, col_E, col_F, col_B, col_C,
    sec_to_time(floor(timestamp/1000000))
HAVING count(*) >= if(col_G = 'A', 50, if(col_G = 'B', 50, 150))
ORDER BY sec_to_time(floor(timestamp/1000000));
On the new server, after the query runs for a while, I get this message:
ERROR 1114 (HY000): The table '/home/mysql/tmp/#sql_61c5_0' is full
In that directory, while the query is running, I see two files that grow to around 1.2GB combined before this message appears. I've gone through many variables and found none that differ between the old server and the new one. The first ones usually mentioned are tmp_table_size and max_heap_table_size, both of which default to 16MB on the old and new servers, but I've tried raising them anyway.
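For reference, checking and raising them at runtime looks roughly like this (a sketch; 1GB here is just an arbitrary test value, not a recommendation):
-- Compare current values on the old and new servers
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';
-- Raise both for the current session before re-running the query
SET SESSION tmp_table_size = 1024 * 1024 * 1024;
SET SESSION max_heap_table_size = 1024 * 1024 * 1024;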
The disk is not full, although they are smaller partitions:
Filesystem Size Used Avail Use% Mounted on
/dev/md125 400G 247G 154G 62% /home
/dev/nvme0n1p1 373G 214G 160G 58% /mnt/nvme
(note /home/mysql is a symlink to /mnt/nvme/mysql, where mysql tables & tmp dir are located).
These are the only mysql variables set in /etc/my.cnf:
[mysqld]
#Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_buffer_pool_size = 25G
innodb_log_file_size = 768M
max_allowed_packet = 104857600
innodb_file_per_table = 1
max_heap_table_size = 134217728
tmp_table_size = 134217728
For the record, the new server is running CentOS 7, but I cannot find any OS limit that could be causing this either. Any hints as to why this could be happening would be greatly appreciated.
Related
I want to insert a large file (about 4GB) into MySQL. I used the source command and tried several times. Every time, at first everything goes right, like:
Query OK, 1710 rows affected (0.27 sec)
Records: 1710 Duplicates: 0 Warnings: 0
But after about ten minutes I got "ERROR 2005 (HY000): Unknown MySQL server host '--' (0)" with garbage characters in the host name. It says:
No connection. Trying to reconnect...
ERROR 2005 (HY000): Unknown MySQL server host 'rnrn' (0)
ERROR:
Can't connect to the server
With some garbage characters like:
ERROR 2005 (HY000): Unknown MySQL server host '2銆佹嫑鍟嗛?鍝佹姇鏀鹃〉闈㈡惌寤鸿惀閿?椿鍔ㄤ竴浣撳寲娴佺▼鎼?缓rn' (0)
ERROR:
Can't connect to the server
Could anyone help?
I tried changing these parameters in mysql.ini:
innodb_buffer_pool_size = 1024M
; Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 512M
innodb_log_buffer_size = 64M
and then changed the numbers to:
innodb_buffer_pool_size = 512M
; Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_log_buffer_size = 32M
I also changed max_allowed_packet from 1M to 1024M, but it still doesn't work.
If possible, the database dump should be split into smaller chunks and inserted one by one. The problem may be caused by a time limit the server imposes on a single database connection, or by an unstable connection.
How exactly to split the file depends on your database structure. One approach is to create the structure for all tables in one file and insert the data in others. If foreign keys exist, they should be disabled during the insert; otherwise you may end up adding data whose dependencies are not yet defined.
Alternatively, you may try uploading the dump to the server and importing it there with LOAD DATA INFILE, or using a local mysql client. But that requires an SSH connection to your server, and not all providers allow it.
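A minimal sketch of that approach, run from the mysql client; the chunk file names and the CSV path/delimiters are hypothetical and depend on how you split your dump:
-- Disable foreign key checks while chunks load out of order
SET FOREIGN_KEY_CHECKS = 0;
SOURCE /path/to/schema.sql;
SOURCE /path/to/data_part1.sql;
SOURCE /path/to/data_part2.sql;
SET FOREIGN_KEY_CHECKS = 1;
-- Or, for a single table exported as CSV, load it directly on the server
-- (requires the FILE privilege and a path allowed by secure_file_priv)
LOAD DATA INFILE '/path/to/some_table.csv'
INTO TABLE some_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';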
I successfully installed MySQL 5.7.10 and the MySQL gem for Ruby on my OS X 10.11.3 system. I am now trying to run the following code:
require 'mysql'
require 'cgi'

class MysqlSaver
  def saveWordStats(globalWordStats, time)
    con = Mysql.new 'localhost', 'x', 'x', 'x'
    i = 0
    # One table is created per word, so a large run touches many tables
    for word in globalWordStats.keys[0..10000]
      print "#{i}\r"
      i += 1
      stat = globalWordStats[word]
      escaped_word = Mysql.escape_string(word)
      begin
        # Strip characters that are not valid in a table name
        escaped_word = escaped_word.gsub("\\", "")
        escaped_word = escaped_word.gsub("/", "")
        escaped_word = escaped_word.gsub("-", "")
        escaped_word = "#{escaped_word}_word"
        con.query("CREATE TABLE IF NOT EXISTS #{escaped_word}(percent DOUBLE, time INT)")
        con.query("INSERT INTO #{escaped_word}(percent,time) VALUES('#{stat}','#{time}')")
      rescue
        puts "#{$!}"
      end
    end
    con.close
    puts "DONE"
  end
end
This code works without any errors. I'm able to create tables and store values in my MySQL database. However, if I try to create/store roughly 10,000 or more values in my database with this code, I am no longer able to connect to my MySQL server after the script finishes running:
mySQL.rb:5:in `new': Lost connection to MySQL server at 'reading initial communication packet', system error: 102 (Mysql::Error)
from /Users/david/Desktop/Birta2/mySQL.rb:5:in `saveWordStats'
from run.rb:84:in `<main>'
Also, restarting the MySQL server doesn't help (only a restart of my entire Mac helps!).
After the error occurs I find this strange line in the MySQL log file:
2016-02-11T18:20:51.177054Z 0 [Warning] File Descriptor 1098 exceedeed FD_SETSIZE=1024
Is there any way to fix this error?
FD_SETSIZE is the maximum number of files you can have open at once. If you're using InnoDB, each mysqld process keeps one file open per table in the active database, so it's easy to exceed if you have a large number of tables or a large number of processes. You can change some settings in my.cnf to fix this.
table_open_cache is the number of tables MySQL will try to keep open at once:
table_open_cache = 1000
max_connections is the maximum number of simultaneous connections (mysqld processes) to allow:
max_connections = 25
If your database has N tables, it's best to keep min(N, table_open_cache) * max_connections less than FD_SETSIZE.
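To see how close the server is to the limit, these standard variables and status counters can help (shown as a sketch):
-- Configured limits
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'max_connections';
-- Current usage: files and tables the server holds open
SHOW GLOBAL STATUS LIKE 'Open_files';
SHOW GLOBAL STATUS LIKE 'Open_tables';
SHOW GLOBAL STATUS LIKE 'Opened_tables';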
We have a dedicated web server with a 50GB partition for MySQL.
In order to do some tuning on MySQL, I turned on the general_log variable using:
set global general_log = 1;
set global expire_logs_days = 30;
After 2 days the general_log.CSV file took up all the free space in the partition, making MySQL unavailable.
Our hosting company's tech support solved that by deleting the file.
Now I need to turn on general_log again, and I get this error:
Can't get stat of './mysql/general_log.CSV' (Errcode: 2)
What can I do to fix it?
Thanks
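One workaround, as a sketch rather than a confirmed fix for this exact setup: the error suggests the mysql.general_log CSV table lost its data file, so you can either recreate an empty general_log.CSV in the datadir (owned by the mysql user) or sidestep the table entirely by logging to a file instead:
-- Log to a file instead of the broken CSV-backed table
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/general.log'; -- hypothetical path
SET GLOBAL general_log = 'ON';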
SQL query:
INSERT INTO `lance_attachments` (`file_id`, `file_name`, `file_content`, `file_type`, `file_size`)
VALUES (19, 'P1010147.JPG', 0xffd8ffe1384545786966000049492a00080000000c000e010200200000009e0000000f01020018000000be0000001001020011000000d60000001201030001000000010000001a01050001000000ee0000001b01050001000000f60000002801030001000000020000003101020008000000fe00000032010200140000001e010000130203000100000002000000698704000100000026020000a5c407000401000032010000960400004f4c594d505553204449474954414c2043414d455241202020202020202020004f4c594d505553204f50544943414c20434f2e2c4c544400583230302c443536305a2c433335305a000000000000000048000000010000004800000001000000763735312d383000000000000000000000000000000000000000000000000000303030303a30303a30302030303a30303a3030005072696e74494d0030323530000014000100140014000200010000000300880000000700000000000800000000000900000000000a00000000000b00d00000000c00000000000d00000000000e00e80000000001010000000101ff00000002018300000003018300000004018300000005018300[...]
MySQL said:
#2006 - MySQL server has gone away
I cannot upload my database to my WAMP server; it shows this error.
I have already changed my maximum size and timeout settings. Help!
It looks like it choked on the size of the SQL statement, which exceeds some limit.
Try loading the data with a LOAD command instead of INSERT statements:
LOAD DATA INFILE 'somefile' INTO TABLE lance_attachments
If your data doesn't have all the columns, create a view on the table that has only the columns you have data for and load into the view.
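As a side note, a column list on LOAD DATA itself achieves the same thing without a view; a sketch, assuming a hypothetical tab-delimited file containing only these columns:
LOAD DATA INFILE '/tmp/attachments.dat'
INTO TABLE lance_attachments
FIELDS TERMINATED BY '\t'
(file_id, file_name, file_type, file_size);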
A very low default setting of max_allowed_packet could be the reason. Raising max_allowed_packet in my.cnf (under [mysqld]) to 8M or 16M usually fixes it:
[mysqld]
max_allowed_packet=16M
OR
If you have a query that is causing a timeout, you can set this variable by executing:
SET @@GLOBAL.wait_timeout=300;
SET @@LOCAL.wait_timeout=300; -- or for the current session only
where 300 is the maximum number of seconds you think the query could take.
I have a MySQL dump from a database that I am trying to move to a new DB server. When I try to import my SQL dump, I receive the following error:
MySQL Error 2006 (HY000) at line 406: MySQL server has gone away
I googled the problem, and most people fixed it by changing the value of wait_timeout. However, my current value is set to 28800 (8 hours), and the error appears in less than 8 seconds when I run the import.
I also tried setting the value of max_allowed_packet to 1073741824, but that did not fix the problem either.
Looking through the MySQL dump, there are quite a few blob columns, but the overall file size is only 6 MB.
Does anyone have any ideas about what else might be the problem?
Adding this answer for the benefit of future searchers, as it explains why increasing the packet size fixed the problem:
The situation is that if a client sends an SQL statement longer than the server's max_allowed_packet setting, the server will simply disconnect the client. The next query from the same client instance will then find that the 'MySQL server has gone away'.
... But it would of course be much preferable to have the 'got packet bigger' error [Error: 2020 (CR_NET_PACKET_TOO_LARGE)] returned if that is the problem.
Excerpted from, with thanks, peter_laursen's blog post.
On OS X 10.7 (Lion), I created a file, /etc/my.cnf, with the following contents:
[mysqld]
max_allowed_packet = 12000000
And then stopped the mysql server:
/usr/local/bin/mysql.server stop
When it automatically restarted, I was able to execute my inserts.
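To verify the change took effect, or to bump the value at runtime without a restart (existing sessions keep the old value; new connections pick up the global one), a sketch:
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024; -- 64MB, until the next restart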
Increasing max_allowed_packet to 12 MB (12000000) solved the problem for me when trying to import a 130 MB file.
Change it in the ini file, or under Options File / Networking in MySQL Workbench (a MySQL restart is required).
If you still get the error, try increasing it even more (100 MB). Just remember to decrease it when you're done.
1) Change the MySQL config file:
# /etc/mysql/my.cnf, [mysqld] section:
[mysqld]
key_buffer = 32M
max_allowed_packet = 32M
thread_stack = 512K
thread_cache_size = 64
2) Restart the MySQL daemon:
/etc/init.d/mysql restart
This should resolve your issues.