I need to dump a MySQL InnoDB database that consists of several tables. One table that causes problems has nearly 13 million rows. On a fresh install of XAMPP (v3.2.2) the dump process succeeded; after that, the dump process always failed with the error message "mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table gv_faktur_header_history at row: 2623629". At this point, here's the status:
Cannot insert any value (Error 2013: Lost connection to MySQL)
Cannot issue a "CHECK TABLE" command (Error 2013: Lost connection to MySQL)
Cannot alter this table (add column)
Can select data from this table
Can select row 2623629 (select * from table limit 2623629, 1)
Can run the "show table status" command
I repeated this process several times, like this:
Reinstalling XAMPP
Importing the database using this method:
> set global net_buffer_length = 1000000;
> set global max_allowed_packet = 1000000000;
> SET foreign_key_checks = 0;
> SET UNIQUE_CHECKS = 0;
> SET AUTOCOMMIT = 0;
> use db_name;
> source backup-file.sql
> SET foreign_key_checks = 1;
> SET UNIQUE_CHECKS = 1;
> SET AUTOCOMMIT = 1;
Dump the database w/o --skip-extended-insert (success)
Dump the database w/o --skip-extended-insert (failed)
Dump the database w/o --skip-extended-insert (failed)
mysqldump command :
mysqldump -u root -p --skip-extended-insert --max-allowed-packet=1G --net-buffer-length=32704 rent_scaff header_history > D:\dobol
Environment specifications:
Intel Core i5, 8th gen
8 GB RAM (3 GB unused, as seen in Task Manager)
512 GB SSD storage
mysqldump Ver 10.16 Distrib 10.1.10-MariaDB
Here's the my.ini configuration:
[client]
# password = your_password
port = 3306
socket = "C:/xampp/mysql/mysql.sock"
[mysqld]
port= 3306
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "C:/xampp/mysql/data"
pid_file = "mysql.pid"
key_buffer = 16M
max_allowed_packet = 1G
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
plugin_dir = "C:/xampp/mysql/lib/plugin/"
server-id = 1
innodb_data_home_dir = "C:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "C:/xampp/mysql/data"
innodb_buffer_pool_size = 1G
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 250M
innodb_log_buffer_size = 250M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 1G
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
MySQL log:
Server version: 10.1.10-MariaDB
key_buffer_size=16777216
read_buffer_size=262144
max_used_connections=2
max_threads=1001
thread_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 787099 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0x3eee2178
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
mysqld.exe!my_parameter_handler()
mysqld.exe!my_mb_ctype_mb()
mysqld.exe!??2Geometry##SAPAXIPAX#Z()
mysqld.exe!??2Geometry##SAPAXIPAX#Z()
mysqld.exe!?propagate_equal_fields#Item_func_expr_str_metadata##UAEPAVItem##PAVTHD##ABVContext#Value_source##PAVCOND_EQUAL###Z()
mysqld.exe!??2Geometry##SAPAXIPAX#Z()
mysqld.exe!??2Geometry##SAPAXIPAX#Z()
mysqld.exe!??2Geometry##SAPAXIPAX#Z()
mysqld.exe!??0Alter_table_prelocking_strategy##QAE#XZ()
mysqld.exe!?mysql_alter_table##YA_NPAVTHD##PAD1PAUHA_CREATE_INFO##PAUTABLE_LIST##PAVAlter_info##IPAUst_order##_N#Z()
mysqld.exe!?execute#Sql_cmd_alter_table##UAE_NPAVTHD###Z()
mysqld.exe!?mysql_execute_command##YAHPAVTHD###Z()
mysqld.exe!?mysql_parse##YAXPAVTHD##PADIPAVParser_state###Z()
mysqld.exe!?dispatch_command##YA_NW4enum_server_command##PAVTHD##PADI#Z()
mysqld.exe!?do_command##YA_NPAVTHD###Z()
mysqld.exe!?threadpool_process_request##YAHPAVTHD###Z()
mysqld.exe!?tp_end##YAXXZ()
KERNEL32.DLL!SetUserGeoName()
ntdll.dll!TpCheckTerminateWorker()
ntdll.dll!TpCallbackIndependent()
KERNEL32.DLL!BaseThreadInitThunk()
ntdll.dll!RtlGetAppContainerNamedObjectPath()
ntdll.dll!RtlGetAppContainerNamedObjectPath()
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Please help me figure out how to dump (back up) a large database, or at least how to dump it safely, so that if the process fails the table remains usable.
What is happening is that either one command is taking forever and timing out, or there is too much data to fit in the cache at one time.
I would try using a program such as HeidiSQL to do the backup of the database. This would allow the backup to run in stages instead of one giant request.
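If you prefer to stay with mysqldump, a rough command-line equivalent of "backing up in stages" is to dump the problem table in chunks with --where. This is only a sketch: it assumes the table has a numeric primary key column, here called id; adjust the column name, ranges, database and output paths to your own setup (the names below are taken from the mysqldump command in the question):
rem "id" is an assumed primary-key column; replace it with your table's real key
mysqldump -u root -p --skip-extended-insert --where="id BETWEEN 1 AND 5000000" rent_scaff header_history > D:\header_history_part1.sql
mysqldump -u root -p --skip-extended-insert --where="id BETWEEN 5000001 AND 10000000" rent_scaff header_history > D:\header_history_part2.sql
If one chunk fails, you only have to re-run that range instead of the whole 13-million-row dump.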
Here is an example of how to do this with HeidiSQL. Feel free to replace 127.0.0.1 with a different IP address if the SQL server is on a second computer, or adjust any other setting that differs in your setup.
You would start by setting up the connection:
HeidiSQL Session Manager setup
Then, click on OPEN
Right click on your database, located in the top left of the screen
Click EXPORT DATABASE AS SQL
First, we'll back up the structure of your database:
Click the checkbox on the left side
Click on SQL export on the right-side tab
Database -> Drop (unchecked), Create (checked)
Table -> Drop (unchecked), Create (checked)
Data -> No Data
Output -> Single .sql file
Filename -> Click the folder icon, pick a location and name the file something like DB_Structure.sql
Finally, click Export
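For reference, a rough command-line equivalent of this structure-only export would be something like the following (the database name rent_scaff is taken from the mysqldump command in the question; the output path is just an example):
rem --no-data dumps only the CREATE statements, no rows
mysqldump -u root -p --no-data rent_scaff > D:\DB_Structure.sql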
Next, we will back up the data from the database:
Follow the same steps as before, but in SQL export set:
Database -> Drop (uncheck), Create (uncheck)
Table -> Drop (uncheck), Create (uncheck)
Data -> Change from No Data to Insert
Max INSERT Size -> 1024
Output -> Change from Single .sql file to ZIP compressed .sql file
Filename -> Click the folder icon, Pick a location and name the file something like DB_Data.zip
Finally, click Export
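Again for reference, a rough command-line equivalent of this data-only export, under the same assumptions about the database name and output path:
rem --no-create-info dumps only the rows, no CREATE statements
mysqldump -u root -p --no-create-info rent_scaff > D:\DB_Data.sql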
This is how you would use those backup files:
Click on the tab named ► Query
Press Ctrl + O
In the Open File dialog box, navigate to your backup SQL file for the structure
Then click Open
Next, press F9 to run the SQL commands
When this is finished, repeat the same process for the data SQL backup file.
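If you would rather restore from the command line instead of the Query tab, the equivalent is roughly this (assuming the file names used above, and that DB_Data.zip has been unzipped first):
rem load the structure first, then the data
mysql -u root -p rent_scaff < D:\DB_Structure.sql
mysql -u root -p rent_scaff < D:\DB_Data.sql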
Related
I have looked at many questions similar to this one, but I can't seem to find the answer. I would like to set up the slow query log for my MySQL database. I have seen many answers saying I should access the MySQL command-line tool. I am not sure exactly how to find this tool, but I tried accessing it by going to:
c:/xampp/mysql/bin/mysql -u root -p -h localhost
But here I get MariaDB, which seems to be different from what the other answers/tutorials I have seen describe. Typing in:
set log_slow_queries = ON;
gives me the error
ERROR 1193 (HY000): Unknown system variable 'log_slow_queries'
SET GLOBAL slow_query_log=1;
The Slow Query Log consists of log events for queries taking longer than long_query_time seconds to finish. For instance, longer than 10 seconds to complete. To see the time threshold currently set, issue the following:
SELECT @@long_query_time;
+-------------------+
| @@long_query_time |
+-------------------+
|         10.000000 |
+-------------------+
It can be set as a GLOBAL variable, in the my.cnf or my.ini file, or per connection, though that is unusual. The value is typically set somewhere between 0 and 10 (seconds). What value should you use?
10 is so high as to be almost useless;
2 is a compromise;
0.5 and other fractions are possible;
0 captures everything; this could fill up disk dangerously fast, but can be very useful.
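For example, to log anything slower than two seconds you could set the threshold like this (the value 2 is just an illustration; the GLOBAL setting only applies to connections opened after the change, and you would add long_query_time = 2 under [mysqld] in my.ini to make it survive a restart):
SET GLOBAL long_query_time = 2;   -- example cutoff: log anything slower than 2 seconds
SELECT @@GLOBAL.long_query_time;  -- verify the new global value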
The capturing of slow queries is either turned on or off. And the file logged to is also specified. The below captures these concepts:
SELECT @@slow_query_log; -- Is capture currently active? (1=On, 0=Off)
SELECT @@slow_query_log_file; -- filename for capture. Resides in datadir
SELECT @@datadir; -- to see current value of the location for capture file
SET GLOBAL slow_query_log=0; -- Turn Off
-- make a backup of the Slow Query Log capture file. Then delete it.
SET GLOBAL slow_query_log=1; -- Turn it back On (new empty file is created)
For more information, please see the MySQL Manual Page The Slow Query Log
Note: The above information on turning the slow log on/off changed in 5.6(?); older versions had another mechanism.
The "best" way to see what is slowing down your system:
long_query_time=...
turn on the slowlog
run for a few hours
turn off the slowlog (or raise the cutoff)
run pt-query-digest to find the 'worst' couple of queries, or mysqldumpslow -s t (see the sketch after this list)
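A minimal sketch of those steps; the slow log file name below is a placeholder, so substitute the path reported by SELECT @@slow_query_log_file:
SET GLOBAL long_query_time = 2;  -- pick your cutoff
SET GLOBAL slow_query_log = 1;   -- start capturing
-- ... let the server run for a few hours under normal load ...
SET GLOBAL slow_query_log = 0;   -- stop capturing
Then, from a shell, summarize the captured file sorted by query time:
mysqldumpslow -s t C:/xampp/mysql/data/your-host-slow.log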
Go to the XAMPP control panel, click the Config button for MySQL, select my.ini, and then add these lines to the my.ini file:
slow_query_log = 1
slow-query-log-file=/path/of/the/log/file.log
I put the above two lines under log_error = "mysql_error.log". The modified part of the my.ini file should look like this:
# The MySQL server
[mysqld]
port= 3306
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "C:/xampp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
slow_query_log = 1
slow-query-log-file=/var/log/mysql-slow.log
Then restart the MySQL server from the XAMPP control panel. Now slow_query_log should be enabled; you can confirm it by running the following command in the MySQL shell:
show variables like '%slow%';
It might be obvious, but it took me a while to realize my mistake: in the my.ini file you should put the slow_query_log settings in the [mysqld] group, not simply at the end of the file.
I am trying to import an old vBulletin database but always get this error:
ERROR 1114 (HY000) at line 4734: The table 'session' is full
The database backup is 2 GB and my server has 8 GB of RAM. I tried adding innodb_data_file_path=ibdata1:10M:autoextend and innodb_file_per_table to my.cnf, but that did not solve my problem.
My complete my.cnf:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
query_cache_limit = 1M
query_cache_size = 16M
expire_logs_days = 10
max_binlog_size = 100M
innodb_data_file_path=ibdata1:10M:autoextend
innodb_file_per_table
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
I am very familiar with the vBulletin database schema and tables. I have converted dozens of VB sites to the InnoDB storage engine.
The reason you are getting that error is that the session table is a MEMORY table. The session table(s) must have been quite full when you took the backup you are trying to restore. That, coupled with the I/O overhead during the import, is filling up your RAM. However, for VB to work properly, you do not actually need this table to be a MEMORY table.
To get around this, you can convert your session table to InnoDB.
I would open the .sql file in a text editor (if possible) and change ENGINE = MEMORY to InnoDB for your session table, or use string replacement:
sed -i 's/MEMORY/INNODB/g' yourfilename.sql
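Once the import finishes, you can confirm the engine change with a query like this (session is the table name from vBulletin's schema; run it with the imported database selected):
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE()   -- the currently selected database
  AND table_name = 'session';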
I am importing data into an InnoDB table with a huge number of records. The mysqlimport command reports a successful import, as shown below. I am importing the data into an empty table.
C:\xampp\mysql\bin>mysqlimport.exe --fields-terminated-by=','
--columns="invoice_id,customername,department,city" --local -u root -p invoice11954 "c:\11954\invoice.txt"
Enter password:
invoice11954.invoice: Records: 1540252 Deleted: 0 Skipped: 0 Warnings: 0
But when I check the count of records in the invoice table, I see only 1534408 records. I have truncated the table and reimported again and again; there is always a mismatch in the record count, and the difference is not constant.
Is it something to do with my configuration? I have set high values in the configuration but still no luck.
# The MySQL server
[mysqld]
port= 3306
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "C:/xampp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 32M
max_allowed_packet = 32M
sort_buffer_size = 1024K
net_buffer_length = 16K
read_buffer_size = 512K
read_rnd_buffer_size = 1024K
myisam_sort_buffer_size = 16M
log_error = "mysql_error.log"
[mysqldump]
quick
max_allowed_packet = 128M
My configuration doesn't have a [mysqlimport] section.
Can someone help me figure out what's going wrong here?
How are you checking the "count of records" in the invoices table? Obviously, we're assuming that you're executing a SQL statement like this:
SELECT COUNT(*) FROM invoice ;
And that you aren't relying on the estimated number of rows in information_schema.tables to give you a precise count.
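That estimate would come from a query like the one below, which for InnoDB tables is only an approximation and can legitimately differ from the real row count (the schema and table names are taken from the mysqlimport output above):
SELECT table_rows   -- estimated row count, not exact for InnoDB
FROM information_schema.tables
WHERE table_schema = 'invoice11954' AND table_name = 'invoice';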
There's not enough information in the question, in its current form, to explain the behavior you are observing. As I noted in my comment, it's not clear whether the table has any unique constraints, or whether you've specified the --replace or --ignore options, because we don't see the command you are running, the [mysqlimport] section of the my.cnf file, or how you are checking the number of rows in the table after it's loaded.
I don't know if this problem is specific to my setup, but when I add the line
log = /var/log/mysql.log
to the [mysqld] section of a copied my-large.cnf file and try to restart the MySQL server, I get the error
Starting MySQL. ERROR! The server quit without updating PID file (/var/lib/mysql/centos-server.pid).
I've created the file /var/log/mysql.log, set its owner and group to mysql, and set the permissions on /var/log to 777 (for the moment).
I'm on CentOS, with MySQL 5.6.5-m8 (the development release).
This is a snippet of the my.cnf file
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer_size = 256M
max_allowed_packet = 1M
table_open_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
log = /var/log/mysql.log
It seems the log directive is outdated and my.cnf now requires the directive
general-log = 1
If specified like this, the log file will be created in the default location (which on CentOS is /var/lib/mysql/centos-server.log).
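A minimal sketch of the relevant [mysqld] lines, assuming you still want the general query log written to /var/log/mysql.log rather than the default location (the file must be writable by the mysql user, as you have already set up):
[mysqld]
general-log = 1
general-log-file = /var/log/mysql.log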
I have a table that has 38,406,168 rows and, according to the size shown in phpMyAdmin, is 4.5 GB. I want to see the last row of the table. Unfortunately I couldn't use select * from ... limit 38406166, 1, and I couldn't even use select count(*) from ....
I changed my.ini in the WAMP server, but I still get a "MySQL server has gone away" error while attempting to execute one of these queries. BTW, I couldn't even set an index on ID to make these operations quicker.
My last attempt was to export the table to look at the last row; however, that only gives me 123 MB of the file.
What should I do? Please help me. The computer's specs are 2.93 GHz and 3.50 GB of RAM.
Here is my my.ini file:
# The MySQL server
[wampmysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 2000M
table_cache = 4096
sort_buffer_size = 2000M
net_buffer_length = 8K
read_buffer_size = 2000M
read_rnd_buffer_size = 2000M
myisam_sort_buffer_size = 2000M
basedir=c:/wamp/bin/mysql/mysql5.1.36
log-error=c:/wamp/logs/mysql.log
datadir=c:/wamp/bin/mysql/mysql5.1.36/data
(.. these parts are deleted, since there is nothing to set as value)
# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = C:\mysql\data/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = C:\mysql\data/
#innodb_log_arch_dir = C:\mysql\data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 384M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 10M
#innodb_log_buffer_size = 64M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 180
[mysqldump]
quick
max_allowed_packet = 160M
Thank you so much for your help
I tried a lot of things and ended up with these two working solutions:
Simply mirror the database via MySQL's built-in master-slave replication (Google it, you'll find good tutorials) onto a simple backup server (most cheap hosting packages will work if they have SSH access).
Try http://www.mysqldumper.net/, the best tool I found to copy and split huge databases into 100 MB parts. This simple open source tool did everything that "professional" backup scripts couldn't do.
You will want to use the mysqldump command to do this. Here is what I do on Linux, but I think it will translate to Windows (I see that you're running WAMP).
mysqldump --opt --force -Q --user=[your_user] -p [database_name] > dump.sql
You may need to change directory to where mysqldump is located:
cd c:\path\to\mysql\bin
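Putting that together for the WAMP install shown in the my.ini above (the path comes from the basedir setting; your_database is a placeholder for the real database name), it would look something like:
rem your_database below is a placeholder; use your actual database name
cd c:\wamp\bin\mysql\mysql5.1.36\bin
mysqldump --opt --force -Q --user=root -p your_database > c:\dump.sql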