I/O operations are blocking MySQL transactions

Here we have an x64 Debian Lenny with MySQL 5.1.47 and some InnoDB databases on it. The ibdata files and other data are on the same file system (ext3). I've noticed that in some situations there are many processes in the MySQL process list which hang in the "freeing items" state. This happens when I do the following in a shell (file1 and file2 are about 2.5 GB each)
cat file1 file2 >new_file
or execute the following SQL statement
SELECT 'name' AS col UNION SELECT col FROM db_name.table_name INTO OUTFILE '/var/xxx/yyy'
When one of these two things is running, I can see many MySQL processes stuck endlessly in the "freeing items" state (I'm using innotop). When I kill the shell process (or the SQL statement), these blocked transactions disappear.
On the internet I found some hints to disable the InnoDB adaptive hash index and the query cache, but this doesn't help. Has anyone had the same experience?
Regards

We've found turning on the deadline I/O scheduler to be of great help in keeping our DB from starving during high external load on the file system. Try
echo deadline > /sys/block/sda/queue/scheduler
And test if the problem becomes smaller. (Replace sda with the device your DB is on.)
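A quick way to check which scheduler is active, plus a hedged sketch of making the change survive a reboot (the GRUB details below are assumptions and differ between GRUB legacy and GRUB 2):
cat /sys/block/sda/queue/scheduler              # the active scheduler is shown in [brackets]
echo deadline > /sys/block/sda/queue/scheduler  # takes effect immediately, lost on reboot
# To persist it, add "elevator=deadline" to the kernel command line in your GRUB
# config (e.g. /boot/grub/menu.lst on GRUB legacy, or GRUB_CMDLINE_LINUX in
# /etc/default/grub followed by update-grub on GRUB 2).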

Just stepping back a bit, have you done some basic InnoDB tuning? The MySQL defaults can be pretty limiting. A good overview:
http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
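If you haven't changed the defaults at all, checking and raising a few key settings is the usual first step. The values below are only illustrative placeholders, not recommendations for your particular box:
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
          SHOW VARIABLES LIKE 'innodb_log_file_size';
          SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
# Illustrative my.cnf entries under [mysqld] -- sizes are placeholders:
#   innodb_buffer_pool_size = 2G        # most of the RAM on a dedicated DB server
#   innodb_log_file_size    = 256M      # larger redo logs smooth out write bursts
#   innodb_flush_log_at_trx_commit = 2  # relaxes durability in exchange for fewer fsyncs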

The interesting part is:
all databases on this server are kept in one InnoDB file (innodb_file_per_table is disabled)
we've written a small shell script that inserts 10 rows per second into a new table in a new database on this server
then we've copied some big files via "cat" (as described above) in parallel...
and: nothing happened, no locks, no endless "freeing items"
so we've changed the shell script to insert the rows into a new table in the existing database (the one that causes these problems) and copied the files again
and: the locks are back! huh?
last try: we've copied the "buggy" database into a new database on the same server (with full structure and data) and started the shell script again
and: no "freeing items" processes!?
I don't understand this behavior: inserting into the old database makes these "freeing items" processes appear, but inserting into a new database (a full copy of the old one) is fine?
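For anyone who wants to reproduce this, here is a rough sketch of such an insert script; the database and table names are placeholders, and it assumes a mysql client that can connect without prompting:
#!/bin/sh
# Insert roughly 10 rows per second while the big file copy runs in another shell.
mysql -e "CREATE DATABASE IF NOT EXISTS ins_test;
          CREATE TABLE IF NOT EXISTS ins_test.t
            (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(32)) ENGINE=InnoDB;"
while true; do
    for i in 1 2 3 4 5 6 7 8 9 10; do
        mysql -e "INSERT INTO ins_test.t (v) VALUES ('x');"
    done
    sleep 1
done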

Related

MySQL/MariaDB 'recycle bin'

I'm using MariaDB and I'm wondering if there is a way to install a 'recycle bin' on my server, so that if someone deletes a table or anything, it gets shifted to the recycle bin and restoring it is easy.
I'm not talking about mounting things to restore it and all that stuff, but literally a 'safe place' where it gets stored (I have more than enough space) until I decide to delete it or just keep it there for 24 hours.
Any thoughts?
No such feature exists. http://bugs.mysql.com takes "feature requests".
Such a feature would necessarily involve MySQL; it cannot be done entirely in the OS's filesystem. This is because a running MySQL server caches information in RAM that the FS does not know about, and because the information about a table/db/proc/trigger/etc is not located entirely in a single file; extra info exists in other, more general, files.
With MyISAM, your goal was partially possible in the FS. A MyISAM table was composed of 3 files: .frm, .MYD, .MYI. Still, MySQL would need to flush something to forget that it knows about the table before the FS could move the 3 files somewhere else. MyISAM is going away, so don't even think about using that Engine.
In InnoDB, a table is composed of a .ibd file (if using innodb_file_per_table), plus a .frm file, plus some info in the communal ibdata1 file. If the table is PARTITIONed, the layout is more complex.
In version 8.0, most of the previous paragraph will become incorrect -- a major change is occurring.
"Transactions" are a way of undoing writes to a table...
BEGIN;
INSERT/UPDATE/DELETE/etc...
if ( change-mind )
then ROLLBACK;
else COMMIT;
Effectively, the undo log acts as a recycle bin -- but only at the record level, and only until you execute COMMIT.
MySQL 8.0 will add the ability to have DDL statements (e.g., DROP TABLE) in a transaction. But, again, it is only until COMMIT.
Think of COMMIT as flushing the recycle bin.
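A small runnable illustration of that record-level "recycle bin"; the database and table names are made up:
mysql <<'SQL'
CREATE DATABASE IF NOT EXISTS recycle_demo;
USE recycle_demo;
CREATE TABLE demo (id INT PRIMARY KEY, txt VARCHAR(20)) ENGINE=InnoDB;
INSERT INTO demo VALUES (1, 'keep me');
BEGIN;
DELETE FROM demo;           -- rows go into the undo log, not away
SELECT COUNT(*) FROM demo;  -- 0 inside this transaction
ROLLBACK;                   -- "restore from the recycle bin"
SELECT COUNT(*) FROM demo;  -- 1 again; a COMMIT would have emptied the bin
SQL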

MySQL can take more than an hour to start

I have a MySQL (Percona) 5.7 instance with over 1 million tables.
When I start the database, it can take more than an hour to start.
The error log doesn't show anything, but when I traced mysqld_safe, I found out that MySQL is doing a stat() on every file in the DB.
Any idea why this may happen?
Also, please no suggestion to fix my schema, this is a blackbox.
Thanks
This turned out to be 2 issues (other than millions of tables)!
When MySQL starts and a crash recovery is needed, as of 5.7.17 it needs to traverse your datadir to build its dictionary. This will be lifted in future releases (8.0), as MySQL will have its own catalog and will not rely on datadir content anymore. The doc states that this isn't done anymore; that's true and false. It doesn't read the first page of the ibd files, but it does a file stat(). Filed Bug
Once (1) is finished, it starts a new phase, "Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine." That of course opens all the files again. Use disable-partition-engine-check if you think you don't need it. Doc
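If you decide you don't need the partition check, here is a sketch of turning it off; the config file path and service name are assumptions for a typical Linux install:
# The option has to sit under the [mysqld] section; the sed inserts it right after
# that header, then the server is restarted.
sudo sed -i '/^\[mysqld\]/a disable-partition-engine-check' /etc/my.cnf
sudo service mysql restart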
All this can be observed using sysdig, a very powerful and handy dtrace-like tool.
sysdig proc.name=mysqld | grep "open fd="
Ok, now, it's time to reduce the number of files.

MySQL stops writing, but keeps reading

I have a Magento store running on Debian with LAMP; the server is a VPS with 1 GB RAM and a 1-core processor.
MySQL randomly, but often, stops writing new data to tables; Magento doesn't show any error and says that it successfully saved the data. MySQL can read the tables without problems and the website keeps running okay, it just doesn't save any new data.
If I restart MySQL it starts saving new data again, then some time later (randomly, I think; I couldn't relate it to any action) it stops writing again; it can be days or hours.
I've turned on the query log, but I'm not sure what to look for in it.
I found this recurring error after MySQL stops writing: INSERT INTO index_process_event (process_id,event_id,status) VALUES ('7', '10453', 'error') ON DUPLICATE KEY UPDATE status = VALUES(status);
I've tried reindexing the whole process table as suggested by Henry, but with no success.
After reindexing, the event_id changed.
I don't believe the problem is low RAM; the website only gets around 200 sessions/day, hardly more than 2 users online at the same time.
Thanks, I appreciate any help.
Try to see if there is any space left on your storage.
Also check the system log in addition to the MySQL log.
Grep/find for any line that has the string "error".
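Something like this, with typical Debian paths (adjust them for your box):
df -h /var/lib/mysql                      # free space on the MySQL data volume
df -i /var/lib/mysql                      # inodes can run out too
grep -i error /var/log/syslog             # system log
grep -i error /var/log/mysql/error.log    # MySQL error log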

MySQL MyISAM table crashes with lots of inserts and selects, what solution should I pick?

I have the following situation:
A MySQL MyISAM database on an Amazon EC2 instance with PHP on an Apache webserver. We need to store incoming packages as JSON in MySQL. For this I use a staging database where a cronjob each minute moves old data to another table (named stage2) with a WHERE DateTime > 'date - 2min' query.
The stage1 table holds only current information and contains about 35k rows normally, and up to 100k when it's busy. We can reach 50k new rows a minute, which should be about 10k insert queries. The insert looks like this:
INSERT DELAYED IGNORE INTO stage1 VALUES ( ... ), (....), (....), (....), (...)
Then we have 4 scripts running about every 10 seconds, doing the following:
grab the max RowID from stage1 (the primary key)
export the data from the previous max RowID up to that RowID
a) 2 scripts are in bash and use the mysql command-line export method
b) 1 script is in Node.js and uses the export method with INTO OUTFILE
c) 1 script is in PHP, which uses a default MySQL SELECT statement and loops through each row
send the data to an external client
write the last send time and last RowID to a MySQL table so it knows where to continue next time.
Then we have one cronjob each minute moving old data from stage1 to stage2.
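For context, the move job is roughly the following; the column name and database name are placeholders, since the exact query isn't shown:
# Copy rows older than two minutes from stage1 into stage2, then delete them from
# stage1. "created_at" is an assumed name for the DateTime column.
mysql my_db <<'SQL'
SET @cutoff = NOW() - INTERVAL 2 MINUTE;
INSERT INTO stage2
  SELECT * FROM stage1 WHERE created_at < @cutoff;
DELETE FROM stage1 WHERE created_at < @cutoff;
SQL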
So everything worked well for a long time, but now we are growing in users, and during our rush hours the stage1 table crashes now and then. We can easily repair it, but that's not the right way, because we will be down for some time. Memory and CPU are OK during the rush hours, but when stage1 crashes, everything crashes.
Also worth saying: I don't care if I'm missing rows because of a failure, so I don't need any special backup plans in case something goes wrong.
What I did so far:
Added DELAYED and IGNORE to the insert statements.
Tried switching to InnoDB, but this was even worse, mainly, I think, because of the large amount of memory it needed. My EC2 instance is currently a t2.medium, which has 4 GB memory and 2 vCPUs with burst capacity. Following https://dba.stackexchange.com/questions/27328/how-large-should-be-mysql-innodb-buffer-pool-size and running this query:
SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS
FROM (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
      FROM information_schema.tables
      WHERE engine='InnoDB') A;
it returned 11 GB. I tried 3 GB, which is the max for my instance (80%), and since it was more unstable I switched every table back to MyISAM yesterday.
Recreated the stage1 table structure.
What are my limitations?
I cannot change all 4 scripts to one export because the output to the client differs; for example, some use JSON, others XML.
Options I'm considering
An m3.xlarge instance with 15 GB memory is 5 times more expensive, but if this is needed I'm willing to pay for it. Then switch to InnoDB again and see if it's stable?
Can I just move stage1 to InnoDB and run it with a 3 GB buffer pool size, so the rest stays MyISAM?
Try doing it with a NoSQL database or an in-memory database. Would that work?
Queue the packages in memory and have the 4 scripts read the data from memory, then save everything to MySQL later when done. Is there some kind of tool for this?
Move stage1 to an RDS instance with InnoDB.
I'd love to get some advice and help on this! Perhaps I'm missing the easy answer? Or which options should I not consider?
Thanks,
Sjoerd Perfors
Today we fixed these issues with the following setup:
An AWS load balancer going to a T2.Small "Worker" instance, where Apache and PHP handle the request and send it to an EC2 MySQL instance called the "Main".
When the CPU of the T2.Small instance is above 50%, new instances are automatically launched and connected to the load balancer.
The "Main" EC2 instance runs MySQL with InnoDB.
Everything updated to Apache 2.4 and PHP 5.5 with performance updates.
Fixed one script so it acts a lot faster.
InnoDB now has 6 GB.
Things we tried that didn't work:
- Setting up DynamoDB, but sending to this DB took almost 5 seconds.
Things we are considering:
- Removing the stage2 database and doing backups directly from stage1. It seems that having this many rows isn't bad for performance.

Table files transferred between servers flag the table as crashed

Work has a web site that uses large data sets, load balanced between two MySQL 5.6.16-64.2 servers using MyISAM, running on Linux (2.6.32-358.el6.x86_64 GNU/Linux). This data is updated hourly from a text-based file set received from an MS-SQL database. To avoid disrupting reads on the web site, and at the same time make sure the updates don't take too long, the following process was put in place:
Have the data on a third Linux box (used only for data update processing), update the different data tables as needed, move a copy of the physical table files to the production servers under a temporary name, and then do a table swap with MySQL's RENAME TABLE.
But every time, the table (under the temporary name) is seen by the destination MySQL servers as crashed and requires repair. The repair takes too long, so we cannot force a repair before doing the table swap.
The processing is programmed in Ruby 1.8.7, with a thread for each server (just as an FYI, this also happens when not using a thread, against a single server).
The steps to perform the file copy are as follows:
Use Net::SFTP to transfer the files to a destination folder that is not the database folder (done due to permissions). Code example of the file transfer for the main table files (if the table also has partition files, they are transferred separately and rcpFile is assigned differently to match the temporary name). For speed, the uploads run in parallel:
Net::SFTP.start(host_server, host_user, :password => host_pwd) do |sftp|
uploads = fileList.map { |f|
rcpFile = File.basename(f, File.extname(f)) + prcExt + File.extname(f)
sftp.upload(f, "/home/#{host_user}/#{rcpFile}")
}
uploads.each { |u| u.wait }
end
Then assign the mysql user as owner and group of the files, and move the files into the MySQL database folder, using Net::SSH to execute sudo shell commands:
Net::SSH.start(host_server, host_user, :port => host_port.to_i, :password => host_pwd) do |ssh|
doSSHCommand(ssh, "sudo sh -c 'chown mysql /home/#{host_user}/#{prcLocalFiles}'", host_pwd)
doSSHCommand(ssh, "sudo sh -c 'chgrp mysql /home/#{host_user}/#{prcLocalFiles}'", host_pwd)
doSSHCommand(ssh, "sudo sh -c 'mv /home/#{host_user}/#{prcLocalFiles} #{host_path}'", host_pwd)
end
The doSSHCommand method:
def doSSHCommand(ssh, cmd, pwd)
result = ""
ssh.open_channel do |channel|
channel.request_pty do |c, success|
raise "could not request pty" unless success
channel.exec "#{cmd}" do |c, success|
raise "could not execute command '#{cmd}'" unless success
channel.on_data do |c, data|
if (data[/\[sudo\]|Password/i]) then
channel.send_data "#{pwd}\n"
else
result += data unless data.nil?
end
end
end
end
end
ssh.loop
result
end
If done manually, by using scp to move the files over, doing the owner/group changes, and moving the files, the table never crashes. Checking the file sizes between scp and Net::SFTP shows no difference.
Other processing methods have been tried, but in our experience they take too long compared to the method described above. Does anyone have an idea why the tables are being marked as crashed, and whether there is a solution that avoids the table crash without having to do a table repair?
The tables are marked as crashed because you're probably hitting race conditions as you copy the files. That is, there are writes pending on the tables during the execution of your Ruby script, so the resulting copy is incomplete.
The safer way to copy MyISAM tables is to run the SQL commands FLUSH TABLES followed by FLUSH TABLES WITH READ LOCK first, to ensure that all pending changes are written to the table on disk, and then to block any further changes until you release the lock. Then perform your copy, and finally unlock the tables.
This does mean that no one can update the tables while you're copying them. That's the only way you can ensure you get uncorrupted files.
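A hedged sketch of that sequence, with placeholder database, table, and path names. The lock only holds while the session that took it stays open, so the copy is run from inside that same mysql session via the client's shell escape (run it as a user that can read the data directory):
mysql my_db <<'SQL'
FLUSH TABLES;
FLUSH TABLES WITH READ LOCK;
\! cp /var/lib/mysql/my_db/my_table.* /home/copyuser/staging/
UNLOCK TABLES;
SQL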
But I have to comment that it seems like you're reinventing MySQL replication. Is there any reason you're not using that? It could probably work faster, better, and more efficiently, incrementally and continually updating only the parts of the tables that have changed.
The issue was found and solved:
The process database had its table files copied from one of the production databases; it did not show as crashed on the process server, and there were no issues when querying and updating the data.
While searching the web, the following SO answer was found: MySQL table is marked as crashed
The guess was that when the tables were copied from production to the process server, the header info stayed the same and interfered when the files were copied back to the production servers during processing. So the tables were repaired on the process server, and a few tests were run on our staging environment, where the issue was also seen. Sure enough, that corrected the issue.
So the final solution was to repair the tables once on the process server before having the process script run hourly.
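The one-time repair itself can be done with either of the following (database and table names are placeholders):
mysqlcheck --repair my_db my_table
mysql -e "REPAIR TABLE my_db.my_table;"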
I see you've already found an answer, but two things struck me about this question.
One, you should look at rsync, which gives you many more options, not the least of which is a speedier transfer, and which may better suit this problem. File transfer between servers is basically why rsync exists.
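For example, a rough rsync equivalent of the SFTP upload above, with a placeholder path and host:
# -a preserves ownership, permissions, and timestamps, -z compresses in transit,
# --partial lets an interrupted transfer resume instead of starting over.
rsync -az --partial /data/stage/my_table.* copyuser@prod-db:/home/copyuser/staging/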
Second, and I'm not trying to re-engineer your system, but you may have outgrown MySQL. It may not be the best fit for this problem. This problem might be better served by Riak, where you have multiple nodes, or Mongo, where you can deal with large files and have multiple nodes. Just two thoughts I had while reading your question.