I have a MySQL table named requests. It's growing very fast; it currently has 3 million rows. The table engine is InnoDB.
A few days ago, I got this error:
ERROR 1114 (HY000): The table is full
I've resolved the problem (temporarily, I guess) by adding this to the MySQL configuration:
innodb_data_file_path = ibdata1:10M:autoextend
But my server still goes down sometimes, and I have to restart it to make it available again. Recently I emptied that table, and the downtime stopped.
But it will be full again soon. Any idea how I can fix this issue? Should I use another engine for that table? Should I use a compressed row format for it? Or what?
Note that I sometimes need to SELECT from that table, and it has one index on a column (in addition to the PK).
You are limited by disk space. You must keep an eye on that and take action before you get to "table full".
If you store the data in a MyISAM table, you can go twice as long before getting "table full". If you simply write to a disk file (no database, no queries, etc.), you can squeeze a little more in before "disk full".
Do you need to "purge" old data? That way you could continue to receive new data without ever hitting "table full". The best way to do that is via InnoDB and PARTITION BY RANGE(TO_DAYS(...)). If you purge anything over a month old, use daily partitions. For, say, 90 days, use weekly partitions. More discussion: http://mysql.rjweb.org/doc.php/partitionmaint
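As a minimal sketch, assuming a DATETIME column named created_at (the column name, partition names, and dates are illustrative):

ALTER TABLE requests
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        -- note: the partition column must be part of every unique key, including the PK
        PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
        PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
        PARTITION pfuture   VALUES LESS THAN (MAXVALUE)
    );
-- Purging a day then becomes an almost-instant metadata operation:
ALTER TABLE requests DROP PARTITION p20240101;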
What will you do with the data? Analyze it? Search it? What SQL queries do you envision? Answer those; there could be other tips.
Hi everyone! I'm trying to avoid my database breaking in my phpBB 3.1 forum. It was corrupted twice this month.
So I have two questions:
1) Is it safe to convert MyISAM to InnoDB? I mean, will extensions work fine? Will the forum still work after updating to the next version?
2) How can I avoid database corruption?
p.s.
I also posted this question here:
https://www.phpbb.com/community/viewtopic.php?f=466&t=2436326
I'll venture a guess. You had a power failure, and when it came back up, MySQL was complaining that some index on some table was corrupted? And that table was MyISAM?
Use myisamchk to repair the tables.
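For example, with the server stopped (the table name and datadir path here are assumptions):

myisamchk --recover /var/lib/mysql/phpbb/phpbb_posts.MYI

Or, from SQL while the server is running:

CHECK TABLE phpbb_posts;
REPAIR TABLE phpbb_posts;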
Review the gotchas in http://mysql.rjweb.org/doc.php/myisam2innodb to see if conversion to InnoDB will add new woes. There probably won't be any. A 2-part index with AUTO_INCREMENT as the second column is about the only MyISAM feature not implemented in InnoDB. Also, if you have too old a version of MySQL, InnoDB may not yet have FULLTEXT indexes (if you need them).
Change my.cnf: key_buffer_size = 20M and innodb_buffer_pool_size equal to about half of available memory.
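For example, in my.cnf (the 8G figure assumes a server with about 16 GB of RAM; adjust to roughly half of yours):

[mysqld]
key_buffer_size = 20M
innodb_buffer_pool_size = 8G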
ALTER TABLE xx ENGINE=InnoDB; for each table xx.
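If there are many tables, you can let information_schema generate the statements for you (a sketch; replace phpbb with your actual schema name):

SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema = 'phpbb';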
I think (but am not sure) that each update/delete/insert marks the table as possibly corrupt. It writes the changes, but does not clear the mark. When mysqld shuts down cleanly, everything is flushed to disk and these flags are cleared. When mysqld comes back up, it complains about the flags that did not get cleared. So...
Whether or not an index is marked as corrupt depends solely on whether you modified that index and crashed. (Every table has some index, yes?)
Normally, MySQL manages to flush changes to disk before a crash. Only occasionally does the crash happen at a time when the index will really be corrupt. There is a "quick" mode on the repair that simply clears the flag -- you could try that. But if you ever get a mysterious "can't find record" when you know the record exists, you'd better REPAIR the table.
I have largish (InnoDB) tables in a database; apparently the users are capable of making SELECTs with JOINs that result in temporary, large (and thus on-disk) tables. Sometimes, those are so large that they exhaust disk space, leading to all sorts of weird issues.
Is there a way to limit the maximum size of an on-disk temporary table, so that it doesn't overgrow the disk? tmp_table_size only applies to in-memory tables, despite the name. I haven't found anything relevant in the documentation.
There's no option for this in MariaDB and MySQL.
I ran into the same issue as you some months ago. I searched a lot, and I finally partially solved it by creating a special storage area on the NAS for temporary datasets.
Create a folder on your NAS, or a partition on an internal HDD; it will be, by definition, limited in size. Then mount it and, in the MySQL ini, assign the temporary storage to that drive (choose Linux or Windows):
tmpdir="/mnt/DBtmp/"
tmpdir="T:\"
The MySQL service must be restarted after this change.
With this approach, once the drive is full, you still have "weird issues" with on-disk queries, but the other issues are gone.
There was a discussion about an option disk-tmp-table-size, but it looks like the commit did not make it through review or got lost for some other reason (at least the option does not exist in the current code base anymore).
I guess your next best try (besides increasing storage) is to tune MySQL not to create on-disk temp tables in the first place. There are some tips for this on DBA Stack Exchange. Another attempt could be to create a ramdisk for the storage of the "on-disk" temp tables, if you have enough RAM and only lack disk storage.
While it does not answer the question for MySQL, MariaDB has tmp_disk_table_size and the potentially useful max_join_size setting. However, tmp_disk_table_size only applies to MyISAM or Aria temp tables, not InnoDB ones. Also, max_join_size works only on the estimated row count of the join, not the actual row count. On the bright side, the error is issued almost immediately.
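For illustration, something like this on MariaDB (the values are arbitrary caps; check the defaults and behavior for your version):

SET GLOBAL tmp_disk_table_size = 10737418240;  -- ~10 GB cap per on-disk temporary table
SET SESSION max_join_size = 1000000000;        -- error out if the estimated examined rows exceed this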
I've recently been thrust into the position of DB admin for our server, so I'm having to learn as I go. We recently found that one of our tables had maxed out its id column and needs to be migrated to BIGINT.
This is for an InnoDB table with roughly 301GB of data. We are running MySQL version 5.5.38. The command I'm running to migrate the table is:
ALTER TABLE tb_name CHANGE id id BIGINT NOT NULL;
I kicked off the migration and we are now 18 hours in, but I'm not seeing the disk space on the server change at all, which makes me think nothing is happening. We have plenty of memory, so no concern there, but it still shows the following state when I run "show processlist;":
copy to tmp table
Does anyone have any ideas or know what I'm doing incorrectly? Please ask if you need more information.
Yes, it will take a looooong time. The disks are probably spinning as fast as they can. (SSDs employ faster hamsters.)
You can kill the ALTER, since all it is doing is, as it says, "copying to tmp table"; it is not until the copy finishes that it renames the tmp table to be the real table and drops the old copy, so the original table is untouched in the meantime.
I hope you had innodb_file_per_table = ON when you started the ALTER. Else it will be expanding ibdata1, which won't shrink afterwards.
pt-online-schema-change is an alternative. It will still take a loooooong time (with one extra 'o' because it will be slightly slower). It will do the job without blocking other activity.
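For reference, the invocation might look something like this (a sketch; connection options are omitted, and you should start with a dry run):

pt-online-schema-change --alter "MODIFY id BIGINT NOT NULL" D=mydb,t=tb_name --dry-run
pt-online-schema-change --alter "MODIFY id BIGINT NOT NULL" D=mydb,t=tb_name --execute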
This might have been a good time to check all the columns and indexes in the table:
Could some INTs be turned into MEDIUMINT or something smaller?
Are some of the INDEXes unused?
How about normalizing some of the VARCHARs?
Maybe even PARTITIONing (but not without a good reason)? Time-series data is a typical Data Warehousing use case.
Summarize the data, and toss at least the older data?
If you would like further guidance, please provide SHOW CREATE TABLE.
Looking for some help and advice please from Super Guru MySQL/PHP pros who can spare a moment of their time.
I have a web application in PHP/MySQL which has grown over the years and gets a lot of searches. It's hitting bottlenecks now when the various daily data dumps of new rows get processed using MySQL LOAD DATA INFILE.
It's a large MyISAM table with about 1.5 million rows, and all the SELECT queries run against it. When these take place during the LOAD DATA INFILE of about 600k rows (and the deletion of outdated data), they just get backed up and take 30+ minutes to be freed up, making any of those searches fruitless.
I need to come up with a way to get that table updated while retaining the ability to provide SELECT results in a reasonable timeframe.
I'm completely out of ideas and have not been able to come up with a solution myself, as it's the first time I've encountered this sort of issue.
Any helpful advice, solutions or pointers from similar past experiences would be greatly appreciated as I would love to learn to resolve this sort of problem.
Many thanks everyone for your time! J
You can use the CONCURRENT keyword for LOAD DATA INFILE. This way, while you load the data, the table is still able to serve SELECTs.
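For example (a sketch; the file path, table name, and delimiters are assumptions):

LOAD DATA CONCURRENT INFILE '/tmp/daily_dump.csv'
INTO TABLE search_data
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';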
Concerning the delete, this is more complicated. I would personally add a status INT(1) column that defines whether a row is active or deleted, and then partition the table with a rule based on that status column.
This way, it will be easier to delete all rows where status=0 :P I haven't tested this last solution; I may do that in the near future.
The CONCURRENT keyword will only work if your table is optimized: if the data file contains any free space (holes left by deletes), LOAD DATA INFILE will lock the table.
MyISAM doesn't support row-level locking, so operations like mysqldump are forced to lock the entire table to guarantee a consistent dump. Your only practical options are to switch to another table type (like InnoDB) that supports row-level locking, and/or split your dump up into smaller pieces. The small dumps will still lock the table while they're dumping/reloading, but the lock periods will be shorter.
A hairier option would be to have "live" and "backup" tables. Do the dump/load operations on the backup table. When they're complete, swap it out for the live table (rename the tables, or have your code dynamically change which table it's using). If you can live with a short window of potentially stale data, this could be a better option.
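The swap itself can be done atomically with a single RENAME (table names here are illustrative):

RENAME TABLE search_data TO search_data_old,
             search_data_new TO search_data;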
You should switch your table storage engine from MyISAM to InnoDB. InnoDB provides row-locking (as opposed to MyISAM's table-locking) meaning while one query is busy updating or inserting a row, another query can update a different row at the same time.
I have a database in which 3 of the tables each have in excess of 20 million rows. I've used GUIDs as primary keys (unfortunately). Now the database is about 20GB and growing by 5GB per month.
It takes about 2 hrs to take a full backup of the database, and 30 hrs to restore it on a box with 4GB RAM.
Once, all the tables in the database disappeared. Other MySQL databases on the same server were fine, except one, where only the data disappeared, leaving empty tables.
A SELECT query (among many slow queries) that gets the max of a date column in one of the 20M-row tables takes about 5 minutes to return a result. This query is used pretty frequently.
What I'm looking for answers to:
recommended DB design changes
ways to improve SELECT query performance - max of a date column on 20M records
other queries' performance
how to go about handling future db growth
Thanks all for your attention.
I've seen setups of larger size (with InnoDB as storage engine and a GUID as a primary key), and there were no such problems.
Once, all the tables in the database disappeared. Other MySQL databases on the same server were fine, except one, where only the data disappeared, leaving empty tables.
The tables may seem empty if the system LSN has gone below each page's LSN. This may happen if the InnoDB log files are corrupt. InnoDB, however, will issue a warning in this case.
A SELECT query (among many slow queries) that gets the max of a date column in one of the 20M-row tables takes about 5 minutes to return a result. This query is used pretty frequently.
Create an index on this column and the query will be nearly instant.
Please post the exact query and I'll tell you how to create the best index.
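In the meantime, a minimal sketch (table and column names here are assumptions):

ALTER TABLE events ADD INDEX idx_event_date (event_date);
SELECT MAX(event_date) FROM events;  -- now answered by reading the last entry of the index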
I see no problem in the DB design as such, most probably it's something with your server.
Is it possible to reproduce this behavior on another server with a clean vanilla MySQL installation?
You may also want to try to split the data between per-table files: set innodb_file_per_table and restore from the backup.
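That is, in my.cnf (set this before the restore, so each restored table gets its own .ibd file):

[mysqld]
innodb_file_per_table = 1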
A free alternative to InnoDB Hot Backup is Percona's XtraBackup tool.
For backup, you could use the InnoDB Hot Backup tool. This not only lets you take consistent backups while your database is up, but the restore is also much faster than the one you're doing (I'm assuming mysqldump?). It does cost money.
You might also try Mydumper: http://www.mydumper.org/
It is a great tool, and it is free and open source.