I need to set a maximum limit on the number of rows in my MySQL table. The documentation tells us that one can use the following SQL code to create a table:
CREATE TABLE `table_with_limit` (
  `id` int(11) DEFAULT NULL
) ENGINE=InnoDB MAX_ROWS=100000;
But the MAX_ROWS property is not a hard limit ("store no more than 100,000 rows and delete the rest") but a hint to the storage engine that this table will have AT LEAST 100,000 rows.
The only possible way I see to solve the problem is to use a BEFORE INSERT trigger which will check the row count of the table and delete the older rows. But I'm pretty sure that this is a huge overhead :/
Another solution is to clear the table with a cron script every N minutes. This is the simplest way, but it still adds another system to watch over.
Anyone knows a better solution? :)
Try putting a restriction on adding new records to the table: raise an error when a new record is about to be added and the limit has already been reached.
DELIMITER $$
CREATE TRIGGER trigger1
BEFORE INSERT
ON table1
FOR EACH ROW
BEGIN
    SELECT COUNT(*) INTO @cnt FROM table1;
    IF @cnt >= 25 THEN
        CALL sth(); -- raise an error
    END IF;
END
$$
DELIMITER ;
Note that the COUNT operation may be slow on big InnoDB tables.
On MySQL 5.5 you can use the SIGNAL / RESIGNAL statements to raise an error.
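For reference, a minimal sketch of that SIGNAL variant for MySQL 5.5+, reusing the placeholder names from the trigger above (only one BEFORE INSERT trigger per table is allowed before MySQL 5.7, so this replaces the CALL hack rather than adding a second trigger):

DELIMITER $$
CREATE TRIGGER trigger1
BEFORE INSERT
ON table1
FOR EACH ROW
BEGIN
    SELECT COUNT(*) INTO @cnt FROM table1;
    IF @cnt >= 25 THEN
        -- abort the INSERT with a custom error instead of calling a procedure
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Row limit reached for table1';
    END IF;
END
$$
DELIMITER ;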
Create a table with 100,000 rows and pre-fill one of the fields with a "time-stamp" in the past.
Select the oldest record and update its "time-stamp" when "creating" (i.e. updating) a record.
Only use SELECT and UPDATE - never use INSERT or DELETE.
A reverse index on the "time-stamp" field makes the select/update fast.
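A rough sketch of that idea, with made-up table, column, and index names (not taken from the answer itself):

-- Pre-filled "ring buffer" table: 100,000 rows created once, then only updated.
CREATE TABLE ring_log (
    slot    INT NOT NULL PRIMARY KEY,
    payload VARCHAR(255),
    ts      DATETIME NOT NULL,
    KEY idx_ts (ts)
) ENGINE=InnoDB;

-- "Insert" a record by recycling the slot with the oldest time-stamp.
UPDATE ring_log
SET payload = 'new log line', ts = NOW()
ORDER BY ts ASC
LIMIT 1;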
There is no way to limit the maximum number of rows in a MySQL table, unless you write a trigger to do it.
I'm just making up an answer off the top of my head. My assumption is that you want something like a 'bucket' where you put in records, and that you want to empty it before it hits a certain record count.
After an insert statement, run SELECT LAST_INSERT_ID(); which will get you the auto-increment id of the record. Yes, you still have to run an extra query, but it is cheap. Once you reach a certain count, truncate the table and reset the auto-increment id.
Otherwise you can't have a 'capped' table in MySQL, as you would have to have pre-defined actions (do we reject the record, do we truncate the table? etc.).
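A hedged sketch of that 'bucket' idea; the table name, column, and the 100,000 threshold are assumptions, and the limit check lives in application code:

INSERT INTO bucket_table (payload) VALUES ('something');
SELECT LAST_INSERT_ID(); -- the id just generated; cheap, no table scan

-- in application code: if LAST_INSERT_ID() >= 100000, empty the bucket
TRUNCATE TABLE bucket_table; -- also resets the AUTO_INCREMENT counter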
This fairly obvious question has very few (I couldn't find any) solid answers.
I do a simple select from a table of 2 million rows:
select count(id) as total from big_table
On any machine I try this query on, it usually takes at least 5 seconds to complete. This is unacceptable for realtime queries.
The reason I need an exact count of rows is for precise statistical calculations later on.
Using the last auto-increment value is unfortunately not an option, because rows also get deleted periodically.
It can indeed be slow when running on an InnoDB engine. As stated in section 14.24 of the MySQL 5.7 Reference Manual, “InnoDB Restrictions and Limitations”, 3rd bullet point:
InnoDB does not keep an internal count of rows in a table because concurrent transactions might "see" different numbers of rows at the same time. Consequently, SELECT COUNT(*) statements only count rows visible to the current transaction.
For information about how InnoDB processes SELECT COUNT(*) statements, refer to the COUNT() description in Section 12.20.1, “Aggregate Function Descriptions”.
The suggested solution is a counter table. This is a separate table with one row and one column, holding the current record count. It could be kept updated via triggers. Something like this:
create table big_table_count (rec_count int default 0);
-- one-shot initialisation:
insert into big_table_count select count(*) from big_table;
create trigger big_insert after insert on big_table
for each row
update big_table_count set rec_count = rec_count + 1;
create trigger big_delete after delete on big_table
for each row
update big_table_count set rec_count = rec_count - 1;
You can see a fiddle here, where you should alter the insert/delete statements in the build section to see the effect on:
select rec_count from big_table_count;
You could extend this to several tables, either by creating such a counter table for each of them, or by reserving a row per table in the counter table above, keyed by a "table_name" column.
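For example, a sketch of that shared variant (the table_counts name and layout are an assumption):

create table table_counts (
    table_name varchar(64) primary key,
    rec_count  int not null default 0
);

-- one-shot initialisation for big_table:
insert into table_counts values ('big_table', (select count(*) from big_table));

create trigger big_insert_shared after insert on big_table
for each row
update table_counts set rec_count = rec_count + 1 where table_name = 'big_table';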
Improving concurrency
The above method does have an impact if you have many concurrent sessions inserting or deleting records, because they need to wait for each other to complete the update of the counter.
A solution is to not let the triggers update the same single record, but to let them insert a new record, like this:
create trigger big_insert after insert on big_table
for each row
insert into big_table_count (rec_count) values (1);
create trigger big_delete after delete on big_table
for each row
insert into big_table_count (rec_count) values (-1);
The way to get the count then becomes:
select sum(rec_count) from big_table_count;
Then, once in a while (e.g. daily) you should re-initialise the counter table to keep it small:
truncate table big_table_count;
insert into big_table_count select count(*) from big_table;
Possible Duplicate:
How can I set a maximum number of rows in MySQL table?
Is it possible (and how) to put a limit on a MySQL table (let's say 100,000 rows) and delete the old entries when the limit is reached?
Meaning, when I have 100,000 entries and the 100,001st appears, the entry with the smallest ID is deleted and the new one is created (with a new ID, of course).
I want MySQL to handle this on its own, so no outside scripts need to be involved.
I need this for logging purposes, meaning I want to keep logs only for a certain time period, let's say a week. Maybe it is possible for MySQL to just delete entries that are older than 1 week on its own?
I propose triggers. This is the best way of ensuring that at each insert the maximum table size is taken into account.
Possible duplicate of How can I set a maximum number of rows in MySQL table?
From that accepted answer:
Try putting a restriction on adding new records to the table: raise an error when a new record is about to be added and the limit has already been reached.
DELIMITER $$
CREATE TRIGGER trigger1
BEFORE INSERT
ON table1
FOR EACH ROW
BEGIN
    SELECT COUNT(*) INTO @cnt FROM table1;
    IF @cnt >= 25 THEN
        CALL sth(); -- raise an error
    END IF;
END
$$
DELIMITER ;
Note that the COUNT operation may be slow on big InnoDB tables.
On MySQL 5.5 you can use the SIGNAL statement to raise an error.
MySQL 4.0 doesn't have information_schema, and 'show table status from db' only gives an approximate row count for InnoDB tables.
So what's the quickest way to get the row count of InnoDB tables, other than COUNT(*) of course, which can be slow with big tables?
When using InnoDB the only accurate count of rows in your entire table is COUNT(*). Since your upgrade from 4.0 to 5.0 will only occur once, you'll just have to deal with the speed.
This applies to all versions of MySQL. As other commenters have pointed out, there is no fast SELECT COUNT(*) in InnoDB. Part of the reason is that InnoDB is multi-versioned, so how many rows a table is supposed to contain depends on the context of your transaction.
There are some workarounds:
1) If you never delete, SELECT MAX(id) should return the right number of rows.
2) Instead of deleting rows, you can archive them to a 'deleted rows table' (a lot of people seem to want to keep everything these days). Assuming the deleted rows are a much smaller subset of the still-current ones, you may be able to subtract the count of the archive table from SELECT MAX(id) on the live table; see the sketch after this list.
3) Use triggers. This sucks for performance.
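A sketch of that subtraction, assuming the live table is called not_deleted and the archive table deleted_rows, as in the wording above:

SELECT (SELECT MAX(id)  FROM not_deleted)
     - (SELECT COUNT(*) FROM deleted_rows) AS approx_row_count;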
There's quite a technical discussion on this problem here:
http://mysqlha.blogspot.com/2009/08/fast-count-for-innodb.html
Wait! There is a fast way! Use a trigger and a meta table:
CREATE TABLE meta (
    `name` char(32) NOT NULL,
    `value_int` int,
    UNIQUE (name)
) ENGINE = INNODB;

insert into meta (name, value_int) values ('mytable.count', 0);
then
DELIMITER |

CREATE TRIGGER mytablecountinsert AFTER INSERT ON mytable
FOR EACH ROW BEGIN
    update meta set value_int = value_int + 1 where name = 'mytable.count';
END|

CREATE TRIGGER mytablecountdelete AFTER DELETE ON mytable
FOR EACH ROW BEGIN
    update meta set value_int = value_int - 1 where name = 'mytable.count';
END|

DELIMITER ;
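With the table and triggers above in place, reading the count is then a cheap lookup on the unique name key:

SELECT value_int FROM meta WHERE name = 'mytable.count';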
Well, using * is definitely not optimal, but how about just one column? I usually use the id column to count the number of rows.
I have an id column which is a primary key with AUTO_INCREMENT. I need the value that is generated to be inserted into the id column, as well as into another column (which isn't set to AUTO_INCREMENT, and isn't unique).
Currently I use the mysql_insert_id() function to get the id and simply run an update query after the insert, but I was wondering if I could do this without running the second update query.
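For context, the two-query approach described above looks roughly like this; the id_copy and some_col column names are made up, and in PHP the id would come from mysql_insert_id():

INSERT INTO mainTable (some_col) VALUES ('x');
UPDATE mainTable SET id_copy = LAST_INSERT_ID() WHERE id = LAST_INSERT_ID();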
An AFTER INSERT trigger?
If I recall correctly, the automatically generated ID isn't even created until after the insert has been performed. Your two-query way is probably the only way, short of perhaps diving into a stored procedure.
You could define a trigger along the lines of:
delimiter //
CREATE TRIGGER upd_check AFTER INSERT ON mainTable
FOR EACH ROW
BEGIN
    UPDATE dependingTable
    SET dependingTable.`column` = NEW.id;
END//
delimiter ;
I am not exactly sure WHEN the AUTO_INCREMENT value is generated, but you could try the following, since if it works it'll save you an update (if the column you want the value replicated to is in the same row as the inserted row):
CREATE TRIGGER upd_check BEFORE INSERT ON mainTable
FOR EACH ROW
SET NEW.`column` = NEW.id;
The only way I can see you doing it with a single query is to use the information schema. The information schema has a table called TABLES, where you can access the AUTO_INCREMENT column. That column contains the NEXT insert id for the table, and you can read it via a nested select; just give the user that connects to the database read access to that table. As far as I can tell this will only work with the InnoDB engine, because that way the nested select you run to populate the second id field will be part of the greater transaction of the insert.
That's what your query might look like:
INSERT INTO fooTable
VALUES (0, (SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'fooTable'));
Also, if you're worried about read access and security issues, just remember this is the same info you can get by running SHOW TABLE STATUS. Speaking of which, I tried to see if you could use the SHOW commands/queries inside a SELECT, and you can't, which totally sucks, because that would have been a much cleaner solution.