I have a column "processed_at" on a table. It can be reset from multiple places in the code to signal to a job that the row needs to be processed. I would like to find out when and how often processed_at is set to null.
What is the easiest way to do this? Ideally I would know how often this happens per row id, but a single count for all rows combined over a certain period would also be fine.
Can this be done like this:
A trigger that reacts to the update and then stores id and reset timestamp to a separate table?
Would this have a noticeable effect on the performance of the original query?
Something like this:
create table mytable_resets (
id serial primary key,
mytable_id bigint unsigned not null,
reset_at datetime not null
);
delimiter ;;
create trigger t after update on mytable
for each row begin
  -- log only actual resets, i.e. transitions from non-null to null
  if OLD.processed_at is not null and NEW.processed_at is null then
    insert into mytable_resets values (default, NEW.id, NOW());
  end if;
end;;
delimiter ;
Yes, it will impact the performance of the original query.
The cost of a database write is roughly proportional to the number of index entries it has to update. If your update fires a trigger that inserts into another table, that adds another index update: in this case, the primary key index of the mytable_resets table.
But the overhead shouldn't be significantly greater than if your mytable table simply had one more index.
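With the log table in place, the per-row counts asked about in the question are a simple aggregate over it (table and column names are taken from the sketch above; the 7-day window is just an example):

```sql
-- Resets per row id over the last 7 days:
SELECT mytable_id, COUNT(*) AS resets
FROM mytable_resets
WHERE reset_at >= NOW() - INTERVAL 7 DAY
GROUP BY mytable_id;

-- Or a single combined count over the same period:
SELECT COUNT(*)
FROM mytable_resets
WHERE reset_at >= NOW() - INTERVAL 7 DAY;
```

An index on (reset_at) or (mytable_id, reset_at) would keep these queries cheap as the log grows.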
Related
I want to make IDs unique across multiple tables. Say I have 4 tables with products, each with its own ID column, but I want an ID used in one table to be "denied" in the others. For example, if ID 43 exists in one table, it should be impossible to use it again in any of the other tables because it's already taken.
Thanks!
Have a 5th table for doling out ids. There, the id can be AUTO_INCREMENT (if that is suitable).
1. Get a new number from the 5th table.
2. INSERT into the appropriate one of the 4 tables.
The two steps do not need to be in the same 'transaction'.
A sample sequence generator:
-- Setup:
CREATE TABLE `seq` (
`id` INT UNSIGNED NOT NULL,
`code` char(1) CHARACTER SET ascii NOT NULL,
PRIMARY KEY (`code`)
) ENGINE=InnoDB;
INSERT INTO seq VALUES (0, 'a'); -- The only row for the table
-- Increment and get next value:
UPDATE seq
SET id = LAST_INSERT_ID(id + 1)
WHERE code = 'a';
SELECT LAST_INSERT_ID();
Note: the UPDATE and SELECT can (should) be done outside any transaction, i.e. with autocommit=ON. LAST_INSERT_ID() is specific to the connection, so there is no chance of mixing up numbers with another connection.
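Putting the two steps together, one round trip might look like this (product_a and its columns are made-up names standing in for one of your 4 tables):

```sql
-- Step 1: claim the next id (with autocommit=ON):
UPDATE seq
SET id = LAST_INSERT_ID(id + 1)
WHERE code = 'a';
SELECT LAST_INSERT_ID();   -- suppose this returns 44

-- Step 2: use that value in whichever table the row belongs to:
INSERT INTO product_a (id, name) VALUES (44, 'widget');
```

If the application crashes between the two steps, the claimed number is simply burned; since it is never handed out again, uniqueness still holds.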
When you INSERT a new row: LOCK all four tables (for writing), start a transaction, read the max ID of each table and take the overall maximum, add 1 to it and INSERT the new row, then finish the transaction and unlock the tables.
This is a basic solution; you can optimize it by storing the last maximum ID somewhere. The table locking ensures the ID stays unique when many concurrent threads (e.g. from PHP) insert rows.
You may also be able to do it with triggers (on the DB side instead of the application side).
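A rough sketch of that sequence, assuming the four product tables are named t1 through t4 (note that in MySQL, LOCK TABLES implicitly commits any open transaction, so it is the lock itself that provides the isolation here):

```sql
LOCK TABLES t1 WRITE, t2 WRITE, t3 WRITE, t4 WRITE;

-- take the highest id across all four tables, plus one:
SET @next = (SELECT GREATEST(
    IFNULL((SELECT MAX(id) FROM t1), 0),
    IFNULL((SELECT MAX(id) FROM t2), 0),
    IFNULL((SELECT MAX(id) FROM t3), 0),
    IFNULL((SELECT MAX(id) FROM t4), 0)) + 1);

-- insert into whichever table the new row belongs to:
INSERT INTO t1 (id, name) VALUES (@next, 'widget');

UNLOCK TABLES;
```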
I have 2 tables, sensors and readings, with a one-to-many relation from sensors to readings.
I need to query for all rows from sensors and then get the newest (i.e MAX timestamp) data from readings for each row. I've tried with:
SELECT sensors.*, readings.value, readings.timestamp
FROM sensors
LEFT JOIN readings ON readings.sensor_id = sensors.id
GROUP BY readings.sensor_id
The problem is, I have 6 million rows of data and the query takes nearly two minutes to execute. Is there a more efficient way to get the last reading/value for each sensor?
This is how I'd go about the problem:
it involves another table that I named latest_readings,
and a trigger that populates it.
The table
I made sensor_id unique because I assumed you want one latest reading per sensor. This could be extended (e.g. to one row per sensor and reading type) by adding another column to the unique key.
Reason for the unique index: we'll be using MySQL's INSERT INTO ... ON DUPLICATE KEY UPDATE to have all the hard work done for us. If there's already a row for a particular sensor, it gets updated; otherwise, one gets inserted (in one query).
You can also make sensor_id a foreign key. I skipped that part.
CREATE TABLE latest_readings (
id int unsigned not null auto_increment,
sensor_id int unsigned not null,
reading_id int unsigned not null,
primary key(id),
unique (sensor_id)
) ENGINE = InnoDB;
The trigger
Trigger type is after insert. I will assume that the table is named readings and that it contains sensor_id column. Adjust accordingly.
DELIMITER $$
CREATE
TRIGGER `readings_after_insert` AFTER INSERT ON `readings`
FOR EACH ROW BEGIN
INSERT INTO latest_readings
(sensor_id, reading_id)
VALUES
(NEW.sensor_id, NEW.id)
ON DUPLICATE KEY UPDATE reading_id = NEW.id
;
END;
$$
DELIMITER ;
How to query for latest sensor reading
Once more, I assumed what column names were, so adjust accordingly.
SELECT
    r.reading_value
FROM latest_readings latest
INNER JOIN readings r
    ON r.id = latest.reading_id
WHERE latest.sensor_id = 12345;
Disclaimer: this is just an example and it probably contains bugs, which means it's not a copy paste solution. If something doesn't work, and it's easy to fix - please do it :)
When inserting a row into one table, I need to update the auto-increment counter of another table so that no two rows in these tables end up with the same id. I need this to output data from a third table based on which record it refers to, without adding an extra column to indicate the source table. Here's my trigger, but it does not work:
CREATE TRIGGER `update_id` AFTER INSERT ON `table1`
FOR EACH ROW BEGIN
ALTER TABLE `table2` AUTO_INCREMENT = NEW.id;
END;
It's not entirely clear what problem you are trying to solve. (As written, the trigger cannot work anyway: ALTER TABLE causes an implicit commit, and statements that begin or end a transaction are not permitted inside a trigger.)
But it sounds as if you have two tables with an id column, and you want to ensure that the same value of id is not used in both tables. That is, if id value 42 exists in table1, you want to ensure that 42 is not used as an id value in table2.
Unfortunately, MySQL does not provide any declarative constraint for this.
It sounds as if you want an Oracle-style SEQUENCE object. And unfortunately, MySQL doesn't provide an equivalent.
But what we can do is emulate that. Create an extra "sequence" table that contains an AUTO_INCREMENT column. The purpose of this table is to be used to generate id values, and to keep track of the highest generated id value:
CREATE TABLE mysequence (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
Then, we'd remove the AUTO_INCREMENT attribute from the id columns of the two tables we want to generate distinct id values for.
For those tables, we'd create BEFORE INSERT triggers that will obtain distinct id values and assign it to the id column. To generate a unique value, we can insert a row to the new mysequence table, and then retrieve the auto_increment value using the LAST_INSERT_ID function.
Something like this:
DELIMITER $$

CREATE TRIGGER table1_bi
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
  DECLARE generated_id INT UNSIGNED;
  -- do we need to generate a value for the id column?
  IF NEW.id IS NULL THEN
    -- generate a unique id value by inserting into the sequence table
    INSERT INTO mysequence (id) VALUES (NULL);
    -- retrieve the inserted id value
    SELECT LAST_INSERT_ID() INTO generated_id;
    -- assign the retrieved value to the id column of the row being inserted
    SET NEW.id = generated_id;
  END IF;
END$$

DELIMITER ;
(That's just a rough outline, likely there's at least one syntax error in there somewhere.)
You'd need to create a BEFORE INSERT trigger for each of the tables.
This is one approach to generating distinct values for the id columns.
Note that it wouldn't be necessary to keep ALL of the rows in the mysequence table, it's only necessary to keep the row with the largest id value.
Also note that this doesn't enforce any constraint on either table; a session could still supply an id value that already exists in the other table. To prevent that, the trigger could raise an error if a non-NULL id value is supplied. It might also be possible to allow non-NULL values and run a query to check whether the supplied id already exists in the other table, raising an error if it does. But that query would be subject to a race condition between two concurrent sessions inserting into the tables, and you'd need some concurrency-killing locking mechanism to prevent concurrent inserts.
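As a sketch, the "reject explicit ids" option could be a guard at the top of the same trigger body (the SQLSTATE and message text here are made up; any '45000' user-defined error works):

```sql
IF NEW.id IS NOT NULL THEN
  SIGNAL SQLSTATE '45000'
    SET MESSAGE_TEXT = 'id is generated automatically; do not supply one';
END IF;
```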
I need to set a maximum limit of rows in my MySQL table. The documentation tells us that one can use the following SQL code to create a table:
CREATE TABLE `table_with_limit` (
  `id` int(11) DEFAULT NULL
) ENGINE=InnoDB MAX_ROWS=100000
But the MAX_ROWS property is not a hard limit ("store no more than 100,000 rows and delete the rest"), only a hint to the storage engine about how many rows the table is expected to hold.
The only way I see to solve the problem is a BEFORE INSERT trigger that checks the row count of the table and deletes the older rows. But I'm pretty sure that this is a huge overhead :/
Another solution is to clear the table with a cron script every N minutes. This is the simplest way, but it's still another system to watch.
Does anyone know a better solution? :)
Try to make a restriction on adding a new record to a table. Raise an error when a new record is going to be added.
DELIMITER $$
CREATE TRIGGER trigger1
BEFORE INSERT
ON table1
FOR EACH ROW
BEGIN
  SELECT COUNT(*) INTO @cnt FROM table1;
  IF @cnt >= 25 THEN
CALL sth(); -- raise an error
END IF;
END
$$
DELIMITER ;
Note that the COUNT operation may be slow on big InnoDB tables.
On MySQL 5.5 and later you can use the SIGNAL / RESIGNAL statements to raise an error.
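For example, on 5.5+ the `CALL sth()` placeholder above could be replaced with a SIGNAL (the 25-row limit and the message text are just the example values):

```sql
IF @cnt >= 25 THEN
  SIGNAL SQLSTATE '45000'
    SET MESSAGE_TEXT = 'table1 is full (25-row limit)';
END IF;
```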
Create a table with 100,000 rows and pre-fill one of the fields with a "time-stamp" in the past.
To "create" a record, select the oldest row and update it, setting its "time-stamp" to now.
Only use SELECT and UPDATE - never INSERT or DELETE.
An index on the "time-stamp" field keeps the select/update fast.
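A minimal sketch of that ring-buffer-style update (table and column names are assumptions; MySQL's single-table UPDATE supports ORDER BY and LIMIT):

```sql
-- Overwrite the oldest slot with the new "row":
UPDATE capped_log
SET payload = 'new data',
    ts = NOW()
ORDER BY ts ASC
LIMIT 1;
```

Because the table never grows or shrinks, there is no count to check and no cleanup job to run.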
There is no way to limit the maximum number of rows in a MySQL table unless you write a trigger to do it.
I'm just making up an answer off the top of my head. My assumption is that you want something like a 'bucket' you put records into, and that you want to empty it before it hits a certain row count.
After an insert statement, run SELECT LAST_INSERT_ID();, which gives you the auto-increment id of the record. Yes, you still have to run an extra query, but it's cheap. Once you reach a certain count, truncate the table, which also resets the auto-increment id.
Otherwise you can't have a 'capped' table in MySQL, since you'd have to define what happens at the cap (do we reject the record? do we truncate the table? etc.).
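Sketched as SQL (bucket_table and the 100,000 threshold are made-up names; the comparison against the threshold would live in application code):

```sql
INSERT INTO bucket_table (payload) VALUES ('...');
SELECT LAST_INSERT_ID();      -- the application checks this value

-- once the returned id reaches the threshold:
TRUNCATE TABLE bucket_table;  -- empties the table and resets AUTO_INCREMENT
```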