I have two similar tables. New data is inserted into table_1. I want to write a trigger in phpMyAdmin that updates the old data in table_2 with the new data inserted into table_1, and then deletes that new data from table_1. Each row has a unique ID.
It looks simple, but I do not have MySQL knowledge.
Thanks.
You cannot update the table on which the trigger is executed. MySQL locks the table while inserting into it and does not allow the trigger to modify it while it is locked (which could cause a deadlock).
I think the better solution for you is a stored procedure. Also, what you are trying to achieve seems odd to me: you could simply update table_2 with the new data directly, which performs better and makes more sense.
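If you do go with a stored procedure, a minimal sketch could look like the following (it assumes both tables share the unique id column mentioned in the question plus a single value column; the procedure and column names are hypothetical):
DELIMITER //
CREATE PROCEDURE move_new_data()
BEGIN
  -- copy the new values from table_1 onto the matching rows in table_2
  UPDATE table_2 t2
  JOIN table_1 t1 ON t1.id = t2.id
  SET t2.value = t1.value;
  -- then clear out the rows that were just moved
  DELETE FROM table_1;
END //
DELIMITER ;
You would then run CALL move_new_data(); after loading a batch, or schedule it with an event.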
I created a huge table in MySQL, say table1. I will be querying this table for my results, continuously. Once a week I flush the values in table1 and insert new values (this process takes 3 hours). So my issue is that my querying will be stopped for 3 hours while the new table1 is being generated, and my querying should be continuous.
I was thinking of creating a copy of the table, something like
CREATE TABLE temp_table1 LIKE table1;
INSERT INTO temp_table1 SELECT * FROM table1;
and, until the new table1 is populated, using temp_table1 for my queries.
But for that I also wanted to set up an automated trigger for switching between the tables.
Is there any better way to achieve this?
Also, wouldn't creating such a copy of a huge table take a lot of time?
You can do it the other way around, actually:
CREATE TABLE temp_table1 LIKE table1;
Do your process in temp_table1 instead of table1.
Once the process completes, insert the data into the main table using an INSERT INTO ... SELECT construct.
That way your main table is free for querying and won't be blocked. The final insert can be fast enough, depending on the SELECT performance.
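A minimal sketch of that flow (the TRUNCATE step is an assumption based on the weekly flush described in the question):
CREATE TABLE temp_table1 LIKE table1;
-- ... run the 3-hour population process against temp_table1 ...
TRUNCATE TABLE table1;                        -- weekly flush of the main table
INSERT INTO table1 SELECT * FROM temp_table1;
DROP TABLE temp_table1;
Queries against table1 are only affected during the final TRUNCATE + INSERT, not during the 3-hour generation.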
I want to insert values, and if a row already exists, INSERT ... ON DUPLICATE KEY UPDATE will update it, but it will not update the auto-increment value with the newly generated one. I heard that INSERT ... ON DUPLICATE KEY UPDATE always generates a new id and discards it when the row already exists, but I want to get exactly that discarded id. (When I use LAST_INSERT_ID it just returns the id of the updated row, which is not the newly generated one.) I want the query to act like: insert; if the row exists, delete the existing one and insert a new one (with a new auto-increment id). I heard there is REPLACE INTO, but it is slow, and I just want the id updated to the newly generated one.
With respect, you're playing with fire if you try to carry out fancy update logic with autoincrement columns.
If you need a new autoincrement id in a row when you update it, then just delete the old row and insert the new one like Gordon said.
Under some database workloads that may be slightly slower than using insert on duplicate key update. But unless your table contains at least 100 megarows, or unless you're doing at least 10 of these operations per second all day and all night, the performance difference will be trivial. If you do have that kind of database size or workload, ask your database administrator for advice.
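A sketch of that delete-then-insert approach, wrapped in a transaction so readers never observe the row as missing (table and column names are hypothetical):
START TRANSACTION;
DELETE FROM t WHERE uniq_col = 'x';
INSERT INTO t (uniq_col, val) VALUES ('x', 42);
COMMIT;
SELECT LAST_INSERT_ID();  -- id of the freshly inserted row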
REPLACE = DELETE, then INSERT. IODKU never deletes. Each requires you to specify the columns of some UNIQUE key so that it knows what row to work with. A subtle point: If there are multiple UNIQUE keys, REPLACE may delete multiple rows, then insert only one.
IODKU can get the id (either existing or new) by using ... UPDATE id = LAST_INSERT_ID(id), ....
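Spelled out, that looks like the following (table and column names are hypothetical; id is the AUTO_INCREMENT primary key and uniq_col has a UNIQUE index):
INSERT INTO t (uniq_col, val) VALUES ('x', 42)
ON DUPLICATE KEY UPDATE
  id = LAST_INSERT_ID(id),  -- feeds the existing row's id back to LAST_INSERT_ID()
  val = VALUES(val);
SELECT LAST_INSERT_ID();    -- existing id on update, new id on a fresh insert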
I'm trying to update, once a table receives new information from an INSERT, specific column(s) of other table(s) in the same database at the same time.
For example: I save my age, my country and my name in table1, and at the same time I have to save my age in table2, field age2, and my name in table3, field name2. Something like that, but done automatically.
I read about triggers, but I also read that with triggers you CAN'T specify the name of the table.
Can anyone please help me? I'm pretty lost.
You definitely can and must specify the name of the table when creating a trigger!
Triggers are defined for data modifications on tables.
You should use
CREATE TRIGGER `copy_on_new_data` AFTER INSERT ON `your_table_name` FOR EACH ROW BEGIN [...] END;
and insert the desired data into your other tables using the NEW qualifier, like
INSERT INTO `other_table` (name) VALUES (NEW.name);
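Put together for the scenario in the question, it could look like this (the age2 and name2 field names come from the question; everything else is assumed):
DELIMITER //
CREATE TRIGGER `copy_on_new_data` AFTER INSERT ON `table1`
FOR EACH ROW
BEGIN
  INSERT INTO table2 (age2) VALUES (NEW.age);   -- copy the age into table2
  INSERT INTO table3 (name2) VALUES (NEW.name); -- copy the name into table3
END //
DELIMITER ;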
I'm having an issue finding and deleting duplicate records. I have a table with IDs called CallDetailRecordID, which I need to scan and delete duplicate records from. The reason there are duplicates is that I'm exporting data to a special archiving engine that works with MySQL and doesn't support indexing.
I tried using SELECT DISTINCT, but it doesn't work. Is there another way? I'm hoping I can create a stored procedure and have it run weekly to perform the cleanup.
Your help is highly appreciated.
Thank you
CREATE TABLE tmp_table LIKE `table`;
-- keeps one row per CallDetailRecordID (SELECT * with GROUP BY requires ONLY_FULL_GROUP_BY to be disabled)
INSERT INTO tmp_table SELECT * FROM `table` GROUP BY CallDetailRecordID;
RENAME TABLE `table` TO old_table, tmp_table TO `table`;
Drop the old table if you want, add a LOCK TABLES statement at the beginning to avoid lost inserts.
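A sketch of the locked variant (note that RENAME TABLE on WRITE-locked tables is only permitted as of MySQL 8.0.13; on older versions, run the swap during a quiet period instead):
CREATE TABLE tmp_table LIKE `table`;
LOCK TABLES `table` WRITE, tmp_table WRITE;
INSERT INTO tmp_table SELECT * FROM `table` GROUP BY CallDetailRecordID;
RENAME TABLE `table` TO old_table, tmp_table TO `table`;
UNLOCK TABLES;
DROP TABLE old_table;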
I am collecting readings from several thousand sensors and storing them in a MySQL database. There are several hundred inserts per second. To improve the insert performance I am storing the values initially into a MEMORY buffer table. Once a minute I run a stored procedure which moves the inserted rows from the memory buffer to a permanent table.
Basically I would like to do the following in my stored procedure to move the rows from the temporary buffer:
INSERT INTO data SELECT * FROM data_buffer;
DELETE FROM data_buffer;
Unfortunately the above is not usable, because the data collection processes insert additional rows into "data_buffer" between the INSERT and DELETE above. Those rows would be deleted without ever being inserted into the "data" table.
How can I make the operation atomic or make the DELETE statement to delete only the rows which were SELECTed and INSERTed in the preceding statement?
I would prefer doing this in a standard way which works on different database engines if possible.
I would prefer not adding any additional "id" columns because of performance overhead and storage requirements.
I wish there was SELECT_AND_DELETE or MOVE statement in standard SQL or something similar...
I believe this will work, but it will block until the insert is done:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
INSERT INTO data SELECT * FROM data_buffer FOR UPDATE;
DELETE FROM data_buffer;
COMMIT;
A possible way to avoid all those problems, and to stay fast as well, would be to use two buffer tables (let's call them data_buffer1 and data_buffer2): while the collection processes insert into data_buffer1, you do the insert and delete on data_buffer2; then you switch, so collected data goes into data_buffer2 while rows are inserted+deleted from data_buffer1 into data.
How about having a row id: get the max value before the insert, do the insert, and then delete the records with id <= that max?
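As a sketch, assuming an AUTO_INCREMENT id column on data_buffer (which the question said it would prefer to avoid) and matching columns in data:
SET @max_id = (SELECT MAX(id) FROM data_buffer);
INSERT INTO data SELECT * FROM data_buffer WHERE id <= @max_id;
DELETE FROM data_buffer WHERE id <= @max_id;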
This is a similar solution to #ammoQ's answer. The difference is that instead of having the INSERTing process figure out which table to write to, you can transparently swap the tables in the scheduled procedure.
Use RENAME in the scheduled procedure to swap tables:
CREATE TABLE IF NOT EXISTS data_buffer_new LIKE data_buffer;
RENAME TABLE data_buffer TO data_buffer_old, data_buffer_new TO data_buffer;
INSERT INTO data SELECT * FROM data_buffer_old;
DROP TABLE data_buffer_old;
This works because RENAME statement swaps the tables atomically, thus the INSERTing processes will not fail with "table not found". This is MySQL specific though.
I assume the tables are identical, with the same columns and primary key(s)? If that is the case, you could nest a SELECT inside a WHERE clause, something like this:
DELETE FROM data_buffer
WHERE primarykey IN (SELECT primarykey FROM data);
This is a MySQL specific solution. You can use locking to prevent the INSERTing processes from adding new rows while you are moving rows.
The procedure which moves the rows should be as follows:
-- both tables must be locked: under LOCK TABLES the session may only touch
-- locked tables, and the DELETE needs a WRITE lock on data_buffer
LOCK TABLES data WRITE, data_buffer WRITE;
INSERT INTO data SELECT * FROM data_buffer;
DELETE FROM data_buffer;
UNLOCK TABLES;
The code which INSERTs new rows in the buffer should be changed as follows:
LOCK TABLES data_buffer WRITE;
INSERT INTO data_buffer VALUES (1, 2, 3);
UNLOCK TABLES;
The INSERT process will obviously block while the lock is in place.