I am trying to create a generic trigger that will be used for about 20 different tables in a MySQL database. I know that there will be some table-specific logic in each trigger, but I would like to keep that to a minimum.
Currently, I am trying to iterate through all columns, but it doesn't seem to be possible. Is it possible to iterate over all columns and get each new value?
I am thinking of something like: NEW[column-name]
where "column-name" is the result of a SELECT query against the information_schema tables.
Edit:
Thanks!
I was asked to add more information about the tables.
Each table is more or less unique. The only thing that might be the same is the column "id", which is a unique, auto-incremented row ID.
That's why I want to dynamically iterate over all columns in the trigger. I know that for 20 tables there will be 20 triggers, but I want to keep the number of unique parameters in each trigger as low as possible.
So, for example:
Table1:
* id, user_id, value, timestamp
Table2:
* id, class_id, user_name, value_1, value_2, value_3, time, date
And pseudo-code:
For each row:
    For each 'column' in table_columns:
        something[column] = NEW[column];
    send(something);
send() is a user-defined function that I've created.
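For reference, the non-dynamic version I am trying to avoid would look something like this for Table1 (a rough sketch; it assumes send() can be called from the trigger body as a function):
DELIMITER //
CREATE TRIGGER table1_after_insert
AFTER INSERT ON Table1
FOR EACH ROW
BEGIN
    -- Every column has to be written out by hand for each table.
    SET @ignored = send(CONCAT_WS(',', NEW.id, NEW.user_id, NEW.value, NEW.`timestamp`));
END//
DELIMITER ;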
Thanks!
If I have a table that has these rows:
animal (primary)
-------
man
dog
cow
and I want to delete all the rows and insert my new rows (that may contain some of the same data), such as:
animal (primary)
-------
dog
chicken
wolf
I could simply do something like:
delete from animal;
and then insert the new rows.
But when I do that, for a split second, 'dog' won't be accessible through a SELECT statement.
I could simply INSERT IGNORE the new data and then delete the rest, one by one, but that doesn't feel like the right solution when I have a lot of rows.
Is there a way to insert the new data and then have MySQL automatically delete the rest afterward?
I have a program that selects data from this table every 5 minutes (and the code I'm writing now will update this table once every 30 minutes), so I would like it to be as accurate as possible at all times; I would rather have too many rows for a split second than too few rows for the same time.
Note: I know this may seem unnecessary, but I just feel that if I leave too many of these unlikely possibilities in different places, there will be times when things go wrong.
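For reference, the INSERT IGNORE variant I mentioned would look roughly like this (a sketch only; new_animals is a hypothetical staging table holding the replacement rows):
INSERT IGNORE INTO animal (animal)
SELECT animal FROM new_animals;

DELETE FROM animal
WHERE animal NOT IN (SELECT animal FROM new_animals);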
You may want to use TRUNCATE instead of DELETE here. TRUNCATE is faster than DELETE and resets the table back to its empty state (meaning AUTO_INCREMENT counters are reset as well).
Not sure why you're having problems with selecting a value that was deleted and re-added, maybe I'm missing some context. But if you're wiping the table clean, you might want to use truncate instead.
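A minimal sketch of the TRUNCATE approach, using the values from the question:
-- Note: there is still a brief moment between the TRUNCATE and the INSERT where the rows are absent.
TRUNCATE TABLE animal;
INSERT INTO animal (animal) VALUES ('dog'), ('chicken'), ('wolf');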
You could add a timestamp column and change the SELECT statement to accommodate this scenario, so that it checks for the latest values.
If this is for school, I would argue that you need a timestamp, and that is what your professor is looking for. You shouldn't need to truncate a table to get the latest values; you need to adjust the thinking behind the table and how you are querying the data. Hope this helps!
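A rough sketch of the timestamp idea, assuming the table is reworked so each 30-minute load can coexist with the previous one (names here are illustrative, not from the original schema):
CREATE TABLE animal_batched (
    animal    VARCHAR(255) NOT NULL,
    loaded_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (animal, loaded_at)
);

-- The 5-minute reader then only looks at the newest batch.
SELECT animal
FROM animal_batched
WHERE loaded_at = (SELECT MAX(loaded_at) FROM animal_batched);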
Check out these:
How to make a mysql table with date and time columns?
Why not update values instead?
My other questions would be:
How are you loading this into the table?
What does that code look like?
Can you change the way you Select from the table?
What values are being "updated" and change in such a way that you need to truncate the entire table?
If you don't want to add a new column, there is another method.
1. First, update the table in a way that marks all existing rows for deletion later. For example:
UPDATE `table_name` SET `animal` = CONCAT('MUST_BE_DELETED_', `animal`)
2. Second, insert the new rows.
3. Finally, remove all marked rows:
DELETE FROM `table_name` WHERE `animal` LIKE 'MUST_BE_DELETED_%'
You could implement this by making the updated_on column a timestamp; you could even use some default values for it, but let's go with an example without them.
I presume the table would look something like this:
CREATE TABLE `new_table` (
`animal` varchar(255) NOT NULL,
`updated_on` timestamp,
PRIMARY KEY (`animal`)
) ENGINE=InnoDB
This is just a dummy table example. What's important are the two queries later on.
You would simply perform a query to insert the data, such as:
insert into my_table(animal)
select animal from my_view where animal = 'dogs'
on duplicate key update
updated_on = current_timestamp;
Please notice that my_view is the table/view/query from which you supply the values to insert into your table. Also notice that you need a primary/unique key constraint on your animal column for this example to work.
Then, you proceed with the following query, to "purge" (delete) the old values:
delete from my_table
where updated_on < (
select *
from (
select max(updated_on) from my_table
) as max_date
);
Please notice that you could make a separate view to obtain this max_date value for the updated_on entry. This entry indicates the timestamp of the values updated/inserted by the previous query, so you can use it in the WHERE clause to delete the old records that you no longer want/need.
IMPORTANT NOTE:
Since you are doing multiple queries and they are supposed to form a single operation, I'd advise you to run them within a single transaction and to roll back properly on the various potential failure outcomes (i.e. in case of MySQL exceptions). You might wish to use a stored procedure for that.
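A rough sketch of the two queries above wrapped in one transaction (same illustrative names as before):
START TRANSACTION;

INSERT INTO my_table (animal)
SELECT animal FROM my_view
ON DUPLICATE KEY UPDATE updated_on = CURRENT_TIMESTAMP;

DELETE FROM my_table
WHERE updated_on < (
    SELECT max_date FROM (
        SELECT MAX(updated_on) AS max_date FROM my_table
    ) AS max_date_sub
);

COMMIT;
-- On error, issue ROLLBACK instead (e.g. via an EXIT HANDLER in a stored procedure).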
I've been searching on Google for a while now without finding the answer to my problem. I have about 10 tables, where 5 of them contain 150 rows. I want to add 15 rows to these 5 tables; is there any simple solution for this? I know it's easy to add the rows manually, but I want to know anyway. What I'm looking for is something like this:
INSERT INTO all_tables VALUES (col1, col2, col3) WHERE row_number() = '150'
Is it possible? Thanks in advance!
An INSERT can only target one table at a time, and the table must always be specified by name. Also, you cannot put a WHERE clause on an INSERT. Your best bet is probably to write one INSERT and copy and paste it for the rest.
You could:
Loop through a list of the relevant table names.
Run a dynamic query like select count(*) into @c1 from SpecifiedTable against the relevant table, returning the count into a user variable.
If the returned value is 150, run another dynamic query to insert the relevant values into the specified table (see the sketch below).
You can find out more about dynamic queries and returning values from them in MySQL here. If this is a once-off, you will probably find it easier to do it manually.
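An illustrative sketch of those dynamic queries with MySQL prepared statements (the table name and values are placeholders):
SET @tbl = 'Table1';

-- Count the rows in the named table.
SET @count_sql = CONCAT('SELECT COUNT(*) INTO @row_count FROM ', @tbl);
PREPARE stmt FROM @count_sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Build the insert the same way; only execute it when @row_count = 150,
-- e.g. inside an IF block of a stored procedure that loops over the table names.
SET @insert_sql = CONCAT('INSERT INTO ', @tbl, ' (col1, col2, col3) VALUES (1, 2, 3)');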
I have a table that stores the summed values of a large table. I'm not calculating them on the fly as I need them frequently.
What is the best way to update these values?
I could delete the relevant rows from the table, do a full group by sum on all the relevant lines and then insert the new data.
Or I could index a timestamp column on the main table, and then only sum the latest values and add them to the existing data. This is complicated because some sums won't exist so both an insert and an update query would need to run.
I realize that the answer depends on the particulars of the data, but what I want to know is if it is ever worth doing the second method; if there are millions of rows being summed in the first example and only tens in the second, would the second be significantly faster to execute?
You can try triggers on insert/update/delete. Then you check the inserted or deleted value and modify the sum in the second table accordingly.
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
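A rough sketch of that idea, using an insert trigger (made-up names; update/delete triggers would follow the same pattern):
DELIMITER //
CREATE TRIGGER detail_ai AFTER INSERT ON detail_table
FOR EACH ROW
BEGIN
    -- Add the new value to the running sum, creating the row if it does not exist yet.
    INSERT INTO sum_table (group_id, total)
    VALUES (NEW.group_id, NEW.value)
    ON DUPLICATE KEY UPDATE total = total + NEW.value;
END//
DELIMITER ;
A matching AFTER UPDATE / AFTER DELETE trigger would subtract OLD.value and add the new value.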
For me there are several ways:
Make a view, which should always be up to date (I don't know if you can do materialized views in MySQL; see the sketch after this list).
Make a table which is kept up to date using a trigger (on update/delete/insert, for example) or using a batch job (run at night, so the data will be one day old).
Make a stored procedure which retrieves and computes only the data needed.
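For the first option, a plain (non-materialized) view is the minimal sketch below; it is recomputed on every read, so it is always up to date but does not save the summing cost (names are made up):
CREATE VIEW value_sums AS
SELECT group_id, SUM(value) AS total
FROM detail_table
GROUP BY group_id;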
I would do something like this (INSERT ... ON DUPLICATE KEY UPDATE):
mysql_query("
    INSERT INTO sum_table (col1, col2)
    SELECT id, SUM(value)
    FROM `table`
    GROUP BY id
    ON DUPLICATE KEY UPDATE col2 = VALUES(col2)
");
Please let me know if you need more examples.
I have two tables, each one has a primary ID column as key. I want the two tables to share one increasing key counter.
For example, when the two tables are empty, and counter = 1. When record A is about to be inserted to table 1, its ID will be 1 and the counter will be increased to 2. When record B is about to be inserted to table 2, its ID will be 2 and the counter will be increased to 3. When record C is about to be inserted to table 1 again, its ID will be 3 and so on.
I am using PHP as the outside language. Now I have two options:
Keep the counter in the database as a single-row, single-column table. But every time I add things to table A or B, I need to update this counter table.
Keep the counter as a global variable in PHP. But then I need to initialize the counter from the maximum key of the two tables when Apache starts, which I have no idea how to do.
Any suggestion for this?
The background is that I want to display a mix of records from the two tables in either ASC or DESC order of the records' creation time. Furthermore, the records will be displayed page-style, say 50 records per page. Records are only added to the database, never removed. With my implementation above, I can just perform a "select ... where key between 1 and 50" on each of the two tables, merge the result sets, sort the 50 records according to their IDs and display them.
Is there any other idea of implementing this requirement?
Thank you very much
Well, you will gain next to nothing with this setup; if you just keep the datetime of the insert, you can easily do:
SELECT * FROM
(
    SELECT columnA, columnB, inserttime
    FROM table1
    UNION ALL
    SELECT columnA, columnB, inserttime
    FROM table2
) AS combined
ORDER BY inserttime
LIMIT 0, 50
And it will perform decently.
Alternatively (if chasing the last drop of performance), if you are merging the results, that can be an indicator that you should merge the tables (why have two tables at all if you are merging the results?).
Or do it as an SQL subclass (supertype/subtype) structure: one table maintains the IDs and other common attributes, and the other two tables reference the common ID sequence as a foreign key.
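A rough sketch of that last idea, with made-up names: one parent table owns the shared auto-increment ID and the common attributes, and the two detail tables reference it:
CREATE TABLE record_base (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

CREATE TABLE table1_detail (
    id      INT PRIMARY KEY,
    user_id INT,
    FOREIGN KEY (id) REFERENCES record_base (id)
) ENGINE=InnoDB;

CREATE TABLE table2_detail (
    id       INT PRIMARY KEY,
    class_id INT,
    FOREIGN KEY (id) REFERENCES record_base (id)
) ENGINE=InnoDB;
Inserting then means inserting the base row first, taking LAST_INSERT_ID(), and using it as the detail row's id.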
If you need the creation time, wouldn't it be easier to add a timestamp field to your DB and sort according to that field?
I believe using IDs as a reference for creation time is bad practice.
If you really must do this, there is a way. Create a one-row, one-column table to hold the last-used row number, and set it to zero. On each of your two data tables, create a BEFORE INSERT trigger to read that table, increment it, and set the newly inserted row's ID to that value. I can't remember the exact syntax because I haven't created a trigger for years; see http://dev.mysql.com/doc/refman/5.0/en/triggers.html
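A rough, untested sketch of that counter-table idea; the trigger has to be BEFORE INSERT because NEW.id can only be set before the row is written:
CREATE TABLE id_counter (last_id INT NOT NULL);
INSERT INTO id_counter VALUES (0);

DELIMITER //
CREATE TRIGGER table1_bi BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
    -- Bump the shared counter and use the new value as this row's id.
    UPDATE id_counter SET last_id = last_id + 1;
    SET NEW.id = (SELECT last_id FROM id_counter);
END//
DELIMITER ;
An identical trigger goes on table2. With InnoDB, the UPDATE locks the counter row, so concurrent inserts queue up rather than reusing a value.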
I have an Access 2003 table with ~4000 records which was made from 17 different tables. Roughly half of these records are duplicates. There is no uniquely identifying column (id, name, etc.). There is an id column which was auto-filled when the tables were combined, meaning that the duplicates aren't completely identical (though this column could be removed if it makes things easier).
I have used the Access Find Duplicates Query Wizard, which gives me a list of the duplicated records but won't let me delete them (seriously, what use is this query if I can't delete them?). I've tried converting the generated query to a delete query, but that changes the number of rows that it finds. I'd alter the SQL by hand, but it's a bit beyond me and it's 7 lines long.
Does anyone know a good way of getting rid of the duplicates?
The reason the Find Duplicates query won't let you delete the records is that it is basically just an aggregate query: it counts the number of duplicates it finds and returns the cases where the count is greater than 1.
Consider that if you did make a delete query based on the find-duplicates query, it would delete all rows that have duplicate values, which is probably not what you want. You want to delete all but one of each set of duplicates.
You should try to delete all duplicates of a record apart from one, excluding the ID column from your comparison. I suggest the simplest way to do this is to make a make-table query of all the unique values (Select Distinct Field1, Field2... from MyTable) for every field except the ID field, using the results to create a new table of around 2000 records (if half are duplicates).
Then, create an ID column on your new table and use an update query to set this ID to the first matching ID in the original table (you could do this using DLookup, which will return the first EXPRESSION value where CRITERIA is true in DOMAIN).
The DLookup() function returns one value from a single field even if more than one record satisfies the criteria. If no record satisfies the criteria, or if the domain contains no records, DLookup() returns a Null.
Since you are identifying the first matching ID based on all the other fields, which are unique values, the unmatched IDs will belong to duplicates. You will be reversing the PK relation, identifying the first matching key given a set of unique fields. After that, you should set the ID to be the PK. Of course, this assumes the ID has no inherent meaning and that you don't care about keeping one particular ID for a given duplicated row over the IDs belonging to the other duplicated rows. It also assumes you care about the data in the ID column and want to preserve it for all remaining rows; otherwise, just skip the DLookup step and do a Select Distinct on all columns apart from the ID.
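A rough Access SQL sketch of those steps, run as three separate queries (table and field names are placeholders, and the DLookup criteria assume text fields with no embedded quotes):
SELECT DISTINCT Field1, Field2 INTO NewTable FROM MyTable;

ALTER TABLE NewTable ADD COLUMN ID LONG;

UPDATE NewTable
SET NewTable.ID = DLookUp("ID", "MyTable",
    "Field1='" & [Field1] & "' AND Field2='" & [Field2] & "'");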
Use a select with all columns except the ID column:
SELECT DISTINCT Column1, Column2, Column3
INTO MYNEWTABLE
FROM TABLE
You can then simply swap the table names.
This solution will give you a new table without duplicates.
The following will preserve original IDs and do it in one step:
DELETE FROM table_with_duplicates
WHERE table_with_duplicates.id NOT IN
(SELECT max(id)
FROM table_with_duplicates
GROUP BY duplicated_field_1, duplicated_field_2, ...
)
Now you have the original table with no duplicates and preserved IDs.
And always remember to back up your data before trying large DELETEs.
DELETE * FROM table_with_duplicates
WHERE table_with_duplicates.ID In
(SELECT max(ID)
FROM table_with_duplicates
GROUP BY [duplicated_field_1]
HAVING Count(*)>1
)
Actually, I found a very simple solution (it took a while). If all of your fields are the same, i.e. a complete duplicate record, then just make one query with every field and group by every field ("Group By"). The duplicates will then combine, and you can simply append this information to a new table and rename it the same as the existing table. If you have a primary key field, you can just leave it out of the query and the data will still combine (assuming you don't care about the data in the primary key field). I don't know why no one has mentioned this solution; it took me 5 hours to come up with. :)
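A rough Access SQL sketch of that approach as a make-table query (field names are placeholders; the primary key field is simply left out of the field list):
SELECT Field1, Field2, Field3 INTO MyTable_Deduped
FROM MyTable
GROUP BY Field1, Field2, Field3;
You can then delete the original table and rename MyTable_Deduped to the original name.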