Been searching on Google for a while now without finding an answer to my problem. I have about 10 tables, and 5 of them contain 150 rows each. I want to add 15 rows to each of these 5 tables; is there any simple solution for this? I know it's easy to add the rows manually, but I want to know anyway. What I'm looking for is something like this:
INSERT INTO all_tables VALUES (col1, col2, col3) WHERE row_number() = '150'
Is it possible? Thanks in advance!
You can only target one table at a time with an INSERT, and the table must always be specified by name. Also, you cannot put a WHERE clause on an INSERT. Your best bet is probably to write one INSERT and copy and paste it for the rest.
You could:
Loop through a list of the relevant table names.
Run a dynamic query like select count(*) into #c1 from SpecifiedTable against the relevant table, returning the count into a declared variable.
If the returned value is 150, run another dynamic query to insert the relevant values into the specified table.
You can find out more about dynamic queries and returning values from them in MySQL here. If this is a once-off, you will probably find it easier to do it manually.
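Sketched in Python, with an in-memory SQLite database standing in for MySQL and invented table/column names, the loop-and-check approach above might look like this:

```python
import sqlite3

# In-memory stand-in for the MySQL database; table and column names
# here are hypothetical, not taken from the question.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

table_names = ["t1", "t2", "t3"]
for name in table_names:
    cur.execute(f"CREATE TABLE {name} (col1 INTEGER, col2 INTEGER, col3 INTEGER)")

# Give t1 and t2 exactly 150 rows; give t3 only 10.
for name, n in [("t1", 150), ("t2", 150), ("t3", 10)]:
    cur.executemany(f"INSERT INTO {name} VALUES (?, ?, ?)",
                    [(i, i, i) for i in range(n)])

new_rows = [(900 + i, 0, 0) for i in range(15)]  # the 15 rows to add

for name in table_names:
    # dynamic "select count(*)" against each table in the list
    cur.execute(f"SELECT COUNT(*) FROM {name}")
    (count,) = cur.fetchone()
    if count == 150:
        # dynamic insert, run only where the count matched
        cur.executemany(f"INSERT INTO {name} VALUES (?, ?, ?)", new_rows)

conn.commit()
```

In MySQL itself the same idea would use PREPARE/EXECUTE inside a stored procedure, but the driver-side version above is usually simpler for a one-off.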
So,
I know how to do the 1st part
insert into Table name (policy_id, target_type, target_id)
values
(16758, 7, 810) where target_type = 7
The thing is, that the policy_id will be different in every row
So, I want to populate the policy_id with the existing policy ID recursively through every record but insert the same data for target_type and target_id which are constant.
does that make sense to anyone :-)
TIA
So the idea is to duplicate some existing policy IDs, but with new values in other fields, is that right? Providing a sample of the existing data and then a matching sample of the expected result of your query would have helped us a lot to understand precisely what you mean.
Based on what you have told us, I suspect you're looking for INSERT...SELECT syntax, and you probably want a query something like this:
INSERT INTO Table_name (policy_id, target_type, target_id)
SELECT policy_id, 7, 810
FROM Table_name
WHERE target_type = 7
See https://dev.mysql.com/doc/refman/8.0/en/insert-select.html,
and http://www.mysqltutorial.org/mysql-insert-into-select/ for more info.
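As a quick sanity check, here is that INSERT ... SELECT run against an in-memory SQLite database with invented sample data (SQLite is used only because it's easy to run here; the query itself is the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table_name "
            "(policy_id INTEGER, target_type INTEGER, target_id INTEGER)")
# invented sample rows: two with target_type = 7, one without
cur.executemany("INSERT INTO Table_name VALUES (?, ?, ?)",
                [(16758, 7, 100), (16759, 7, 101), (16760, 3, 102)])

# Copy each existing policy_id that has target_type = 7,
# pairing it with the constant values 7 and 810.
cur.execute("""
    INSERT INTO Table_name (policy_id, target_type, target_id)
    SELECT policy_id, 7, 810
    FROM Table_name
    WHERE target_type = 7
""")
```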
P.S. you probably didn't mean "recursively", that was a bit confusing. Recursion is quite a specific thing in programming and it doesn't really apply to SQL queries.
I have a column containing 123 unique Raw Material Names and another column called OutgoingRawMaterial, the idea being that we can look at the table and see that 270 units of Ingredient A left the company etc.
I wanted to create an update query which would update the related OutgoingRawMaterial with the figure from the query 'qrySumManufacturingRawMaterials' which contains the figures I want. The Update Query works fine for individual records, as below:
Field: OutgoingRawMaterial
Table: tblRawMaterialsManufactured
Update to: [tblSumManufacturingRawMaterials Query].[Expr1]
Criteria: [RawMaterial] Like "Raw Material 1*"
The problem is that I want to do the same for all 123 records, and I don't know how to do this short of creating 123 queries and running them all from a macro or VBA. I also tried creating a single query covering all 123 Raw Materials, using a different 'LIKE' to isolate each one, but I get the error "Duplicate Output Destination 'tblRawMaterialsManufactured.OutgoingRawMaterial'". I can confirm that I only have one column named 'OutgoingRawMaterial'.
UPDATE:
The answer below is how you would normally solve this. The way you're trying to do it, with two columns holding a bunch of records and a third column holding the sum, is actually considered bad design, because the third column is not related to the first two. You'll have to tell me more before I can understand how that third column will work: does it contain 123 sum (aggregate) values while the first two columns contain, say, hundreds of individual values?
The easiest approach would be to delete everything and repopulate the table. For example, let's say that 'qrySumManufacturingRawMaterials' outputs something like this:
Raw1 | 1
Raw2 | 2
...
If OutgoingRawMaterial has the same format, then you can just do an INSERT INTO ... SELECT:
DELETE FROM OutgoingRawMaterial;
INSERT INTO OutgoingRawMaterial SELECT * FROM qrySumManufacturingRawMaterials;
There are many variations on this - for example if the query outputs different column names you can change the query to INSERT INTO OutgoingRawMaterial (Col1, Col2, ...) so if you need more help please update the question.
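For illustration, the delete-then-repopulate pattern looks like this against an in-memory SQLite database; the table and column names are invented stand-ins for the Access objects in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# "Manufactured" stands in for the detail data behind the aggregate query.
cur.execute("CREATE TABLE Manufactured (RawMaterial TEXT, Units INTEGER)")
cur.executemany("INSERT INTO Manufactured VALUES (?, ?)",
                [("Raw1", 1), ("Raw2", 2), ("Raw2", 3)])
# The summary table, currently holding a stale row.
cur.execute("CREATE TABLE OutgoingRawMaterial (RawMaterial TEXT, Outgoing INTEGER)")
cur.execute("INSERT INTO OutgoingRawMaterial VALUES ('stale', 999)")

# Step 1: clear the summary table.
cur.execute("DELETE FROM OutgoingRawMaterial")
# Step 2: repopulate it from the aggregate query in one statement.
cur.execute("""
    INSERT INTO OutgoingRawMaterial (RawMaterial, Outgoing)
    SELECT RawMaterial, SUM(Units) FROM Manufactured GROUP BY RawMaterial
""")
conn.commit()
```

The GROUP BY replaces all 123 per-material LIKE criteria with a single statement.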
I am trying to create a generic trigger that will be used for about 20 different tables in a MySQL database. I know that there will be some table-specific code in each trigger, but I would like to keep that to a minimum.
Currently, I am trying to iterate through all columns, but it doesn't seem to be possible. Is it possible to iterate all columns and get the new value?
I am thinking, something like: NEW[column-name]..
Where "column-name" is the result from a select-query in information_schema table.
Edit:
Thanks!
I was asked to add more information about the tables.
Each table is more or less unique. The only thing that might be the same is the column "id", which is a unique row-id. Auto-increment.
That's why I want to dynamically iterate over all columns in the trigger. I know that for 20 tables there will be 20 triggers, but I want to keep the number of unique parameters in each trigger as low as possible.
So for example
Table1:
* id, user_id, value, timestamp
Table2:
* id, class_id, user_name, value_1, value_2, value_3, time, date
And pseudo code:
For each row:
For each 'column' in table_columns:
something[column] = NEW[column];
send(something);
send() is a user-defined function that I've created.
Thanks!
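For what it's worth, NEW.column references in a MySQL trigger body have to be written literally; there is no run-time NEW[column] lookup. A common workaround is to generate each table's trigger DDL from its column list, e.g. from a query against information_schema.columns. A rough sketch (hard-coded column lists and hypothetical names stand in for that query, and the DO send(...) call assumes send() is callable as a function):

```python
# Column lists below stand in for:
#   SELECT column_name FROM information_schema.columns WHERE table_name = ...
tables = {
    "Table1": ["id", "user_id", "value", "timestamp"],
    "Table2": ["id", "class_id", "user_name", "value_1", "value_2",
               "value_3", "time", "date"],
}

def trigger_ddl(table, columns):
    # Expand the column list into literal NEW.<col> references, since
    # MySQL requires them to be spelled out in the trigger body.
    new_cols = ", ".join(f"NEW.{c}" for c in columns)
    return (
        f"CREATE TRIGGER {table}_ai AFTER INSERT ON {table}\n"
        f"FOR EACH ROW DO send(CONCAT_WS(',', {new_cols}));"
    )

for table, columns in tables.items():
    print(trigger_ddl(table, columns))
```

The iteration over columns happens at DDL-generation time rather than inside the trigger, so each of the 20 triggers differs only in its generated column list.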
So I know in MySQL it's possible to insert multiple rows in one query like so:
INSERT INTO table (col1,col2) VALUES (1,2),(3,4),(5,6)
I would like to delete multiple rows in a similar way. I know it's possible to delete multiple rows based on the exact same conditions for each row, i.e.
DELETE FROM table WHERE col1='4' and col2='5'
or
DELETE FROM table WHERE col1 IN (1,2,3,4,5)
However, what if I wanted to delete multiple rows in one query, with each row having a set of conditions unique to itself? Something like this would be what I am looking for:
DELETE FROM table WHERE (col1,col2) IN (1,2),(3,4),(5,6)
Does anyone know of a way to do this? Or is it not possible?
You were very close, you can use this:
DELETE FROM table WHERE (col1,col2) IN ((1,2),(3,4),(5,6))
Please see this fiddle.
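Here is a runnable illustration of the row-value IN delete, using an in-memory SQLite database; note that SQLite (3.15+) spells the list as IN (VALUES ...), while MySQL accepts the plain ((1,2),(3,4),(5,6)) form shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (col1 INTEGER, col2 INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 2), (3, 4), (5, 6), (7, 8)])

# Each (col1, col2) pair is matched as a unit, so three differently
# conditioned rows are deleted in one statement.
cur.execute("DELETE FROM t WHERE (col1, col2) IN (VALUES (1, 2), (3, 4), (5, 6))")
conn.commit()
```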
A slight extension to the answer given, so, hopefully useful to the asker and anyone else looking.
You can also SELECT the values you want to delete. But watch out for the Error 1093 - You can't specify the target table for update in FROM clause.
DELETE FROM
orders_products_history
WHERE
(branchID, action) IN (
SELECT
branchID,
action
FROM
(
SELECT
branchID,
action
FROM
orders_products_history
GROUP BY
branchID,
action
HAVING
COUNT(*) > 10000
) a
);
I wanted to delete all history records where the number of history records for a single action/branch exceeded 10,000. And thanks to this question and its chosen answer, I can.
Hope this is of use.
Richard.
Took a lot of googling, but here is what I do in Python for MySQL when I want to delete multiple items from a single table using a list of values.
# create an empty list
values = []
# append each value you want to delete to it,
# but make sure each entry is a single-value tuple, not a bare string
values.append((your_variable,))
# then, once the list is loaded, perform an executemany
cursor.executemany("DELETE FROM YourTable WHERE ID = %s", values)
I have a table that stores the summed values of a large table. I'm not calculating them on the fly as I need them frequently.
What is the best way to update these values?
I could delete the relevant rows from the table, do a full group by sum on all the relevant lines and then insert the new data.
Or I could index a timestamp column on the main table, and then only sum the latest values and add them to the existing data. This is complicated because some sums won't exist so both an insert and an update query would need to run.
I realize that the answer depends on the particulars of the data, but what I want to know is if it is ever worth doing the second method; if there are millions of rows being summed in the first example and only tens in the second, would the second be significantly faster to execute?
You can try triggers on insert/update/delete: check the inserted or deleted value and adjust the sum in the second table accordingly.
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
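A minimal sketch of the trigger approach, written in SQLite's trigger syntax so it can be run here (MySQL's syntax differs slightly, e.g. DELIMITER handling around BEGIN ... END); all table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE detail (grp INTEGER, value INTEGER)")
cur.execute("CREATE TABLE sums (grp INTEGER PRIMARY KEY, total INTEGER)")
cur.execute("INSERT INTO sums VALUES (1, 0)")

# Keep sums.total in step with inserts and deletes on detail.
cur.execute("""
    CREATE TRIGGER detail_ai AFTER INSERT ON detail
    BEGIN
        UPDATE sums SET total = total + NEW.value WHERE grp = NEW.grp;
    END
""")
cur.execute("""
    CREATE TRIGGER detail_ad AFTER DELETE ON detail
    BEGIN
        UPDATE sums SET total = total - OLD.value WHERE grp = OLD.grp;
    END
""")

cur.execute("INSERT INTO detail VALUES (1, 10)")
cur.execute("INSERT INTO detail VALUES (1, 5)")
cur.execute("DELETE FROM detail WHERE value = 10")
conn.commit()
```

The summed value never has to be recomputed from the millions of detail rows; each write touches the running total once.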
For me there are several ways:
Make a view, which will always be up to date (I don't know whether MySQL supports materialized views).
Make a table which is kept up to date using a trigger (on update/delete/insert, for example) or using a nightly batch (so the data will be one day old).
Make a stored procedure which retrieves and computes only the data needed.
I would do something like this (INSERT ... ON DUPLICATE KEY UPDATE):
mysql_query("
INSERT INTO sum_table (col1, col2)
SELECT id, SUM(value)
FROM table
GROUP BY id
ON DUPLICATE KEY UPDATE col2 = VALUES(col2)
");
Please let me know if you need more examples.
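The same idea in runnable form, using SQLite's ON CONFLICT spelling of the upsert (MySQL uses ON DUPLICATE KEY UPDATE as shown above); the sample data and names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE detail (id INTEGER, value INTEGER)")
cur.executemany("INSERT INTO detail VALUES (?, ?)", [(1, 10), (1, 5), (2, 7)])
cur.execute("CREATE TABLE sum_table (col1 INTEGER PRIMARY KEY, col2 INTEGER)")
cur.execute("INSERT INTO sum_table VALUES (1, 999)")  # stale sum, should be overwritten

# Recompute the sums and upsert them in one statement.
# "WHERE true" works around a parser ambiguity in SQLite when an
# INSERT ... SELECT is combined with ON CONFLICT.
cur.execute("""
    INSERT INTO sum_table (col1, col2)
    SELECT id, SUM(value) FROM detail WHERE true GROUP BY id
    ON CONFLICT(col1) DO UPDATE SET col2 = excluded.col2
""")
conn.commit()
```

Existing sums are updated in place and missing ones are inserted, so there is no need for the separate insert-then-update dance the question worries about.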