Is it possible in SQL to delete from one row until the end of the table?
For instance:
delete from mytable where oneDate = '2017-06-06';
and, starting from this row, delete all the following rows?
Yes it is.
Example:
DELETE FROM [table] WHERE [table.ID]>=50
Assuming the table has 100 records with IDs 1 to 100, it would delete rows 50 to 100, keeping rows 1 to 49. The same works with dates, taking into consideration some criterion that filters from a specific point on (in my example, the ID).
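Applied to the date example from the question, a sketch would be (the column name oneDate is taken from the question):
-- Deletes the row for 2017-06-06 and every row with a later date.
DELETE FROM mytable WHERE oneDate >= '2017-06-06';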
Related
I need to perform an update on a given table with a condition C, using a Java driver.
If there is no row matching condition C, I need to insert a new entity into the table.
If the row exists, then the update is enough.
To do so, is it possible to get the following two pieces of information back from an update query:
- the matched row count
- the updated row count
I believe executeUpdate only returns the number of rows updated.
The problem is that it might be zero if the update query doesn't update anything, so I have no way to know whether 0 means no match (in which case I'll need to perform an insert) or simply no change.
Note: a workaround could be to add an extra field (or a date) that is updated every time, but I'd prefer a better solution.
Thanks
The number of matched rows and the number of updated rows are the same. Even if the row already has the values passed in the update, it will be counted as an updated row:
id name
1 foo
update mytable set name = 'foo' where id = 1;
--> 1 row updated
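If the end goal is "update, or insert when nothing matches", one hedged alternative on the SQL side (assuming the table has a usable unique key; mytable(id, name) here is just the toy example above) is to let MySQL do both in one statement:
-- Inserts the row, or updates it if a row with the same unique key already exists
-- (id is assumed to be the unique/primary key).
INSERT INTO mytable (id, name) VALUES (1, 'foo')
ON DUPLICATE KEY UPDATE name = VALUES(name);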
So I know in MySQL it's possible to insert multiple rows in one query like so:
INSERT INTO table (col1,col2) VALUES (1,2),(3,4),(5,6)
I would like to delete multiple rows in a similar way. I know it's possible to delete multiple rows based on the exact same conditions for each row, i.e.
DELETE FROM table WHERE col1='4' and col2='5'
or
DELETE FROM table WHERE col1 IN (1,2,3,4,5)
However, what if I wanted to delete multiple rows in one query, with each row having a set of conditions unique to itself? Something like this would be what I am looking for:
DELETE FROM table WHERE (col1,col2) IN (1,2),(3,4),(5,6)
Does anyone know of a way to do this? Or is it not possible?
You were very close, you can use this:
DELETE FROM table WHERE (col1,col2) IN ((1,2),(3,4),(5,6))
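For completeness, an equivalent form without row constructors (same placeholder table/column names as above) would be:
DELETE FROM table
WHERE (col1 = 1 AND col2 = 2)
   OR (col1 = 3 AND col2 = 4)
   OR (col1 = 5 AND col2 = 6);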
A slight extension to the answer given, so hopefully useful to the asker and anyone else looking.
You can also SELECT the values you want to delete. But watch out for the Error 1093 - You can't specify the target table for update in FROM clause.
DELETE FROM orders_products_history
WHERE (branchID, action) IN (
    SELECT branchID, action
    FROM (
        SELECT branchID, action
        FROM orders_products_history
        GROUP BY branchID, action
        HAVING COUNT(*) > 10000
    ) a
);
I wanted to delete all history records where the number of history records for a single action/branch exceeded 10,000. And thanks to this question and the chosen answer, I can.
Hope this is of use.
Richard.
Took a lot of googling, but here is what I do in Python for MySQL when I want to delete multiple items from a single table using a list of values.
# Create an empty list of parameter tuples
values = []
# Append each value you want to delete.
# Note: each entry must be a one-element tuple, not a bare string or number.
values.append((your_value,))
# Once the list is loaded, perform an executemany;
# cursor is assumed to come from an already-open MySQL connection,
# and remember to commit on that connection if autocommit is off.
cursor.executemany("DELETE FROM YourTable WHERE ID = %s", values)
I don't think this is possible, as I couldn't find anything, but I thought I would check on here in case I am not searching for the right thing.
I have a settings table in my database which has two columns. The first column is the setting name and the second column is the value.
I need to update all of these at the same time. I wanted to see whether there is a way to update these values at the same time, in one query like the following:
UPDATE table SET col1='setting name' WHERE col2='1 value' AND SET col1='another name' WHERE col2='another value';
I know the above isn't correct SQL syntax, but it is the sort of thing that I would like to do, so I was wondering whether there is another way this can be done instead of having to perform a separate SQL query for each setting I want to update.
Thanks for your help.
You can use INSERT INTO .. ON DUPLICATE KEY UPDATE to update multiple rows with different values.
You do need a unique index (like a primary key) to make the "duplicate key" part work.
Example:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE b = VALUES(b), c = VALUES(c);
-- VALUES(x) points back to the value you gave for field x
-- so for b it is 2 and 5, and for c it is 3 and 6, for the rows with a = 1 and a = 4 respectively (assuming a is your unique key field)
If you have a specific case I can give you the exact query.
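Applied to a settings-style table like the one in the question (the table and column names here are my assumption, and name would need a UNIQUE index for the duplicate-key part to fire):
INSERT INTO settings (name, value) VALUES
    ('setting1', 'value1'),
    ('setting2', 'value2')
ON DUPLICATE KEY UPDATE value = VALUES(value);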
UPDATE table
SET col2 =
        CASE col1
            WHEN 'setting1'
            THEN 'value'
            ELSE col2
        END
  , col1 = ...
...
I decided to use multiple queries all in one go, so the code would go like:
UPDATE table SET col2='value1' WHERE col1='setting1';
UPDATE table SET col2='value2' WHERE col1='setting2';
etc.
I've just done a test where I insert 1500 records into the database. Doing it without starting a DB transaction took 35 seconds; after blanking the database and doing it again, but starting a transaction first and finishing it once the 1500th record was inserted, it took 1 second. So it definitely seems like doing it in a DB transaction is the way to go.
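For reference, a minimal sketch of that batched approach wrapped in a transaction (same placeholder table/column names as above; assumes a transactional engine such as InnoDB):
START TRANSACTION;
UPDATE table SET col2 = 'value1' WHERE col1 = 'setting1';
UPDATE table SET col2 = 'value2' WHERE col1 = 'setting2';
-- ...one UPDATE per setting...
COMMIT;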
You need to run separate SQL queries and make use of transactions if you want them to run atomically.
UPDATE table SET col1=if(col2='1 value','setting name','another name') WHERE col2='1 value' OR col2='another value'
#Frits Van Campen,
The INSERT INTO .. ON DUPLICATE KEY UPDATE approach works for me.
I have been doing this for years when I want to update more than a thousand records from an Excel import.
The only problem with this trick is that when there is no record to update, instead of ignoring it, this method inserts a new record, and in some instances that is a problem. Then I need to add another field, and after the import I have to delete all the records that were inserted instead of updated.
Here is what I'm trying to do, explained as a query:
DELETE FROM table ORDER BY dateRegistered DESC LIMIT 1000 *
I want to run such a query in a script which I have already designed. Every time it finds records beyond the 1000 newest ones (the 1001st record and above), it deletes them.
So it's kind of like setting a maximum row count, but deleting all the older records.
Actually, is there a way to set that up in the CREATE statement?
Therefore: if I have 9023 rows in the database, when I run that query it should delete 8023 rows and leave me with 1000.
If you have a unique ID for rows here is the theoretically correct way, but it is not very efficient (not even if you have an index on the dateRegistered column):
DELETE FROM table
WHERE id NOT IN (
SELECT id FROM table
ORDER BY dateRegistered DESC
LIMIT 1000
)
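Note that MySQL may refuse this form as written: it selects from the same table it deletes from, and older versions do not allow LIMIT inside an IN subquery. Wrapping the inner query in a derived table, as done earlier in this thread, is a hedged workaround:
DELETE FROM table
WHERE id NOT IN (
    SELECT id FROM (
        SELECT id FROM table
        ORDER BY dateRegistered DESC
        LIMIT 1000
    ) AS newest_rows
);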
I think you would be better off by limiting the DELETE directly by date instead of number of rows.
I don't think there is a way to set that up in the CREATE TABLE statement, at least not a portable one.
The only way that immediately occurs to me for this exact job is to do it manually.
First, get a lock on the table. You don't want the row count changing while you're doing this. (If a lock is not practical for your app, you'll have to work out a more clever queuing system rather than using this method.)
Next, get current row count:
SELECT count(*) FROM table
Once you have that, you should be able to figure out with simple maths how many rows need deleting. Let's say it returned 1005 - you need to delete 5 rows.
DELETE FROM table ORDER BY dateRegistered ASC LIMIT 5
Now, unlock the table.
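Putting those steps together as SQL, a rough sketch (MySQL syntax; the table and column names are assumed from the question):
LOCK TABLES mytable WRITE;
SELECT COUNT(*) FROM mytable;   -- suppose this returns 1005
-- 1005 - 1000 = 5 rows to delete, oldest first
DELETE FROM mytable ORDER BY dateRegistered ASC LIMIT 5;
UNLOCK TABLES;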
If a lock isn't practical for your scenario, you'll have to be a bit more clever - for example, select the unique ID of all the rows that need deleting, and queue them for gradual deletion. I'll let you work that out yourself :)
Is there a way to remove all repeat rows from a MySQL database?
A couple of years ago, someone requested a way to delete duplicates. Subselects make it possible with a query like this in MySQL 4.1:
DELETE FROM some_table WHERE primaryKey NOT IN
(SELECT MIN(primaryKey) FROM some_table GROUP BY some_column)
Of course, you can use MAX(primaryKey) as well if you want to keep the newest record with the duplicate value instead of the oldest record with the duplicate value.
To understand how this works, look at the output of this query:
SELECT some_column, MIN(primaryKey) FROM some_table GROUP BY some_column
As you can see, this query returns the primary key for the first record containing each value of some_column. Logically, then, any key value NOT found in this result set must be a duplicate, and therefore it should be deleted.
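One caveat: MySQL may reject the DELETE above with Error 1093, because the subquery targets the same table being deleted from. If that happens, the derived-table trick used earlier in this thread works here too:
DELETE FROM some_table WHERE primaryKey NOT IN (
    SELECT minKey FROM (
        SELECT MIN(primaryKey) AS minKey
        FROM some_table
        GROUP BY some_column
    ) AS keepers
);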
These questions / answers might interest you :
How to delete duplicate records in mysql database?
How to delete Duplicates in MySQL table.
An idea that's often used when you are working with a big table is to:
Create a new table
Insert into that table only the unique records (i.e. only one version of each set of duplicates in the original table, generally using a SELECT DISTINCT)
Use that new table in your application; or drop the old table and rename the new one to the old name.
The good thing with this principle is that you have the possibility to verify what's in the new table before dropping the old one; always nice to check that sort of thing ^^ (a rough sketch follows below).
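A rough sketch of that approach in MySQL (the table name is assumed; adjust the SELECT DISTINCT to whatever defines a duplicate for you):
-- Build a de-duplicated copy, swap it in, and keep the old table around for checking.
CREATE TABLE some_table_dedup LIKE some_table;
INSERT INTO some_table_dedup SELECT DISTINCT * FROM some_table;
RENAME TABLE some_table TO some_table_old, some_table_dedup TO some_table;
-- after verifying some_table_old: DROP TABLE some_table_old;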