I'll explain the problem and the solution I tried to implement. I have a table with a lot of data, and I want to delete 4 out of every 5 rows. The aim is to have a lighter table.
This is my query:
SET @var_name = -1;
DELETE FROM myTable
WHERE id IN
(
    SELECT id FROM myTable HAVING (@var_name := @var_name + 1) % 5 != 0
);
The SELECT operation works properly, but together with the DELETE operation I get this message
#1093 - Table 'myTable' is specified twice, both as a target for 'DELETE' and as a separate source for data
I understand the meaning: I can't reference the table being deleted from as a separate source in the same statement. A workaround is possible: fetch the full list of IDs in a console, then execute the DELETE with it. But it would be better to do it in one statement.
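For reference, the usual one-statement workaround for error 1093 is to wrap the subquery in one more derived table, so MySQL materializes it before the DELETE runs. A sketch with the same query:
SET @var_name = -1;
DELETE FROM myTable
WHERE id IN (
    SELECT id FROM (
        SELECT id FROM myTable HAVING (@var_name := @var_name + 1) % 5 != 0
    ) AS tmp
);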
Thanks for your help.
Did you try this one?
SET @var_name = -1;
DELETE FROM myTable WHERE (@var_name := @var_name + 1) % 5 != 0;
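Note that without an ORDER BY, the scan order (and therefore which rows survive) is undefined. On MySQL 8.0+, a window function makes the choice deterministic; a sketch, assuming id defines the intended order:
DELETE FROM myTable
WHERE id IN (
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM myTable
    ) AS numbered
    WHERE rn % 5 != 0
);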
I'm using MySQL 5.6 and I have this issue.
I'm trying to improve my bulk update strategy for this case.
I have a table, called reserved_ids, provided by an external company, to assign unique IDs to its invoices. There is no other way to do this; I can't use auto_increment fields or simulated sequences.
I have this PL pseudocode to make this assignment:
START TRANSACTION;
OPEN invoice_cursor;
read_loop: LOOP
    FETCH invoice_cursor INTO internalID;
    IF done THEN
        LEAVE read_loop;
    END IF;
    SELECT MIN(SECUENCIAL)
    INTO v_secuencial
    FROM RESERVED_IDS
    WHERE COUNTRY_CODE = p_country_id AND INVOICE_TYPE = p_invoice_type;
    DELETE FROM RESERVED_IDS WHERE SECUENCIAL = v_secuencial;
    UPDATE MY_INVOICE SET RESERVED_ID = v_secuencial WHERE INVOICE_ID = internalID;
END LOOP read_loop;
CLOSE invoice_cursor;
COMMIT;
So it's take one, remove, assign; then take the next, remove, assign; and so on.
This works, but it's very, very slow.
I don't know if there is any approach to make this assignment in a faster way.
I'm looking for something like INSERT INTO ... SELECT ..., but with an UPDATE statement, to assign 1000 or 2000 IDs at once rather than one by one.
Any suggestion would be very helpful.
Thanks a lot.
EDIT 1: I have added the WHERE clause details, as requested by user @vmachan. In the UPDATE ... MY_INVOICE statement I don't filter by any other criteria, because I have the direct, indexed invoice ID that I want to update. Thanks.
Finally, I have this solution. It's much faster than my initial approach.
The UPDATE query is:
set @a=0;
set @b=0;
UPDATE MY_INVOICE
INNER JOIN
(
    select
        F.internal_id,
        I.secuencial as RESERVED_ID,
        CONCAT_WS(/* format your final invoice ID */) AS FINAL_MY_INVOICE_NUMBER
    FROM
    (
        select if(@a, @a:=@a+1, @a:=1) as current_row, internal_id
        from MY_INVOICE
        where reserved_id is null
        order by internal_id asc
        limit 2000
    ) F
    INNER JOIN
    (
        SELECT if(@b, @b:=@b+1, @b:=1) as current_row, secuencial
        from reserved_ids
        order by secuencial asc
        limit 2000
    ) I USING (current_row)
) TEMP ON MY_INVOICE.internal_id = TEMP.internal_id
SET MY_INVOICE.RESERVED_ID = TEMP.RESERVED_ID, MY_INVOICE.FINAL_MY_INVOICE_NUMBER = TEMP.FINAL_MY_INVOICE_NUMBER;
So, with autogenerated, correlated sequential numbers @a and @b, we can join two different and otherwise unrelated tables like MY_INVOICE and RESERVED_IDS.
If you want to check this solution, please execute this tricky update following these steps:
Execute @a and then the first inner select in an isolated way: select if(@a, @a:=@a+1, ...
Execute @b and then the second inner select in an isolated way: select if(@b, @b:=@b+1, ...
Execute @a, @b and the big select that builds the TEMP auxiliary table: select F.internal_id, ...
Execute the UPDATE.
Finally, remove the assigned IDs from the RESERVED_IDS table (a sketch of this cleanup follows below).
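A minimal sketch of that cleanup, assuming MY_INVOICE.RESERVED_ID now holds the consumed values and that SECUENCIAL is unique on its own (otherwise also filter by COUNTRY_CODE and INVOICE_TYPE):
-- Hedged sketch: delete every reserved ID that has already been assigned to an invoice
DELETE RESERVED_IDS
FROM RESERVED_IDS
INNER JOIN MY_INVOICE ON MY_INVOICE.RESERVED_ID = RESERVED_IDS.SECUENCIAL;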
Assignment time dropped drastically. My initial solution worked one by one; with this, you assign 2000 (or more) IDs in one single (OK, and a little tricky) update.
Hope this helps.
I need to limit records based on a percentage, but MySQL does not allow that in LIMIT. I need 10 percent of the `User Id`s, based on count(`User Id`)/max(Total_Users_bynow).
My code is as follows:
select *
from flavia.TableforThe_top_10percent_of_the_user
where `User Id` in (
    select distinct(`User Id`)
    from flavia.TableforThe_top_10percent_of_the_user
    group by `User Id`
    having count(distinct(`User Id`)) <= round((count(`User Id`)/max(Total_Users_bynow))*0.1)*count(`User Id`)
);
Kindly help.
Consider splitting your problem into pieces. You can use user variables to get what you need. Quoting from this question's answers:
You don't have to solve every problem in a single query.
So... let's get this done. I won't reproduce your full query, but here are some examples:
-- Step 1. Get the total number of rows in your dataset
set @nrows = (select count(*) from (select ...) as a);
-- --------------------------------------^^^^^^^^^^
-- The full original query (or, if possible, a simpler version of it) goes here
-- Step 2. Calculate how many rows you want to retrieve
-- You may use "round()", "ceiling()" or "floor()", whichever fits your needs
set @limrows = round(@nrows * 0.1);
-- Step 3. Run your query:
select ...
limit @limrows;
After checking, I found this post which says that my above approach won't work. There's, however, an alternative:
-- Step 1. Get the total number of rows in your dataset
set @nrows = (select count(*) from (select ...) as a);
-- --------------------------------------^^^^^^^^^^
-- The full original query (or, if possible, a simpler version of it) goes here
-- Step 2. Calculate how many rows you want to retrieve
-- You may use "round()", "ceiling()" or "floor()", whichever fits your needs
set @limrows = round(@nrows * 0.1);
-- Step 3. (UPDATED) Run your query.
-- You'll need to add a "rownumber" column to make this work.
select *
from (select @rownum := @rownum + 1 as rownumber
           , ... -- The rest of your columns
      from (select @rownum := 0) as init
         , ... -- The rest of your FROM definition
      order by ... -- Be sure to order your data
     ) as a
where rownumber <= @limrows;
Hope this helps. (I think it will work without a quirk this time.)
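A third option, sketched under the assumption that you can use prepared statements: plain SQL rejects a user variable in LIMIT, but a dynamically built statement accepts it (table and column names below are placeholders):
-- Hedged sketch: build the LIMIT value into the statement text, then execute it
set @sql = concat('select * from your_table order by your_column limit ', @limrows);
prepare stmt from @sql;
execute stmt;
deallocate prepare stmt;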
I have a mysql command:
update table_demo SET flag= 1 where flag=0 ORDER BY id ASC LIMIT 10
and I need the same command in Postgres. I get this error:
ERROR: syntax error at or near 'ORDER'
To update 10 first rows (that actually need the update):
UPDATE table_demo t
SET flag = 1
FROM (
SELECT table_demo_id -- use your actual PK column(s)
FROM table_demo
WHERE flag IS DISTINCT FROM 1
ORDER BY id
LIMIT 10
FOR UPDATE
) u
WHERE u.table_demo_id = t.table_demo_id;
FOR UPDATE (a row-level lock) is only needed to protect against concurrent write access. If your transaction is the only one writing to that table, you don't need it.
If flag is defined NOT NULL, you can use WHERE flag <> 1.
Related answers with more explanation and links:
Update top N values using PostgreSQL
How do I (or can I) SELECT DISTINCT on multiple columns?
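An equivalent sketch using a CTE (available in PostgreSQL 9.1+), with the same placeholder names as above:
WITH cte AS (
   SELECT table_demo_id   -- use your actual PK column(s)
   FROM   table_demo
   WHERE  flag IS DISTINCT FROM 1
   ORDER  BY id
   LIMIT  10
   FOR    UPDATE
   )
UPDATE table_demo t
SET    flag = 1
FROM   cte
WHERE  cte.table_demo_id = t.table_demo_id;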
I'm having trouble with something that looks like a simple thing. I'm trying to find the first row that satisfies the WHERE part of the query and UPDATE it.
UPDATE Donation SET Available=0 WHERE Available != 0 and BloodGroup='" + bloodGroup + "' LIMIT 1"
bloodGroup is a variable that gets filled automatically from C#; it holds the string value of the selected blood group.
When I try to run this I get: incorrect syntax near 'LIMIT'.
What am I doing wrong? Is it possible to use LIMIT in an UPDATE query like this?
During debugging I got query like this:
UPDATE Donation SET Available=0 WHERE Available != 0 AND BloodGroup='AB-' LIMIT 1
Because C# is often used with SQL Server, perhaps the question is mistagged. The syntax looks fine for MySQL.
In SQL Server, you can do this as:
UPDATE TOP (1) Donation
SET Available = 0
WHERE Available <> 0 AND BloodGroup = 'AB-';
Note that this chooses an arbitrary matching row, as does your original query (there is no order by).
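If you need a deterministic row instead, one common SQL Server pattern is an updatable CTE with TOP and ORDER BY (a sketch, assuming the table has an id column):
WITH cte AS (
    SELECT TOP (1) Available
    FROM Donation
    WHERE Available <> 0 AND BloodGroup = 'AB-'
    ORDER BY id
)
UPDATE cte SET Available = 0;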
It is not safe to use LIMIT in UPDATE queries. Please refer to:
http://bugs.mysql.com/bug.php?id=42415
The documentation states that any UPDATE statement with LIMIT clause is considered unsafe since the order of the rows affected is not defined: http://dev.mysql.com/doc/refman/5.1/en/replication-features-limit.html
However, if "ORDER BY PK" is used, the order of rows is defined and such a statement could be logged in statement format without any warning.
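A sketch of that safer form, assuming id is the table's primary key:
UPDATE Donation SET Available = 0
WHERE Available <> 0 AND BloodGroup = 'AB-'
ORDER BY id
LIMIT 1;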
You can use LIMIT in UPDATE queries this way:
UPDATE messages SET test_read=1
WHERE id IN (
SELECT id FROM (
SELECT id FROM messages
ORDER BY date_added DESC
LIMIT 5, 5
) tmp
);
Also, you can try this approach, which computes a row number with a user variable:
UPDATE Donation d1
JOIN (
    SELECT id, (@Row := @Row + 1) AS row_number
    FROM Donation, (SELECT @Row := 0) AS init
    WHERE Available <> 0 AND BloodGroup = 'AB-'
) d2 ON d1.id = d2.id
SET d1.Available = 0
WHERE d1.Available <> 0 AND d1.BloodGroup = 'AB-' AND d2.row_number = 1;
I have a table like this (MySQL 5.0.x, MyISAM):
response{id, title, status, ...} (status: 1 new, 3 multi)
I would like to update the status from new (status=1) to multi (status=3) of all the responses if at least 20 have the same title.
I have this query, but it does not work:
UPDATE response SET status = 3 WHERE status = 1 AND title IN (
    SELECT title FROM (
        SELECT DISTINCT(r.title) FROM response r WHERE EXISTS (
            SELECT 1 FROM response spam WHERE spam.title = r.title LIMIT 20, 1
        )
    ) as u
);
Please note:
I do the nested select to avoid the famous "You can't specify target table 'response' for update in FROM clause" error.
I cannot use GROUP BY for performance reasons: the query cost of a solution using LIMIT is much better (but it is less readable).
EDIT:
It is possible to SELECT FROM an UPDATE target in MySQL; see the solution below. The issue is with the data selected, which is totally wrong.
The only solution I found that works uses a GROUP BY:
UPDATE response SET status = 3
WHERE status = 1 AND title IN (SELECT title
FROM (SELECT title
FROM response
GROUP BY title
HAVING COUNT(1) >= 20)
as derived_response)
Thanks for your help! :)
MySQL doesn't like it when you try to UPDATE and SELECT from the same table in one query. It has to do with locking priorities, etc.
Here's how I would solve this problem:
SELECT CONCAT('UPDATE response SET status = 3 ',
'WHERE status = 1 AND title = ', QUOTE(title), ';') AS sql
FROM response
GROUP BY title
HAVING COUNT(*) >= 20;
This query produces a series of UPDATE statements, with the titles that deserve updating embedded as quoted values. Capture the result and run it as an SQL script.
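For instance, the captured output would be rows like this (the title value here is invented for illustration):
UPDATE response SET status = 3 WHERE status = 1 AND title = 'example title';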
I understand that GROUP BY in MySQL often incurs a temporary table, and this can be costly. But is that a deal-breaker? How frequently do you need to run this query? Besides, any other solutions are likely to require a temporary table too.
I can think of one way to solve this problem without using GROUP BY:
CREATE TEMPORARY TABLE titlecount (c INTEGER, title VARCHAR(100) PRIMARY KEY);
INSERT INTO titlecount (c, title)
SELECT 1, title FROM response
ON DUPLICATE KEY UPDATE c = c+1;
UPDATE response JOIN titlecount USING (title)
SET response.status = 3
WHERE response.status = 1 AND titlecount.c >= 20;
But this also uses a temporary table, which is why you try to avoid using GROUP BY in the first place.
I would write something straightforward like below
UPDATE `response`, (
    SELECT title, count(title) as cnt from `response`
    WHERE status = 1
    GROUP BY title
) AS tmp
SET response.status = 3
WHERE response.status = 1 AND response.title = tmp.title AND tmp.cnt >= 20;
Is using GROUP BY really that slow? The solution you tried to implement queries the same table again and again, and should be way slower than using GROUP BY, if it worked at all.
This is a funny peculiarity of MySQL: I can't think of a way to do it in a single statement (GROUP BY or no GROUP BY).
You could select the appropriate response rows into a temporary table first, then do the update by selecting from that temp table.
You'll have to use a temporary table:
create temporary table r_update (title varchar(100)); -- assumption: widened from varchar(10) so real titles fit

insert into r_update
select title
from response
group by title
having count(*) < 20;

update response r
left outer join r_update ru
  on ru.title = r.title
set r.status = case when ru.title is null then 3 else 1 end;