Query execution was interrupted, error #1317 - mysql

What I have is a table with a bunch of products (books, in this case). My point-of-sale system generates a report for me that contains the ISBN (the unique product number) and the perpetual sales count.
I basically need to do an update that matches the ISBN from one table with the ISBN from the other and then add the sales from the one table to the other.
This needs to be done for about 30,000 products.
Here is the SQL statement that I am using:
UPDATE `inventory`,`sales`
SET `inventory`.`numbersold` = `sales`.`numbersold`
WHERE `inventory`.`isbn` = `sales`.`isbn`;
I am getting MySQL Error:
#1317 SQLSTATE: 70100 (ER_QUERY_INTERRUPTED) Query execution was interrupted
I am using phpMyAdmin provided by GoDaddy.com

I've probably come to this a bit late, but... It certainly looks like the query is being interrupted by an execution time limit. There may be no easy way around this, but here are a couple of ideas:
Make sure that inventory.isbn and sales.isbn are indexed. If they aren't, adding an index will reduce your execution time dramatically.
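For instance, something like this would add the indexes (the index names here are made up; skip any table that is already indexed on isbn):
ALTER TABLE `inventory` ADD INDEX `idx_inventory_isbn` (`isbn`);
ALTER TABLE `sales` ADD INDEX `idx_sales_isbn` (`isbn`);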
If that doesn't work, break the query down into blocks and run it several times:
UPDATE `inventory`,`sales`
SET `inventory`.`numbersold` = `sales`.`numbersold`
WHERE `inventory`.`isbn` = `sales`.`isbn`
AND substring(`inventory`.`isbn`,1,1) = '1';
The AND clause restricts the update to ISBNs starting with the digit '1'. Run the query once for each digit from '0' to '9'. For ISBNs you might find that selecting on the last character gives a more even split; use substring(`inventory`.`isbn`, -1) instead.
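For example, the last-digit variant would look something like this (a sketch; run once per digit):
UPDATE `inventory`,`sales`
SET `inventory`.`numbersold` = `sales`.`numbersold`
WHERE `inventory`.`isbn` = `sales`.`isbn`
AND substring(`inventory`.`isbn`, -1) = '1';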

Try using an INNER JOIN between the two tables, like this:
UPDATE `inventory`
INNER JOIN `sales`
ON `inventory`.`isbn` = `sales`.`isbn`
SET `inventory`.`numbersold` = `sales`.`numbersold`

UPDATE inventory,sales
SET inventory.numbersold = sales.numbersold
WHERE inventory.isbn = sales.isbn
AND inventory.id < 5000
UPDATE inventory,sales
SET inventory.numbersold = sales.numbersold
WHERE inventory.isbn = sales.isbn
AND inventory.id >= 5000 AND inventory.id < 10000
...
If you still get the error, you can try reducing the batch size to 1,000, for example.
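A sketch of what the smaller batches might look like, assuming inventory.id is a numeric key as above:
UPDATE inventory,sales
SET inventory.numbersold = sales.numbersold
WHERE inventory.isbn = sales.isbn
AND inventory.id >= 1000 AND inventory.id < 2000;
-- ...then 2000-3000, 3000-4000, and so on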


SQL Update + Inner Join on a range of rows

I have a table "Temp" and a table "Today", with the same column names ("url" and "date").
I want to update the "date" column of the "Temp" table when the urls match.
But my tables are quite big (30K rows) and phpMyAdmin does not want to execute the following (correct) query:
update Temp Tp
inner join Today Ty on
Tp.url = Ty.url
set Tp.date = Ty.date
I get a "Query execution was interrupted, error #1317"
Why? I suspect this is because I pay for a shared server (OVH) and am not allowed to run queries longer than 2-3 seconds.
Anyway, now I want to execute this query range by range: the first 1,000 rows, then rows 1,000-2,000, etc.
I tried the following :
update Temp Tp
inner join
(
select Tp2.date
from Temp Tp2
inner join Today Ty2
on Tp2.url = Ty2.url
limit 1000
) Ty on Tp.url = Ty.url
set Tp.date = Ty.date
But I get the following error: #1054 - Unknown column 'Ty.url' in 'on clause'.
I can't figure out why.
As far as I can see, there are two problems here. First, as already mentioned by @pmbAustin, you're missing a column in your subquery.
Secondly, I think your subquery should be selecting the date from Ty2, rather than Tp2:
update Temp Tp
inner join
(
select Ty2.date, Tp2.url
from Temp Tp2
inner join Today Ty2
on Tp2.url = Ty2.url
limit 1000
) Ty on Tp.url = Ty.url
set Tp.date = Ty.date
See SQLFiddle (although for practical reasons, this demo is limited to 2).
Although you haven't specifically asked this (and you're probably aware already), for completeness it should be mentioned that for the subsequent batches, LIMIT should be used alongside OFFSET (or just use the shortcut form LIMIT 1000, 1000, then LIMIT 2000, 1000, and so on, i.e. LIMIT <offset>, <limit>).
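For example, the second batch might look something like this (the ORDER BY is an addition here, to keep the batches stable between runs):
update Temp Tp
inner join
(
select Ty2.date, Tp2.url
from Temp Tp2
inner join Today Ty2
on Tp2.url = Ty2.url
order by Tp2.url
limit 1000, 1000
) Ty on Tp.url = Ty.url
set Tp.date = Ty.date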

MySQL Query gets too complex for me

I'm trying to write a MySQL query that updates a cell in table1 with information gathered from 2 other tables.
Gathering the data from the other 2 tables goes without much trouble (it is slow, but only because one of the 2 tables has 4,601,537 records in it; all the rows for one report are split into separate records, meaning that 1 report has more than 200 records).
The Query that I use to Join the two tables together is:
# First Table, containing Report_ID's: RE
# Table that has to be updated: REGI
# Join Table: JT
SELECT JT.report_id as ReportID, REGI.Serienummer as SerialNo FROM Blancco_Registration.TrialTable as REGI
JOIN (SELECT RE.Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry as RE
WHERE RE.path_id=92) AS JT ON JT.Value_string = REGI.Serienummer
WHERE REGI.HardwareType="PC" AND REGI.BlanccoReport=0 LIMIT 100
This returns 100 records (I limit it because the database is in use during work hours and I don't want to steal all resources).
However, I want to use these results in a Query that updates the REGI table (which it uses to select the 100 records in the first place).
However, I get the error that I cannot select from the table itself while updating it (logically). So I tried selecting the statement above into a temp table and then updating from it; however, then I get the issue that I have too many results (logically! I only need 1 result and get 100). I'm getting stuck in my own thoughts. I ultimately need to fill the ReportID into each record of REGI.
I know it should be possible, but I'm no expert in MySQL. Is there anybody who can point me in the right direction?
P.S. Fixing the table containing 400k records is not an option; it's a program from an external developer and I can only read that database.
The errors I'm talking about are as follows:
Error Code: 1093. You can't specify target table 'TrialTable' for update in FROM clause
When I use:
UPDATE TrialTable SET TrialTable.BlanccoReport =
(SELECT JT.report_id as ReportID, REGI.Serienummer as SerialNo FROM Blancco_Registration.TrialTable as REGI
JOIN (SELECT RE.Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry as RE
WHERE RE.path_id=92) AS JT ON JT.Value_string = REGI.Serienummer
WHERE REGI.HardwareType="PC" AND REGI.BlanccoReport=0 LIMIT 100)
WHERE TrialTable.HardwareType="PC" AND TrialTable.BlanccoReport=0
Then I tried:
UPDATE TrialTable SET TrialTable.BlanccoReport = (SELECT ReportID FROM (<<and the rest of the SQL>>> ) as x WHERE X.SerialNo = TrialTable.Serienummer)
but that gave me the following error:
Error Code: 1242. Subquery returns more than 1 row
Using the query above with LIMIT 1 just gives every row the same result.
Firstly, your query seems to be functionally identical to the following:
SELECT RE.report_id ReportID
, REGI.Serienummer SerialNo
FROM Blancco_Registration.TrialTable REGI
JOIN Blancco_new.mc_report_Entry RE
ON RE.Value_string = REGI.Serienummer
WHERE REGI.HardwareType = "PC"
AND REGI.BlanccoReport=0
AND RE.path_id=92
LIMIT 100
So, why not use that?
EDIT:
I still don't get it. I can't see what part of the problem the following fails to solve...
UPDATE TrialTable REGI
JOIN Blancco_new.mc_report_Entry RE
ON RE.Value_string = REGI.Serienummer
SET REGI.BlanccoReport = RE.report_id
WHERE REGI.HardwareType = "PC"
AND REGI.BlanccoReport=0
AND RE.path_id=92;
(This is not an answer, but maybe a pointer towards a few points that need further attention)
Your JT sub query looks suspicious to me:
(SELECT RE.Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry as RE
WHERE RE.path_id=92
GROUP BY RE.report_id)
You use group by but don't actually use any aggregate functions. The column RE.Value_string should strictly be something like MAX(RE.Value_string) instead.
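A corrected version of that subquery might look something like this (a sketch of the aggregate form, keeping the original filter):
(SELECT MAX(RE.Value_string) AS Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry AS RE
WHERE RE.path_id = 92
GROUP BY RE.report_id)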

Need help in writing an efficient SQL query

I have the following query, written inside a Perl script:
insert into #temp_table
select distinct bv.port,bv.sip,avg(bv.bv) bv, isnull(avg(bv.book_sum),0) book_sum,
avg(bv.book_tot) book_tot,
check_null = case when bv.book_sum = null then 0 else 1 end
from table_bv bv, table_group pge, table_master sm
where pge.a_p_g = '$val'
and pge.p_c = bv.port
and bv.r = '$r'
and bv.effective_date = '$date'
and sm.sip = bv.sip
query continued -- need help below (can someone help me make this efficient or rewrite it? I think it's wrong)
and ((sm.s_g = 'FE')OR(sm.s_g='CH')OR(sm.s_g='FX')
OR(sm.s_g='SH')OR(sm.s_g='FD')OR(sm.s_g='EY')
OR ((sm.s_t = 'TA' OR sm.s_t='ON')))
query continued below
group by bv.port,bv.sip
query ends
Explanation: for some $val, the sips with
s_g ('FE','CH','FX','SH','FD','EY') and
s_t ('TA','ON') have book_sum as null. The temp_table does not accept null values,
hence I am inserting them as zero ( isnull(avg(bv.book_sum),0) ) wherever a null is encountered, for those s_g and s_t values ONLY.
I have tried rewriting the condition as follows, but it made my script stop working:
and sm.s_g in ('FE', 'CH','FX','SH','FD','EY')
or sm.s_t in ('TA','ON')
I know this should be a comment, but I don't have the rep. To me, it looks like it's hanging because you lost your grouping at the end. I think it should be:
and (
sm.s_g in ('FE', 'CH','FX','SH','FD','EY')
or
sm.s_t in ('TA','ON')
)
Note the parentheses. Otherwise, you're asking for all of the earlier conditions, OR that sm.s_t is one of TA or ON, which is a much larger set than you're anticipating and may cause it to spin.
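Putting that back into the original query, the tail end would read something like this:
and sm.sip = bv.sip
and (
sm.s_g in ('FE','CH','FX','SH','FD','EY')
or sm.s_t in ('TA','ON')
)
group by bv.port, bv.sip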

Long execution time updating a table with a join in SQL Server 2008

I'm facing a big problem when trying to update a table containing stock data, joined with a table containing product classifications. The operation takes a long time to execute.
The table dw_giacenze (aliased a, filtered on flag_nomatch equal to 'T') is inner joined with dw_key_prod (aliased z) on the ecat_key field.
a contains up to 3 million records, z about 150k records.
It takes more than 2 hours to execute.
Below is the update query I'm using.
update dw_giacenze
set cate_ecat_key = z.cate_ecat_key,
sottocat_ecat_key = z.sottocat_ecat_key,
marchio_key = z.marchio_key,
sottocat_bi_key = z.sottocat_bi_key,
gruppo_bi_key = z.gruppo_bi_key,
famiglia_bi_key = z.famiglia_bi_key,
flag_nomatch = NULL
from dw_giacenze a
inner join dw_key_prod z on
z.ecat_key = a.ecat_key
where
a.flag_nomatch = 'T';
Can anyone help me in optimizing it?
Thanks in advance!
Enrico
I would suggest focusing on a.flag_nomatch = 'T'.
A great way to get a really clear picture of what's going on is to use SQL Server Profiler. If this shows that your reads equal the number of rows in the table, then that's definitely an issue, and adding an index on flag_nomatch should help.
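For example (the index name here is just an assumption):
CREATE NONCLUSTERED INDEX IX_dw_giacenze_flag_nomatch
ON dw_giacenze (flag_nomatch);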
Alternatively, you could separate this out and update things individually (to start with)
UPDATE dw_giacenze
set sottocat_ecat_key = (SELECT sottocat_ecat_key
FROM dw_key_prod
WHERE dw_key_prod.ecat_key = dw_giacenze.ecat_key)
where
dw_giacenze.flag_nomatch = 'T';
I did notice that the first parameter in your set statement is actually the same parameter in your join. That means that you are setting it to the same exact value, so you should be able to remove that anyway.

Using SSIS to perform an operation with high performance

I'm trying to build a user network based on the call detail records in my CDR table.
To make things simple, let's say I've got a CDR table:
CDRid
UserAId
UserBId
There are more than 100 million records, so the table is quite big.
I created a user2user table:
UserAId
UserBId
NumberOfConnections
Then, using a cursor, I iterate through each row in the table and run a SELECT statement:
If the user2user table has a record where UserAId = UserAId from the CDR record and UserBId = UserBId from the CDR record, then increase NumberOfConnections.
Otherwise, insert such a row with NumberOfConnections = 1.
It's quite a simple task, and it works with the cursor as I said, but performance is very bad (estimated time on my computer: ~60 h).
I've heard that SQL Server Integration Services (SSIS) performs better when working with tables this big.
The problem is that I have no idea how to build an SSIS package for such a task.
If anyone has any idea how to help me, or any good resources, etc., I would be really thankful.
Maybe there is another good solution to make it work faster. I have used indexes, table variables, and so on, and performance is still poor.
Thanks for your help.
P.S.
This is the script I wrote; executing it takes something like 40-50 hours.
DECLARE CDR_cursor CURSOR FOR
SELECT CDRId, SubscriberAId, BNumber
FROM dbo.CDR
OPEN CDR_cursor;
FETCH NEXT FROM CDR_cursor
INTO @CdrId, @SubscriberAId, @BNumber;
WHILE @@FETCH_STATUS = 0
BEGIN
-- Here I check whether there is a user with this number (in CDR I only have SubscriberAId
-- and BNumber, so I need to work out which user this is; I only have users from the
-- network, so each time I can't find the user I add one who is outside the network).
SELECT @UserBId = (SELECT UserID FROM dbo.Number WHERE Number = @BNumber)
IF (@UserBId IS NULL)
BEGIN
INSERT INTO dbo.[User] (ID, Marked, InNetwork)
VALUES (@OutUserId, 0, 0);
INSERT INTO dbo.[Number] (Number, UserId) VALUES (@BNumber, @OutUserId);
INSERT INTO dbo.User2User
VALUES (@SubscriberAId, @OutUserId, 1)
SET @OutUserId = @OutUserId - 1;
END
ELSE
BEGIN
UPDATE dbo.User2User
SET NumberOfConnections = NumberOfConnections + 1
WHERE User1ID = @SubscriberAId AND User2ID = @UserBId
-- Insert the row if the UPDATE statement failed.
IF (@@ROWCOUNT = 0)
BEGIN
INSERT INTO dbo.User2User
VALUES (@SubscriberAId, @UserBId, 1)
END
END
SET @Counter = @Counter + 1;
IF ((@Counter % 100000) = 0)
BEGIN
PRINT CAST(@Counter AS NVARCHAR(12));
END
FETCH NEXT FROM CDR_cursor
INTO @CdrId, @SubscriberAId, @BNumber;
END
CLOSE CDR_cursor;
DEALLOCATE CDR_cursor;
The thing about SSIS is that it probably won't be much faster than a cursor. It's pretty much doing the same thing: reading the table record by record, processing the record and then moving to the next one. There are some advanced techniques in SSIS like sharding the data input that will help if you have heavy duty hardware, but without that it's going to be pretty slow.
A better solution would be to write an INSERT and an UPDATE statement that will give you what you want. With that you'll be better able to take advantage of indices on the database. They would look something like:
WITH SummaryCDR (UserAId, UserBId, Conns) AS
(
SELECT UserAId, UserBId, COUNT(1) FROM CDR
GROUP BY UserAId, UserBId)
UPDATE user2user
SET NumberOfConnections = NumberOfConnections + SummaryCDR.Conns
FROM SummaryCDR
WHERE SummaryCDR.UserAId = user2user.UserAId
AND SummaryCDR.UserBId = user2user.UserBId;
INSERT INTO user2user (UserAId, UserBId, NumberOfConnections)
SELECT CDR.UserAId, CDR.UserBId, Count(1)
FROM CDR
LEFT OUTER JOIN user2user
ON user2user.UserAId = CDR.UserAId
AND user2user.UserBId = CDR.UserBId
WHERE user2user.UserAId IS NULL
GROUP BY CDR.UserAId, CDR.UserBId
(NB: I don't have time to test this code, you'll have to debug it yourself)
is this what you need?
select
UserAId, UserBId, count(CDRid) as count_connections
from cdr
group by UserAId, UserBId
Could you break the conditional update/insert into two separate statements and get rid of the cursor?
Do the INSERT for all the NULL rows and the UPDATE for all the NOT NULL rows.
Why are you even considering row-by-row processing on a table that size? You know you can use the MERGE statement to insert or update, and it will be faster. Or you could write an update to handle all the rows that need updating in one set-based statement, and an insert to add all the rows that don't yet exist in another set-based statement.
Stop using the VALUES clause and use an INSERT with joins instead. The same goes for updates. If you need extra complexity, a CASE statement will probably give you all you need.
In general, stop thinking in terms of row-by-row processing. If you can write a SELECT for the cursor, you can write a set-based statement to do the work 99.9% of the time.
You may still want a cursor with a table this large, but one that processes batches of data (for instance, 1,000 records at a time), not one that runs row-by-row. A rough sketch of the MERGE approach is below.
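Here is a rough, untested sketch of that MERGE; the user2user column names (UserAId, UserBId, NumberOfConnections) are taken from the question, and the dbo schema is an assumption:
MERGE dbo.user2user AS target
USING (
SELECT UserAId, UserBId, COUNT(*) AS Conns
FROM dbo.CDR
GROUP BY UserAId, UserBId
) AS source
ON target.UserAId = source.UserAId
AND target.UserBId = source.UserBId
WHEN MATCHED THEN
-- add to the existing count, mirroring the UPDATE in the earlier answer
UPDATE SET NumberOfConnections = target.NumberOfConnections + source.Conns
WHEN NOT MATCHED THEN
INSERT (UserAId, UserBId, NumberOfConnections)
VALUES (source.UserAId, source.UserBId, source.Conns);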