I am confused about how to copy a column from one table to another table using a WHERE clause. I wrote an SQL query, but it fails with either "transaction lock time exceeded" or "query returns more than one row".
I am using MySQL.
Basically, I have:
Table 1: Results (BuildID, platform_to_insert)
Table 2: build (BuildID, correct_platform)
and my query is:
update results
set results.platform_to_insert = (select correct_platform
                                  from build
                                  where results.BuildID = build.BuildID)
I do not believe you need a subquery:
UPDATE results, build
SET results.platform_to_insert = build.correct_platform
WHERE results.BuildID = build.BuildID
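Equivalently, the same multi-table update can be written with an explicit JOIN:
UPDATE results
JOIN build ON results.BuildID = build.BuildID
SET results.platform_to_insert = build.correct_platform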
There are two options here:
update your tables to use BuildID as a primary key, to avoid duplicates (a sketch follows the query below)
update your subquery to only return one result
UPDATE results SET results.platform_to_insert = (
SELECT correct_platform
FROM build
WHERE results.BuildID=build.BuildID LIMIT 1
);
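For the first option, a minimal sketch, assuming BuildID really is unique in build and the table has no primary key yet:
ALTER TABLE build ADD PRIMARY KEY (BuildID);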
I want to update a MySQL table from Matlab in bulk. My current logic iterates over the array and issues one UPDATE per row, which takes far too long.
Here is my current implementation:
function update_table(customer_id_list, cluster_id_list, write_conn)
    num_customers = size(customer_id_list, 1);
    for idx = 1:num_customers
        customer_id = customer_id_list(idx);
        cluster_id = cluster_id_list(idx);
        % one UPDATE per row -- this per-row round trip is the slow part
        sql = ['UPDATE table SET cluster_id = ', num2str(cluster_id), ...
               ' WHERE customer_id = ', num2str(customer_id)];
        exec(write_conn, sql);
    end
end
I tried to find documentation for doing a bulk update/insert, but haven't found anything yet.
Do an "upjoin" using a temporary table:
1. Build your update specification as a Matlab table array with all the cluster_id and customer_id pairs that specify the new values.
2. Create a SQL temporary table that contains columns for the key columns you'll be matching on and the columns to update:
CREATE TEMPORARY TABLE my_temp_table SELECT customer_id, cluster_id FROM table WHERE 1 = 0
3. Batch-insert your update specification data from Matlab into the temporary table using Matlab Database Toolbox's datainsert or sqlwrite.
4. Update the target table en masse by joining it to the temp table:
UPDATE table targ INNER JOIN my_temp_table upd ON targ.customer_id = upd.customer_id SET targ.cluster_id = upd.cluster_id
5. Drop the temp table.
Boom. If you're going to do this a lot, wrap it up in a generic upjoin() function. A sketch of the full SQL sequence is below.
See the Matlab documentation for datainsert and sqlwrite. Do not use fastinsert; despite its name, it is much slower than datainsert and sqlwrite.
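Putting the SQL side of those steps together, a sketch using the placeholder names above (note that table is actually a reserved word in MySQL, so a real table name would replace it):
CREATE TEMPORARY TABLE my_temp_table
SELECT customer_id, cluster_id FROM table WHERE 1 = 0;

-- batch-insert the update specification from Matlab into my_temp_table here
-- (datainsert or sqlwrite)

UPDATE table targ
INNER JOIN my_temp_table upd ON targ.customer_id = upd.customer_id
SET targ.cluster_id = upd.cluster_id;

DROP TEMPORARY TABLE my_temp_table;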
This is my code
UPDATE ks_tidy SET announcedWeek = (
SELECT week(dateAnnounced) as weekAnnounced
FROM ks_tidy ) WHERE announcedWeek = ''
and the problem I'm running into is that it fails as soon as more than one row needs updating:
1242 - Subquery returns more than 1 row
My table has two columns: dateAnnounced (DATE [yyyy-mm-dd], UNIQUE, PRIMARY KEY) and announcedWeek (INT). The announcedWeek column was added to store the week number, which I am trying to "calculate" with the SQL WEEK() function. I have thousands of rows of data and this table is updated regularly, so doing this manually is not an option.
I don't see why you think you need a subquery for this:
UPDATE ks_tidy SET announcedWeek = week(dateAnnounced)
WHERE announcedWeek = ''
If this is not the case, please add sample data and expected output as text to the question. Also, should you be testing for NULL rather than ''? And do you really need to store an easily derived value?
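If the rows to fill really are NULL rather than empty strings (announcedWeek is an INT, so comparing it to '' relies on implicit conversion), the test would be:
UPDATE ks_tidy SET announcedWeek = week(dateAnnounced)
WHERE announcedWeek IS NULL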
I am trying to insert records into a MySQL database from MS SQL Server using OPENQUERY, but I want to ignore the duplicate-key messages: when the query runs into a duplicate, it should skip it and keep going.
What can I do to ignore the duplicates?
Here is what I am doing:
pulling records from MySQL using OPENQUERY to get the MySQL "A.record_id"
joining those records to records in MS SQL Server (on specific criteria, not a direct id); from there I find a new related record identifier, "B.new_id", in SQL Server
I want to insert the found results into a new table in MySQL as (A.record_id, B.new_id), where A.record_id is the primary key of the new table.
The problem is that when joining table A to table B, I sometimes find 2+ records in table B matching my criteria, which makes A.record_id appear 2+ times in my data set before the insert, and that triggers the duplicate-key error. Note: I could use an aggregate function to eliminate the duplicate records.
I don't think there is a specific option for this, but it is easy enough to do:
insert into oldtable(. . .)
select . . .
from newtable
where not exists (select 1 from oldtable where oldtable.id = newtable.id)
If there is more than one set of unique keys, you can add additional NOT EXISTS conditions.
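For example, with a hypothetical second unique key other_key, keeping the placeholder style above:
insert into oldtable(. . .)
select . . .
from newtable
where not exists (select 1 from oldtable where oldtable.id = newtable.id) and
      not exists (select 1 from oldtable where oldtable.other_key = newtable.other_key);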
EDIT:
For the revised problem:
insert into oldtable(. . .)
select . . .
from (select nt.*, row_number() over (partition by id order by (select null)) as seqnum
from newtable nt
) nt
where seqnum = 1 and
not exists (select 1 from oldtable where oldtable.id = nt.id);
The row_number() function assigns a sequential number to each row within a group of rows. The group is defined by the partition by clause. The numbers start at 1 and increment from there. The order by (select null) says that you don't care about the order. Exactly one row with each id will have a value of 1; duplicate rows will have a value larger than 1. The seqnum = 1 condition chooses exactly one row per id.
If you are on SQL Server 2008+, you can use MERGE to do an INSERT if the row does not exist, or an UPDATE if it does.
Example:
MERGE
INTO dataValue dv
USING tmp_holding_DataValue t
ON t.dateStamp = dv.dateStamp
AND t.itemId = dv.itemId
WHEN NOT MATCHED THEN
INSERT (dateStamp, itemId, value)
VALUES (t.dateStamp, t.itemId, t.value);
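The example above covers only the INSERT half. To also get the UPDATE behavior mentioned, a WHEN MATCHED branch can be added; a sketch, assuming value is the only column that needs refreshing:
MERGE
INTO dataValue dv
USING tmp_holding_DataValue t
ON t.dateStamp = dv.dateStamp
AND t.itemId = dv.itemId
WHEN MATCHED THEN
UPDATE SET dv.value = t.value
WHEN NOT MATCHED THEN
INSERT (dateStamp, itemId, value)
VALUES (t.dateStamp, t.itemId, t.value);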
In DB2, I need to do a SELECT FROM UPDATE, to put an update + select in a single transaction.
But I need to make sure to update only one record per transaction.
I am familiar with the LIMIT clause from MySQL's UPDATE statement, which "places a limit on the number of rows that can be updated". I looked for something similar in DB2's UPDATE reference, but without success.
How can something similar be achieved in DB2?
Edit: In my scenario, I have to deliver 1000 coupon codes upon request. I just need to select any one coupon that has not been given out yet.
The question uses some ambiguous terminology that makes it unclear what needs to be accomplished. Fortunately, DB2 offers robust support for a variety of SQL patterns.
To limit the number of rows that are modified by an UPDATE:
UPDATE ( SELECT t.column1
         FROM someschema.sometable t
         WHERE ...
         FETCH FIRST ROW ONLY
       )
SET column1 = 'newvalue';
The UPDATE statement never sees the base table, just the expression that filters it, so you can control which rows are updated.
To INSERT a limited number of new rows:
INSERT INTO mktg.offered_coupons( cust_id, coupon_id, offered_on, expires_on )
SELECT c.cust_id, 1234, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 30 DAYS
FROM mktg.customers c
LEFT OUTER JOIN mktg.offered_coupons o
ON o.cust_id = c.cust_id
WHERE ....
AND o.cust_id IS NULL
FETCH FIRST 1000 ROWS ONLY;
This is how DB2 supports SELECT from an UPDATE, INSERT, or DELETE statement (NEW TABLE exposes the rows as they look after the change; for a DELETE you would use OLD TABLE instead):
SELECT column1 FROM NEW TABLE (
UPDATE ( SELECT column1 FROM someschema.sometable
WHERE ... FETCH FIRST ROW ONLY
)
SET column1 = 'newvalue'
) AS x;
The SELECT will return data from only the modified rows.
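For the coupon scenario from the edit, these two pieces combine into a single statement that claims one unclaimed code and returns it. A sketch, assuming a hypothetical table promo.coupons(code, claimed_by) where an unclaimed coupon has claimed_by IS NULL:
SELECT code
FROM NEW TABLE (
    UPDATE ( SELECT code, claimed_by
             FROM promo.coupons
             WHERE claimed_by IS NULL
             FETCH FIRST ROW ONLY
           )
    SET claimed_by = USER
) AS claimed;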
You have two options. As noted by A Horse With No Name, you can use the primary key of the table to ensure that one row is updated at a time.
The alternative, if you're using a programming language and have control over cursors, is to use a cursor with the FOR UPDATE option (though that may be optional; IIRC, cursors are FOR UPDATE by default when the underlying SELECT allows it), and then use an UPDATE statement with WHERE CURRENT OF <cursor-name>. This updates the one row the cursor is currently positioned on. The details of the syntax vary with the language you're using, but the raw SQL looks like:
DECLARE cursor_name CURSOR FOR
    SELECT *
    FROM SomeTable
    WHERE PKCol1 = ? AND PKCol2 = ?
    FOR UPDATE;

UPDATE SomeTable
SET ...
WHERE CURRENT OF cursor_name;
If you can't write DECLARE in your host language, you have to do manual bashing to find the equivalent mechanism.
I have a query written very poorly in SQL Server 2008
UPDATE PatientChartImages
SET PatientChartImages.IsLockDown = @IsLockdown
WHERE PatientChartImages.IsLockDown = @IsNotLockdown
AND PatientChartId IN (
    SELECT PatientCharts.PatientChartId
    FROM PatientCharts
    WHERE ( PatientCharts.ChartStatusID = @ChartCompletedStatusID
            OR PatientCharts.ChartStatusID = @ChartOnBaseStatusID )
    AND PatientCharts.IsLockDown = @IsNotLockdown
    AND PatientCharts.CompletedOn IS NOT NULL
    AND DATEDIFF(MINUTE, PatientCharts.CompletedOn, GETUTCDATE()) >=
        ( SELECT tf.LockUpInterval
          FROM #tblFacCOnf tf
          WHERE tf.facilityId = PatientCharts.FacilityId ) )
This query locks the main table and results in a timeout. If I create a CTE of all the updatable records first, and then update the main table by joining to the CTE, will that help?
The first thing I advise you to do is to substitute the IN condition with EXISTS. Second, move all this conditional logic into a CTE. Third, substitute the sub-select on #tblFacCOnf with a join.
The last piece of advice depends on your business logic and is not so important, in my opinion.
So in the end you will get something like:
WITH search_cte AS (
    SELECT PatientCharts.PatientChartId
    FROM PatientCharts
    JOIN #tblFacCOnf tf ON tf.facilityId = PatientCharts.FacilityId
    WHERE ( PatientCharts.ChartStatusID = @ChartCompletedStatusID
            OR PatientCharts.ChartStatusID = @ChartOnBaseStatusID )
    AND PatientCharts.IsLockDown = @IsNotLockdown
    AND PatientCharts.CompletedOn IS NOT NULL
    AND DATEDIFF(MINUTE, PatientCharts.CompletedOn, GETUTCDATE()) >= tf.LockUpInterval
) --cte end
UPDATE PatientChartImages
SET PatientChartImages.IsLockDown = @IsLockdown
WHERE PatientChartImages.IsLockDown = @IsNotLockdown
AND EXISTS (SELECT 1 FROM search_cte
            WHERE search_cte.PatientChartId = PatientChartImages.PatientChartId)
One additional thing I might suggest, if the other suggestions don't get you enough speed, is not to use a table variable. Temp tables are often faster for large data sets and can be indexed if need be.
The update lock is held for the time it takes to compute the CTE plus the time for the update; the CTE time is probably causing the timeout.
To reduce the lock time to the minimum required to update the target table, I suggest you create a temp table with two columns: Col1 is the primary key or cluster key of the target table, and Col2 is the value you want in the target table. Create the temp table and fill it according to your business logic within one transaction. Then update the target table, joining to the temp table and taking the value from it, in a separate transaction. After the update, drop the temp table. A sketch follows.
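A sketch of that approach, reusing the question's names (the filter that fills the temp table mirrors the CTE above; since the new value is the single variable @IsLockdown, one key column is enough):
-- do the slow filtering work outside the update transaction
CREATE TABLE #ChartsToLock (PatientChartId INT PRIMARY KEY);

BEGIN TRANSACTION;
INSERT INTO #ChartsToLock (PatientChartId)
SELECT pc.PatientChartId
FROM PatientCharts pc
JOIN #tblFacCOnf tf ON tf.facilityId = pc.FacilityId
WHERE ( pc.ChartStatusID = @ChartCompletedStatusID
        OR pc.ChartStatusID = @ChartOnBaseStatusID )
AND pc.IsLockDown = @IsNotLockdown
AND pc.CompletedOn IS NOT NULL
AND DATEDIFF(MINUTE, pc.CompletedOn, GETUTCDATE()) >= tf.LockUpInterval;
COMMIT;

-- the target table is locked only for this short join-update
BEGIN TRANSACTION;
UPDATE pci
SET pci.IsLockDown = @IsLockdown
FROM PatientChartImages pci
JOIN #ChartsToLock t ON t.PatientChartId = pci.PatientChartId
WHERE pci.IsLockDown = @IsNotLockdown;
COMMIT;

DROP TABLE #ChartsToLock;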
I think you should create a SQL script (or stored procedure, if you will use it from a higher level) where you store the results of your selection in a cursor (you'll only have to find the PatientChartIds of the rows to be updated) and then use it in your update. So the answer is yes.
It's easy to test this: put these commands in a transaction, perform a selection to check your results, and then roll the transaction back. Good luck.
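For example (a sketch; the WHERE clauses are abbreviated, and the ROLLBACK guarantees nothing is persisted):
BEGIN TRANSACTION;

-- run the update under test (the full statement from the question, or the CTE version above)
UPDATE PatientChartImages
SET IsLockDown = @IsLockdown
WHERE ...;

-- inspect the outcome while the change is still uncommitted
SELECT PatientChartId, IsLockDown
FROM PatientChartImages
WHERE ...;

-- undo everything
ROLLBACK TRANSACTION;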