I have a query that inserts some data into a table. The query joins to another table.
Will this other table be locked while the query is running?
Edit:
Here is a query like the one I'm using:
INSERT INTO table_1
SELECT t3.first_row,
t3.second_row
FROM table_2 t2
INNER JOIN
table_3 t3
ON t2.t3_fk = t3.id
WHERE t3.id IN (1, 2, 3, 4)
AND t2.created_at <= '2014-12-21 22:59:59'
The query runs inside a Rails transaction.
Yes, the joined table will be locked while the insert runs. Doing the insertion inside a transaction is the better approach.
I found a solution. With transaction-isolation set to READ-COMMITTED, the read side of the statement won't lock the table.
See: http://harrison-fisk.blogspot.ch/2009/02/my-favorite-new-feature-of-mysql-51.html
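As a sketch, using the schema from the question, the isolation level can be set for the current session before running the insert (note that with statement-based binary logging, MySQL may still restrict READ COMMITTED for INSERT ... SELECT):

```sql
-- Applies to subsequent transactions in this session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

START TRANSACTION;
INSERT INTO table_1
SELECT t3.first_row,
       t3.second_row
FROM table_2 t2
INNER JOIN table_3 t3 ON t2.t3_fk = t3.id
WHERE t3.id IN (1, 2, 3, 4)
  AND t2.created_at <= '2014-12-21 22:59:59';
COMMIT;
```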
Related
I need to process a queue, and MySQL's SELECT ... FOR UPDATE is the tool for that.
Everything is working fine, except that FOR UPDATE locks the joined tables too.
I have 4 functions that are called simultaneously, and 2 of them join the same table.
The problem is that it seems to lock the entire joined table, not just the specific joined rows.
My code is something like this (just an example):
Query #1:
SELECT t1.id, t1.name, t2.balance FROM t1
LEFT JOIN t2 ON t2.user_id = t1.id
WHERE t2.balance < 0
LIMIT 10
FOR UPDATE
SKIP LOCKED
Query #2:
SELECT t3.id, t3.action, t2.balance FROM t3
LEFT JOIN t2 ON t2.user_id = t3.user_id
WHERE t2.balance < 0
LIMIT 10
FOR UPDATE
SKIP LOCKED
I didn't find anything related to this.
Is there a way to avoid that behavior?
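One hedged suggestion, assuming MySQL 8.0 (which SKIP LOCKED already requires): FOR UPDATE OF lets you name the tables whose rows should be locked, leaving the other side of the join unlocked. For query #1 that might look like the following sketch (note the LEFT JOIN is swapped for an INNER JOIN, since WHERE t2.balance < 0 already discards unmatched rows):

```sql
-- FOR UPDATE OF t2 locks only the matched rows of t2 (MySQL 8.0+);
-- rows read from t1 are not locked
SELECT t1.id, t1.name, t2.balance
FROM t1
INNER JOIN t2 ON t2.user_id = t1.id
WHERE t2.balance < 0
LIMIT 10
FOR UPDATE OF t2 SKIP LOCKED;
```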
I'm trying to update a table of my database with values from another table and database, using a common field.
However, none of my statements are working and I can't figure out why.
They run for ten minutes (way too long) and then the server disconnects.
First try:
UPDATE database1.table1 t1, database2.table2 t2
SET t1.field1 = t2.field1
WHERE t1.field2 = t2.field2;
Second try:
UPDATE database1.table1 t1
INNER JOIN database2.table2 t2
ON t1.field2 = t2.field2
SET t1.field1 = t2.field1;
Can somebody push me in the right direction?
The WHERE/ON conditions work in separate SELECT statements. As a workaround I created new tables with a SELECT ... JOIN, but that is very slow.
field1 is unique in table2, but not in table1. I want to update multiple entries in table1 with a unique value from table2.
I would like to join two tables, T1 and T2.
T1 is my left table and T2 is my right.
SELECT
DISTINCT T1.name AS 'name',
T1.volume AS 'volume',
T1.vserver AS 'vserver',
T1.cluster AS 'cluster',
T1.snapmirror_label AS 'snapmirror_label',
T1.timestamp AS 'timestamp'
FROM
schema3.snapmirror_policy_rule snapmirror_policy_rule,
schema3.vserver vserver,
schema3.cluster cluster,
schema1.T1 T1
LEFT JOIN
schema2.T2
ON schema1.T1.name = schema2.T2.name
....
My question is: if table T2 doesn't exist, how should I proceed?
My idea is to join the two tables if both (T1 as well as T2) exist; otherwise, based on a few conditions, I will select rows only from T1 (which always exists). I'm looking for a query that covers both cases.
PS: In my working environment, anything complex like a stored procedure won't work. I'm looking for a simple, straightforward MySQL query.
OK, I am using a MySQL database. I have 2 simple tables.
Table1
ID  Text
12  txt1
13  txt2
42  txt3
...

Table2
ID  Type  Text
13  1     MuTxt1
42  1     MuTxt2
12  2     Xnnn
Now I want to join these 2 tables to get all data for Type=1 in table 2
SQL1:
Select * from
Table1 t1
Join
(select * from Table2 where Type=1) t2
on t1.ID=t2.ID
SQL2:
Select * from
Table1 t1
Join
Table2 t2
on t1.ID=t2.ID
where t2.Type=1
These 2 queries give the same result, but which one is faster?
I don't know how MySQL performs the join, which is why I'm wondering.
Extra info: now if I don't want Type=1 but want t2.text='MuTxt1', SQL2 will become
Select * from
Table1 t1
Join
Table2 t2
on t1.ID=t2.ID
where t2.text='MuTxt1'
I feel like this query would be slower. Is it?
Sometimes the MySQL query optimizer does a pretty decent job and sometimes it doesn't. That said, there are exceptions to my answer where the optimizer does something smarter.
Subqueries are generally expensive, as MySQL needs to execute them and store their results separately. Normally, when you could use either a subquery or a join, the join is faster, especially when the subquery sits in your WHERE clause without a LIMIT.
Select *
from Table1 t1
Join Table2 t2 on t1.ID=t2.ID
where t2.Type=1
and
Select *
from Table1 t1
Join Table2 t2
where t1.ID =t2.ID AND t2.Type=1
should perform equally well, while
Select *
from Table1 t1
Join (select *
from Table2
where Type=1) t2
on t1.ID=t2.ID
most likely is a lot slower, because MySQL stores the result of select * from Table2 where Type=1 in a temporary table.
Conceptually, a join builds the combinations of rows from both tables and then removes the rows that do not match the conditions. In practice, MySQL will of course try to use indexes on the columns compared in the ON clause and filtered in the WHERE clause.
If you are interested in which indexes are used, write EXPLAIN in front of your query and execute.
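For the queries above, that might look like:

```sql
-- Shows join order, access type (e.g. ref vs ALL), and which index is used
EXPLAIN
SELECT *
FROM Table1 t1
JOIN Table2 t2 ON t1.ID = t2.ID
WHERE t2.Type = 1;
```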
In my view, the second query is better than the first in terms of both readability and performance. You can also include the filter condition in the JOIN clause, like:
Select * from
Table1 t1
Join
Table2 t2 on t1.ID=t2.ID and t2.Type=1
You can compare the execution times of all of these queries on SQL Fiddle.
I think this question is hard to answer, since we don't know the exact internals of the database's query parser. Usually these kinds of constructions are evaluated in a similar way (the database can see that the first and second query are equivalent and plan them the same, or not).
I would write the second one, since it is clearer what is happening.
I am trying to spot some broken records in an MS-SQL database.
In a simplified example, the scenario is this:
I have 2 tables, simply put:
Table_1 : Id,Date,OpId
Table_2 : Date,OpId,EventName
And I have this business rule: if there is a record in Table_1, then at least 1 row should exist in Table_2 for that Table_1.Date and Table_1.OpId.
If there is a row in Table_1 with no matching row in Table_2, then the data is broken, whatever the reason.
To find out the incorrect data, I use:
SELECT *
FROM table_1 t1
LEFT JOIN table_2 t2 ON t1.Date = t2.Date AND t1.OpId = t2.OpId
WHERE t2.OpId IS NULL -- so, if there is no matching row in table_2, this is a mistake
But the query takes too long to complete.
Is there a faster or better way to approach similar scenarios?
For an anti semi join, NOT EXISTS in SQL Server usually performs as well as or better than the other options (NOT IN, OUTER JOIN ... IS NULL, EXCEPT):
SELECT *
FROM table_1 t1
WHERE NOT EXISTS (SELECT *
FROM table_2 t2
WHERE t1.Date = t2.Date
AND t1.OpId = t2.OpId)
See Left outer join vs NOT EXISTS. You may well be missing a useful index though.
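As a sketch, assuming no such index exists yet, a composite index covering the two join columns would let the anti semi join seek into table_2 instead of scanning it (the index name here is made up):

```sql
-- Composite index on the columns the anti semi join probes
CREATE INDEX IX_table_2_Date_OpId
ON table_2 (Date, OpId);
```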
If you use proper indexing, there is not much more to do (though using NOT EXISTS instead of LEFT JOIN may be a little bit faster),
BUT
if Table_1 holds a relatively small amount of data, has no foreign keys or other such constraints, and this is a one-time procedure, then you can use a trick like this to drop the incorrect rows:
SELECT t1.*
INTO tempTable
FROM table_1 t1
WHERE EXISTS (SELECT *
              FROM table_2 t2
              WHERE t1.Date = t2.Date
                AND t1.OpId = t2.OpId)
DROP TABLE table_1
EXEC sp_rename 'tempTable', 'Table_1'
This may be faster