Periodic "Opening tables" on MySQL Insert - mysql

I have a long-running insert whose SHOW PROCESSLIST state toggles between NULL and "Opening tables":
| Id | User     | Host                        | db                | Command | Time | State          | Info                     |
|  5 | mckelvey | mushroom.jpl.nasa.gov:57050 | smap_ampcs_v5_2_0 | Query   | 7105 | Opening tables | INSERT INTO ChannelValue |
|  5 | mckelvey | mushroom.jpl.nasa.gov:57050 | smap_ampcs_v5_2_0 | Query   | 7114 | NULL           | INSERT INTO ChannelValue |
It does this continually. SHOW OPEN TABLES shows 10 opened tables, 5 in use.
The global status Opened_tables is 190 and does not increase. table_open_cache is 1024.
Is something wrong? Why does it constantly open tables for an insert that is already in progress?
The insert is below. The tables in the v4 database are MyISAM, and the v5_2 tables are InnoDB. ComboParent is a temporary MyISAM table. There are no errors in the server log.
INSERT INTO ChannelValue
SELECT cv.sessionId,
       cv.hostId,
       cv.sessionFragment,
       1 AS id,
       cv.id AS channelDataId,
       cp.parentId,
       NULL AS uniqueId,
       IF(cv.dnUnsignedValue IS NOT NULL,
          CONVERT(cv.dnUnsignedValue, SIGNED),
          cv.dnIntegerValue) AS dnPackedValue,
       cv.dnDoubleValue,
       sv.stringId,
       cv.eu,
       SET_FLAGS(cv.dnDoubleFlag,
                 cv.euFlag,
                 cv.dnAlarmState,
                 cv.euAlarmState,
                 cv.dnUnsignedValue) AS flags
FROM smap_ampcs_v4_0_0.ChannelValue AS cv
STRAIGHT_JOIN ComboParent AS cp
    ON ((cv.sessionId = cp.sessionId) AND
        (cv.hostId = cp.hostId) AND
        (cv.sessionFragment = cp.sessionFragment) AND
        (cv.sclkCoarse = cp.sclkCoarse) AND
        (cv.sclkFine = cp.sclkFine) AND
        (cv.ertCoarse = cp.ertCoarse) AND
        (cv.ertFine = cp.ertFine) AND
        (cv.scetCoarse = cp.scetCoarse) AND
        (cv.scetFine = cp.scetFine) AND
        (cv.dssId = cp.dssId) AND
        (cv.vcid <=> cp.vcid) AND
        (cv.isRealtime = cp.isRealtime))
LEFT JOIN StringValue AS sv
    ON ((cv.hostId = sv.hostId) AND
        (cv.sessionId = sv.sessionId) AND
        (cv.sessionFragment = sv.sessionFragment) AND
        (0 = sv.fromSse) AND
        (cv.dnStringValue = sv.stringValue))
WHERE (cv.packetId IS NULL)
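For reference, these are the standard MySQL commands behind the numbers quoted above:

SHOW PROCESSLIST;                               -- state toggles between NULL and "Opening tables"
SHOW OPEN TABLES FROM smap_ampcs_v5_2_0;        -- 10 opened tables, 5 in use
SHOW GLOBAL STATUS LIKE 'Opened_tables';        -- 190, not increasing
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';  -- 1024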


MySQL Where IS NOT ALL NULL?

How can a query verify that all selected records are not NULL?
For example, with the following query, how can I verify that the field is_field_null
is not NULL for all rows returned in level1?
SELECT * FROM (
SELECT tb_a.A, tb_b.is_field_null FROM tb_a, tb_b, tb_c
WHERE tb_a.a = "some" AND tb_a.b = "thing" AND tb_c.c = "else"
AND tb_a.b = tb_b.a
AND tb_c.b = tb_b.c
) AS level1
Suppose some rows in level1 do have is_field_null as NULL, and the tables are:
tb_a
a | b | A
------------------------------------------
"some" | "thing" | "XXX"
"some" | "thing" | "YYY"
tb_b
a | c | is_field_null
----------------------------------------------
"thing" | "else" | "I have things here"
"thing" | "else" | NULL
tb_c
b | c | mapper
----------------------------------------------
"else" | "else" | "ZZZ"
"else" | "else" | "KKK"
I have tried the following, which returns the row(s) where is_field_null is not null. E.g.
A | is_field_null
-----------------------------
"XXX" | "I have things here"
"YYY" | "I have things here"
SELECT * FROM (
SELECT tb_a.A, tb_b.is_field_null FROM tb_a, tb_b, tb_c
WHERE tb_a.a = "some" AND tb_a.b = "thing" AND tb_c.c = "else"
AND tb_a.b = tb_b.a
AND tb_c.b = tb_b.c
) AS level1
WHERE level1.is_field_null IS NOT NULL
I would expect an empty table. How can I do it?
E.g.
SELECT * FROM (
SELECT tb_a.A, tb_b.is_field_null FROM tb_a, tb_b, tb_c
WHERE tb_a.a = "some" AND tb_a.b = "thing" AND tb_c.c = "else"
AND tb_a.b = tb_b.a
AND tb_c.b = tb_b.c
) AS level1
WHERE level1.is_field_null IS NOT ALL NULL ??
If you are running MySQL 8.0, a simple method uses window functions:
SELECT *
FROM (
    SELECT
        tb_a.A,
        tb_b.is_field_null,
        MAX(tb_b.is_field_null IS NULL) OVER () AS has_null
    FROM tb_a
    INNER JOIN tb_b ON tb_a.b = tb_b.a
    INNER JOIN tb_c ON tb_c.b = tb_b.c
    WHERE
        tb_a.a = 'some'
        AND tb_a.b = 'thing'
        AND tb_c.c = 'else'
) t
WHERE has_null = 0
Note that this uses standard, explicit joins rather than old-school, implicit joins; that ancient syntax should not be used in new code.
Also, I would recommend single quotes instead of double quotes for string literals (this is the MySQL syntax).
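For completeness, on versions before MySQL 8.0 (no window functions), a NOT EXISTS guard over the same join gives the same all-or-nothing behavior — an untested sketch:

SELECT tb_a.A, tb_b.is_field_null
FROM tb_a
INNER JOIN tb_b ON tb_a.b = tb_b.a
INNER JOIN tb_c ON tb_c.b = tb_b.c
WHERE tb_a.a = 'some'
  AND tb_a.b = 'thing'
  AND tb_c.c = 'else'
  -- reject the whole result set as soon as any matching row has a NULL
  AND NOT EXISTS (
      SELECT 1
      FROM tb_a a2
      INNER JOIN tb_b b2 ON a2.b = b2.a
      INNER JOIN tb_c c2 ON c2.b = b2.c
      WHERE a2.a = 'some'
        AND a2.b = 'thing'
        AND c2.c = 'else'
        AND b2.is_field_null IS NULL
  );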

Loading quoted numbers into a Snowflake table from CSV with COPY INTO <table>

I have a problem with loading CSV data into a Snowflake table. Fields are wrapped in double quote marks, and hence there is a problem with importing them into the table.
I know that COPY INTO has the CSV-specific option FIELD_OPTIONALLY_ENCLOSED_BY = '"', but it's not working at all.
Here are some pieces of the table definition and copy command:
CREATE TABLE ...
(
GamePlayId NUMBER NOT NULL,
etc...
....);
COPY INTO ...
FROM ...csv.gz'
FILE_FORMAT = (TYPE = CSV
STRIP_NULL_VALUES = TRUE
FIELD_DELIMITER = ','
SKIP_HEADER = 1
error_on_column_count_mismatch=false
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
)
ON_ERROR = "ABORT_STATEMENT"
;
The CSV file looks like this:
"3922000","14733370","57256","2","3","2","2","2019-05-23 14:14:44",",00000000",",00000000",",00000000",",00000000","1000,00000000","1000,00000000","1317,50400000","1166,50000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000",",00000000"
I get an error:
Numeric value '"3922000"' is not recognized
I'm pretty sure it's because the NUMBER values are interpreted as strings when Snowflake reads the "" marks, but since I use
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
they shouldn't even be there... Does anyone have a solution to this?
Maybe something is incorrect with your file? I was just able to run the following without issue.
1. create the test table:
CREATE OR REPLACE TABLE
dbNameHere.schemaNameHere.stacko_58322339 (
num1 NUMBER,
num2 NUMBER,
num3 NUMBER);
2. create test file, contents as follows
1,2,3
"3922000","14733370","57256"
3,"2",1
4,5,"6"
3. create stage and put file in stage
4. run the following copy command
COPY INTO dbNameHere.schemaNameHere.STACKO_58322339
FROM @stageNameHere/stacko_58322339.csv.gz
FILE_FORMAT = (TYPE = CSV
STRIP_NULL_VALUES = TRUE
FIELD_DELIMITER = ','
SKIP_HEADER = 0
ERROR_ON_COLUMN_COUNT_MISMATCH=FALSE
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
)
ON_ERROR = "CONTINUE";
5. results
+-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
| file | status | rows_parsed | rows_loaded | error_limit | errors_seen | first_error | first_error_line | first_error_character | first_error_column_name |
|-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------|
| stageNameHere/stacko_58322339.csv.gz | LOADED | 4 | 4 | 4 | 0 | NULL | NULL | NULL | NULL |
+-----------------------------------------------------+--------+-------------+-------------+-------------+-------------+-------------+------------------+-----------------------+-------------------------+
1 Row(s) produced. Time Elapsed: 2.436s
6. view the records
>SELECT * FROM dbNameHere.schemaNameHere.stacko_58322339;
+---------+----------+-------+
| NUM1 | NUM2 | NUM3 |
|---------+----------+-------|
| 1 | 2 | 3 |
| 3922000 | 14733370 | 57256 |
| 3 | 2 | 1 |
| 4 | 5 | 6 |
+---------+----------+-------+
Can you try a similar test to this?
EDIT: A quick look at your data shows that many of your numeric fields appear to start with commas, so something is definitely amiss with the data.
Assuming your numbers are European-formatted (comma for the decimal separator and period for the thousands separator), reading the numeric formatting help, it seems Snowflake does not support this as input. I'd open a feature request.
But if you read the column in as text and then use REPLACE, like
SELECT '100,1234'::text as A
,REPLACE(A,',','.') as B
,TRY_TO_DECIMAL(b, 20,10 ) as C;
gives:
A           B           C
100,1234    100.1234    100.1234000000
Safer would be to strip the thousands placeholders first, like
SELECT '1.100,1234'::text as A
,REPLACE(A,'.') as B
,REPLACE(B,',','.') as C
,TRY_TO_DECIMAL(C, 20,10 ) as D;
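You could also fold the conversion into the load itself, since Snowflake supports simple SELECT transformations in COPY INTO. A sketch — the target table and stage names are placeholders, and it assumes the European-formatted number is the first field in the file:

COPY INTO myTargetTable
FROM (
    -- $1 is the first column of the staged file; strip '.' thousands
    -- separators, turn the ',' decimal separator into '.', then convert
    SELECT TRY_TO_DECIMAL(REPLACE(REPLACE($1, '.'), ',', '.'), 20, 10)
    FROM @stageNameHere/yourfile.csv.gz
)
FILE_FORMAT = (TYPE = CSV
               FIELD_DELIMITER = ','
               SKIP_HEADER = 1
               FIELD_OPTIONALLY_ENCLOSED_BY = '"');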

Two methods of performing cohort analysis in MySQL using joins

I'm making a cohort analysis processor. Input parameters: a time range and step, a condition (initial event) to extract cohorts, and an additional condition (retention event) to check after each N hours/days/months. Output: a cohort analysis grid, like this:
0h | 16h | 32h | 48h | 64h | 80h | 96h |
cohort #00 15 | 6 | 4 | 1 | 1 | 2 | 2 |
cohort #01 1 | 35 | 8 | 0 | 2 | 0 | 1 |
cohort #02 0 | 3 | 31 | 11 | 5 | 3 | 0 |
cohort #03 0 | 0 | 4 | 27 | 7 | 6 | 2 |
cohort #04 0 | 1 | 1 | 4 | 29 | 4 | 3 |
Basically:
fetch cohorts: the unique users who did event #1 in each period of length time_step, starting from time_begin;
find how many of them (in each cohort) did event #2 after N seconds, N*2 seconds, N*3, and so on until now.
In short, I have two solutions. One works but is too slow: it runs a heavy select with joins for each data step (1 day, 2 days, 3 days, etc.). I want to optimize it by joining the result for every data step to the cohorts; that's the second solution. It looks like it works, but I'm not sure it's the best way, or that it will give the same result even if cohorts intersect. Please check it out.
Here's the whole story.
I have a table of > 100,000 events, something like this:
#user-id, timestamp, event_name
events_view (uid varchar(64), tm int(11), e varchar(64))
example input row:
"user_sampleid1", 1423836540, "level_end:001:win"
To make a cohort analysis, first I extract the cohorts: for example, users who sent the special event '1st_launch' in 10-hour periods starting from 2015-02-13 and ending with 2015-02-16. All code in this post is simplified and shortened to show the idea.
DROP TABLE IF EXISTS tmp_c;
create temporary table tmp_c (uid varchar(64), tm int(11), c int(11) );
set beg = UNIX_TIMESTAMP('2015-02-13 00:00:00');
set en = UNIX_TIMESTAMP('2015-02-16 00:00:00');
select min(tm) into t_start from events_view;
select max(tm) into t_end from events_view;
if beg < t_start then
    set beg = t_start;
end if;
if en > t_end then
    set en = t_end;
end if;
set period = 3600 * 10;
set cnt_c = ceil((en - beg) / period);
set i = 0;
/*works quick enough*/
WHILE i < cnt_c DO
    insert into tmp_c (
        select uid, min(tm), i from events_view
        where locate("1st_launch", e) > 0
          and tm > (beg + period * i)
          and tm <= (beg + period * (i+1))
        group by uid );
    SET i = i+1;
END WHILE;
Different cohorts may contain the same user ids, though usually one user exists in only one cohort. Within each cohort, users are unique.
Now I have temp table like this:
user_id | 1st timestamp | cohort_no
uid1 1423836540 0
uid2 1423839540 0
uid3 1423841160 1
uid4 1423841460 2
...
uidN 1423843080 M
Then I need to divide the time range into periods again and calculate, for each period, how many users from each cohort have sent the event "level_end:001:win".
For each small period I select all unique users who have sent the "level_end:001:win" event and left join them to the tmp_c cohorts table. So I have something like this:
user_id | 1st timestamp | cohort_no | user_id | other fields...
uid1 1423836540 0 uid1
uid2 1423839540 0 null
uid3 1423841160 1 null
uid4 1423841460 2 uid4
...
uidN 1423843080 M null
This way I see how many users from my cohorts are among those who sent "level_end:001:win", excluding the ones not found via the where clause: where t2.uid is not null.
Finally I group and get the count of users in each cohort who sent "level_end:001:win" in this particular period.
Here's the code:
DROP TABLE IF EXISTS tmp_res;
create temporary table tmp_res (uid varchar(64) CHARACTER SET cp1251 NOT NULL, c int(11), cnt int(11) );
set i = 0;
set cnt_c = ceil((t_end - beg) / period);
WHILE i < cnt_c DO
    insert into tmp_res
    select concat(beg + period * i, "_", beg + period * (i+1)), c, count(distinct(uid))
    from (select t1.uid, t1.c
          from tmp_c t1
          left join (select uid, min(tm)
                     from events_view
                     where locate("level_end:001:win", e) > 0
                       and tm > (beg + period * i)
                       and tm <= (beg + period * (i+1))
                     group by uid) t2
            on t1.uid = t2.uid
          where t2.uid is not null) t3
    group by c;
    SET i = i+1;
END WHILE;
/*getting result of the first method: tooo slooooow!*/
select * from tmp_res;
Here is the result I got (it's OK that some cohorts do not appear in some periods):
"1423832400_1423890000","1","35"
"1423832400_1423890000","2","3"
"1423832400_1423890000","3","1"
"1423832400_1423890000","4","1"
"1423890000_1423947600","1","21"
"1423890000_1423947600","2","50"
"1423890000_1423947600","3","2"
"1423947600_1424005200","1","9"
"1423947600_1424005200","2","24"
"1423947600_1424005200","3","70"
"1423947600_1424005200","4","6"
"1424005200_1424062800","1","7"
"1424005200_1424062800","2","15"
"1424005200_1424062800","3","21"
"1424005200_1424062800","4","32"
"1424062800_1424120400","1","7"
"1424062800_1424120400","2","13"
"1424062800_1424120400","3","24"
"1424062800_1424120400","4","18"
"1424120400_1424178000","1","10"
"1424120400_1424178000","2","12"
"1424120400_1424178000","3","18"
"1424120400_1424178000","4","14"
"1424178000_1424235600","1","6"
"1424178000_1424235600","2","7"
"1424178000_1424235600","3","9"
"1424178000_1424235600","4","12"
"1424235600_1424293200","1","6"
"1424235600_1424293200","2","8"
"1424235600_1424293200","3","9"
"1424235600_1424293200","4","5"
"1424293200_1424350800","1","5"
"1424293200_1424350800","2","3"
"1424293200_1424350800","3","11"
"1424293200_1424350800","4","10"
"1424350800_1424408400","1","8"
"1424350800_1424408400","2","5"
"1424350800_1424408400","3","7"
"1424350800_1424408400","4","7"
"1424408400_1424466000","2","6"
"1424408400_1424466000","3","7"
"1424408400_1424466000","4","3"
"1424466000_1424523600","1","3"
"1424466000_1424523600","2","4"
"1424466000_1424523600","3","8"
"1424466000_1424523600","4","2"
"1424523600_1424581200","2","3"
"1424523600_1424581200","3","3"
It works, but it takes too much time to process because it runs many queries instead of one, so I need to rewrite it.
I think it can be rewritten with joins, but I'm still not sure how.
I decided to make a temporary table and write period boundaries in it:
DROP TABLE IF EXISTS tmp_times;
create temporary table tmp_times (tm_start int(11), tm_end int(11));
set cnt_c = ceil((t_end - beg) / period);
set i = 0;
WHILE i < cnt_c DO
    insert into tmp_times values( beg + period * i, beg + period * (i+1));
    SET i = i+1;
END WHILE;
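(On MySQL 8.0+ this loop could itself be collapsed into a single recursive CTE — an untested sketch, assuming beg, t_end, and period are the procedure variables above:)

insert into tmp_times (tm_start, tm_end)
with recursive p (tm_start, tm_end) as (
    select beg, beg + period            -- first period
    union all
    select tm_end, tm_end + period      -- each next period starts where the last ended
    from p where tm_end < t_end
)
select tm_start, tm_end from p;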
Then I build a periods-to-events mapping (user_id + timestamp represents a particular event), left join it to the cohorts table, and group the result:
SELECT Concat(tm_start, "_", tm_end) per,
t1.c coh,
Count(DISTINCT( t2.uid ))
FROM tmp_c t1
LEFT JOIN (SELECT *
FROM tmp_times t3
LEFT JOIN (SELECT uid,
tm
FROM events_view
WHERE Locate("level_end:101:win", e) > 0)
t4
ON ( t4.tm > t3.tm_start
AND t4.tm <= t3.tm_end )
WHERE t4.uid IS NOT NULL
ORDER BY t3.tm_start) t2
ON t1.uid = t2.uid
WHERE t2.uid IS NOT NULL
GROUP BY per,
coh
ORDER BY per,
coh;
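(As an aside, since both derived tables discard NULL uids in their WHERE clauses, the LEFT JOINs in method #2 effectively behave as inner joins; an equivalent, flatter formulation — a sketch I have not benchmarked — would be:)

SELECT Concat(t3.tm_start, "_", t3.tm_end) per,
       t1.c coh,
       Count(DISTINCT t4.uid)
FROM tmp_c t1
INNER JOIN (SELECT uid, tm
            FROM events_view
            WHERE Locate("level_end:101:win", e) > 0) t4
        ON t1.uid = t4.uid
INNER JOIN tmp_times t3
        ON (t4.tm > t3.tm_start AND t4.tm <= t3.tm_end)
GROUP BY per, coh
ORDER BY per, coh;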
In my tests, method #2 returns the same result as method #1. I can't verify the result manually, but I understand better how method #1 works, and as far as I can see it gives what I want. Method #2 is faster, but I'm not sure it's the best way, or that it will give the same result even if cohorts intersect.
Maybe there are well-known common methods to perform a cohort analysis in SQL? Is method #1 more reliable than method #2? I don't work with joins that often, which is why I still don't fully understand their magic.
Method #2 looks like pure magic, and I tend not to believe in what I don't understand :)
Thanks for answers!

Use specific mysql index with rails

I have this ActiveRecord query
issue = Issue.find(id)
issue.articles.includes(:category).merge(Category.where(permalink: perma))
And the resulting MySQL query:
SELECT `articles`.`id` AS t0_r0, `articles`.`title` AS t0_r1,
`articles`.`hypertitle` AS t0_r2, `articles`.`html` AS t0_r3,
`articles`.`author` AS t0_r4, `articles`.`published` AS t0_r5,
`articles`.`category_id` AS t0_r6, `articles`.`issue_id` AS t0_r7,
`articles`.`date` AS t0_r8, `articles`.`created_at` AS t0_r9,
`articles`.`updated_at` AS t0_r10, `articles`.`photo_file_name` AS t0_r11,
`articles`.`photo_content_type` AS t0_r12, `articles`.`photo_file_size` AS t0_r13,
`articles`.`photo_updated_at` AS t0_r14, `categories`.`id` AS t1_r0,
`categories`.`name` AS t1_r1, `categories`.`permalink` AS t1_r2,
`categories`.`created_at` AS t1_r3, `categories`.`updated_at` AS t1_r4,
`categories`.`issued` AS t1_r5, `categories`.`order_articles` AS t1_r6
FROM `articles` LEFT OUTER JOIN `categories` ON
`categories`.`id` = `articles`.`category_id` WHERE
`articles`.`issue_id` = 409 AND `categories`.`permalink` = 'Διεθνή' LIMIT 1
In the EXPLAIN for this query I saw that it uses the wrong index:
+----+-------------+------------+-------+---------------------------------------------------------------------------+-------------------------------+---------+-------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+-------+---------------------------------------------------------------------------+-------------------------------+---------+-------+------+----------+-------------+
| 1 | SIMPLE | categories | const | PRIMARY,index_categories_on_permalink | index_categories_on_permalink | 768 | const | 1 | 100.00 | |
| 1 | SIMPLE | articles | ref | index_articles_on_issue_id_and_category_id, index_articles_on_category_id | index_articles_on_category_id | 2 | const | 10 | 100.05 | Using where |
+----+-------------+------------+-------+---------------------------------------------------------------------------+-------------------------------+---------+-------+------+----------+-------------+
I have two indexes: category_id alone, and issue_id + category_id.
In this query I'm searching by issue_id and category_id, which is much faster using index_articles_on_issue_id_and_category_id than index_articles_on_category_id.
How can I select the correct index with an ActiveRecord query?
You can leverage Arel like so to use a specific index:
class Issue
  def self.use_index(index)
    # update: OP fixed my mistake
    from("#{self.table_name} USE INDEX(#{index})")
  end
end
# then
Issue.use_index("bla").where(some_condition: true)
Add use_index to ActiveRecord::Relation.
There was discussion over multiple years about adding this to Rails core; however, it looks like the PR and branch were abandoned.
If you are aware of what database you're using and the limitations of this approach, you can easily add it to your own codebase.
Very similar to @krichard's solution, except generalized for all models instead of just Issue.
config/initializers/active_record_relation.rb
class ActiveRecord::Relation
  # Allow passing index hints to MySQL in case the query planner gets confused.
  #
  # Example:
  #   Message.first.events.use_index( :index_events_on_eventable_type_and_eventable_id )
  #   #=> Event Load (0.5ms) SELECT `events`.* FROM `events` USE INDEX (index_events_on_eventable_type_and_eventable_id) WHERE `events`.`eventable_id` = 123 AND `events`.`eventable_type` = 'Message'
  #
  # MySQL documentation:
  # https://dev.mysql.com/doc/refman/5.7/en/index-hints.html
  #
  # See https://github.com/rails/rails/pull/30514
  #
  def use_index( index_name )
    self.from( "#{ self.quoted_table_name } USE INDEX ( #{ index_name } )" )
  end
end
This will allow you to use something like:
issue.articles.includes( :category ).use_index( :index_articles_on_issue_id_and_category_id )
And the resulting SQL will include:
FROM articles USE INDEX( index_articles_on_issue_id_and_category_id )

SQL Server 2008: U and X locks - deadlock on one table without any indexes. How?

I'm observing really strange behavior in my DB.
I have one small table (about 300 rows) where one field is continuously updated.
I was getting a lot of deadlocks there: an update of the table was deadlocking a similar update of the same table (U lock vs. X lock).
So I decided to remove the clustered index (the table now has no indexes at all) to fix the deadlocks. But it didn't help, and now I'm getting deadlocks between the U and X lock modes.
So: one table, no indexes, and two sessions updating it.
Victim:
update dbo.MyNumber set
    @nextno = nextno = nextno + 1
where [type] = @type
  and yearid = @yearid
Winning query:
update dbo.MyNumber set
    @nextno = nextno = nextno + 1
where [type] = @TYPE
  and yrclosedyn = 0
The rows are definitely different, but the page is the same.
How is this possible? Maybe it is connected to lock escalation, or ...?
I really appreciate any suggestions.
Thanks in advance
Mike
DEADLOCK XML:
<deadlock-list>
<deadlock victim="process6c492e8">
<process-list>
<process id="processb6a988" taskpriority="0" logused="1848" waitresource="RID: 5:1:127478:16" waittime="3478" ownerId="17153439" transactionname="user_transaction" lasttranstarted="2012-12-18T12:31:40.147" XDES="0xffffffff89482258" lockMode="U" schedulerid="7" kpid="4248" status="suspended" spid="98" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2012-12-18T12:31:49.913" lastbatchcompleted="2012-12-18T12:31:49.913" clientapp="PenAIR" hostname="S16047425" hostpid="9300" loginname="sa" isolationlevel="read committed (2)" xactid="17153439" currentdb="5" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
<executionStack>
<frame procname="MYDATABASE.dbo.MyStoredProcedure" line="92" stmtstart="9062" stmtend="9388" sqlhandle="0x030005002d15a05e58b5710016a100000100000000000000">
UPDATE dbo.MyNumber Set
@NEXTNO = NEXTNO = NEXTNO + 1
WHERE (TYPE = @TYPE) AND (YRCLOSEDYN = 0) </frame>
</executionStack>
<inputbuf>
Proc [Database Id = 5 Object Id = 1587549485] </inputbuf>
</process>
<process id="process6c492e8" taskpriority="0" logused="192" waitresource="RID: 5:1:127478:20" waittime="8252" ownerId="17153562" transactionname="user_transaction" lasttranstarted="2012-12-18T12:31:45.140" XDES="0x6583b1e0" lockMode="U" schedulerid="13" kpid="19824" status="suspended" spid="143" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2012-12-18T12:31:45.140" lastbatchcompleted="2012-12-18T12:31:45.140" clientapp="PenAIR" hostname="S16047425" hostpid="4760" loginname="sa" isolationlevel="read committed (2)" xactid="17153562" currentdb="5" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
<executionStack>
<frame procname="MYDATABASE.dbo.MyStoredProcedure" line="92" stmtstart="9062" stmtend="9388" sqlhandle="0x030005002d15a05e58b5710016a100000100000000000000">
UPDATE dbo.MyNumber Set
@NEXTNO = NEXTNO = NEXTNO + 1
WHERE ([TYPE] = @TYPE) AND (YRCLOSEDYN = 0) </frame>
</executionStack>
<inputbuf>
Proc [Database Id = 5 Object Id = 1587549485] </inputbuf>
</process>
</process-list>
<resource-list>
<ridlock fileid="1" pageid="127478" dbid="5" objectname="MYDATABASE.dbo.MyNumber" id="lock464f2640" mode="X" associatedObjectId="72057594131120128">
<owner-list>
<owner id="processb6a988" mode="X"/>
</owner-list>
<waiter-list>
<waiter id="process6c492e8" mode="U" requestType="wait"/>
</waiter-list>
</ridlock>
<ridlock fileid="1" pageid="127478" dbid="5" objectname="MYDATABASE.dbo.MyNumber" id="lockfffffffff1974980" mode="X" associatedObjectId="72057594131120128">
<owner-list>
<owner id="process6c492e8" mode="X"/>
</owner-list>
<waiter-list>
<waiter id="processb6a988" mode="U" requestType="wait"/>
</waiter-list>
</ridlock>
</resource-list>
</deadlock>
</deadlock-list>
Shredding your deadlock graph into tabular form shows the following.
+----------+-------------------------+-----------+-----------+------------+----------+--------------------+--------------------+---------+
| LockMode | LockedObject | TranCount | LockEvent | LockedMode | WaitMode | WaitResource | IsolationLevel | LogUsed |
+----------+-------------------------+-----------+-----------+------------+----------+--------------------+--------------------+---------+
| U | MYDATABASE.dbo.MyNumber | NULL | rid | X | U | RID: 5:1:127478:20 | read committed (2) | 192 |
| U | MYDATABASE.dbo.MyNumber | NULL | rid | X | U | RID: 5:1:127478:16 | read committed (2) | 1848 |
+----------+-------------------------+-----------+-----------+------------+----------+--------------------+--------------------+---------+
You still haven't answered my question in the comments as to whether the sequence-generation code is called only once in every transaction.
If not, it is easy to generate a deadlock graph similar to the one in your post.
Setup
CREATE TABLE dbo.MyNumber
(
    [TYPE] CHAR(1),
    YRCLOSEDYN INT,
    NEXTNO INT
)
INSERT INTO dbo.MyNumber
VALUES ('X', 0, 1),
       ('Y', 0, 1)
GO
CREATE PROC MyStoredProcedure @TYPE CHAR(1),
                              @NEXTNO INT OUTPUT
AS
    UPDATE dbo.MyNumber
    SET @NEXTNO = NEXTNO = NEXTNO + 1
    WHERE ( [TYPE] = @TYPE )
      AND ( YRCLOSEDYN = 0 )
Connection 1
BEGIN TRAN
DECLARE @NEXTNO INT
EXEC MyStoredProcedure 'Y', @NEXTNO OUTPUT
WAITFOR DELAY '00:00:05'
EXEC MyStoredProcedure 'X', @NEXTNO OUTPUT
ROLLBACK
Connection 2
(Run immediately after executing the code in connection 1)
BEGIN TRAN
DECLARE @NEXTNO INT
EXEC MyStoredProcedure 'X', @NEXTNO OUTPUT
EXEC MyStoredProcedure 'Y', @NEXTNO OUTPUT
ROLLBACK
The deadlock graph output from that is very similar to the one above
+----------+-------------------------+-----------+-----------+------------+----------+-----------------+--------------------+---------+
| LockMode | LockedObject | TranCount | LockEvent | LockedMode | WaitMode | WaitResource | IsolationLevel | LogUsed |
+----------+-------------------------+-----------+-----------+------------+----------+-----------------+--------------------+---------+
| U | MYDATABASE.dbo.MyNumber | 2 | rid | X | U | RID: 11:1:144:1 | read committed (2) | 248 |
| U | MYDATABASE.dbo.MyNumber | 2 | rid | X | U | RID: 11:1:144:0 | read committed (2) | 248 |
+----------+-------------------------+-----------+-----------+------------+----------+-----------------+--------------------+---------+
If this is the explanation for your issue, you will need to ensure that you update the sequences in the same order in all transactions. (I assume there must be some good reason why you can't just use an IDENTITY-column-based solution.)
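For example (a sketch reusing the repro above), reordering Connection 1 so that both transactions acquire the sequence rows in the same order ('X' before 'Y') removes the lock-order cycle:

BEGIN TRAN
DECLARE @NEXTNO INT
-- same order as Connection 2: 'X' first, then 'Y'
EXEC MyStoredProcedure 'X', @NEXTNO OUTPUT
WAITFOR DELAY '00:00:05'
EXEC MyStoredProcedure 'Y', @NEXTNO OUTPUT
ROLLBACK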