I just transferred one of my pages to a Windows Azure account. Everything went smoothly... until I tried to create some data. My trigger, which worked fine with MSSQL 2008, fails on Azure. How could I fix this trigger?
CREATE TRIGGER creator
ON someTable
FOR INSERT
AS
DECLARE @someTableID INT;
SELECT @someTableID=(SELECT someTableID FROM INSERTED)
INSERT INTO Preisgruppe ( Name, someTableID, UserPreisgruppe_ID ) VALUES ( 'Gast', @someTableID, 1)
INSERT INTO Oeffnungszeit ( someTableID, Tag_ID, von, bis ) VALUES
( @someTableID, 0, '00:00','00:00'),
( @someTableID, 1, '00:00','00:00'),
( @someTableID, 2, '00:00','00:00'),
( @someTableID, 3, '00:00','00:00'),
( @someTableID, 4, '00:00','00:00'),
( @someTableID, 5, '00:00','00:00'),
( @someTableID, 6, '00:00','00:00')
GO
Nothing looks wrong with this trigger.
I tried your code and it's working fine.
So it could be the table structure. Mine looks like this:
CREATE TABLE [dbo].[someTable](
[someTableID] [int] IDENTITY(1,1) NOT NULL,
[Column1] [nvarchar](50) NOT NULL
)
CREATE TABLE [dbo].[Preisgruppe](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](50) NULL,
[someTableID] [int] NULL,
[UserPreisgruppe_ID] [int] NULL
)
CREATE TABLE [dbo].[Oeffnungszeit](
[ID] [int] IDENTITY(1,1) NOT NULL,
[someTableID] [int] NOT NULL,
[Tag_ID] [int] NOT NULL,
[von] [time](7) NULL,
[bis] [time](7) NULL
)
It would also be nice to have the error message...
Just to provide another example, here is what I use...
TABLE DEFINITION:
This is just a normal table, except the "main body" of AUDIT fields lives in the HISTORY table.
CREATE TABLE [data].[Categories](
[Id] [uniqueidentifier] NOT NULL DEFAULT (newid()),
[Name] [nvarchar](250) NOT NULL,
[Description] [nvarchar](500) NULL,
[DisplayOrder] [bigint] NULL,
[ProductCount] [bigint] NULL,
[IsActive] [bit] NOT NULL CONSTRAINT [DF_Categories_IsActive] DEFAULT ((1)),
[UpdatedBy] [nvarchar](360) NOT NULL
)
On a side-note...
Heap tables are not allowed, so make each "Id" column a PRIMARY KEY (a minimal sketch follows this note)
You should also get used to using GUIDs for your PRIMARY KEYs
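As a minimal sketch of that first point, using the Categories table above (the constraint name is my own; do the same for the history table below):
-- Azure SQL rejects heaps, so give each "Id" column a clustered PRIMARY KEY.
ALTER TABLE [data].[Categories] ADD CONSTRAINT [PK_Categories] PRIMARY KEY CLUSTERED ([Id])
GO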
HISTORY TABLE DEFINITION (used for audit purposes):
This table is used for AUDIT purposes. You still get to see who did what and when, except now the history isn't buried in your main table and won't slow down your INDEXES. And... you get TRUE AUDIT beyond that of mere log shipping.
CREATE TABLE [history].[data_Categories](
[Id] [uniqueidentifier] NOT NULL,
[EntityId] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](250) NOT NULL,
[Description] [nvarchar](500) NULL,
[ProductCount] [bigint] NULL,
[DisplayOrder] [bigint] NULL,
[IsActive] [bit] NOT NULL,
[UpdatedBy] [nvarchar](360) NOT NULL,
[UpdateType] [nvarchar](50) NOT NULL,
[UpdatedDate] [datetime] NOT NULL
)
GO
ALTER TABLE [history].[data_Categories] ADD CONSTRAINT [DF_data_Categories_31EC6D26] DEFAULT (newid()) FOR [Id]
GO
ALTER TABLE [history].[data_Categories] ADD CONSTRAINT [DF_data_Categories_32E0915F] DEFAULT (getutcdate()) FOR [UpdatedDate]
GO
ALTER TABLE [history].[data_Categories] ADD DEFAULT ('00000000-0000-0000-0000-000000000000') FOR [EntityId]
GO
On a side-note...
You can also turn off TRIGGERS in your DELETE stored procedures to make the AUDIT "cleaner"
The reason it becomes "cleaner" is that you get a single DELETE AUDIT record instead of both an UPDATE and a DELETE AUDIT record
To do this, just turn off the TRIGGER before the DELETE statement and turn it on again afterwards, as in the sketch below.
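A minimal sketch of that pattern, assuming a hypothetical delete procedure for the Categories table (the procedure name and parameters are mine, not part of the original schema):
-- Hypothetical DELETE procedure: the trigger is disabled around the
-- DELETE, and the single DELETE audit record is written by hand.
CREATE PROCEDURE [data].[proc_Categories_Delete]
    @Id UNIQUEIDENTIFIER,
    @UpdatedBy NVARCHAR(360)
AS
BEGIN
    SET NOCOUNT ON;
    DISABLE TRIGGER [data].[trig_Categories] ON [data].[Categories];
    INSERT INTO history.data_Categories
        ([EntityId], [Name], [Description], [DisplayOrder], [ProductCount], [IsActive], [UpdatedBy], [UpdateType])
    SELECT [Id], [Name], [Description], [DisplayOrder], [ProductCount], [IsActive], @UpdatedBy, 'DELETED'
    FROM [data].[Categories]
    WHERE [Id] = @Id;
    DELETE FROM [data].[Categories] WHERE [Id] = @Id;
    ENABLE TRIGGER [data].[trig_Categories] ON [data].[Categories];
END
GO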
TABLE TRIGGER:
Just a normal trigger...
CREATE TRIGGER [data].[trig_Categories]
ON [data].[Categories]
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @Type VARCHAR(20);
IF EXISTS(SELECT * FROM INSERTED)
BEGIN
IF EXISTS(SELECT * FROM DELETED)
BEGIN
SET @Type = 'UPDATED';
END
ELSE
BEGIN
SET @Type = 'INSERTED';
END
INSERT INTO
history.data_Categories (
[EntityId]
,[Name]
,[Description]
,[DisplayOrder]
,[ProductCount]
,[IsActive]
,[UpdatedBy]
,[UpdateType])
SELECT
[Id]
,[Name]
,[Description]
,[DisplayOrder]
,[ProductCount]
,[IsActive]
,[UpdatedBy]
,@Type
FROM INSERTED
END
ELSE
BEGIN
SET @Type = 'DELETED';
INSERT INTO
history.data_Categories (
[EntityId]
,[Name]
,[Description]
,[DisplayOrder]
,[ProductCount]
,[IsActive]
,[UpdatedBy]
,[UpdateType])
SELECT
[Id]
,[Name]
,[Description]
,[DisplayOrder]
,[ProductCount]
,[IsActive]
,[UpdatedBy]
,@Type
FROM DELETED
END;
END
GO
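As a quick smoke test (sample values are my own, assuming the tables above), each statement against data.Categories should leave one row behind in the history table:
INSERT INTO [data].[Categories] ([Name], [DisplayOrder], [UpdatedBy])
VALUES (N'Bikes', 1, N'some.user')

UPDATE [data].[Categories] SET [DisplayOrder] = 2, [UpdatedBy] = N'some.user'
WHERE [Name] = N'Bikes'

-- Expect one 'INSERTED' row and one 'UPDATED' row:
SELECT [EntityId], [UpdateType], [UpdatedDate] FROM [history].[data_Categories] ORDER BY [UpdatedDate]
GO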
I have a database with a memory-optimized table. I want to archive this table into another database, and I want to write a stored procedure to do that.
I implemented the samples below from 1 and 2 successfully, but in those samples the first database is not in-memory and the second database is in-memory.
In my case, the first database is in-memory and the second one can be in-memory or not.
Here is my code:
1- my table:
USE [TestReport]
GO
/****** Object: Table [dbo].[Report] Script Date: 1/22/2018 4:40:04 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Report]
(
[ReportID] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NOT NULL,
[Year] [int] NOT NULL,
[DayOfYear] [int] NOT NULL,
[ProductType] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NOT NULL,
[ApplicationID] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NOT NULL,
[TotalSize] [bigint] NOT NULL DEFAULT ((0)),
[TotalCount] [bigint] NOT NULL DEFAULT ((0)),
[LastReportTimeSpan] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NULL,
INDEX [idx] NONCLUSTERED HASH
(
[ReportID],
[DayOfYear]
)WITH ( BUCKET_COUNT = 131072),
CONSTRAINT [pk] PRIMARY KEY NONCLUSTERED HASH
(
[ReportID],
[Year],
[DayOfYear],
[ProductType],
[ApplicationID]
)WITH ( BUCKET_COUNT = 131072)
)WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )
GO
2- simple Stored procedure
CREATE PROCEDURE [dbo].[ArchiveReport]
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english'
)
BEGIN
DECLARE @currentdate DATETIME2;
SET @currentdate = GETDATE();
DECLARE @maintainDay INT = 5;
INSERT TestReportArchive.[dbo].Report
SELECT [ReportID],
[Year],
[DayOfYear],
[ProductType],
[ApplicationID],
[TotalSize],
[TotalCount],
[LastReportTimeSpan]
FROM [dbo].[Report]
WHERE DATEADD(day, [DayOfYear] + @maintainDay, DATEADD(YEAR, [Year] - 1900, 0)) > @currentdate;
DELETE FROM [dbo].[Report]
WHERE DATEADD(day, [DayOfYear] + @maintainDay, DATEADD(YEAR, [Year] - 1900, 0)) > @currentdate;
END;
END
3- simple stored procedure error
Msg 4512, Level 16, State 3, Procedure ArchiveReport, Line 12
Cannot schema bind procedure 'dbo.ArchiveReport' because name 'TestReportArchive.dbo.Report' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself.
TestReportArchive is my destination database
4- using 1 and 2: definition of the table variable
USE [TestReport]
GO
/****** Object: UserDefinedTableType [dbo].[MemoryType] Script Date: 1/22/2018 4:35:14 PM ******/
CREATE TYPE [dbo].[MemoryType] AS TABLE(
[ReportID] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NOT NULL,
[Year] [int] NOT NULL,
[DayOfYear] [int] NOT NULL,
[ProductType] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NOT NULL,
[ApplicationID] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NOT NULL,
[TotalSize] [bigint] NOT NULL,
[TotalCount] [bigint] NOT NULL,
[LastReportTimeSpan] [nvarchar](50) COLLATE Latin1_General_100_BIN2 NULL,
INDEX [idx] NONCLUSTERED HASH
(
[ReportID],
[DayOfYear]
)WITH ( BUCKET_COUNT = 131072)
)
WITH ( MEMORY_OPTIMIZED = ON )
GO
5- stored procedure with table variable
CREATE PROCEDURE [dbo].[ArchiveReport]
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english'
)
BEGIN
DECLARE @currentdate DATETIME2;
SET @currentdate = GETDATE();
DECLARE @maintainDay INT = 5;
DECLARE @InMem [dbo].[MemoryType];
INSERT @InMem
SELECT [ReportID],
[Year],
[DayOfYear],
[ProductType],
[ApplicationID],
[TotalSize],
[TotalCount],
[LastReportTimeSpan]
FROM [dbo].[Report]
WHERE DATEADD(day, [DayOfYear] + @maintainDay, DATEADD(YEAR, [Year] - 1900, 0)) > @currentdate;
INSERT TestReportArchive.[dbo].[Report]
SELECT [ReportID],
[Year],
[DayOfYear],
[ProductType],
[ApplicationID],
[TotalSize],
[TotalCount],
[LastReportTimeSpan]
FROM #InMem
DELETE FROM [dbo].[Report]
WHERE DATEADD(day, [DayOfYear] + @maintainDay, DATEADD(YEAR, [Year] - 1900, 0)) > @currentdate;
END;
END
6- error from stored procedure 5
Msg 4512, Level 16, State 3, Procedure ArchiveReport, Line 25
Cannot schema bind procedure 'dbo.ArchiveReport' because name 'TestReportArchive.dbo.Report' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself.
TestReportArchive is my destination database
Cross-database queries involving Memory-Optimized Tables are not supported.
Unsupported SQL Server Features for In-Memory OLTP
A query cannot access other databases if the query uses either a
memory-optimized table or a natively compiled stored procedure. This
restriction applies to transactions as well as to queries.
Ultimately I created a non-memory-optimized table (ReportTemp) in TestReport (the first database) and changed the stored procedure to insert data from the Report table into the ReportTemp table inside the first database. Then I wrote another stored procedure to move the data to the archive database.
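A rough sketch of that two-step workaround (the procedure names and the ReportTemp definition are my assumptions, not the original code). Note the staging procedure can no longer be natively compiled, since natively compiled procedures cannot reference disk-based tables; plain interop T-SQL can read memory-optimized tables fine:
USE [TestReport]
GO
-- Step 1 (interop T-SQL): stage rows from the memory-optimized table
-- into a disk-based ReportTemp table in the same database.
CREATE PROCEDURE [dbo].[StageReport]
AS
BEGIN
    DECLARE @currentdate DATETIME2 = GETDATE();
    DECLARE @maintainDay INT = 5;
    INSERT INTO [dbo].[ReportTemp]
    SELECT [ReportID], [Year], [DayOfYear], [ProductType], [ApplicationID], [TotalSize], [TotalCount], [LastReportTimeSpan]
    FROM [dbo].[Report]
    WHERE DATEADD(day, [DayOfYear] + @maintainDay, DATEADD(YEAR, [Year] - 1900, 0)) > @currentdate;
    DELETE FROM [dbo].[Report]
    WHERE DATEADD(day, [DayOfYear] + @maintainDay, DATEADD(YEAR, [Year] - 1900, 0)) > @currentdate;
END
GO
-- Step 2: cross-database copy. This is allowed because no
-- memory-optimized table is touched by these statements.
CREATE PROCEDURE [dbo].[MoveStagedToArchive]
AS
BEGIN
    INSERT INTO TestReportArchive.[dbo].[Report]
    SELECT * FROM [dbo].[ReportTemp];
    DELETE FROM [dbo].[ReportTemp];
END
GO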
I have 2 tables, User and UserLogin. UserLogin has a foreign key relationship with the User table. What I want is that whenever I insert data into the User table through my API, the autogenerated user_id is automatically inserted into the UserLogin table.
User table:
user_id | user_name | user_email
UserLogin table:
user_id | user_password | user_number
So when I run my query to add a name and email to the User table, the autoincremented user_id should automatically be inserted into the UserLogin table along with the provided password and number. How can I achieve this, and is it thread safe?
Yes, it is possible, and the new ID can usually be obtained via @@IDENTITY. Try something like:
SET NOCOUNT OFF;
INSERT INTO [User] VALUES ('Name', 'Email')
DECLARE @lastID INT = @@IDENTITY
INSERT INTO UserLogin VALUES (@lastID, 'Password', 'number')
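One general caution (a well-known T-SQL fact, not specific to this schema): @@IDENTITY returns the last identity value generated on the connection in any scope, so a trigger on [User] that inserts into another identity table would silently change it. SCOPE_IDENTITY() avoids that pitfall, and the procedure in the next answer uses it for exactly this reason.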
This code should help you:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[User](
[user_id] [int] IDENTITY(1,1) NOT NULL,
[user_name] [varchar](100) NULL,
[user_email] [varchar](100) NULL,
[salt] [uniqueidentifier] NULL,
CONSTRAINT [PK_User] PRIMARY KEY CLUSTERED
(
[user_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[UserLogin](
[UserLoginId] [int] IDENTITY(1,1) NOT NULL,
[user_id] [int] NULL,
[user_password] [varbinary](64) NULL,
[user_number] [int] NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[UserLogin] WITH CHECK ADD CONSTRAINT [FK_UserLogin_User] FOREIGN KEY([user_id])
REFERENCES [dbo].[User] ([user_id])
GO
ALTER TABLE [dbo].[UserLogin] CHECK CONSTRAINT [FK_UserLogin_User]
GO
CREATE PROC [dbo].[Usp_UserLogin]
(
@user_name VARCHAR(100)
,@user_email VARCHAR(100)
,@user_password VARCHAR(200)
,@user_number INT
)
AS
Begin
SET NOCOUNT ON
DECLARE @Salt UNIQUEIDENTIFIER = NEWID()
,@IdentityNumber INT
,@responseMessage NVARCHAR(1000)
BEGIN TRY
INSERT INTO Dbo.[User]([user_name],[user_email],[salt])
SELECT @user_name
,@user_email
,@Salt
SET @IdentityNumber = SCOPE_IDENTITY()
INSERT INTO Dbo.[UserLogin]([user_id],[user_password],[user_number])
SELECT
@IdentityNumber
,HASHBYTES('SHA2_512', @user_password + CAST(@Salt AS NVARCHAR(36)))
,@user_number
END TRY
BEGIN CATCH
SET @responseMessage = ERROR_MESSAGE()
END CATCH
END
GO
Execute the Procedure
EXEC [Usp_UserLogin] @user_name='Test1', @user_email='Test1@gmail', @user_password='Test1@123', @user_number=2
I have a large table with 110M rows. I would like to copy some of the fields into a new table; here is a rough idea of how I am trying to do it:
DECLARE l_seenChangesTo DATETIME DEFAULT '1970-01-01 01:01:01';
DECLARE l_migrationStartTime DATETIME;
SELECT NOW() into l_migrationStartTime;
-- See if we've run this migration before and if so, pick up from where we left off...
IF EXISTS(SELECT seenChangesTo FROM migration_status WHERE client_user = CONCAT('this-migration-script-', user())) THEN
SELECT seenChangesTo FROM migration_status WHERE client_user = CONCAT('this-migration-script-', user()) INTO l_seenChangesTo;
SELECT NOW() as LogTime, CONCAT('Picking up from where we left off: ', l_seenChangesTo) as MigrationStatus;
END IF;
INSERT IGNORE INTO newTable
(field1, field2, lastModified)
SELECT o.column1 AS field1,
o.column2 AS field2,
o.lastModified
FROM oldTable o
WHERE
o.lastModified >= l_seenChangesTo AND
o.lastModified <= l_migrationStartTime;
INSERT INTO migration_status (client_user,seenChangesTo)
VALUES (CONCAT('this-migration-script-', user()), l_migrationStartTime)
ON DUPLICATE KEY UPDATE seenChangesTo=l_migrationStartTime;
Context:
CREATE TABLE IF NOT EXISTS `newTable` (
`field1` varchar(255) NOT NULL,
`field2` tinyint unsigned NOT NULL,
`lastModified` datetime NOT NULL,
PRIMARY KEY (`field1`, `field2`),
KEY `ix_field1` (`field1`),
KEY `ix_lastModified` (`lastModified`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `oldTable` (
`column1` varchar(255) NOT NULL,
`column2` tinyint unsigned NOT NULL,
`lastModified` datetime NOT NULL,
PRIMARY KEY (`column1`, `column2`),
KEY `ix_column1` (`column1`),
KEY `ix_lastModified` (`lastModified`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `migration_status` (
`client_user` char(64) NOT NULL,
`seenChangesTo` char(128) NOT NULL,
PRIMARY KEY (`client_user`)
);
Note: I have a few more columns in oldTable. Both oldTable and newTable are in the same DB schema, using MySQL.
What's the general strategy when copying a very large table? Should I perform this migration iteratively, copying say 50,000 rows at a time?
The insert speed doing a migration like this iteratively is going to be dreadfully slow. Why not SELECT ... INTO OUTFILE from oldTable, then LOAD DATA INFILE?
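A rough sketch of that suggestion (the file path is a placeholder, and the server's secure_file_priv setting restricts where files may be written and read):
-- Dump only the needed columns to a flat file on the server.
SELECT column1, column2, lastModified
INTO OUTFILE '/tmp/oldTable_dump.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM oldTable;

-- Bulk-load the file; IGNORE skips duplicate keys, mirroring the
-- INSERT IGNORE in the original script.
LOAD DATA INFILE '/tmp/oldTable_dump.csv'
IGNORE INTO TABLE newTable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(field1, field2, lastModified);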
Within a BPM web application, I have a field for an invoice # on a particular page, and I need it to be auto-generated every time a user attaches an invoice and views that page. That number must be unique and preferably auto-incremented. A value for the invoice # field can be displayed by querying a table in an external MySQL database, so every time a user lands on that particular page, a SELECT query can be fired.
On the MySQL end, how would I set this up? Basically, I would like to set up a query for that invoice # field, for example,
SELECT invoice_num FROM invoice_generator
and every time this query runs, it would return the next incremented number.
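For reference, a common MySQL pattern for exactly this is to pair an UPDATE with LAST_INSERT_ID(expr), which makes the increment-and-read atomic per connection (a sketch, assuming a one-row invoice_generator counter table):
-- Hypothetical one-row counter table.
CREATE TABLE IF NOT EXISTS invoice_generator (
  invoice_num INT NOT NULL
);
INSERT INTO invoice_generator (invoice_num) VALUES (0);

-- Atomically bump the counter; LAST_INSERT_ID(expr) stores the new value
-- per connection, so concurrent users each see a distinct number.
UPDATE invoice_generator SET invoice_num = LAST_INSERT_ID(invoice_num + 1);
SELECT LAST_INSERT_ID() AS invoice_num;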
You can use the MySQL trigger concept here.
I have added one example below.
It should be very useful for you (see this link also: http://www.freemindsystems.com/mysql-triggers-a-practical-example/)
CREATE TABLE `products` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) NOT NULL DEFAULT '',
`price` int(20) NOT NULL DEFAULT '0',
`other` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `products_name_idx` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `freetags` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`tag` varchar(50) NOT NULL DEFAULT '',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `freetagged_objects` (
`tag_id` int(20) NOT NULL DEFAULT '0',
`object_id` int(20) NOT NULL DEFAULT '0',
`tagged_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`module` varchar(50) NOT NULL DEFAULT '',
PRIMARY KEY (`tag_id`, `object_id`),
KEY `freetagged_objects_tag_id_object_id_idx` (`tag_id`, `object_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT_PRODUCTS_TAGS
DELIMITER ||
DROP TRIGGER IF EXISTS insert_products_tags;
||
DELIMITER @@
CREATE TRIGGER insert_products_tags AFTER INSERT ON products
FOR EACH ROW
BEGIN
DECLARE current_id integer;
DECLARE tag_id integer;
DECLARE next integer;
DECLARE tag_field varchar(255);
DECLARE next_sep integer;
DECLARE current_tag varchar(255);
DECLARE right_tag varchar(255);
-- We use the field other as comma-separated tag_field
SET tag_field = NEW.other;
-- Check for empty tags
IF (CHAR_LENGTH(tag_field) <> 0) THEN
-- Loop until no more occurrences
set next = 1;
WHILE next = 1 DO
-- Find position of the next ","
SELECT INSTR(tag_field, ',') INTO next_sep;
IF (next_sep > 0) THEN
SELECT SUBSTR(tag_field, 1, next_sep - 1) INTO current_tag;
SELECT SUBSTR(tag_field, next_sep + 1, CHAR_LENGTH(tag_field)) INTO right_tag;
set tag_field = right_tag;
ELSE
set next = 0;
set current_tag = tag_field;
END IF;
-- Drop spaces between commas
SELECT TRIM(current_tag) INTO current_tag;
-- Insert the tag if not already present
IF (NOT EXISTS (SELECT tag FROM freetags WHERE tag = current_tag)) THEN
-- Insert the tag
INSERT INTO freetags (tag) values (current_tag);
SELECT LAST_INSERT_ID() INTO tag_id;
ELSE
-- Or get the id
SELECT id FROM freetags WHERE tag = current_tag INTO tag_id;
END IF;
-- Link the object tagged with the tag
INSERT INTO freetagged_objects
(tag_id, object_id, module)
values
(tag_id, NEW.id, 'products');
END WHILE;
END IF;
END;
@@
Now, if you execute an insert on the products table:
INSERT INTO PRODUCTS
(name, price, other)
values
("product1", 2, "tag1, tag2,tag3 , tag 4");
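Each tag in the comma-separated other field ("tag1", "tag2", "tag3", "tag 4") then ends up as a row in freetags (inserted only if not already present), and one linking row per tag is written to freetagged_objects for the new product.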
So it's my understanding that looping and cursors should be avoided unless absolutely necessary. In my situation it seems that if I have to rely on them I am doing it wrong, so I'd like the community's input.
I am writing a stored procedure to add items to a queue; the number of items added depends on the intervals set up for the item type. Once the data is inserted into the queue, I need to add the IDs of the queue items to other tables. I am running into an issue here, as I generally rely on SCOPE_IDENTITY() to pull the ID for return.
Below is my table structure:
CREATE TABLE QueueItem (
QueueItemID [int] IDENTITY NOT NULL,
ReferenceID [int] NOT NULL,
StartDate [datetime] NOT NULL
);
CREATE TABLE CatalogDate (
ReferenceID [int] NOT NULL,
CatalogID [int] NOT NULL,
DayCount [int] NOT NULL
);
CREATE TABLE ItemInterval (
ReferenceID [int] NOT NULL,
Interval [int] NOT NULL
);
CREATE PROCEDURE SetQueueItem
@ReferenceID [int],
@UserID [int]
AS
BEGIN
DECLARE @DayCount [int]
DECLARE @CatalogID [int]
DECLARE @QueueItemID [int]
SELECT @DayCount = DayCount, @CatalogID = CatalogID
FROM CatalogDate
WHERE ReferenceID = @ReferenceID
DECLARE @Date [datetime] = --SELECT Date from another table using @UserID and @CatalogID
DECLARE @StartDate [datetime] = (SELECT DATEADD(dd, @DayCount, @Date))
INSERT INTO QueueItem(ReferenceID, StartDate)
SELECT @ReferenceID, DATEADD(dd, @DayCount - Interval, @Date)
FROM ItemInterval
WHERE ReferenceID = @ReferenceID --SELECT RETURNS MULTIPLE ROWS
END
Now once the insert of multiple records has been done I need to take the QueueItemID's that were generated from the inserts and insert them along with some additional data in two other tables.
The only ways I can see of accomplishing this are either breaking up the INSERT to loop through each record in ItemInterval and insert them one at a time, or querying the MAX QueueItemID from the QueueItem table before and after the insert and then looping through the difference, assuming the IDs are perfectly sequential.
Thoughts?
Take a look at the OUTPUT clause.
http://technet.microsoft.com/en-us/library/ms177564.aspx
http://msdn.microsoft.com/en-us/library/ms177564.aspx
Thanks to @StarShip3000 for the references on OUTPUT
To solve this problem I dumped the results into a table variable using OUTPUT, and then used that table to insert the results into the other tables.
DECLARE @QueueItemTable TABLE
(
ItemID [int]
)
INSERT QueueItem(ReferenceID, StartDate)
OUTPUT inserted.QueueItemID INTO @QueueItemTable(ItemID)
SELECT @ReferenceID, DATEADD(dd, @DayCount - Interval, @Date)
FROM ItemInterval
WHERE ReferenceID = @ReferenceID --INSERTS IDENTITIES into @QueueItemTable variable
--Insert into other tables
INSERT QueueRelationship(QueueItemID)
SELECT ItemID
FROM #QueueItemTable
Voila!