I've set up our Azure cloud DB to be a linked server to our 'SQL server 2008 R2'-server like this post described: http://blogs.msdn.com/b/sqlcat/archive/2011/03/08/linked-servers-to-sql-azure.aspx
I've enabled RPC and RPC Out because I read that somewhere.
Now the problem is I cannot get the ID of the just inserted record. Please take a look at this test table:
CREATE TABLE dbo.TEST
(
ID INT IDENTITY(1, 1) NOT NULL
CONSTRAINT PK_TEST_ID PRIMARY KEY CLUSTERED ([ID] ASC) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
I've also created this stored procedure:
CREATE PROCEDURE test_create @ID INT OUTPUT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
INSERT INTO TEST
DEFAULT VALUES
SELECT @ID = SCOPE_IDENTITY()
END
I've tried to get the last inserted value in several ways, but none of them works:
DECLARE @ID INT
EXEC AZURE01.TestDB.dbo.test_create @ID OUTPUT
SELECT @ID
INSERT INTO AZURE01.TestDB.dbo.TEST DEFAULT VALUES
SELECT @ID = SCOPE_IDENTITY();
SELECT @ID
INSERT INTO AZURE01.TestDB.dbo.TEST DEFAULT VALUES
SELECT @ID = @@IDENTITY
SELECT @ID
SELECT * FROM OPENQUERY(AZURE01, 'INSERT INTO TestDB.dbo.TEST DEFAULT VALUES; SELECT SCOPE_IDENTITY() AS ID');
DECLARE @ScopeIdentity TABLE (ID int);
INSERT INTO @ScopeIdentity
EXEC AZURE01.master..sp_executesql N'
INSERT TestDB.dbo.TEST DEFAULT VALUES;
SELECT SCOPE_IDENTITY()';
SELECT * FROM @ScopeIdentity;
INSERT AZURE01.TestDB.dbo.TEST
OUTPUT inserted.ID
INTO @ScopeIdentity
DEFAULT VALUES
SELECT * FROM @ScopeIdentity
I understand why SCOPE_IDENTITY() and @@IDENTITY don't work (they are local functions/variables that have no information from the linked server), but the stored procedure with the output parameter should work, right? (Locally on the server it works.)
Anyone? :-)
Have you considered using a GUID (uniqueidentifier) field instead of or as well as int?
You can then generate the ID client-side (there's a multitude of tools to generate GUIDs) and pass that straight in your insert.
You then have the choice of re-selecting the row based on the GUID column to get the new int value or just use the GUID field as your PK and be done with it.
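A rough sketch of that approach (this assumes you add an illustrative `RowGuid uniqueidentifier` column to the table; the column name is made up here):

```sql
-- Generate the GUID up front (client-side or in T-SQL), so no round trip
-- is needed to learn the key of the new row.
DECLARE @NewGuid uniqueidentifier = NEWID();

INSERT INTO AZURE01.TestDB.dbo.TEST (RowGuid)
VALUES (@NewGuid);

-- Optionally re-select to get the int identity, if you keep both keys.
SELECT ID FROM AZURE01.TestDB.dbo.TEST WHERE RowGuid = @NewGuid;
```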
--create proc on azure database
create proc xxp_GetId
as
begin
--exec xxp_GetId
DECLARE @ID INT
INSERT INTO dbo.bit_CubeGetParameter DEFAULT VALUES
SELECT @ID = SCOPE_IDENTITY();
SELECT @ID
end
-- now run this query on your sql server
exec <LinkedServerName>.<AzureDatabaseName>.dbo.xxp_GetId
The issue is the remote server execution.
What you can try is:
EXEC (@TSqlBatch) AT LinkedServer
What this does is tell the database on the other side to execute the T-SQL locally.
This has many uses, and it may serve in this case as well, since SCOPE_IDENTITY() would then execute locally, in the same scope as the INSERT.
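As a sketch only (`AZURE01` and the table names are taken from the question; the cast to `int` is an assumption so the `INSERT ... EXEC` target has a concrete type):

```sql
DECLARE @NewIds TABLE (ID int);

-- The entire batch runs on the linked server, so SCOPE_IDENTITY()
-- executes in the same scope as the INSERT. Requires RPC Out enabled
-- on the linked server, which the question says is already done.
INSERT INTO @NewIds (ID)
EXEC ('INSERT INTO TestDB.dbo.TEST DEFAULT VALUES;
       SELECT CAST(SCOPE_IDENTITY() AS int);') AT AZURE01;

SELECT ID FROM @NewIds;
```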
I have two databases with dissimilar schemas:
db1 migrated from MSSQL to MYSQL
and
db2 created from Laravel Migration.
Here's the challenge:
The tables of db1 do not have id (primary key) columns like those found on the db2 tables, so I kept getting the warning message:
Current selection does not contain a unique column. Grid edit, checkbox, Edit, Copy and Delete features are not available.
So I had to add the id columns to the tables in db1.
I need to extract fields [level_name, class_name] from stdlist in db1,
Create levels (id,level_name,X,Y) on db2
classes (id,class_name,level_id) on db2
To throw more light: The level_id should come from the already created levels table
I have already succeeded in extracting the first instance using the following snippet:
First Query to Create Levels
INSERT INTO db2.levels(level_name,X,Y)
SELECT class_name as level_name,1 as X,ClassAdmitted as Y
FROM db1.stdlist
GROUP BY ClassAdmitted;
This was successful.
Now, I need to use the newly created ids in levels table to fill up level_id column in the classes table.
For that to be possible, must I re-run the above selection? Is there no better way, maybe joining db2.levels to db1.stdlist and extracting the required fields for the new insert?
I'll appreciate any help. Thanks in advance.
Try adding a column for Processed and then do a while exists loop
INSERT INTO db2.levels(level_name,X,Y,Processed)
SELECT class_name as level_name,1 as X,ClassAdmitted as Y, 0 as Processed
FROM db1.stdlist
GROUP BY ClassAdmitted;
WHILE EXISTS(SELECT * FROM db2.levels WHERE Processed = 0)
BEGIN
DECLARE @level_name AS VARCHAR(MAX)
SELECT TOP 1 @level_name = level_name FROM db2.levels WHERE Processed = 0
--YOUR CODE
UPDATE db2.levels SET Processed = 1 WHERE level_name = @level_name
END
You may need to dump into a temp table first and then insert into your real table (db2.levels) when you're done processing. Then you wouldn't need the unnecessary Processed column on the final table.
This is what worked for me eventually:
First, I picked up the levels from the initial database thus:
INSERT INTO db2.levels(`name`,`school_id`,`short_code`)
SELECT name ,school_id,short_code
FROM db1.levels
GROUP BY name
ORDER BY CAST(IF(REPLACE(name,' ','')='','0',REPLACE(name,' ','')) AS UNSIGNED INTEGER) ASC;
Then I created a PROCEDURE for the classes insertion
DELIMITER //
CREATE PROCEDURE dowhileClasses()
BEGIN
SET @Level = 1;
SET @Max = (SELECT COUNT(`id`) FROM db2.levels);
START TRANSACTION;
WHILE @Level <= @Max DO
BEGIN
DECLARE val1 VARCHAR(255) DEFAULT NULL;
DECLARE val2 VARCHAR(255) DEFAULT NULL;
DECLARE bDone TINYINT DEFAULT 0;
DECLARE curs CURSOR FOR
SELECT trim(`Class1`)
FROM db1.dbo_tblstudent
WHERE CAST(IF(REPLACE(name,' ','')='','0',REPLACE(name,' ','')) AS UNSIGNED INTEGER) = @Level
GROUP BY `Class1`;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET bDone = 1;
OPEN curs;
SET bDone = 0;
REPEAT
FETCH curs INTO val1;
IF bDone = 0 THEN
SET @classname = val1;
SET @levelID = (SELECT id FROM db2.levels WHERE short_code = @Level LIMIT 1);
SET @schoolId = 1;
SET @classId = (SELECT `id` FROM db2.classes WHERE class_name = @classname AND level_id = @levelID LIMIT 1);
IF @classId IS NULL AND @classname IS NOT NULL THEN
INSERT INTO db2.classes(class_name,school_id,level_id)
VALUES(@classname,@schoolId,@levelID);
END IF;
END IF;
UNTIL bDone END REPEAT;
CLOSE curs;
END;
SELECT CONCAT('Level: ',@Level,' Done');
SET @Level = @Level + 1;
END WHILE;
END;
//
DELIMITER ;
CALL dowhileClasses();
With this, I was able to populate the classes table matching the previously created level_ids.
The whole thing relies on the cursor.
For further details, here is one of the documents I used.
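For what it's worth, the cursor loop can usually be replaced by a single set-based `INSERT ... SELECT` with a join, along the lines of the sketch below. Column and table names follow the question; the join condition (mapping the numeric form of `name` to `short_code`) is an assumption about how class names map to levels:

```sql
INSERT INTO db2.classes (class_name, school_id, level_id)
SELECT DISTINCT TRIM(s.Class1), 1, l.id
FROM db1.dbo_tblstudent AS s
JOIN db2.levels AS l
  ON l.short_code = CAST(IF(REPLACE(s.name,' ','')='','0',
                            REPLACE(s.name,' ','')) AS UNSIGNED INTEGER)
WHERE s.Class1 IS NOT NULL
  -- skip classes that already exist for that level
  AND NOT EXISTS (SELECT 1 FROM db2.classes c
                  WHERE c.class_name = TRIM(s.Class1)
                    AND c.level_id = l.id);
```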
I'm trying to use tSQLt to test a stored procedure that returns JSON data. The database is running under SQL Server 2016. The stored procedure is as follows (simplified considerably):
CREATE PROCEDURE [SearchForThings]
@SearchText NVARCHAR(1000),
@MaximumRowsToReturn INT
AS
BEGIN
SELECT TOP(@MaximumRowsToReturn)
[Id],
[ItemName]
FROM
[BigTableOfThings] AS bt
WHERE
bt.[Tags] LIKE N'%' + @SearchText + N'%'
ORDER BY
bt.[ItemName]
FOR JSON AUTO, ROOT(N'Things');
END;
This can't be tested in the same way XML can - I've tried a test table, as below, which was suggested in this related answer here -
CREATE TABLE #JsonResult (JsonData NVARCHAR(MAX))
INSERT #JsonResult (JsonData)
EXEC [SearchForThings] 'cats',10
The above code produces this error:
The FOR JSON clause is not allowed in a INSERT statement.
I cannot alter the stored procedure under test. How can I capture the JSON result?
Without being able to modify the stored proc, your last ditch effort would be to use OPENROWSET. Here's how you would call it in your case:
INSERT INTO #JsonResult
SELECT *
FROM OPENROWSET('SQLNCLI', 'Server=[ServerNameGoesHere];Trusted_Connection=yes;','EXEC SearchForThings ''cats'',10')
If you get an error, you can use the following to enable ad hoc distributed queries:
sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO
sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
I know this is two years on but I stumbled on this today when trying to solve a different tSQLt problem.
Your issue occurs because the column returned from your stored procedure is not explicitly named. If you provide a column name for the JSON data, you can insert the data into a #temp table, e.g.:
create table BigTableOfThings (
Id int not null,
ItemName nvarchar(50) not null,
Tags nvarchar(50) not null
);
insert BigTableOfThings values
(1, 'Whiskers', 'Cool for Cats'),
(2, 'Barkley', 'Dogs Rule!');
GO
create procedure SearchForThings
@SearchText nvarchar(1000),
@MaximumRowsToReturn int
as
begin
select [JsonData] = (
select top(@MaximumRowsToReturn)
Id,
ItemName
from
BigTableOfThings as bt
where
bt.Tags like N'%' + @SearchText + N'%'
order by
bt.ItemName
for json auto, root(N'Things')
);
end
go
create table #JsonResult (JsonData nvarchar(max));
insert #JsonResult (JsonData)
exec SearchForThings 'cats',10;
select * from #JsonResult;
go
Which yields...
{"Things":[{"Id":1,"ItemName":"Whiskers"}]}
I have one shared database and multiple client databases. The data is stored in the client database. We want to create a master set of stored procedures in the shared database and execute them from the client database. Given the following:
use shared;
go
create procedure GetInvoices as
print db_name() + ' <- current database'
select * from invoices
go
use client1;
create table invoices(...columns...)
exec shared.dbo.GetInvoices
This returns the following error:
shared <- current database
Msg 208, Level 16, State 1, Procedure GetInvoices, Line 3
Invalid object name 'invoices'.
Without using dynamic SQL, how can I run the stored procedure in shared from client1 so that it executes in client1 and thus has access to all of the tables in client1?
You can run a stored procedure defined in the master database in the context of the client1 database, with visibility of all client1 tables and without dynamic SQL, but it relies on the undocumented stored procedure sp_ms_marksystemobject.
Your stored procedure name must start with sp_, for example sp_GetInvoices. Create it in the master database, then call exec sp_ms_marksystemobject sp_GetInvoices to make it see the tables of the current database.
USE master
GO
CREATE OR ALTER PROCEDURE sp_GetInvoices
AS
BEGIN
SELECT ClientName from Invoice
END
GO
exec sp_ms_marksystemobject sp_GetInvoices
USE client1
GO
create table Invoice (ClientName varchar(100))
insert Invoice select 'Acme Client'
exec sp_GetInvoices
Result (running on SQL Server version 13.0.5081.1):
ClientName
------------
Acme Client
Try this on your "Master" database:
CREATE PROCEDURE [dbo].[GetDataFromClient]
@DB VARCHAR(50)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @STMT VARCHAR(300);
DECLARE @SP VARCHAR(500);
SET @SP = 'dbo.GetData';
SET @STMT = 'EXEC(''' + @SP + ''')';
EXEC('USE ' + @DB + ';' + @STMT)
END
Now on the "Client" database:
CREATE TABLE [dbo].[TestClient](
[ID] [int] NOT NULL,
[Description] [varchar](10) NULL,
CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED
(
[ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Create the stored procedure to retrieve data from table TestClient
CREATE PROCEDURE [dbo].[GetData]
AS
BEGIN
SELECT *
FROM TestClient;
END
Now you can retrieve the columns from the TestClient Database using:
USE [TestMaster]
GO
DECLARE @return_value int
EXEC @return_value = [dbo].[GetDataFromClient]
@DB = N'TESTCLIENT'
SELECT 'Return Value' = @return_value
GO
You can call a stored procedure using its four-part name after creating a linked server.
Or it can be called with OPENQUERY.
Linked server:
EXEC [ServerName].DatabaseName.SchemaName.StoredProcedureName
OPENQUERY:
SELECT * FROM OPENQUERY([ServerName], 'EXEC DatabaseName.SchemaName.StoredProcedureName')
I wish to Insert or Update a row in a table - so I wish to try and use the MERGE syntax. My problem is that my data (to insert/update) exists in a variable table. I'm not sure how to write the correct syntax for the insert/update part.
Here's my pseduo code :-
-- Here's the variable table ... and note it has no PK.
DECLARE @PersonId INTEGER
DECLARE @variableTable TABLE (
@SomeScore DECIMAL(10,7),
@SomeAverage DECIMAL(10,7),
@SomeCount INTEGER)
-- Insert or Update
MERGE INTO SomeTable
WHERE PersonId = @PersonId
WHEN MATCHED THEN
UPDATE
SET PersonScore = ??????????
PersonAverage = ???????
PersonCount = ????????
WHEN NOT MATCHED THEN
INSERT(PersonId, PersonScore, PersonAverage, PersonCount)
VALUES(@PersonId, ????, ?????, ????)
.. and I'm not sure how I make sure the UPDATE correctly only updates 1 row (ie... does that need a WHERE clause?)
Finally, I based my this post on this SO question.
Yes it's possible. Your syntax was off though. The below seems to work. I have kept @PersonId as a separate scalar variable outside the table variable as that's how you have it in your question. And I have assumed that the primary key of SomeTable is PersonId.
DECLARE @PersonId INT
DECLARE @variableTable TABLE (
SomeScore DECIMAL(10,7),
SomeAverage DECIMAL(10,7),
SomeCount INTEGER
)
-- Insert or Update
MERGE SomeTable AS T
USING @variableTable AS S
ON (T.PersonId = @PersonId)
WHEN MATCHED THEN
UPDATE
SET T.PersonScore = SomeScore,
T.PersonAverage = SomeAverage,
T.PersonCount = SomeCount
WHEN NOT MATCHED BY TARGET THEN
INSERT(PersonId, PersonScore, PersonAverage, PersonCount)
VALUES(@PersonId, SomeScore, SomeAverage, SomeCount);
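To try it end to end, you would populate the table variable first; the values below are made up for illustration:

```sql
DECLARE @PersonId INT = 42;
DECLARE @variableTable TABLE (
    SomeScore DECIMAL(10,7),
    SomeAverage DECIMAL(10,7),
    SomeCount INTEGER
);

-- One source row: MATCHED updates it, NOT MATCHED inserts it.
INSERT INTO @variableTable (SomeScore, SomeAverage, SomeCount)
VALUES (9.5, 7.25, 3);

-- ...then run the MERGE against SomeTable.
```

Since the table variable holds a single row and the match is on the primary key, the UPDATE branch can only ever touch one row, so no extra WHERE clause is needed.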
I'm having a problem with executing a stored procedure from Perl (using the DBI Module). If I execute a simple SELECT * FROM table there are no problems.
The SQL code is:
DROP FUNCTION IF EXISTS update_current_stock_price;
DELIMITER |
CREATE FUNCTION update_current_stock_price (symbolIN VARCHAR(20), nameIN VARCHAR(150), currentPriceIN DECIMAL(10,2), currentPriceTimeIN DATETIME)
RETURNS INT
DETERMINISTIC
BEGIN
DECLARE outID INT;
SELECT `id` INTO outID FROM `mydb449`.`app_stocks` WHERE `symbol` = symbolIN;
IF outID > 0 THEN
UPDATE `mydb449`.`app_stocks`
SET `currentPrice` = currentPriceIN, `currentPriceTime` = currentPriceTimeIN
WHERE `id` = outID;
ELSE
INSERT INTO `mydb449`.`app_stocks`
(`symbol`, `name`, `currentPrice`, `currentPriceTime`)
VALUES (symbolIN, nameIN, currentPriceIN, currentPriceTimeIN);
SELECT LAST_INSERT_ID() INTO outID;
END IF;
RETURN outID;
END|
DELIMITER ;
The Perl code:
$sql = "select update_current_stock_price('$csv_result[0]', '$csv_result[1]', '$csv_result[2]', '$currentDateTime') as `id`;";
My::Extra::StandardLog("SQL being used: ".$sql);
my $query_handle = $dbh->prepare($sql);
$query_handle->execute();
$query_handle->bind_columns(\$returnID);
$query_handle->fetch();
If I execute select update_current_stock_price('aapl', 'Apple Corp', '264.4', '2010-03-17 00:00:00') as `id`; using the mysql CLI client, it executes the stored function correctly and returns an existing ID or the new ID.
However, the Perl code will only return a new ID (incrementing by 1 on each run). It also doesn't store the result in the database. It looks like a DELETE is executed on the new id just after the update_current_stock_price function runs.
Any help? Does Perl do anything funky to procedures I should know about?
Before you ask, I don't have access to binary logging, sorry.
Perhaps you're doing it in a transaction and it's getting rolled back? The row is inserted but never becomes committed and cannot be seen.
I'd try it on your dev server and enable general query log, if in doubt.
Also you may want to know about the INSERT ... ON DUPLICATE KEY UPDATE syntax, which can probably do what you're trying to do anyway.
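A sketch of that approach against the question's table (this assumes `symbol` has a UNIQUE index, which the upsert relies on):

```sql
INSERT INTO mydb449.app_stocks
    (symbol, name, currentPrice, currentPriceTime)
VALUES ('aapl', 'Apple Corp', 264.4, '2010-03-17 00:00:00')
ON DUPLICATE KEY UPDATE
    currentPrice     = VALUES(currentPrice),
    currentPriceTime = VALUES(currentPriceTime),
    -- documented MySQL trick: makes LAST_INSERT_ID() return this row's
    -- existing id on the update path as well as the insert path
    id = LAST_INSERT_ID(id);

SELECT LAST_INSERT_ID() AS id;
```

This replaces the whole SELECT/IF/INSERT/UPDATE body of the stored function with one atomic statement, which also avoids the race between the lookup and the insert.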
try
$query_handle->dump_results(15, "\n", '|');
before the bind_columns call to see if it is actually getting the results back. You could also try replacing SELECT storedprocedure with SELECT * FROM storedprocedure.
You should check that you are running the latest version of DBD::mysql (the MySQL driver used by DBI). There used to be several issues with stored procedures, and at least some are fixed in recent versions. Maybe these resources are also helpful:
http://www.perlmonks.org/?node_id=609098
http://www.perlmonks.org/?node_id=830585