I am trying to run a stored procedure from an application that supports connecting only to tables and views. My workaround was to use a view that gets its results from the SP via OPENROWSET(). It turns out the SP uses #temp tables to store intermediate results, and that DDL operation does not seem to be supported in distributed queries. I can replace the #temp tables with table variables, but that slows the whole thing down drastically (I was using SELECT * INTO #temp FROM t1 as a bulk insert to speed things up).
The error message I am getting is
Cannot process the object "exec
DW.dbo.TestSpWithTempTable".
The OLE DB provider "SQLNCLI10" for linked server "(null)" indicates
that either the object has no columns or the current user does not
have permissions on that object.
Is there any way I can use #temp tables in an SP and still call it from a view using OPENROWSET?
CREATE PROC TestSpWithTempTable
AS
Select distinct TxnType into #txnTypes from Transactions
-- lots of other stuff going on here
select TxnType from #txnTypes
GO
The view I created is:
CREATE VIEW SelectDataFromSP
AS
SELECT a.*
FROM OPENROWSET('SQLNCLI', 'Server=(local);TRUSTED_CONNECTION=YES;',
'exec DW.dbo.TestSpWithTempTable') AS a
The code that works but is slow is:
CREATE PROC TestSpWithTempTable
AS
declare @TxnTypes table(TxnType varchar(100))
insert into @TxnTypes
Select distinct TxnType from Transactions
-- lots of other stuff going on here
select TxnType from @TxnTypes
GO
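For reference, a sketch of one possible workaround, assuming SQL Server 2012 or later (WITH RESULT SETS does not exist on older versions, and the SQLNCLI10 provider in the error suggests 2008): the same SET NOCOUNT ON plus WITH RESULT SETS combination used in the related question below can be folded into the view's OPENROWSET string.
CREATE VIEW SelectDataFromSP
AS
SELECT a.*
FROM OPENROWSET('SQLNCLI', 'Server=(local);TRUSTED_CONNECTION=YES;',
    -- SET NOCOUNT ON suppresses the rowcount chatter that breaks metadata
    -- discovery; WITH RESULT SETS declares the result shape up front, so the
    -- #temp table inside the procedure no longer needs to be inspected
    'SET NOCOUNT ON;
     EXEC DW.dbo.TestSpWithTempTable
     WITH RESULT SETS ((TxnType varchar(100) NULL));') AS a
GO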
Related
I have a stored procedure in my SQL Server 2017 database which is used by a Node.js API. This stored procedure does some calculations, then returns a JSON object using FOR JSON PATH.
I now want to write some tests in T-SQL to ensure correct functioning of this stored procedure. Altering the stored procedure is not an option.
To reproduce the problem, suppose I have a stored procedure like this:
USE tempdb
GO
CREATE PROCEDURE dbo.example_sp
AS
BEGIN
SELECT val1, val2
INTO #tmp
FROM ( VALUES (1, 2), (3, 4)) vals (val1, val2)
SELECT *
FROM #tmp
FOR JSON PATH
END
Now, noting that the FOR JSON PATH clause will result in a single cell (1 row, 1 column) of type nvarchar, I would like to do something like this to extract the JSON object in order to test it:
CREATE TABLE #result (json_object varchar(max))
INSERT INTO #result
EXEC dbo.example_sp
But this produces an error:
The FOR JSON clause is not allowed in a INSERT statement.
OK, that's frustrating, but I can use the OPENROWSET workaround for that, right?
INSERT INTO #result
SELECT *
FROM OPENROWSET('sqlncli', 'Server=(local);Trusted_Connection=yes;',
'EXEC tempdb.dbo.example_sp')
No, now I have this new error:
The metadata could not be determined because statement 'Select * from #tmp FOR JSON PATH' in procedure 'example_sp' uses a temp table.
Yet more frustrating, but surely I can use the WITH RESULT SETS trick to solve that?
INSERT INTO #result
SELECT *
FROM OPENROWSET('sqlncli', 'Server=(local);Trusted_Connection=yes;',
'EXEC tempdb.dbo.example_sp with result sets ((output varchar(max) NULL))')
Now I receive this error:
Cannot process the object "exec tempdb.dbo.example_sp with result sets (( output varchar(max) NULL))". The OLE DB provider "SQLNCLI11" for linked server "(null)" indicates that either the object has no columns or the current user does not have permissions on that object.
OK, to deal with that we have to use SET NOCOUNT ON which permits SQL Server to skip some validation. The final version is:
INSERT INTO #result
SELECT *
FROM OPENROWSET('sqlncli','Server=(local);Trusted_Connection=yes;',
'SET NOCOUNT ON; exec tempdb.dbo.example_sp with result sets (( output varchar(max) NULL))')
Which works at long last. I feel that this is way too much kludge for such a simple thing ... surely there is a better way to do this?
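If the goal is only a T-SQL test, one sketch (assuming SQL Server 2016 or later so ISJSON and JSON_VALUE are available, and reusing the working OPENROWSET call from above) is to assert directly against the captured string:
CREATE TABLE #result (json_object varchar(max));

INSERT INTO #result
SELECT *
FROM OPENROWSET('sqlncli', 'Server=(local);Trusted_Connection=yes;',
    'SET NOCOUNT ON; exec tempdb.dbo.example_sp with result sets (( output varchar(max) NULL))');

-- check that the procedure returned well-formed JSON with the expected first value
IF EXISTS (SELECT 1 FROM #result
           WHERE ISJSON(json_object) = 1
             AND JSON_VALUE(json_object, '$[0].val1') = '1')
    PRINT 'example_sp test passed';
ELSE
    THROW 50000, 'example_sp test failed', 1;

DROP TABLE #result;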
When I run this command in the Management Studio:
DBCC CHECKIDENT ('TBL_NAME', RESEED, 0);
output is:
Checking identity information: current identity value '0', current column value '0'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
I would like to get that output and insert it into a #temptable, any thoughts?
If your intention is to get the latest generated identity value for a specific table and store it in a temp table, you can use the IDENT_CURRENT function like this:
SELECT IDENT_CURRENT('TBL_NAME') AS CurrentIdentity
INTO #temptable
Alternatively, you can query the sys.identity_columns catalog view:
SELECT IC.last_value AS CurrentIdentity
INTO #temptable
FROM sys.identity_columns IC
INNER JOIN sys.objects O ON IC.object_id=O.object_id
WHERE O.object_id=OBJECT_ID('TBL_NAME')
Capturing the PRINT output of DBCC or of any other stored procedure is possible, with many limitations, through the DBCC OUTPUTBUFFER command, like this:
DBCC CHECKIDENT ('TBL_NAME', RESEED, 0);
CREATE TABLE #temptable ([Buffer] NVARCHAR(MAX))
INSERT #temptable
EXEC ('DBCC OUTPUTBUFFER(@@SPID)')
SELECT * FROM #temptable
I would strongly discourage the use of DBCC OUTPUTBUFFER except for troubleshooting purposes.
I am creating multiple temp tables in a set of queries in SSRS before arriving at the final result that I want to display.
This works fine as long as I use SELECT INTO to create my temp tables, but it starts failing when I change this to a set of CREATE TABLE and INSERT INTO statements.
I checked other resources on the web, and most of them suggest that either I am using bind variables incorrectly or I am using a reserved keyword somewhere. I have checked thoroughly, and neither is the case.
I also checked the ASE documentation on SQL batches, and it states that create table statements are allowed in batches.
create table #temp1 (col1 char(9))
insert into #temp1 select col1 from existing_table1
set sort_merge off
select col2,col3 into #temp2 from #temp1 a, existing_table2 b where a.col1 = b.col1
set sort_merge on
select col2, col3 from #temp2
The error that I get is as below:
An error has occurred during report processing (rsProcessingAborted)
Query Execution failed for dataset dataset1. (rsErrorExecutingCommand)
[07002][Native Code 30070][ASEOLEDB]COUNT field incorrect
Background - I have a DB created from a single large flat file. Instead of creating a single large table with 106 columns, I created a "columns" table which stores the column names and the id of the table that holds that data, plus 106 other tables to store the data for each column. Since not all the records have data in all columns, I thought this might be a more efficient way to load the data (maybe a bad idea).
The difficulty with this was rebuilding a single record from this structure. To facilitate this I created the following procedure:
DROP PROCEDURE IF EXISTS `col_val`;
delimiter $$
CREATE PROCEDURE `col_val`(IN id INT)
BEGIN
DROP TEMPORARY TABLE IF EXISTS tmp_record;
CREATE TEMPORARY TABLE tmp_record (id INT(11), val varchar(100)) ENGINE=MEMORY;
SET @ctr = 1;
SET @valsql = '';
WHILE (@ctr < 107) DO
SET @valsql = CONCAT('INSERT INTO tmp_record SELECT ',@ctr,', value FROM col',@ctr,' WHERE recordID = ',@id,';');
PREPARE s1 FROM @valsql;
EXECUTE s1;
DEALLOCATE PREPARE s1;
SET @ctr = @ctr+1;
END WHILE;
END$$
DELIMITER ;
Then I use the following SQL where the stored procedure parameter is the id of the record I want.
CALL col_val(10);
SELECT c.`name`, t.`val`
FROM `columns` c INNER JOIN tmp_record t ON c.ID = t.id
Problem - The first time I run this it works great. However, each subsequent run returns the exact same record even though the parameter is changed. How does this persist even when the stored procedure should be dropping and re-creating the temp table?
I might be re-thinking the whole design and going back to a single table, but the problem illustrates something I would like to understand.
Unsure if it matters but I'm running MySQL 5.6 (64 bit) on Windows 7 and executing the SQL via MySQL Workbench v5.2.47 CE.
Thanks,
In MySQL stored procedures, don't put an @ symbol in front of local variables (input parameters or locally declared variables). The @id you used refers to a user variable, which is kind of like a global variable for the session you're invoking the procedure from.
In other words, @id is a different variable from id.
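As a minimal sketch of the fix (keeping the rest of the procedure exactly as posted), the CONCAT should reference the parameter id directly; @ctr and @valsql can stay as session variables since they are only used to build and run the dynamic SQL:
SET @valsql = CONCAT('INSERT INTO tmp_record SELECT ', @ctr,
                     ', value FROM col', @ctr,
                     ' WHERE recordID = ', id, ';');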
That's the explanation of the immediate problem you're having. However, I would not design the tables as you have done.
Since not all the records have data in all columns, I thought this might be a more efficient way to load the data
I recommend using a conventional single table, and use NULL to signify missing data.
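A rough sketch of that conventional layout (table and column names here are purely illustrative, not from the original schema):
CREATE TABLE records (
  recordID INT PRIMARY KEY,
  col1 VARCHAR(100) NULL,   -- NULL simply means "no data for this field"
  col2 VARCHAR(100) NULL
  -- ...one nullable column per field, 106 in total
);

-- rebuilding "one record" becomes a single-row lookup, with no temp table
-- or dynamic SQL needed
SELECT * FROM records WHERE recordID = 10;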
I'm converting a ColdFusion Project from Oracle 11 to MS SQL 2008. I used SSMA to convert the DB including triggers, procedures and functions. Sequences were mapped to IDENTITY columns.
I planned on using INSERT-Statements like
INSERT INTO mytable (col1, col2)
OUTPUT INSERTED.my_id
values('val1', 'val2')
This throws an error because the table has an AFTER INSERT trigger defined on it, which writes some of the INSERTED data to another table to keep a history of the data.
Microsoft writes:
If the OUTPUT clause is specified without also specifying the INTO
keyword, the target of the DML operation cannot have any enabled
trigger defined on it for the given DML action. For example, if the
OUTPUT clause is defined in an UPDATE statement, the target table
cannot have any enabled UPDATE triggers.
http://msdn.microsoft.com/en-us/library/ms177564.aspx
I'm now wondering what the best practice is to, firstly, retrieve the generated id and, secondly, "back up" the INSERTED data in a second table.
Is this a good approach for the INSERT? It works because the INSERTED value is not simply returned but written INTO a table variable. It works in my tests, as Microsoft describes, without throwing an error regarding the trigger.
<cfquery>
DECLARE @tab table(id int);
INSERT INTO mytable (col1, col2)
OUTPUT INSERTED.my_id INTO @tab
values('val1', 'val2');
SELECT id FROM @tab;
</cfquery>
Should I use the OUTPUT clause at all? When I have to write multiple statements in one cfquery block, shouldn't I rather use SELECT SCOPE_IDENTITY()?
Thanks and best,
Bernhard
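For comparison, a minimal sketch of the SCOPE_IDENTITY() variant mentioned in the question (same hypothetical table and values as above):
INSERT INTO mytable (col1, col2)
VALUES ('val1', 'val2');

-- SCOPE_IDENTITY() returns the last identity value generated in the current
-- scope, so an identity generated inside the AFTER INSERT trigger cannot
-- overwrite it (the classic problem with @@IDENTITY)
SELECT SCOPE_IDENTITY() AS my_id;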
I think this is what you want to do:
<cfquery name="qryInsert" datasource="db" RESULT="qryResults">
INSERT INTO mytable (col1, col2)
VALUES ('val1', 'val2')
</cfquery>
<cfset id = qryResults.IDENTITYCOL>
This seems to work: the row gets inserted, the INSTEAD OF trigger returns the result, the AFTER trigger doesn't interfere, and the AFTER trigger logs to the table as expected:
CREATE TABLE dbo.x1(ID INT IDENTITY(1,1), x SYSNAME);
CREATE TABLE dbo.log_after(ID INT, x SYSNAME,
dt DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP);
GO
CREATE TRIGGER dbo.x1_after
ON dbo.x1
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT dbo.log_after(x) SELECT x FROM inserted;
END
GO
CREATE TRIGGER dbo.x1_before
ON dbo.x1
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @tab TABLE(id INT);
INSERT dbo.x1(x)
OUTPUT inserted.ID INTO @tab
SELECT x FROM inserted;
SELECT id FROM @tab;
END
GO
Now, if you write this in your cfquery, you should get a row back in output. I'm not CF-savvy so I'm not sure if it has to see some kind of select to know that it will be returning a result set (but you can try it in Management Studio to confirm I am not pulling your leg):
INSERT dbo.x1(x) SELECT N'foo';
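In ColdFusion that might look roughly like this (untested, per the caveat above; the id column name comes from the SELECT inside the INSTEAD OF trigger):
<cfquery name="qryInsert" datasource="db">
    INSERT dbo.x1(x) SELECT N'foo';
</cfquery>
<!--- first row of the result set returned by the INSTEAD OF trigger --->
<cfset newId = qryInsert.id>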
Now you should just move your after insert logic to this trigger as well.
Be aware that right now you will get multiple rows back for a multi-row insert (which is slightly different from the single result you would get from SCOPE_IDENTITY()). This is a good thing, I just wanted to point it out.
I have to admit that's the first time I've seen someone use a merged approach like that instead of simply using the built-in PK retrieval and splitting it into separate database requests (example).