This doesn't seem to have been answered anywhere (although very similar cases have been answered)...
I have an issue where I am trying to update a column's value in a table within a stored procedure. However, I pass more than one table to this stored procedure, and some tables have a certain column while others don't. Thus I need to check whether the column exists before I run the update. Because it's in a stored procedure, SQL Server seems to parse the entire chunk of code up front and complains that this column doesn't exist.
Code:
IF COL_LENGTH(''DBName' + @date + '..' + @TableName + @date + ''', ''ColumnName' + @specifictocolumn + 'restofcolumnname'') IS NOT NULL
update DBName' + @date + '..' + @TableName + @date + ' set ColumnName' + @specifictocolumn + 'restofcolumnname = 0
Alternatively
IF EXISTS(SELECT 1 FROM sys.columns WHERE Name = N''ColumnName' + @specifictocolumn + 'restofcolumnname'' AND Object_ID = Object_ID(N''DBName' + @date + '..' + @TableName + @date + '''))
update DBName' + @date + '..' + @TableName + @date + ' set ColumnName' + @specifictocolumn + 'restofcolumnname = 0
Both of these give the error (column name removed for IP purposes):
Msg 207, Level 16, State 1, Line 6
Invalid column name 'ColumnName'.
There is a question on Stack Overflow called "Disable TSQL script check" that I looked at, but it suggests performing the column check outside of the dynamic SQL and only executing the update if the check passes. This won't work for me because part of the IF statement contains variables that have to be built into the dynamic SQL.
You can still split the dynamic SQL into two parts:
1. check if the column exists
2. do the actual update when 1. returns a value
You'll probably want to use sp_executesql for this and an OUTPUT parameter.
Something along the lines of this:
DECLARE @sql nvarchar(max),
        @result int
SELECT @sql = 'SELECT @col_length = COL_LENGTH(''DBName' + @date + '..' + @TableName + @date + ''', ''ColumnName' + @specifictocolumn + 'restofcolumnname'')'
EXEC sp_executesql @stmt = @sql,
                   @params = N'@col_length int OUTPUT',
                   @col_length = @result OUTPUT
IF @result IS NOT NULL
BEGIN
    EXEC ('update DBName' + @date + '..' + @TableName + @date + ' set ColumnName' + @specifictocolumn + 'restofcolumnname = 0')
END
Or you could go 'dynamic inside dynamic', but that becomes a mess very quickly.
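For completeness, a rough sketch of that nested approach, reusing the question's variable names (I'm assuming they are all strings): the outer batch only compiles the existence check, and the inner update is compiled only when it actually runs, so the missing column never trips the parser.
DECLARE @inner nvarchar(max) =
    N'update DBName' + @date + '..' + @TableName + @date +
    N' set ColumnName' + @specifictocolumn + 'restofcolumnname = 0';
DECLARE @outer nvarchar(max) =
    N'IF COL_LENGTH(''DBName' + @date + '..' + @TableName + @date +
    N''', ''ColumnName' + @specifictocolumn + 'restofcolumnname'') IS NOT NULL EXEC(@stmt);';
EXEC sp_executesql @outer, N'@stmt nvarchar(max)', @stmt = @inner;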
I need to use the functionality of OPENJSON() in an old database with compatibility level 100. The server runs SQL Server 2016, so I came up with this idea: create another database "GeneralUTILS" (level 130) on the same server and call this function from the level 100 DB:
CREATE FUNCTION [dbo].[OPENJSON_](@json NVARCHAR(MAX))
RETURNS @Results TABLE ([Key] NVARCHAR(4000), [Value] NVARCHAR(MAX), [Type] INT)
AS
BEGIN
    INSERT INTO @Results
    SELECT * FROM OPENJSON(@json)
    RETURN
END
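Called from the level 100 database, the idea is to use it roughly like this (just a usage sketch of my own description; GeneralUTILS is the helper database mentioned above):
DECLARE @json NVARCHAR(MAX) = N'{"a": 1, "b": "two"}';
SELECT [Key], [Value], [Type]
FROM GeneralUTILS.dbo.OPENJSON_(@json);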
But with this wrapper I don't have the WITH clause to shape the output table from the level 100 database.
Most important might be the question why you need this at all...
I hope I understood correctly what you need:
(Hint: this needs at least SQL Server 2016.)
--create two mock-up-databases
CREATE DATABASE dbOld;
GO
ALTER DATABASE dbOld SET COMPATIBILITY_LEVEL = 100; --v2008
GO
CREATE DATABASE dbForJsonIssues;
GO
ALTER DATABASE dbForJsonIssues SET COMPATIBILITY_LEVEL = 130; --v2016
GO
--Now we will create a stored procedure in the "higher" database
USE dbForJsonIssues;
GO
--Attention: replacing FROM is a very hacky way... Read the hints at the end...
--You might use parameters for the JSON-string and the JSON-path, but then you must use sp_executesql
CREATE PROCEDURE EXEC_Json_Command @Statement NVARCHAR(MAX), @TargetTable NVARCHAR(MAX)
AS
BEGIN
    DECLARE @statementWithTarget NVARCHAR(MAX) = REPLACE(@Statement, 'FROM', CONCAT(' INTO ', @TargetTable, ' FROM'));
    PRINT @statementWithTarget; --you can comment this line out...
    EXEC(@statementWithTarget);
END
GO
--Now we go into the "lower" database
USE dbOld;
GO
--A synonym is not necessary, but allows for easier code
CREATE SYNONYM dbo.ExecJson FOR dbForJsonIssues.dbo.EXEC_Json_Command;
GO
--This is how to use it
DECLARE @json NVARCHAR(MAX) = N'[{"someObject":[{"attr1":"11", "attr2":"12"},{"attr1":"21", "attr2":"22"}]}]';
DECLARE @Statement NVARCHAR(MAX) = CONCAT(N'SELECT * FROM OPENJSON(N''', @json, N''',''$[0].someObject'') WITH(attr1 INT, attr2 INT)');
--the target table will be created "on the fly"
--You can use ##SomeTarget too, but be careful with concurrency in both approaches...
EXEC ExecJson @Statement = @Statement, @TargetTable = 'dbOld.dbo.SomeTarget';
SELECT * FROM SomeTarget;
--We can drop this table after dealing with the result
DROP TABLE SomeTarget;
GO
--Clean-up (careful with real data!)
USE master;
GO
DROP DATABASE dbOld;
DROP DATABASE dbForJsonIssues;
The most important concepts:
We cannot use the JSON statements directly within the old database, but we can build the statement as a string, pass it to the stored procedure and use EXEC() to run it.
Using SELECT * INTO SomeDb.SomeSchema.SomeTargetTable FROM ... will create a table with the fitting structure. Make sure the target table does not already exist in your database.
It is not really necessary to pass the target table as a parameter; you could place this in the statement yourself. Replacing the FROM in the stored procedure is a rather crude trick and could lead to trouble if FROM appears somewhere else in the statement.
You might use similar procedures for various needs...
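For the sp_executesql route mentioned in the comments above, a minimal sketch could look like this. The procedure name and the fixed column list are my assumptions; it would again be created in the level 130 database, and since passing a JSON path as a variable needs SQL Server 2017+, the path stays literal here:
CREATE PROCEDURE dbo.EXEC_Json_ToTarget @Json NVARCHAR(MAX), @TargetTable NVARCHAR(MAX)
AS
BEGIN
    --The JSON text travels as a real parameter; only the target table name is concatenated.
    DECLARE @stmt NVARCHAR(MAX) =
        N'SELECT attr1, attr2 INTO ' + @TargetTable +
        N' FROM OPENJSON(@j, ''$[0].someObject'') WITH (attr1 INT, attr2 INT);';
    EXEC sp_executesql @stmt, N'@j NVARCHAR(MAX)', @j = @Json;
END
--usage from dbOld: EXEC dbForJsonIssues.dbo.EXEC_Json_ToTarget @Json = @json, @TargetTable = 'dbOld.dbo.SomeTarget';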
Yeah, no way this would pass the smoke test at our office. Anyway, someone asked me to do something similar, but the use case was for parsing JSON arrays only. Since JSON_QUERY and JSON_VALUE are available, I hacked this together just to give them something to work with. My colleague liked the results. Turns out he's much cooler than I am after he modified it.
Declare @Fields NVarchar(2000) = 'Name,Coolness'
Declare @Delimiter As Varchar(10) = ',';
Declare @Xml As Xml = Cast(('<V>' + Replace(@Fields, @Delimiter, '</V><V>') + '</V>') As Xml);
Declare @Json Nvarchar(4000) = N'{"Examples":[{"Name": "Chris","Coolness": "10"},{"Name": "Jay","Coolness": "1"}]}';
Exec ('Begin Try Drop Table #JsonTemp End Try Begin Catch End Catch');
Create Table #JsonTemp (JsonNode Nvarchar(1000));
Declare @Max Integer = 100;
Declare @Index Integer = 0;
While @Index < @Max
Begin
    Declare @Affected Integer = 0;
    Declare @Select Nvarchar(200) = '''' + 'lax$.Examples[' + Convert(Nvarchar, @Index) + ']' + '''';
    Declare @Statement Nvarchar(2000) = 'Select Json_Query(' + '''' + @Json + '''' + ', ' + @Select + ') Where Json_Query(' + '''' + @Json + '''' + ', ' + @Select + ') Is Not Null';
    Insert Into #JsonTemp (JsonNode) Exec sp_executesql @Statement;
    Set @Affected = @@RowCount;
    If (@Affected = 0) Begin Break End
    Set @Index = @Index + 1;
End
Declare @Table Table(Field NVarchar(200));
Declare @Selector NVarchar(500) = 'Json_Value(' + '''' + '{"Node":' + '''' + ' + ' + 'JsonNode' + ' + ' + '''' + '}' + '''' + ', ' + '''' + '$.Node.@Field' + '''' + ')';
Insert Into @Table(Field)
Select N.value('.', 'Varchar(10)') As Field
From @Xml.nodes('V') As A(N);
Declare @Selectors Varchar(8000);
Select @Selectors = Coalesce(@Selectors + ', ', '') + Replace(@Selector, '@Field', Field) + ' As ' + Field
From @Table
Exec ('Select ' + @Selectors + ' From #JsonTemp');
I have the stored procedure below, in which I am passing a column name (@Value) along with other parameters (Place, Scenario).
ALTER PROCEDURE [dbo].[up_GetValue]
    @Value varchar(20), @Place varchar(10), @Scenario varchar(20), @Number varchar(10)
AS BEGIN
    SET NOCOUNT ON;
    DECLARE @SQLquery AS NVARCHAR(MAX)
    SET @SQLquery = 'SELECT ' + @Value + ' from PDetail where Place = ' + @Place + ' and Scenario = ' + @Scenario + ' and Number = ' + @Number
    EXEC sp_executesql @SQLquery
END
GO
When executing: exec [dbo].[up_GetValue] 'Service', 'HOME', 'Agent', '123697'
I am getting the error message below:
Invalid column name 'HOME'.
Invalid column name 'Agent'.
Do I need to add anything in the sproc?
First: You tagged your question as mysql but I think your code is MSSQL.
Anyway, your problem is that you need to add quotes around each string-valued parameter.
Like this:
ALTER PROCEDURE [dbo].[up_GetValue]
    @Value varchar(20), @Place varchar(10), @Scenario varchar(20), @Number varchar(10)
AS BEGIN
    SET NOCOUNT ON;
    DECLARE @SQLquery AS NVARCHAR(MAX)
    SET @SQLquery = 'SELECT ' + QUOTENAME(@Value) + ' from PDetail where Place = ''' + @Place + ''' and Scenario = ''' + @Scenario + ''' and Number = ''' + @Number + ''''
    PRINT @SQLquery
    EXEC sp_executesql @SQLquery
END
GO
Update:
Use QUOTENAME to make sure it works.
QUOTENAME:
Returns a Unicode string with the delimiters added to make the input string a valid SQL Server delimited identifier.
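A slightly tighter variant (my suggestion, not required for the fix above) is to keep only the column name in the string and hand the values to sp_executesql as real parameters, which also takes SQL injection off the table for everything except the column name:
SET @SQLquery = N'SELECT ' + QUOTENAME(@Value) +
                N' from PDetail where Place = @p and Scenario = @s and Number = @n'
EXEC sp_executesql @SQLquery,
                   N'@p varchar(10), @s varchar(20), @n varchar(10)',
                   @p = @Place, @s = @Scenario, @n = @Number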
You need to quote column names with ` (backtick) and string values with ".
set @SQLquery = 'SELECT `' + @Value + '` from PDetail where Place = "' + @Place + '" and Scenario = "' + @Scenario + '" and Number = ' + @Number
Try using a prepared statement instead of concatenating the string.
Example:
PREPARE stmt1 FROM 'SELECT ? from PDetail where Place = ? and Scenario = ? and Number = ?';
EXECUTE stmt1 USING @Value, @Place, @Scenario, @Number;
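One caveat with this (my addition): MySQL prepared statements only bind values, not identifiers, so the first ? would turn @Value into a string literal rather than a column name. The column name still has to be concatenated (ideally after checking it against a whitelist), roughly like this:
SET @sql = CONCAT('SELECT `', @Value, '` FROM PDetail WHERE Place = ? AND Scenario = ? AND Number = ?');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1 USING @Place, @Scenario, @Number;
DEALLOCATE PREPARE stmt1;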
I execute one procedure to generate columns for use in an SSRS dataset.
Here's my SP:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[CrossTab_MultiLV]
(   @Select varchar(2000),
    @Pivots1Col varchar(100),
    @Summaries varchar(500),
    @GroupBy varchar(100),
    @OtherCols varchar(1000) = NULL)
AS
set nocount on
set ansi_warnings on
declare @Vals varchar(8000);
set @Vals = '';
set @OtherCols = isnull(', ' + @OtherCols, '')
create table #temp (Pivots1 varchar(100))
insert into #temp
exec ('select distinct convert(varchar(100),' + @Pivots1Col + ',101) as Pivots1 FROM (' + @Select + ') A')
select @Vals = @Vals + ', ' +
replace(replace(@Summaries,'(','(CASE WHEN ' + @Pivots1Col + '=''' + Pivots1 + ''' THEN '),')[', ' END) as [' + Pivots1)
from #Temp
order by Pivots1
drop table #Temp
exec ('select ' + @GroupBy + @OtherCols + @Vals +
' from (' + @Select + ') A GROUP BY ' + @GroupBy)
set nocount off
set ansi_warnings on
From the SP above I just want to process something and generate fields. The SP should produce multiple columns like this, but only the first two columns show up:
range TotalAccount CL_Only CL_Only_Have_Rate CL_Only_No_Rate EU_CL EU_CL_Have_Rate EU_CL_No_Rate EU_Only EU_Only_Have_Rate EU_Only_No_Rate
12 3 1 1 0 2 2 0 0 0 0
It only shows the range and TotalAccount columns. Is there any mistake in my stored procedure?
I would abandon the dynamic stored procedure design - SSRS does not work with these.
Instead I would present the data with fixed columns and use a Column Group in the SSRS design.
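As a sketch of what I mean (table and column names here are placeholders, since I don't know the underlying schema): return the data in a normalized, fixed-column shape and let SSRS pivot it at render time.
--Fixed columns: one row per (range, pivot value) pair
SELECT [range],
       PivotValue   = convert(varchar(100), Pivots1Col, 101),
       TotalAccount = count(*)
FROM dbo.SourceTable
GROUP BY [range], convert(varchar(100), Pivots1Col, 101);
--In the SSRS tablix: [range] as a Row Group, PivotValue as a Column Group,
--and Sum(TotalAccount) in the detail cell; SSRS creates the columns when the report renders.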
A few minutes ago I was searching for a simple (SQL Server) query syntax that will copy a table row.
This is something that comes up from time to time when working on an ASP.NET project and testing data with queries
inside SQL Server Management Studio. One of the routine actions is copying a row, altering the required columns so the rows differ from each other, then testing data with queries.
So I've encountered this stored procedure, as answered by Dan Atkinson,
but adding it to where all the non-testing-purpose procedures are stored led me to think:
is it possible to store them in some sorted order so I could distinguish
the 'utils' or 'testingPurpose' ones from those used in projects?
(The default folder inside the Management Studio treeview is Programmability.) Could this be another folder too,
or is this not an option?
If not, I thought of a Utils. prefix like this (if no other way exists):
dbo.Utils.CopyTableRow
dbo.Utils.OtherRoutineActions ....
Or is there a designated way to achieve what I was thinking of?
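To make that idea concrete, what I had in mind would be something like a separate schema instead of the default dbo (just a sketch of the idea, if schemas can indeed be used this way):
CREATE SCHEMA Utils AUTHORIZATION dbo;
GO
--helper procedures would then live under Utils instead of dbo, e.g. Utils.TableRowCopy,
--and sort together by schema name in the Programmability > Stored Procedures list.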
this is a first "Util" stored procedure i've made , found it's only solution
prefexing it via Util_
ALTER PROCEDURE [dbo].[Utils_TableRowCopy](
    @TableName VARCHAR(50),
    @RowNumberToCopy INT
)
AS
BEGIN
    declare @RowIdentity sysname =
        (SELECT name FROM sys.identity_columns WHERE object_id = object_id(@TableName)
        )
    DECLARE @columns VARCHAR(5000), @query VARCHAR(8000);
    SET @query = '';
    SELECT @columns =
        CASE
            WHEN @columns IS NULL THEN column_name
            ELSE @columns + ',' + column_name
        END
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE (
        TABLE_NAME = LTRIM(RTRIM(@TableName))
        AND
        column_name <> LTRIM(RTRIM(@RowIdentity))
    );
    SET @query = 'INSERT INTO ' + @TableName + ' (' + @columns + ') SELECT ' + @columns + ' FROM ' + @TableName + ' WHERE ' + @RowIdentity + ' = ' + CAST(@RowNumberToCopy AS VARCHAR);
    --SELECT SCOPE_IDENTITY();
    declare @query2 VARCHAR(100) = ' Select Top 1 * FROM ' + @TableName + ' Order BY ' + @RowIdentity + ' desc';
    EXEC (@query);
    EXEC (@query2);
END
EDIT: Database names have been modified for simplicity
I'm trying to get some dynamic sql in place to update static copies of some key production tables into another database (sql2008r2). The aim here is to allow consistent dissemination of data (from the 'static' database) for a certain period of time as our production databases are updated almost daily.
I am using a CURSOR to loop through a table that contains the objects that are to be copied into the 'static' database.
The prod tables don't change that frequently, but I'd like to make this somewhat "future proof" (if possible!) and extract the column names from INFORMATION_SCHEMA.COLUMNS for each object (instead of using SELECT * FROM ...).
1) From what I have read in other posts, EXEC() seems limiting, so I believe that I'll need to use EXEC sp_executesql, but I'm having a little trouble getting my head around it all.
2) As an added extra, if at all possible, I'd also like to exclude some columns for particular tables (structures vary slightly in the 'static' database).
Here's what I have so far.
When executed, @colnames returns NULL and therefore @sql returns NULL...
Could someone guide me to where I might find a solution?
Any advice or help with this code is much appreciated.
CREATE PROCEDURE sp_UpdateRefTables
    @debug bit = 0
AS
declare @proddbname varchar(50),
        @schemaname varchar(50),
        @objname varchar(150),
        @wherecond varchar(150),
        @colnames varchar(max),
        @sql varchar(max),
        @CRLF varchar(2)
set @wherecond = NULL;
set @CRLF = CHAR(10) + CHAR(13);
declare ObjectCursor cursor for
select databasename, schemaname, objectname
from Prod.dbo.ObjectsToUpdate
OPEN ObjectCursor;
FETCH NEXT FROM ObjectCursor
INTO @proddbname, @schemaname, @objname;
while @@FETCH_STATUS = 0
begin
    if @objname = 'TableXx'
        set @wherecond = ' AND COLUMN_NAME != ''ExcludeCol1'''
    if @objname = 'TableYy'
        set @wherecond = ' AND COLUMN_NAME != ''ExcludeCol2'''
    --extract column names for current object
    select @colnames = coalesce(@colnames + ',', '') + QUOTENAME(column_name)
    from Prod.INFORMATION_SCHEMA.COLUMNS
    where TABLE_NAME = + QUOTENAME(@objname,'') + isnull(@wherecond,'')
    if @debug=1 PRINT '@colnames= ' + isnull(@colnames,'null')
    --replace all data for @objname
    --@proddbname is used as schema name in Static database
    SELECT @sql = 'TRUNCATE TABLE ' + @proddbname + '.' + @objname + '; ' + @CRLF
    SELECT @sql = @sql + 'INSERT INTO ' + @proddbname + '.' + @objname + ' ' + @CRLF
    SELECT @sql = @sql + 'SELECT ' + @colnames + ' FROM ' + @proddbname + '.' + @schemaname + '.' + @objname + '; '
    if @debug=1 PRINT '@sql= ' + isnull(@sql,'null')
    EXEC sp_executesql @sql
    FETCH NEXT FROM ObjectCursor
    INTO @proddbname, @schemaname, @objname;
end
CLOSE ObjectCursor;
DEALLOCATE ObjectCursor;
P.S. I have read about SQL injection, but as this is an internal admin task, I'm guessing I'm safe here!? Any advice on this is also appreciated.
Many thanks in advance.
You have a mix of SQL and dynamic SQL in your query against INFORMATION_SCHEMA. Also, QUOTENAME isn't necessary in the WHERE clause and will actually prevent any match, since SQL Server stores column_name, not [column_name], in the metadata. Finally, I'm going to change it to sys.columns, since this is the way we should be deriving metadata in SQL Server. Try:
SELECT @colnames += ',' + name
FROM Prod.sys.columns
WHERE OBJECT_NAME([object_id]) = @objname
AND name <> CASE WHEN @objname = 'TableXx' THEN 'ExcludeCol1' ELSE '' END
AND name <> CASE WHEN @objname = 'TableYy' THEN 'ExcludeCol2' ELSE '' END;
SET @colnames = STUFF(@colnames, 1, 1, '');
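One extra detail worth checking (my observation, not part of the snippet above): for += to build anything, @colnames has to start as an empty string rather than NULL, and it needs resetting on every pass of the cursor loop so columns from the previous table don't carry over.
SET @colnames = '';   --reset at the top of each cursor iteration, before the SELECT above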