Dealing with conditionally-executed SELECT statements - sql-server-2008

I have this stored procedure (using SQL Server 2008):
create procedure SelectTopCounts
    @Id bigint = null,
    @Count int = null,
    @GetAll bit = 0
as
begin
    set nocount on

    if (@Count is null)
        select @Count = 15 -- default

    if (@GetAll = 1)
    begin
        select col, col2... .......
        -- very long select statement...
    end

    if (@Count is not null)
    begin
        select top (@Count) .....
        -- very long select statement...
    end
end
Is there a way I can have only ONE select statement instead of duplicating it inside each if block?

Assuming your table will always have <= 2 billion rows, replace all the IFs/ELSEs with this:
SELECT TOP (COALESCE(CASE @GetAll WHEN 1 THEN 2000000000 END, @Count, 15))
    col1, col2
FROM ...
-- I assume the @Id comes into play somehow...
-- WHERE ID_column = COALESCE(@Id, ID_column);
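For illustration, assuming the procedure body is reduced to that single SELECT, here is how the TOP expression resolves for a few hypothetical calls:

-- @GetAll = 1 wins regardless of @Count: TOP (2000000000), i.e. effectively all rows
EXEC SelectTopCounts @GetAll = 1;
-- @GetAll defaults to 0, so the CASE yields NULL and @Count is used: TOP (50)
EXEC SelectTopCounts @Count = 50;
-- Everything left at its default: the final fallback applies, TOP (15)
EXEC SelectTopCounts;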

Related

Loop through a split string variable to insert rows in a stored procedure in SQL Server 2008

I am working on SQL Server 2008 to create a stored procedure that:
takes a string variable like this: '1,2,3'
splits the string using a table-valued function to get each value separately
and then inserts each value into a new row in a table
What I am trying to do is something like this:
WHILE (select value FROM dbo.SplitString('1,2,3', ',')) has rows
insert into TableName (col1,col2) values (col1Data, value)
I am having a hard time trying to find the right syntax for this.
I use this Table-valued function:
CREATE FUNCTION [dbo].[Split] (@sep char(1), @s varchar(512))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
        SELECT 1, 1, CHARINDEX(@sep, @s)
        UNION ALL
        SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
        FROM Pieces
        WHERE stop > 0
    )
    SELECT pn,
        SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop - start ELSE 512 END) AS s
    FROM Pieces
)
GO
It takes a separator and a string and returns a table with two columns: the first is the 1-based position of each element, and the second is the element at that position in the string.
Usage:
SELECT * FROM dbo.Split(',', '1,2,3')
Returns:
pn s
1 1
2 2
3 3
To insert the results into a table:
INSERT INTO TableName (Col1)
SELECT s FROM dbo.Split(',', '1,2,3')
For your specific example, change your syntax to:
insert into TableName (col1,col2)
select col1Data, value FROM dbo.SplitString('1,2,3',',')
The typical INSERT INTO ... SELECT ... should do it:
INSERT INTO TableName (col1, col2)
SELECT @col1Data, value FROM dbo.SplitString('1,2,3', ',')
If someone else is looking for this: I was about to write a split function, as several answers suggest, but noticed there's a built-in function that does this already.
STRING_SPLIT was added in SQL Server 2016.
INSERT INTO Project.FormDropdownAnswers (FkTableId, CreatedBy, CreatedDate)
SELECT TRY_CAST(value AS INT), @username, getdate()
FROM STRING_SPLIT('44,45,46,47,55', ',')
https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql
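Applied to the original example in this question, a minimal sketch (TableName, col1, col2, and the 'col1Data' literal are placeholders taken from the question) would be:

INSERT INTO TableName (col1, col2)
SELECT 'col1Data', value
FROM STRING_SPLIT('1,2,3', ',');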
Try this:

CREATE TABLE tablename
(
    id SMALLINT,
    value INT
)

INSERT INTO tablename (id, value)
SELECT * FROM dbo.Split(',', '1,2,3')
If you need to use the values as variables, there are two nice options:
Procedure MF_SPLIT
CREATE PROC [MF_SPLIT] (@ELS NVARCHAR(MAX) = NULL OUTPUT, @RET NVARCHAR(MAX) = NULL OUTPUT, @PROC NVARCHAR(MAX) = NULL) AS BEGIN
    IF @ELS IS NULL BEGIN
        PRINT ' @ELS
List of elements in string (OUTPUT)
@RET
Next return (OUTPUT)
@PROC
NULL = '','', content to do split
Example:
DECLARE @NAMES VARCHAR(100) = ''ERICK,DE,VATHAIRE''
DECLARE @N VARCHAR(100)
WHILE @NAMES IS NOT NULL BEGIN
EXEC MF_SPLIT @NAMES OUTPUT, @N OUTPUT
SELECT List = @NAMES, ActiveWord = @N
END'
        RETURN
    END

    SET @PROC = ISNULL(@PROC, ',')

    IF CHARINDEX(@PROC, @ELS) = 0 BEGIN
        SELECT @RET = @ELS, @ELS = NULL
        RETURN
    END

    SELECT
        @RET = LEFT(@ELS, CHARINDEX(@PROC, @ELS) - 1)
        , @ELS = STUFF(@ELS, 1, LEN(@RET) + 1, '')
END
Usage:
DECLARE @NAMES VARCHAR(100) = '1,2,3'
DECLARE @N VARCHAR(100)
WHILE @NAMES IS NOT NULL BEGIN
    EXEC MF_SPLIT @NAMES OUTPUT, @N OUTPUT
    SELECT List = @NAMES, ActiveWord = @N
END
Procedure MF_SPLIT_DO (depends on MF_SPLIT): less syntax to use, BUT the code to run has to be passed as a string and uses the default variable "@X".
CREATE PROC MF_SPLIT_DO (@ARR NVARCHAR(MAX), @DO NVARCHAR(MAX)) AS BEGIN
    -- Less syntax
    DECLARE @X NVARCHAR(MAX)
    WHILE @ARR IS NOT NULL BEGIN
        EXEC MF_SPLIT @ARR OUT, @X OUT
        EXEC SP_EXECUTESQL @DO, N'@X NVARCHAR(MAX)', @X
    END
END
Usage:
EXEC MF_SPLIT_DO '1,2,3', 'SELECT @X'

How to write this query in SQL Server 2008

I have a requirement to insert more than one record into a table, where the stored procedure will return the values for insertion. Please consider my logic:
Query Logic
foreach(var a in (select id from table1))
{
insert into table2 values(a,DateTime.Now)
}
I need the same logic above to be done in SQL Server. Any help with a solution will be appreciated.
Thanks,
declare @a int = 0
while (@a < 10)
begin
    if (@a in (select id from table1))
    begin
        insert into table2 values (@a, getdate())
    end
    set @a = @a + 1
end
I tried this and it is working well. Thanks to @Nikola Markovinović:
insert into table2(idColumn, dateColumn) select id, getdate() from table1
Hope this helps.
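If the insert might be run more than once, a hedged variation of that set-based statement (assuming idColumn identifies the copied rows) skips ids that are already present in table2:

INSERT INTO table2 (idColumn, dateColumn)
SELECT t1.id, GETDATE()
FROM table1 t1
WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.idColumn = t1.id);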
DECLARE @intFlag INT
SET @intFlag = 1
WHILE (@intFlag <= 5)
BEGIN
    PRINT @intFlag
    SET @intFlag = @intFlag + 1
    IF @intFlag = 4
        BREAK;
END
GO
declare @a int = 0, @n int, @i int = 1
select @n = COUNT(*) from table1
while @i <= @n
begin
    insert into table2
    select x.a, Getdate()
    from (select ROW_NUMBER() over (order by [key]) as slno, * from table1) as x
    where x.slno = @i
    set @i = @i + 1;
end

SQL Server 2008 not generating random value

I'm having a problem with random values being generated for each row in a result set in SQL Server 2008. I found a similar question here, but upon implementing the proposed answer, I saw the same problem as before. When running the query I have provided below, it seems that the same values will sometimes show up in consecutive rows, even though I'm calling for a new NEWID() with each row.
DECLARE @Id int = 0
DECLARE @Counter int = 1
DECLARE @Value int

CREATE TABLE #Table1
(
    id int identity(1,1)
    ,Value int
)

WHILE @Counter < 100000
BEGIN
    INSERT INTO #Table1 (Value)
    SELECT CAST(RAND(CHECKSUM(NEWID())) * 100000 as INT)
    SET @Counter += 1
END

SET @Counter = 0

WHILE @Counter < 5
BEGIN
    SELECT
        @Value = T.Value
        ,@Id = T.id
    FROM #Table1 T
    WHERE T.id = CAST(RAND(CHECKSUM(NEWID())) * 100000 as INT) + 1 + @Counter

    IF @Id <> 0
        SELECT @Value AS Value, @Id as ID

    SET @Counter += 1
END

DROP TABLE #Table1
If I change the INT to a BIGINT, as suggested in the link I provided, nothing is solved, so I don't believe that it's an "overflow" issue.
If I take the calculation out of the select, I don't get the doubled rows:
DECLARE @Id int = 0
DECLARE @Counter int = 1
DECLARE @Value int
-- new variable
DECLARE @DERIVED INT

CREATE TABLE #Table1
(
    id int identity(1,1)
    ,Value int
)

WHILE @Counter < 100000
BEGIN
    INSERT INTO #Table1 (Value)
    SELECT CAST(RAND(CHECKSUM(NEWID())) * 100000 as INT)
    SET @Counter += 1
END

SET @Counter = 0

WHILE @Counter < 5
BEGIN
    -- set here to remove the calculation from the select
    SET @DERIVED = CAST(RAND(CHECKSUM(NEWID())) * 100000 as INT) + 1 + @Counter;

    SELECT
        @Value = T.Value
        ,@Id = T.id
    FROM #Table1 T
    WHERE T.id = @DERIVED

    IF @Id <> 0
        SELECT @Value AS Value, @Id as ID;

    SET @Counter += 1
END

DROP TABLE #Table1
I'm seeing the duplicates every time with the pseudorandom generator inside the SELECT. Oddly enough, I get about the same frequency of duplicates on the insert loop whether or not the calculation is inside the INSERT ... SELECT; that could be coincidence, since we are dealing with a randomly selected number. Note also that since you're adding to the pseudorandom result, the results aren't technically duplicates. They're descending sequences:
11111 + 1 + 1 = 11113
11110 + 1 + 2 = 11113
Same overall result, different pseudorandom result. However, if I change
CAST(RAND(CHECKSUM(NEWID())) * 100000 as INT) + 1 + @Counter
to
CAST(RAND(CHECKSUM(NEWID())) * 100000 as INT) + @Counter + @Counter
I still consistently get duplicates. That implies that the optimizer may be caching/re-using values, at least in the SELECT. I'd call that improper for a non-deterministic function call. I get similar results on 10.0.1600 and 10.50.1600 (2008 RTM and 2008 R2 RTM).
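If the goal is simply to pick one random row per iteration, a common workaround (a sketch reusing the #Table1 and variables defined above) is to avoid RAND() altogether and sort by NEWID(), which is evaluated per row:

SELECT TOP (1)
    @Value = T.Value
    ,@Id = T.id
FROM #Table1 T
ORDER BY NEWID();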

T-SQL: split and aggregate comma-separated values

I have the following table with each row having comma-separated values:
ID
-----------------------------------------------------------------------------
10031,10042
10064,10023,10060,10065,10003,10011,10009,10012,10027,10004,10037,10039
10009
20011,10027,10032,10063,10023,10033,20060,10012,10020,10031,10011,20036,10041
I need to get a count for each individual ID (a group by).
I am just trying to avoid a cursor implementation and am stumped on how to do this without cursors.
Any help would be appreciated!
You will want to use a split function:
create FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (items varchar(MAX))
as
begin
    declare @idx int
    declare @slice varchar(8000)

    select @idx = 1
    if len(@String) < 1 or @String is null return

    while @idx != 0
    begin
        set @idx = charindex(@Delimiter, @String)
        if @idx != 0
            set @slice = left(@String, @idx - 1)
        else
            set @slice = @String

        if (len(@slice) > 0)
            insert into @temptable(items) values (@slice)

        set @String = right(@String, len(@String) - @idx)
        if len(@String) = 0 break
    end
    return
end;
And then you can query the data in the following manner:
select items, count(items)
from table1 t1
cross apply dbo.split(t1.id, ',')
group by items
See SQL Fiddle With Demo
Well, the solution I always use (and there might be a better way) is a function that splits everything. No cursors, just a while loop.
if OBJECT_ID('splitValueByDelimiter') is not null
begin
    drop function splitValueByDelimiter
end
go

create function splitValueByDelimiter (
    @inputValue varchar(max)
    , @delimiter varchar(1)
)
returns @results table (value varchar(max))
as
begin
    declare @delimeterIndex int
        , @tempValue varchar(max)

    set @delimeterIndex = 1
    while @delimeterIndex > 0 and len(isnull(@inputValue, '')) > 0
    begin
        set @delimeterIndex = charindex(@delimiter, @inputValue)
        if @delimeterIndex > 0
            set @tempValue = left(@inputValue, @delimeterIndex - 1)
        else
            set @tempValue = @inputValue

        if (len(@tempValue) > 0)
        begin
            insert
            into @results
            select @tempValue
        end

        set @inputValue = right(@inputValue, len(@inputValue) - @delimeterIndex)
    end
    return
end
After that you can call the output like this:
if object_id('test') is not null
begin
drop table test
end
go
create table test (
Id varchar(max)
)
insert
into test
select '10031,10042'
union all select '10064,10023,10060,10065,10003,10011,10009,10012,10027,10004,10037,10039'
union all select '10009'
union all select '20011,10027,10032,10063,10023,10033,20060,10012,10020,10031,10011,20036,10041'
select value
from test
cross apply splitValueByDelimiter(Id, ',')
Hope it helps, although I am still looping through everything.
Reiterating the comment above about NOT putting multiple values into a single column (use a separate child table with one value per row!), here is nevertheless one possible approach: use a UDF to convert each delimited string to a table. Once all the values have been converted to tables, combine them into one table and do a GROUP BY on that table.
Create Function dbo.ParseTextString (@S Text, @delim VarChar(5))
Returns @tOut Table
    (ValNum Integer Identity Primary Key,
     sVal VarChar(8000))
As
Begin
    Declare @dlLen TinyInt        -- Length of delimiter
    Declare @wind VarChar(8000)   -- Will contain window into text string
    Declare @winLen Integer       -- Length of window
    Declare @isLastWin TinyInt    -- Boolean to indicate processing last window
    Declare @wPos Integer         -- Start position of window within text string
    Declare @roVal VarChar(8000)  -- String data to insert into output table
    Declare @BtchSiz Integer      -- Maximum size of window
    Set @BtchSiz = 7900           -- (Reset to smaller values to test routine)
    Declare @dlPos Integer        -- Position within window of next delimiter
    Declare @Strt Integer         -- Start position of each data value within window
    -- -------------------------------------------------------------------------
    If @delim is Null Set @delim = '|'
    If DataLength(@S) = 0 Or
       Substring(@S, 1, @BtchSiz) = @delim Return
    -- --------------------------------------------
    Select @dlLen = DataLength(@delim),
           @Strt = 1, @wPos = 1,
           @wind = Substring(@S, 1, @BtchSiz)
    Select @winLen = DataLength(@wind),
           @isLastWin = Case When DataLength(@wind) = @BtchSiz
                             Then 0 Else 1 End,
           @dlPos = CharIndex(@delim, @wind, @Strt)
    -- --------------------------------------------
    While @Strt <= @winLen
    Begin
        If @dlPos = 0 Begin -- No more delimiters in window
            If @isLastWin = 1 Set @dlPos = @winLen + 1
            Else Begin
                Set @wPos = @wPos + @Strt - 1
                Set @wind = Substring(@S, @wPos, @BtchSiz)
                -- ----------------------------------------
                Select @winLen = DataLength(@wind), @Strt = 1,
                       @isLastWin = Case When DataLength(@wind) = @BtchSiz
                                         Then 0 Else 1 End,
                       @dlPos = CharIndex(@delim, @wind, 1)
                If @dlPos = 0 Set @dlPos = @winLen + 1
            End
        End
        -- -------------------------------
        Insert @tOut (sVal)
        Select LTrim(Substring(@wind,
                               @Strt, @dlPos - @Strt))
        -- -------------------------------
        -- Move @Strt to char after last delimiter
        Set @Strt = @dlPos + @dlLen
        Set @dlPos = CharIndex(@delim, @wind, @Strt)
    End
    Return
End
Then write (using your table schema):
Declare @AllVals VarChar(8000)
Select @AllVals = Coalesce(@AllVals + ',', '') + ID
From Table Where ID Is Not Null
-- -----------------------------------------
Select sVal, Count(*)
From dbo.ParseTextString(@AllVals, ',')
Group By sVal
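On SQL Server 2016 and later (not the 2008 instance in the question, but worth noting), the same aggregation can be done with the built-in STRING_SPLIT instead of a hand-rolled UDF; a minimal sketch, assuming the table is table1 with the CSV column ID as above:

SELECT s.value AS ID, COUNT(*) AS Cnt
FROM table1 t1
CROSS APPLY STRING_SPLIT(t1.ID, ',') s
GROUP BY s.value;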

How to loop the data and if username exists append incremental number e.g. JOHSMI1 or JOHSMI2

I have a userid table
UserId
JHOSMI
KALVIE
etc...
What I would like to do is create a select statement and pass in a user id. If the userid already exists, then append 1 to the id. This gets complicated once you already have JHOSMI and JHOSMI1; in that case I want to return JHOSMI2.
Really appreciate help here.
Thanks in advance
Edited 21-Jul:
This is what I got so far, but it is not working the way I want:
select @p AS StaffID,
    @old_p := @p,
    @Cnt := @Cnt + 1 As Lvl,
    (SELECT @p := Concat(@i, @Cnt)
     FROM departmenttaff
     WHERE upper(trim(UserId)) = upper(trim(StaffID))
       AND upper(trim(department)) like upper(trim('SERVICE'))
    ) AS dummy
FROM (
    SELECT
        @i := upper(trim('JOHSMI')),
        @p := upper(trim('JOHSMI')),
        @old_p := '',
        @Cnt := 0
) vars,
departmenttaff p
WHERE @p <> @old_p
order by Lvl Desc LIMIT 1;
This will do exactly what you want. You will need a unique constraint on your column.
You might also need to add error handling for when @success = 0.
This is MSSQL; you will need to adapt the relevant commands for MySQL. I do not have MySQL, so I cannot test it.
NOTE: You can replace the try/catch with some IF EXISTS logic. I just prefer the try/catch because it is more stable under multiple concurrent threads.
begin tran

select * from #tmp

declare @success bit
declare @name varchar(50)
declare @newname varchar(50)
declare @nextid int
declare @attempts int

set @name = 'brad2something'
set @success = 0
set @attempts = 0

while @success = 0 and @attempts < 5 begin
    begin try
        set @attempts = @attempts + 1 -- failsafe
        set @newname = @name
        if exists (select * from #tmp where username = @name) begin
            select @nextid = isnull(max(convert(int, substring(username, LEN(@name) + 1, 50))), 0) + 1
            from #tmp where username like @name + '%' and isnumeric(substring(username, LEN(@name) + 1, 50)) = 1
            set @newname = @name + CONVERT(varchar(20), @nextid)
        end
        insert into #tmp (username) values (@newname)
        set @success = 1
    end try begin catch end catch
end

--insert into #tmp (username)
--select

select @success
select * from #tmp

rollback

/*
drop table #tmp

create table #tmp (
    username varchar(50) not null unique
)

insert into #tmp (username)
select 'brad'
union all select 'brad1'
union all select 'brad2something5'
union all select 'brad2'
union all select 'laney'
union all select 'laney500'
*/
I noticed you want to back-fill data. If so, the following will work. It is extremely inefficient, but there is no way around that. You could add optimizing code for when an "error" occurs, to avoid repeating all the previous counts, but this will work.
begin tran

select * from #tmp

declare @success bit
declare @name varchar(50)
declare @newname varchar(50)
declare @nextid int
declare @attempts int

set @name = 'laney'
set @success = 0
set @attempts = 0
set @nextid = 1

while @success = 0 and @attempts < 5 begin
    begin try
        if exists (select * from #tmp where username = @name) begin
            set @newname = @name + CONVERT(varchar(20), @nextid)
            while exists (select * from #tmp where username = @newname) begin
                set @nextid = @nextid + 1
                set @newname = @name + CONVERT(varchar(20), @nextid)
            end
        end else
            set @newname = @name
        set @attempts = @attempts + 1 -- failsafe
        insert into #tmp (username) values (@newname)
        set @success = 1
    end try begin catch end catch
end

--insert into #tmp (username)
--select

select @success
select * from #tmp

rollback

/*
drop table #tmp

create table #tmp (
    username varchar(50) not null unique
)

insert into #tmp (username)
select 'brad'
union all select 'brad1'
union all select 'brad2something5'
union all select 'brad2'
union all select 'laney'
union all select 'laney500'
*/
Is it mandatory to have the count in the same column? It is better to have it in a separate integer column. Anyway, if this is the requirement, then SELECT userid FROM table WHERE userid LIKE 'JHOSMI%', and then extract the number using the MySQL SUBSTR function.
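A rough MySQL sketch of that suggestion (using the departmenttaff table and UserId column from the question's attempt, and assuming anything after the base name is numeric):

SELECT CONCAT('JHOSMI',
              COALESCE(MAX(CAST(SUBSTR(UserId, LENGTH('JHOSMI') + 1) AS UNSIGNED)) + 1, '')) AS NextUserId
FROM departmenttaff
WHERE UserId LIKE 'JHOSMI%';
-- Returns JHOSMI if the name is unused, JHOSMI1 if only JHOSMI exists,
-- JHOSMI2 if JHOSMI1 is also taken, and so on.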
For other people who might find this, here's a version in PostgreSQL:
create or replace function uniquify_username(varchar) returns varchar as $$
select $1 || coalesce((max(num) + 1)::varchar, '')
from
(select
substring(name, '^(.*?)[0-9]*$') as prefix,
coalesce(substring(name, '.*([0-9]+)$'), '0')::integer as num
from user1) users
where prefix = $1
$$ LANGUAGE sql;
I think it could be adapted to MySQL (though probably not as a stored procedure) but I don't have a MySQL server handy to do the conversion on.
Put a UNIQUE constraint on the column.
You didn't say what language you are using, so use this pseudo code
counter = 0
finished = false
while finished = false
{
    try
    {
        if counter >= 1 then name = name + counter
        counter = counter + 1
        insert into table (name)
        finished = true
    }
    catch
    {
        // duplicate name: loop again with the next counter value
    }
}
This code is extremely finicky, but it will get the job done, and there is no real other way to do this except in SQL; you will always have some type of try/catch to handle two processes running at the same time. This way you use the unique key constraint to force the error and suppress it, because it is expected.
I in no way condone using try/catch for business logic like this, but you are putting yourself in an unavoidable situation. I would say put the ID in a separate column and make a unique constraint on both fields.
Proper solution:
Columns: Name, ID, Display Name
Unique constraint on: Name, ID
Display Name is a computed (virtual) column: Name + ID
If you do it this way, then all you have to do on insert is INSERT INTO table (Name, ID), selecting MAX(ID) + 1 for that name from the table.
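A minimal T-SQL sketch of that design (table and column names here are hypothetical; the first insert for a name gets suffix 0, which displays as the bare name):

CREATE TABLE Users (
    Name        varchar(50) NOT NULL,
    Id          int NOT NULL,
    DisplayName AS Name + CASE WHEN Id = 0 THEN '' ELSE CONVERT(varchar(10), Id) END,
    CONSTRAINT UQ_Users_Name_Id UNIQUE (Name, Id)
);

-- Each insert takes the next suffix for that name; the unique constraint
-- catches any race between concurrent callers.
INSERT INTO Users (Name, Id)
SELECT 'JHOSMI', ISNULL(MAX(Id), -1) + 1
FROM Users
WHERE Name = 'JHOSMI';
-- First call yields JHOSMI (Id 0), the next JHOSMI1, then JHOSMI2, and so on.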