write a TRANSFORM statement in Sql Server - sql-server-2008

I am migrating a web application backend from Access to SQL Server, but I was not able to reproduce the following query in MSSQL. Any ideas?
TRANSFORM First(FollowUp.FUData) AS FirstOfFUData
SELECT FollowUp.MRN
FROM FollowUp
GROUP BY FollowUp.MRN
PIVOT FollowUp.FU;
Please note that this query converts data from the EAV table FollowUp to a normal table.
This is the design of the table FollowUp (the table-design screenshot from the original post is omitted here):

In SQL Server you can use the PIVOT function and your query would be set up this way:
select MRN, Value1, Value2
from
(
select MRN, FUData, FU
from FollowUp
) src
pivot
(
max(FUData)
for FU in (Value1, Value2)
) piv
Where you would replace Value1, Value2, etc. with the values of FU that should now become columns.
SQL Server 2008 does not have a FIRST() function, so you will have to use another aggregate function, or query the data in such a manner that it returns the first record for each item in FU.
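One way to emulate FIRST() (a sketch, not from the original answer; FollowUpID is an assumed ordering column, since the question's table design is not shown) is to keep only the first row per MRN/FU with ROW_NUMBER() before pivoting:

```sql
-- Hypothetical: FollowUpID stands in for whatever column defines "first".
select MRN, Value1, Value2
from
(
    select MRN, FUData, FU
    from (
        select MRN, FUData, FU,
               row_number() over (partition by MRN, FU order by FollowUpID) as rn
        from FollowUp
    ) t
    where rn = 1   -- keep only the first FUData per MRN/FU
) src
pivot
(
    max(FUData)
    for FU in (Value1, Value2)
) piv;
```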
Another way to write this is using an aggregate function with a CASE statement:
select MRN,
max(case when FU = 'value1' then FUData else null end) Value1,
max(case when FU = 'value2' then FUData else null end) Value2
from FollowUp
group by MRN
The above versions will work great if you have a known number of FU values to transform into columns, but if you do not then you will need to use dynamic SQL similar to this:
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT distinct ',' + QUOTENAME(FU)
                      from FollowUp
                      FOR XML PATH(''), TYPE
                      ).value('.', 'NVARCHAR(MAX)')
                     ,1,1,'')

set @query = 'SELECT MRN,' + @cols + ' from
             (
                select MRN, FUData, FU
                from FollowUp
             ) x
             pivot
             (
                 max(FUData)
                 for FU in (' + @cols + ')
             ) p '

execute(@query)

Related

T-SQL query to return JSON array of strings

I'm hoping to build an optimized JSON data structure that only includes data, no names. I'll include the names in another JSON.
For example
[["1", "William"],["2", "Dylan"]]
I'm looking at "for json auto", running a query like this.
declare @t table(id int, name varchar(20))
insert into @t (id, name) values( 1, 'William')
insert into @t (id, name) values( 2, 'Dylan')
declare @result as varchar(max)
select id, name from @t for json auto
However it includes the names with every value.
[{"id":1,"name":"William"},{"id":2,"name":"Dylan"}]
Is there a way to instruct SQL Server to omit the names and just return a string array?
I'll need to update a couple hundred queries, so I'm hoping for an answer that doesn't require too much modification on a basic query.
Unfortunately, SQL Server does not support the JSON_AGG function or similar. You can hack it with STRING_AGG and STRING_ESCAPE.
You can either do this with a single aggregation, concatenating each row together:
SELECT '[' + STRING_AGG(CONCAT(
'["',
id,
'","',
STRING_ESCAPE(name, 'json'),
'"]'
), ',') + ']'
FROM @t t;
Or with a nested aggregation, first aggregating each row in an unpivoted subquery, then aggregating all rows together:
SELECT '[' + STRING_AGG('[' + j.json + ']', ',') + ']'
FROM @t t
CROSS APPLY (
SELECT STRING_AGG('"' + STRING_ESCAPE(value, 'json') + '"', ',')
FROM (VALUES
(CAST(id AS nvarchar(max))),
(name)
) v(value)
) j(json);
I've assumed columns are not nullable. Nullable columns will need special handling, I leave it as an exercise to the reader.
Not all that different from Charlie's but uses CONCAT_WS to remove some of the explicit " characters:
SELECT [json] = '['
+ STRING_AGG('["' + CONCAT_WS('", "', id,
STRING_ESCAPE(COALESCE(name,''), N'JSON'))
+ '"]', ',') + ']'
FROM @t;
Output (after adding a 3rd row, values (3, NULL)):
json
[["1", "William"],["2", "Dylan"],["3", ""]]
If you want the literal string null with no quotes:
SELECT [json] = '['
+ STRING_AGG('['
+ CONCAT_WS(', ', CONCAT('"', id, '"'),
COALESCE('"' + STRING_ESCAPE(name, N'JSON') + '"', 'null'))
+ ']', ',') + ']'
FROM @t;
Output:
json
[["1", "William"],["2", "Dylan"],["3", null]]
If you don't want the NULL value to appear as an element in the JSON, just remove the COALESCE:
SELECT [json] = '['
+ STRING_AGG('["' + CONCAT_WS('", "', id,
STRING_ESCAPE(name, N'JSON'))
+ '"]', ',') + ']'
FROM @t;
Output:
json
[["1", "William"],["2", "Dylan"],["3"]]
If you don't want that row present in the JSON at all, just filter it out:
FROM @t WHERE name IS NOT NULL;
If that column doesn't allow NULLs, state it explicitly so we don't have to guess (probably doesn't hurt to confirm id is unique, either):
declare @t table(id int UNIQUE, name varchar(20) NOT NULL);

How to select JSON Properties as column using T-SQL

I have a table with a JSON column. I want to select JSON properties as column. The property names will be unknown. So I have to use dynamic SQL. Based on this SO suggestion, I was able to get properties.
CREATE TABLE [Templates]
(
[ID] [INT] NOT NULL,
[Template] [NVARCHAR](MAX)
)
INSERT INTO Templates(ID,Template)
VALUES (1, '{"FirName":"foo"}'),
(2, '{"FirName":"joe","LastName":"dow"}'),
(3, '{"LastName":"smith","Address":"1234 Test Drive"}'),
(4, '{"City":"New York"}')
-- SELECT keys
SELECT DISTINCT(j.[key])
FROM Templates T
CROSS APPLY OPENJSON(T.Template) AS j
How do I dynamically create a fitting statement/WITH clause to select the properties as columns? If a property doesn't exist, it should return NULL.
Another possible approach is to use OPENJSON() with dynamically generated WITH clause. Note, that in this case you need to use lax mode in the path expression to guarantee that OPENJSON() doesn't raise an error if the object or value on the specified path can't be found.
Table:
CREATE TABLE [Templates](
[ID] [int] NOT NULL,
[Template] [nvarchar](max)
)
INSERT INTO Templates(ID,Template)
VALUES
(1,'{"FirName":"foo"}'),
(2,'{"FirName":"joe","LastName":"dow"}'),
(3,'{"LastName":"smith","Address":"1234 Test Drive"}'),
(4,'{"City":"New York"}')
Statement:
DECLARE @stm nvarchar(max) = N''

-- Dynamic explicit schema (WITH clause)
SELECT @stm = CONCAT(
   @stm,
   N', [',
   [key],
   N'] nvarchar(max) ''lax $."',
   [key],
   '"'''
)
FROM (
   SELECT DISTINCT j.[key] FROM Templates t
   CROSS APPLY OPENJSON(t.Template) AS j
) cte

-- Statement
SELECT @stm = CONCAT(
   N'SELECT j.* ',
   N'FROM Templates t ',
   N'CROSS APPLY OPENJSON(t.Template) WITH (',
   STUFF(@stm, 1, 2, N''),
   N') j '
)

-- Execution
PRINT @stm
EXEC sp_executesql @stm
Output:
Address          City      FirName  LastName
---------------- --------- -------- --------
                           foo
                           joe      dow
1234 Test Drive                     smith
                 New York
Dynamic columns would require Dynamic SQL. If the desired columns are known, you can use a simple pivot or even a conditional aggregation.
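For the known-columns case just mentioned, a conditional sketch with JSON_VALUE (column names taken from the sample data above) avoids dynamic SQL entirely:

```sql
-- JSON_VALUE returns NULL in lax mode when the property is absent,
-- which matches the "return null" requirement from the question.
SELECT ID,
       JSON_VALUE(Template, '$.FirName')  AS FirName,
       JSON_VALUE(Template, '$.LastName') AS LastName,
       JSON_VALUE(Template, '$.Address')  AS Address,
       JSON_VALUE(Template, '$.City')     AS City
FROM Templates;
```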
Example
Declare @SQL varchar(max) = stuff((Select ','+QuoteName([key])
                                   From (SELECT DISTINCT j.[key] FROM Templates T
                                         CROSS APPLY OPENJSON(T.Template) AS j) A
                                   Order By 1
                                   For XML Path('')),1,1,'')

Set @SQL = '
Select *
 From (
        Select T.ID
              ,j.[Key]
              ,j.[Value]
         From Templates T
         Cross Apply OpenJSON(T.Template) AS j
      ) src
 Pivot ( max(value) for [Key] in ('+ @SQL +') ) pvt
'
Exec(@SQL)
EDIT - If you don't want ID in the Final Results
Declare @SQL varchar(max) = stuff((Select ','+QuoteName([key])
                                   From (SELECT DISTINCT j.[key] FROM Templates T
                                         CROSS APPLY OPENJSON(T.Template) AS j) A
                                   Order By 1
                                   For XML Path('')),1,1,'')

Set @SQL = '
Select '+ @SQL +'
 From (
        Select T.ID
              ,j.[Key]
              ,j.[Value]
         From Templates T
         Cross Apply OpenJSON(T.Template) AS j
      ) src
 Pivot ( max(value) for [Key] in ('+ @SQL +') ) pvt
'
Exec(@SQL)

Is it possible to find a row that contains a string? Assume that I do not know which columns contain the string

I know that there are several ways to find which row's column contains a string, like using [column name] regexp ' ' or [column name] like ' '.
What I currently need help with is this: I have a table with several columns, all varchar or text, and I am not sure which column contains a certain string. Say I want to search for "xxx" in a table. Several different columns could contain this string, or not. Is there a way to find which column contains this string?
My thinking is that the solution could be:
select * from [table name] where [column1] regexp 'xxx' or
[column2] regexp 'xxx' or ...... [column39] regexp 'xxx' or .....
[colum60] regexp 'xxx' or ... or [column 80] regexp 'xxx';
I do not want the query to look like this. Is there a more effective way?
To give a better example, say that we are searching for a table that belongs to a blog.
We have title, URL, content, keywords, tag, comment and so on. Now say any blog article is related to "database-normalization"; this word may appear in the title, URL, content or anywhere, and I do not want to write it out one by one like
where title regexp 'database-normalization' or content regexp 'database-normalization' or url regexp 'database-normalization'......
When there are hundreds of columns, I would need to write hundreds of these. In this case, is there an effective way to avoid writing a hundred OR conditions, like using if-else, collections, or something else to build the query?
If you want a purely dynamic way, you can try this. I tried it a while back on SQL Server, and hope it may help you.
#TMP_TABLE -- a temporary table
- PK, IDENTITY
- TABLE_NAME
- COLUMN_NAME
- IS_EXIST
INSERT INTO #TMP_TABLE (TABLE_NAME,COLUMN_NAME)
SELECT C.TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS C
WHERE C.TABLE_NAME = <your-table> AND C.DATA_TYPE = 'varchar'; -- you can modify it to handle multiple table at once.
-- boundaries
SET @MINID = (SELECT ISNULL(MIN(<PK>),0) FROM #TMP_TABLE );
SET @MAXID = (SELECT ISNULL(MAX(<PK>),0) FROM #TMP_TABLE );

WHILE ((@MINID <= @MAXID) AND (@MINID <> 0))
BEGIN
    SELECT @TABLE_NAME = TABLE_NAME, @COLUMN_NAME = COLUMN_NAME
    FROM #TMP_TABLE
    WHERE <PK> = @MINID;

    SET @sqlString = ' UPDATE #TMP_TABLE
                       SET IS_EXIST = 1
                       WHERE EXISTS (SELECT 1 FROM '+ @TABLE_NAME +' WHERE '+ @COLUMN_NAME +' = ''demo.webstater.com'') AND <PK> = '+ CAST(@MINID AS varchar(10));

    EXEC(@sqlString);

    SET @MINID = (SELECT MIN(<PK>) FROM #TMP_TABLE WHERE <PK> > @MINID );
END
SELECT * FROM #TMP_TABLE WHERE IS_EXIST = 1 ; -- will give you matched results.
If you know the columns in advance, what you proposed is probably the most effective way (if a little verbose).
Otherwise, you could get the column names from INFORMATION_SCHEMA.COLUMNS and construct dynamic SQL based on that.
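A minimal sketch of that approach (table and variable names are illustrative; LIKE with wildcards stands in for REGEXP, which SQL Server lacks, and escaping of the search string is omitted for brevity):

```sql
-- Build "col1 LIKE '%xxx%' OR col2 LIKE '%xxx%' OR ..." from the catalog,
-- then execute it against the table.
DECLARE @tableName sysname = 'Blog',      -- assumed table name
        @search nvarchar(100) = 'xxx',    -- string to find
        @where nvarchar(max),
        @sql nvarchar(max);

SELECT @where = STUFF((
    SELECT ' OR ' + QUOTENAME(COLUMN_NAME) + ' LIKE ''%' + @search + '%'''
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @tableName
      AND DATA_TYPE IN ('varchar', 'nvarchar', 'text', 'ntext')
    FOR XML PATH('')), 1, 4, '');

SET @sql = N'SELECT * FROM ' + QUOTENAME(@tableName) + N' WHERE ' + @where;
EXEC sp_executesql @sql;
```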
His question is not about querying specific columns with a LIKE clause. He is asking how to apply the same pattern across columns dynamically.
Example: a table having 3 columns - FirstName, LastName, Address - with the pattern matching "starts with A"; the resulting query should be:
Select * From Customer where FirstName like 'A%' or LastName like 'A%' or Address like 'A%'
If you want to build query in business layer, this could easily be done with reflection along with EF.
If you are motivated to do in database then you can achieve by building query dynamically and then execute through sp_executesql.
Try this (just pass the table name and the string to find):
create proc usp_findString
    @tablename varchar(500),
    @string varchar(max)
as
Begin
    Declare @sql2 varchar(max), @sql nvarchar(max)

    SELECT @sql2 =
        STUFF((SELECT ', case when '+QUOTENAME(NAME)+'='''+@string+''' then 1 else 0 end as '+NAME
               FROM (select a.name from sys.columns a join sys.tables b on a.[object_id]=b.[object_id] where b.name=@tablename) T1
               FOR XML PATH('')),1,1,'')

    set @sql = 'select '+@sql2+' from '+@tablename
    print @sql
    EXEC sp_executesql @sql
End
One way is to use CASE to check for substring existence with LOCATE (in MySQL) and return the column name, but you have to check every column of the table, as below:
CREATE TABLE test(col1 VARCHAR(1000), col2 VARCHAR(1000), col3 VARCHAR(1000))
INSERT INTO test VALUES
('while currently what I need some help is I have a table with 10 columns',
'contains a certain string. Just say that I want to search a table',
'contains a certain string demo.webstater.com')
SELECT (CASE WHEN LOCATE('demo.webstater.com', col1, 1) > 0 THEN 'col1'
WHEN LOCATE('demo.webstater.com', col2, 1) > 0 THEN 'col2'
WHEN LOCATE('demo.webstater.com', col3, 1) > 0 THEN 'col3'
END) whichColumn
FROM test
OUTPUT:
whichColumn
col3
There are many ways to do this kind of analysis. You can use LIKE 'A%' if the value starts with A in SQL, or a REGEXP pattern for more complex checks.

Using CTE with a dynamic pivot

I'm trying to use this question to perform a dynamic pivot, but I want to use a CTE to get the initial data.
My query looks like this:
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX);

WITH dataSet (coDate, TransactionDate, TotalBalance, TransDate, collected)
AS
( *SELECT STATEMENT )

SET @cols = STUFF((SELECT distinct ',' + QUOTENAME(c.category)
                   FROM dataSet c
                   FOR XML PATH(''), TYPE
                   ).value('.', 'NVARCHAR(MAX)')
                  ,1,1,'')

set @query = 'SELECT coDate, ' + @cols + ' from
             (
                select coDate
                     , TotalBalance
                     , collected
                     , TransDate
                from dataSet
             ) x
             pivot
             (
                 SUM(collected)
                 for category in (' + @cols + ')
             ) p '

execute(@query)
And the error SQL Server gives me is Incorrect syntax near the keyword 'SET'. I did try adding a semicolon and GO, as well as a comma before the SET statement, but this is the first time I've used PIVOT, so I'm not sure how a CTE interacts with it.
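For context (not from the original thread): a CTE is only in scope for the single statement that immediately follows it, so WITH ... AS (...) cannot be followed by SET, and the CTE cannot be referenced inside a later EXECUTE. One sketch of a workaround is to materialize the CTE into a temp table that the later statements can see:

```sql
-- Sketch: the question's SELECT is elided, so a placeholder comment is used.
WITH dataSet (coDate, TransactionDate, TotalBalance, TransDate, collected)
AS
( /* original SELECT statement */ )
SELECT * INTO #dataSet FROM dataSet;   -- the one statement the CTE allows

-- From here on, build the column list and the dynamic pivot against
-- #dataSet instead of dataSet, exactly as in the query above.
```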

SQL joining 1 to 1 and 1 to many table [duplicate]

This question already has answers here:
How to concatenate text from multiple rows into a single text string in SQL Server
(47 answers)
Closed 7 years ago.
If I issue SELECT username FROM Users I get this result:
username
--------
Paul
John
Mary
but what I really need is one row with all the values separated by comma, like this:
Paul, John, Mary
How do I do this?
select stuff((
    select ',' + u.username
    from users u
    order by u.username
    for xml path('')
),1,1,'') as userlist
had a typo before, the above works
This should work for you. Tested all the way back to SQL 2000.
create table #user (username varchar(25))
insert into #user (username) values ('Paul')
insert into #user (username) values ('John')
insert into #user (username) values ('Mary')
declare @tmp varchar(250)
SET @tmp = ''
select @tmp = @tmp + username + ', ' from #user
select SUBSTRING(@tmp, 0, LEN(@tmp))
good review of several approaches:
http://blogs.msmvps.com/robfarley/2007/04/07/coalesce-is-not-the-answer-to-string-concatentation-in-t-sql/
Article copy -
Coalesce is not the answer to string concatenation in T-SQL
I've seen many posts over the years about using the COALESCE function to get string concatenation working in T-SQL. This is one of the examples here (borrowed from Readifarian Marc Ridey).
DECLARE @categories varchar(200)
SET @categories = NULL

SELECT @categories = COALESCE(@categories + ',','') + Name
FROM Production.ProductCategory

SELECT @categories
This query can be quite effective, but care needs to be taken, and the use of COALESCE should be properly understood. COALESCE is the version of ISNULL which can take more than two parameters. It returns the first thing in the list of parameters which is not null. So really it has nothing to do with concatenation, and the following piece of code is exactly the same - without using COALESCE:
DECLARE @categories varchar(200)
SET @categories = ''

SELECT @categories = @categories + ',' + Name
FROM Production.ProductCategory

SELECT @categories
But the unordered nature of databases makes this unreliable. The whole reason why T-SQL doesn't (yet) have a concatenate function is that this is an aggregate for which the order of elements is important. Using this variable-assignment method of string concatenation, you may actually find that the answer that gets returned doesn't have all the values in it, particularly if you want the substrings put in a particular order. Consider the following, which on my machine only returns ',Accessories', when I wanted it to return ',Bikes,Clothing,Components,Accessories':
DECLARE @categories varchar(200)
SET @categories = NULL

SELECT @categories = COALESCE(@categories + ',','') + Name
FROM Production.ProductCategory
ORDER BY LEN(Name)

SELECT @categories
Far better is to use a method which does take order into consideration, and which has been included in SQL2005 specifically for the purpose of string concatenation - FOR XML PATH('')
SELECT ',' + Name
FROM Production.ProductCategory
ORDER BY LEN(Name)
FOR XML PATH('')
In the post I made recently comparing GROUP BY and DISTINCT when using subqueries, I demonstrated the use of FOR XML PATH(''). Have a look at this and you'll see how it works in a subquery. The 'STUFF' function is only there to remove the leading comma.
USE tempdb;
GO
CREATE TABLE t1 (id INT, NAME VARCHAR(MAX));
INSERT t1 values (1,'Jamie');
INSERT t1 values (1,'Joe');
INSERT t1 values (1,'John');
INSERT t1 values (2,'Sai');
INSERT t1 values (2,'Sam');
GO
select
id,
stuff((
select ',' + t.[name]
from t1 t
where t.id = t1.id
order by t.[name]
for xml path('')
),1,1,'') as name_csv
from t1
group by id
;
FOR XML PATH is one of the only situations in which you can use ORDER BY in a subquery. The other is TOP. And when you use an unnamed column and FOR XML PATH(''), you will get a straight concatenation, with no XML tags. This does mean that the strings will be HTML Encoded, so if you're concatenating strings which may have the < character (etc), then you should maybe fix that up afterwards, but either way, this is still the best way of concatenating strings in SQL Server 2005.
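For reference, the usual way to avoid that encoding (a sketch against the article's AdventureWorks table) is to add the TYPE directive and unwrap the XML with .value():

```sql
-- '<' and '&' survive because the result is extracted as a string
-- from a typed XML value rather than cast from entitized text.
SELECT STUFF((SELECT ',' + Name
              FROM Production.ProductCategory
              ORDER BY LEN(Name)
              FOR XML PATH(''), TYPE
             ).value('.', 'nvarchar(max)'), 1, 1, '');
```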
Building on mwigdahl's answer. If you also need to do grouping, here is how to get it to look like
group, csv
'group1', 'paul, john'
'group2', 'mary'
--drop table #user
create table #user (groupName varchar(25), username varchar(25))
insert into #user (groupname, username) values ('apostles', 'Paul')
insert into #user (groupname, username) values ('apostles', 'John')
insert into #user (groupname, username) values ('family','Mary')
select
g1.groupname
, stuff((
select ', ' + g.username
from #user g
where g.groupName = g1.groupname
order by g.username
for xml path('')
),1,2,'') as name_csv
from #user g1
group by g1.groupname
You can use this query to do the above task:
DECLARE @test NVARCHAR(max)
SELECT @test = COALESCE(@test + ',', '') + field2 FROM #test
SELECT field2 = @test
For details and a step-by-step explanation, visit the following link:
http://oops-solution.blogspot.com/2011/11/sql-server-convert-table-column-data.html
DECLARE @EmployeeList varchar(100)

SELECT @EmployeeList = COALESCE(@EmployeeList + ', ', '') +
    CAST(Emp_UniqueID AS varchar(5))
FROM SalesCallsEmployees
WHERE SalCal_UniqueID = 1

SELECT @EmployeeList
source:
http://www.sqlteam.com/article/using-coalesce-to-build-comma-delimited-string
In SQLite this is simpler. I think there are similar implementations for MySQL, MS SQL and Oracle:
CREATE TABLE Beatles (id integer, name string);
INSERT INTO Beatles VALUES (1, 'Paul');
INSERT INTO Beatles VALUES (2, 'John');
INSERT INTO Beatles VALUES (3, 'Ringo');
INSERT INTO Beatles VALUES (4, 'George');
SELECT GROUP_CONCAT(name, ',') FROM Beatles;
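For completeness: SQL Server eventually gained an equivalent aggregate, STRING_AGG, in SQL Server 2017 (so it is not available in the 2000-2008-era versions most of these answers target). A sketch against the question's Users table:

```sql
-- WITHIN GROUP makes the element order deterministic.
SELECT STRING_AGG(username, ', ') WITHIN GROUP (ORDER BY username) AS userlist
FROM Users;
```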
You can use STUFF() to convert rows to comma-separated values:
select
EmployeeID,
stuff((
SELECT ',' + FPProjectMaster.GroupName
FROM FPProjectInfo AS t INNER JOIN
FPProjectMaster ON t.ProjectID = FPProjectMaster.ProjectID
WHERE (t.EmployeeID = FPProjectInfo.EmployeeID)
And t.STatusID = 1
ORDER BY t.ProjectID
for xml path('')
),1,1,'') as name_csv
from FPProjectInfo
group by EmployeeID;
Thanks @AlexKuznetsov for the reference to get this answer.
A clean and flexible solution in MS SQL Server 2005/2008 is to create a CLR aggregate function.
You'll find quite a few articles (with code) on google.
It looks like this article walks you through the whole process using C#.
If you're executing this through PHP, what about this?
$hQuery = mysql_query("SELECT * FROM users");
$hOut = "";
while ($hRow = mysql_fetch_array($hQuery)) {
    $hOut .= $hRow['username'] . ", ";
}
$hOut = substr($hOut, 0, strlen($hOut) - 2); // drop the trailing ", "
echo $hOut;