Biml (or SSIS) doesn't seem to want to automatically apply the row delimiter. Unless the last column in the column list has its delimiter manually set to the intended row delimiter, SSIS does not terminate the rows correctly. I'm guessing that SSIS just infers the row ending from the last column's delimiter, even when a row delimiter is set in the connection properties. Does anyone know of a fix for this other than working around the problem by setting the last column's delimiter to the intended row delimiter (see the "T" column below)?
I checked the properties of the output connection and it properly states the RowDelimiter as CRLF, but if you look at the exported file (without the workaround column) there are no line breaks.
Here is the Biml file:
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<Connections>
<OleDbConnection Name="Source" ConnectionString="Provider=SQLNCLI11;Server=localhost;Initial Catalog=test;Integrated Security=SSPI;">
</OleDbConnection>
<FlatFileConnection Name="Created" FilePath="D:\\created.dat" FileFormat="Changed">
<Expressions>
<Expression PropertyName="ConnectionString">@[$Package::FileDropRoot] + "\\"+REPLACE((DT_WSTR, 10)(DT_DBDATE)GETDATE(),"-","") + "." + "created.dat"</Expression>
</Expressions>
</FlatFileConnection>
</Connections>
<FileFormats>
<FlatFileFormat Name="Changed" ColumnNamesInFirstDataRow="true" HeaderRowDelimiter="CRLF" RowDelimiter="CRLF">
<Columns>
<Column Name="col1" DataType="String" Delimiter="Comma" ColumnType="Delimited" />
<Column Name="col2" DataType="String" Delimiter="Comma" ColumnType="Delimited"/>
<!-- this must be here in order to terminate the row -->
<Column Name="T" DataType="String" Delimiter="Comma" ColumnType="CRLF"/>
</Columns>
</FlatFileFormat>
</FileFormats>
<Packages>
<Package Name="Test" ConstraintMode="Linear" ProtectionLevel="EncryptSensitiveWithUserKey">
<Tasks>
<Dataflow Name="Test">
<Transformations>
<OleDbSource Name="Select Statement" ConnectionName="Source">
<DirectInput>
Select * From Test
</DirectInput>
</OleDbSource>
<FlatFileDestination Name="UpdateFile" ConnectionName="Created">
</FlatFileDestination>
</Transformations>
</Dataflow>
</Tasks>
<Parameters>
<Parameter Name="FileDropRoot" DataType="String">D:\FileDrop</Parameter>
</Parameters>
</Package>
</Packages>
</Biml>
Here is the SSIS "Code" with the T Column above, please note that a Row Delimiter is specified (Line Breaks in export file):
<DTS:ConnectionManager DTS:CreationName="FLATFILE" DTS:DTSID="{9CDCB838-2A42-4CCA-A59C-DC60E9B3A967}" DTS:ObjectName="Created" DTS:refId="Package.ConnectionManagers[Created]">
<DTS:ObjectData>
<DTS:ConnectionManager DTS:CodePage="1252" DTS:ColumnNamesInFirstDataRow="True" DTS:ConnectionString="D:\\created.dat" DTS:Format="Delimited" DTS:HeaderRowDelimiter="_x000D__x000A_" DTS:LocaleID="1033" DTS:RowDelimiter="_x000D__x000A_" DTS:TextQualifier="_x003C_none_x003E_" DTS:Unicode="True">
<DTS:FlatFileColumns>
<DTS:FlatFileColumn DTS:ColumnDelimiter="_x002C_" DTS:ColumnType="Delimited" DTS:CreationName="" DTS:DataType="303" DTS:DTSID="{D64391D4-4551-44E9-8539-4C473EB700AA}" DTS:ObjectName="col1" DTS:TextQualified="True">
</DTS:FlatFileColumn>
<DTS:FlatFileColumn DTS:ColumnDelimiter="_x002C_" DTS:ColumnType="Delimited" DTS:CreationName="" DTS:DataType="303" DTS:DTSID="{974ED1AD-7D72-4A65-A877-BADEC09DAF20}" DTS:ObjectName="col2" DTS:TextQualified="True">
</DTS:FlatFileColumn>
<DTS:FlatFileColumn DTS:ColumnDelimiter="_x000D__x000A_" DTS:ColumnType="Delimited" DTS:CreationName="" DTS:DataType="303" DTS:DTSID="{4347C3C1-39BD-40B1-B38F-526730FE7BFB}" DTS:ObjectName="T" DTS:TextQualified="True">
</DTS:FlatFileColumn>
</DTS:FlatFileColumns>
</DTS:ConnectionManager>
</DTS:ObjectData>
<DTS:PropertyExpression DTS:Name="ConnectionString">@[$Package::FileDropRoot] + "\\"+REPLACE((DT_WSTR, 10)(DT_DBDATE)GETDATE(),"-","") + "." + "created.dat"</DTS:PropertyExpression>
</DTS:ConnectionManager>
Here is the SSIS "Code" without the T Column above, please note that a Row Delimiter is specified (No line breaks in export file):
<DTS:ConnectionManager DTS:CreationName="FLATFILE" DTS:DTSID="{79E9C576-FD53-4D4F-A07C-AED8D4CE72E6}" DTS:ObjectName="Created" DTS:refId="Package.ConnectionManagers[Created]">
<DTS:ObjectData>
<DTS:ConnectionManager DTS:CodePage="1252" DTS:ColumnNamesInFirstDataRow="True" DTS:ConnectionString="D:\\created.dat" DTS:Format="Delimited" DTS:HeaderRowDelimiter="_x000D__x000A_" DTS:LocaleID="1033" DTS:RowDelimiter="_x000D__x000A_" DTS:TextQualifier="_x003C_none_x003E_" DTS:Unicode="True">
<DTS:FlatFileColumns>
<DTS:FlatFileColumn DTS:ColumnDelimiter="_x002C_" DTS:ColumnType="Delimited" DTS:CreationName="" DTS:DataType="303" DTS:DTSID="{BBCA22D2-5D3E-47AC-AA0A-413C0C1A5CB2}" DTS:ObjectName="col1" DTS:TextQualified="True">
</DTS:FlatFileColumn>
<DTS:FlatFileColumn DTS:ColumnDelimiter="_x002C_" DTS:ColumnType="Delimited" DTS:CreationName="" DTS:DataType="303" DTS:DTSID="{44E567E4-BE78-432C-A8AC-C388E8BCFADC}" DTS:ObjectName="col2" DTS:TextQualified="True">
</DTS:FlatFileColumn>
</DTS:FlatFileColumns>
</DTS:ConnectionManager>
</DTS:ObjectData>
<DTS:PropertyExpression DTS:Name="ConnectionString">@[$Package::FileDropRoot] + "\\"+REPLACE((DT_WSTR, 10)(DT_DBDATE)GETDATE(),"-","") + "." + "created.dat"</DTS:PropertyExpression>
</DTS:ConnectionManager>
Here is the script I used to create the table on the source database connection:
CREATE TABLE Test(col1 varchar(25),col2 varchar(25))
INSERT INTO Test
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2' UNION all
SELECT '1','2'
You have an incorrect assumption: the last column's delimiter should in fact be CRLF. As you'll see in the linked examples, counter-intuitive as it is, your flat file format should use your row delimiter as the column delimiter for the final column, while every other column uses your "standard" column delimiter. And yes, that repeats what the header declaration already says the row delimiter should be. In your example, that means giving col2 a Delimiter of "CRLF" (and then the extra "T" column isn't needed at all).
http://bimlscript.com/Snippet/Details/18
http://bimlscript.com/Snippet/Details/54
I'd like to use FOR JSON to build a data payload for an HTTP POST call. My source table can be recreated with this snippet:
drop table if exists #jsonData;
drop table if exists #jsonColumns;
select
'carat' [column]
into #jsonColumns
union
select 'cut' union
select 'color' union
select 'clarity' union
select 'depth' union
select 'table' union
select 'x' union
select 'y' union
select 'z'
select
0.23 carat
,'Ideal' cut
,'E' color
,'SI2' clarity
,61.5 depth
,55.0 [table]
,3.95 x
,3.98 y
,2.43 z
into #jsonData
union
select 0.21,'Premium','E','SI1',59.8,61.0,3.89,3.84,2.31 union
select 0.29,'Premium','I','VS2',62.4,58.0,4.2,4.23,2.63 union
select 0.31,'Good','J','SI2',63.3,58.0,4.34,4.35,2.75
;
The data needs to be formatted as follows:
{
"columns":["carat","cut","color","clarity","depth","table","x","y","z"],
"data":[
[0.23,"Ideal","E","SI2",61.5,55.0,3.95,3.98,2.43],
[0.21,"Premium","E","SI1",59.8,61.0,3.89,3.84,2.31],
[0.23,"Good","E","VS1",56.9,65.0,4.05,4.07,2.31],
[0.29,"Premium","I","VS2",62.4,58.0,4.2,4.23,2.63],
[0.31,"Good","J","SI2",63.3,58.0,4.34,4.35,2.75]
]
}
My attempt thus far is as follows:
select
(select * from #jsonColumns for json path) as [columns],
(select * from #jsonData for json path) as [data]
for json path, without_array_wrapper
However, this returns arrays of objects rather than arrays of values, like so:
{
"columns":[
{"column":"carat"},
{"column":"clarity"},
{"column":"color"},
{"column":"cut"},
{"column":"depth"},
{"column":"table"},
{"column":"x"},
{"column":"y"},
{"column":"z"}
]...
}
How can I limit the arrays to only showing the values?
Honestly, this seems like it's going to be easier with string aggregation rather than using the JSON functionality.
Because you're using SQL Server 2016, you don't have access to STRING_AGG or CONCAT_WS, so the code is a lot longer. You have to make use of FOR XML PATH and STUFF instead and insert all the separators manually (which is why there are so many ',' pieces in the CONCAT expression). This results in the below:
DECLARE @CRLF nchar(2) = NCHAR(13) + NCHAR(10);
SELECT N'{' + @CRLF +
N' "columns":[' + STUFF((SELECT ',' + QUOTENAME(c.[name],'"')
FROM tempdb.sys.columns c
JOIN tempdb.sys.tables t ON c.object_id = t.object_id
WHERE t.[name] LIKE N'#jsonData%' --Like isn't needed if not a temporary table. Use the literal name.
ORDER BY c.column_id ASC
FOR XML PATH(N''),TYPE).value('.','nvarchar(MAX)'),1,1,N'') + N'],' + @CRLF +
N' "data":[' + @CRLF +
STUFF((SELECT N',' + @CRLF +
N' ' + CONCAT('[',JD.carat,',',QUOTENAME(JD.cut,'"'),',',QUOTENAME(JD.color,'"'),',',QUOTENAME(JD.clarity,'"'),',',JD.depth,',',JD.[table],',',JD.x,',',JD.y,',',JD.z,']')
FROM #jsonData JD
ORDER BY JD.carat ASC
FOR XML PATH(N''),TYPE).value('.','nvarchar(MAX)'),1,3,N'') + @CRLF +
N' ]' + @CRLF +
N'}';
DB<>Fiddle
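For comparison only, and not something the SQL Server 2016 constraint above allows: on SQL Server 2017 or later the FOR XML PATH/STUFF plumbing can be replaced with STRING_AGG. This is a rough sketch against the same #jsonData temp table from the question (the @RowSep variable exists only because STRING_AGG wants its separator as a literal or a variable; fine for this tiny data set, but large results would need the inputs cast to nvarchar(max)):
DECLARE @CRLF nchar(2) = NCHAR(13) + NCHAR(10);
DECLARE @RowSep nvarchar(10) = N',' + @CRLF;  -- separator for the data rows
SELECT N'{' + @CRLF +
       N'  "columns":[' +
       (SELECT STRING_AGG(QUOTENAME(c.[name], '"'), ',') WITHIN GROUP (ORDER BY c.column_id)
        FROM tempdb.sys.columns c
        JOIN tempdb.sys.tables t ON c.object_id = t.object_id
        WHERE t.[name] LIKE N'#jsonData%') + N'],' + @CRLF +
       N'  "data":[' + @CRLF +
       (SELECT STRING_AGG(N'    ' + CONCAT('[', JD.carat, ',', QUOTENAME(JD.cut, '"'), ',',
                              QUOTENAME(JD.color, '"'), ',', QUOTENAME(JD.clarity, '"'), ',',
                              JD.depth, ',', JD.[table], ',', JD.x, ',', JD.y, ',', JD.z, ']'),
                          @RowSep) WITHIN GROUP (ORDER BY JD.carat)
        FROM #jsonData JD) + @CRLF +
       N'  ]' + @CRLF +
       N'}';
The WITHIN GROUP clauses reproduce the ORDER BY used in the FOR XML version above.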
There is a SQL table mytable that has a column mycolumn.
That column has text in each cell. Each cell may contain substrings such as "this.text/31/" or "this.text/72/" (the numbers in those substrings can be anything) as part of the string.
What SQL query should be executed to display a list of unique such substrings?
P.S. Of course, some cells may contain several such substrings.
And here are the answers to questions from the comments:
The query is supposed to work on SQL Server.
The preferred output should contain the whole substring, not only the numeric part; what sits between the first "/" and the second "/" may not always be just a number.
And the column is (probably) of varchar type.
Example:
mycolumn contains such values:
abcd/eftthis.text/31/sadflh adslkjh
abcd/eftthis.text/44/khjgb ljgnkhj this.text/447/lhkjgnkjh
ljgkhjgadsvlkgnl
uygouyg/this.text/31/luinluinlugnthis.text/31/ouygnouyg
khjgbkjyghbk
The query should display:
this.text/31/
this.text/44/
this.text/447/
How about using a recursive CTE:
CREATE TABLE #myTable
(
myColumn VARCHAR(100)
)
INSERT INTO #myTable
VALUES
('abcd/eftthis.text/31/sadflh adslkjh'),
('abcd/eftthis.text/44/khjgb ljgnkhj this.text/447/lhkjgnkjh'),
('ljgkhjgadsvlkgnl'),
('uygouyg/this.text/31/luinluinlugnthis.text/31/ouygnouyg'),
('khjgbkjyghbk')
;WITH CTE
AS
(
SELECT MyColumn,
CHARINDEX('this.text/', myColumn, 0) AS startPos,
CHARINDEX('/', myColumn, CHARINDEX('this.text/', myColumn, 1) + 10) AS endPos
FROM #myTable
WHERE myColumn LIKE '%this.text/%'
UNION ALL
SELECT T1.MyColumn,
CHARINDEX('this.text/', T1.myColumn, C.endPos) AS startPos,
CHARINDEX('/', T1.myColumn, CHARINDEX('this.text/', T1.myColumn, c.endPos) + 10) AS endPos
FROM #myTable T1
INNER JOIN CTE C
ON C.myColumn = T1.myColumn
WHERE SUBSTRING(T1.MyColumn, C.EndPos, 100) LIKE '%this.text/%'
)
SELECT DISTINCT SUBSTRING(myColumn, startPos, endPos - startPos + 1)
FROM CTE
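To make the CHARINDEX arithmetic easier to follow, here is a small standalone check of the anchor-member logic against one of the sample strings (a sketch; the + 1 keeps the trailing "/" so the output matches the question's expected format):
DECLARE @s varchar(100) = 'abcd/eftthis.text/44/khjgb ljgnkhj this.text/447/lhkjgnkjh';
DECLARE @startPos int = CHARINDEX('this.text/', @s);         -- start of the first match
DECLARE @endPos   int = CHARINDEX('/', @s, @startPos + 10);  -- the '/' that closes the number
SELECT SUBSTRING(@s, @startPos, @endPos - @startPos + 1) AS FirstMatch;  -- this.text/44/
-- The recursive member simply repeats these two CHARINDEX calls starting from @endPos,
-- which is how the second match (this.text/447/) in the same cell gets picked up.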
Having a table named test with the following data:
COLUMN1
aathis.text/31/
this.text/1/
bbbthis.text/72/sksk
Could this be what you are looking for?
select SUBSTR(COLUMN1,INSTR(COLUMN1,'this.text', 1 ),INSTR(COLUMN1,'/',INSTR(COLUMN1,'this.text', 1 )+10) - INSTR(COLUMN1,'this.text', 1 )+1) from test;
result:
this.text/31/
this.text/1/
this.text/72/
I see your problem:
Assume the same table as above but now with the following data:
this.text/77/
xxthis.text/33/xx
xthis.text/11/xxthis.text/22/x
xthis.text/1/x
The following might help you:
SELECT SUBSTR(COLUMN1, INSTR(COLUMN1,'this.text', 1 ,1), INSTR(COLUMN1,'/',INSTR(COLUMN1,'this.text', 1 ,1)+10) - INSTR(COLUMN1,'this.text', 1 ,1)+1) FROM TEST
UNION
SELECT CASE WHEN (INSTR(COLUMN1,'this.text', 1,2 ) >0) THEN
SUBSTR(COLUMN1, INSTR(COLUMN1,'this.text', 1,2 ), INSTR(COLUMN1,'/',INSTR(COLUMN1,'this.text', 1 ,2),2) - INSTR(COLUMN1,'this.text', 1,2 )+1) end FROM TEST;
It will generate the following result:
this.text/1/
this.text/11/
this.text/22/
this.text/33/
this.text/77/
The downside is that you need to add a SELECT statement for every occurrence of "this.text" you might have. If a single cell can contain 100 occurrences of "this.text", that becomes a problem.
SQL> select SUBSTR(column_name,1,9) from tablename;
column_name
this.text
SELECT REGEXP_SUBSTR(column_name,'this.text/[[:digit:]]+/')
FROM table_name
I am running a query in SSRS that uses two common table expressions. The query runs fine in the query designer, but when I press OK and the dataset is formed, the fields in the dataset come from the SELECT * statement rather than from the CTEs. How do I get the columns I created in the CTEs to show up as fields of my dataset in SSRS? Any help is much appreciated.
IF @FilterByEventCode IS NULL
BEGIN
SELECT *
FROM
dbo.Historywithqualityfilter(@FQN, '.Event Code,.Event Description',
Dateadd(mi, -10, @DateStart), @DateStop, 'good', 'KLN-FTVP')
END
ELSE
BEGIN
WITH t1(timestamp, eventcode)
AS (SELECT localtimestamp,
valueasstring
FROM dbo.Historywithqualityfilter (@FQN, '.Event Code',
Dateadd(mi, -10, @DateStart),
@DateStop, 'good', 'KLN-FTVP')
WHERE @FilterByEventCode = valueasstring),
t2(timestamp, eventdescription)
AS (SELECT localtimestamp,
valueasstring
FROM dbo.Historywithqualityfilter (@FQN, '.Event Description',
Dateadd(mi, -10, @DateStart), @DateStop, 'good',
'KLN-FTVP')
)
SELECT *
FROM t1 a
INNER JOIN t2 b
ON a.timestamp = b.timestamp
END
What I've noticed is that SSRS has trouble picking up all the fields from a query like this. Even when you're in the query builder and you set the parameters so that the maximum number of fields appears, clicking "Refresh Fields" will still do what it wants to do.
As I see it, you have only two options. The first is to edit your query so that the maximum number of fields appears no matter what parameters you enter, push "Refresh Fields", and then change the query back to what it was (but be careful not to refresh the fields again, so cancel any request from Report Builder to do so).
The second is to manually create the missing fields.
I'm trying to convert a Hundred Year Date (HYD) format to a regular date format through an SSIS Derived Column transform. For example: convert 41429 to 06/04/2013. I can do it with formatting code within a script (and maybe I simply have to go that route) but feel there has to be a way to do so within a derived column that I'm just not getting. Any help is appreciated.
This is what I came up with. Are you sure your conversion is correct? My answer is 1 day off.
DECLARE @t1 as date = '01/01/1900';
DECLARE @t2 as DATE = '12/31/1900';
DECLARE @hyd as INT;
-- This example shows that we need to add 1
SELECT @hyd = DATEDIFF (d, @t1, @t2) + 1 -- 364 + 1
SELECT @hyd
set @t2 = '06/04/2013';
SELECT @hyd = DATEDIFF (d, @t1, '06/04/2013') + 1 -- 41427
SELECT @hyd
SELECT DATEADD (d, #hyd, '01-JAN-1900')
SELECT DATEADD (d, 41429, '01-JAN-1900')
A hundred year date is a calculation based on the number of days since 1899-12-31. It's an "Excel Thing". It also has a bug in it that you must account for.
The equivalent TSQL logic would be
DECLARE
@HYD int = 41429;
SELECT
@HYD =
CASE
WHEN @HYD > 60
THEN @HYD - 1
ELSE
@HYD
END;
SELECT
DATEADD(d, @HYD, '1899-12-31') AS HYD;
Armed with that formula, I can write the following Expression in a Derived Column Transformation (assuming you have a column named HYD)
(HYD > 60) ? DATEADD("d",HYD - 1,(DT_DATE)"1899-12-31") : DATEADD("d",HYD,(DT_DATE)"1899-12-31")
And the results match: 41429 comes out as the expected 2013-06-04.
--or inline SQL...using this
SELECT
case when ([HYD] > 60) then
DATEADD(day,[HYD] - 1,'1899-12-31')
else
DATEADD(day,[HYD],'1899-12-31')
end 'HYD_conv'
FROM
TableName
--and in the where clause if you like...
WHERE
(case when ([HYD] > 60) then DATEADD(day,[HYD] - 1,'1899-12-31') else DATEADD(day,[HYD],'1899-12-31') end) = '2016-01-14'
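As a quick sanity check (a minimal sketch using the example value from the question), the same formula can be run on its own:
-- 41429 is the example HYD value from the question; the CASE handles the
-- Excel leap-year quirk described above. Expected result: 2013-06-04.
DECLARE @HYD int = 41429;
SELECT DATEADD(day,
               CASE WHEN @HYD > 60 THEN @HYD - 1 ELSE @HYD END,
               '1899-12-31') AS ConvertedDate;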
Given the SQL...
declare @xmlDoc xml
set @xmlDoc = '<people>
<person PersonID="8" LastName="asdf" />
<person PersonID="26" LastName="rtert" />
<person PersonID="33" LastName="dfgh" />
<person PersonID="514" LastName="ukyy" />
</people>'
What would be the SQL to convert that XML into a table with two columns, PersonID and LastName?
SELECT T.c.query('.').value('(//@PersonID)[1]', 'int'),
T.c.query('.').value('(//@LastName)[1]', 'varchar(50)')
FROM @xmlDoc.nodes('/people/person') T(c)
select T.X.value('@PersonID', 'int') as PersonID,
T.X.value('@LastName', 'nvarchar(50)') as LastName
from @xmlDoc.nodes('/people/person') as T(X)
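Both queries shred the XML the same way; the second reads each attribute straight off the person node without the intermediate query('.'). If the shredded rows need to be stored somewhere, a minimal sketch along the same lines (reusing the @xmlDoc variable declared above; #people is just a hypothetical name for the target temp table):
-- Materialize the shredded attributes so they can be joined or updated later.
SELECT T.X.value('@PersonID', 'int')          AS PersonID,
       T.X.value('@LastName', 'nvarchar(50)') AS LastName
INTO   #people                                -- hypothetical temp table
FROM   @xmlDoc.nodes('/people/person') AS T(X);

SELECT PersonID, LastName
FROM   #people;   -- returns 8/asdf, 26/rtert, 33/dfgh, 514/ukyy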