Query not working in Execute SQL Task in the SSIS package

This query works fine in the query window of SQL Server 2005, but throws an error when I run it in an Execute SQL Task in the SSIS package.
declare @VarExpiredDays int
Select @VarExpiredDays = Value1 From dbo.Configuration (nolock) where Type=11
DECLARE @VarENDDateTime datetime, @VarStartDateTime datetime
SET @VarStartDateTime = GETDATE() - @VarExpiredDays
SET @VarENDDateTime = GETDATE();
select @VarStartDateTime
select @VarENDDateTime
SELECT * FROM
(SELECT CONVERT(Varchar(11),@VarStartDateTime,106) AS VarStartDateTime) A,
(SELECT CONVERT(Varchar(11),@VarENDDateTime,106) AS VarENDDateTime) B
What is the issue here?

Your intention is to retrieve the values of start and end and assign those into SSIS variables.
As @Diego noted above, those two SELECTs are going to cause trouble. With the Execute SQL Task, your result set options are None, Single Row, Full result set and XML. Discarding the XML option because I don't want to deal with it, and None because we want rows back, our options are Single Row or Full. We could use Full, but then we'd need to return values of the same data type and the processing gets much more complicated.
By process of elimination, that leads us to using a resultset of Single Row.
Query aka SQLStatement
I corrected the supplied query by simply removing the two aforementioned SELECTS. The final select can be simplified to the following (no need to put them into derived tables)
SELECT
    CONVERT(Varchar(11), @VarStartDateTime, 106) AS VarStartDateTime
,   CONVERT(Varchar(11), @VarENDDateTime, 106) AS VarENDDateTime
Full query used below
declare @VarExpiredDays int
-- I HARDCODED THIS
Select @VarExpiredDays = 10
DECLARE @VarENDDateTime datetime, @VarStartDateTime datetime
SET @VarStartDateTime = GETDATE() - @VarExpiredDays
SET @VarENDDateTime = GETDATE();
/*
select @VarStartDateTime
select @VarENDDateTime
*/
SELECT * FROM
(SELECT CONVERT(Varchar(11),@VarStartDateTime,106) AS VarStartDateTime) A,
(SELECT CONVERT(Varchar(11),@VarENDDateTime,106) AS VarENDDateTime) B
Verify the Execute SQL Task runs as expected. At this point, it simply becomes a matter of wiring up the outputs to SSIS variables. As you can see in the results window below, I created two package-level variables, StartDateText and EndDateText, of type String with default values of an empty string. You can see in the Locals window that they have values assigned that correspond to @VarExpiredDays = 10 in the supplied source query.
Getting there is simply a matter of configuring the Result Set tab of the Execute SQL Task. The hardest part of this is ensuring you have a correct mapping between source system type and SSIS type. With an OLE DB connection, the Result Name has no bearing on what the column is called in the query. It is simply a matter of referencing columns by their ordinal position (0 based counting).
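As a sketch of that mapping (using the corrected query and the StartDateText/EndDateText variables from this example), the columns are picked up strictly by position on the Result Set tab:

-- With an OLE DB connection the column aliases below are ignored; on the Result Set tab,
-- Result Name 0 maps to User::StartDateText and Result Name 1 maps to User::EndDateText.
SELECT
    CONVERT(Varchar(11), @VarStartDateTime, 106) AS VarStartDateTime  -- ordinal 0
,   CONVERT(Varchar(11), @VarENDDateTime, 106) AS VarENDDateTime      -- ordinal 1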
Final thought: I find it better to keep things in their base type, such as a datetime data type, and let the interface format it into a pretty, localized value.
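A minimal sketch of that alternative, assuming the hard-coded value from above and that the two receiving SSIS variables would be of type DateTime rather than String (an assumption, not part of the original example):

-- Return the raw datetime values in a Single Row result set and leave the
-- formatting to the consuming layer; ordinals 0 and 1 would map to two
-- DateTime SSIS variables.
DECLARE @VarExpiredDays int;
DECLARE @VarStartDateTime datetime, @VarENDDateTime datetime;

SET @VarExpiredDays = 10;  -- hardcoded, as in the example above
SET @VarENDDateTime = GETDATE();
SET @VarStartDateTime = GETDATE() - @VarExpiredDays;

SELECT @VarStartDateTime AS VarStartDateTime,
       @VarENDDateTime AS VarENDDateTime;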

You have more than one output type: you have two variables and one query.
You need to select only one on the "ResultSet" property.
Are you mapping these to the output parameters?
select @VarStartDateTime
select @VarENDDateTime


How to include a hard-coded value in the output from a MySQL query?

I've created a MySQL sproc which returns 3 separate result sets. I'm using the npm mysql package downstream to execute the sproc and get a result structured as JSON with the 3 result sets. I need the ability to filter the JSON result sets that are returned based on some type of indicator in each result set. For example, if I wanted to get the result set from the JSON response which deals specifically with Suppliers, then I could use some type of JS filter similar to this:
var supplierResultSet = mySqlJsonResults.filter(x => x.ResultType === 'SupplierResults');
I think SQL Server provides the ability to include a hard-coded column value in a SQL result set like this:
select
'SupplierResults',
*
from
supplier
However, this approach appears to be invalid in MySQL, because MySQL Workbench is telling me that the sproc syntax is invalid and won't let me save the changes. Do you know if something like what I'm trying to achieve is possible in MySQL? If not, can you recommend alternative approaches that would help me achieve my ultimate goal of including some type of fixed indicator in each result set, to provide a handle for downstream filtering of the JSON response?
If I followed you correctly, you just need to prefix * with the table name or alias:
select 'SupplierResults' hardcoded, s.* from supplier s
As far as I know, this is the SQL standard: select * is valid only when no other expression is added in the select clause. SQL Server is lax about this, but most other databases follow the standard.
It is also a good idea to assign a name to the column that contains the hardcoded value (I named it hardcoded in the above query).
In MySQL you can simply put the * first:
SELECT *, 'SupplierResults'
FROM supplier
Demo on dbfiddle
To be more specific, in your case you would need to do this in your query:
select
'SupplierResults',
supplier.* -- <-- this
from
supplier
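Applied to the stored procedure described in the question, each of the three SELECTs could carry its own tag column. A rough sketch (the column name ResultType and the two extra table names are assumptions, not from the original post):

-- Tag each result set so it can be filtered downstream, e.g.
-- mySqlJsonResults.filter(x => x.ResultType === 'SupplierResults')
SELECT 'SupplierResults' AS ResultType, s.* FROM supplier s;
SELECT 'OrderResults' AS ResultType, o.* FROM orders o;       -- assumed table
SELECT 'ProductResults' AS ResultType, p.* FROM products p;   -- assumed table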
Try this
create table a (f1 int);
insert into a values (1);
select 'xxx', f1, a.* from a;
Basically, if there are other fields in the select, prefix '*' with the table name or alias.

How to compare two tables' row counts; if the counts match then OK, if not then restart the SSIS package

I have made the SSIS package in which I made the data flow for incremental data. The source and destination server IPs are different. Below you can find the control flow and data flow diagrams of my package.
The package is working fine.
In the Execute SQL Task: it controls the log table and starts the incremental task.
The query which I used is:
insert into audit_log (
Packagename,
process_date,
start_datetime,
end_datetime,
Record_processed,
status
)values('CRM-TO-TRANSORGDB',null,GETDATE(),null,null,null);
select MAX(ID) as ID,MAX(process_date) as proc_date from audit_log where Packagename ='CRM-TO-TRANSORGDB' ;
Store the ID and proc_date in variables.
In the Execute SQL Task 1: it just updates the log table.
UPDATE audit_log
SET
    process_date = ?,
    end_datetime = GETDATE(),
    status = 'SUCCESS',
    record_processed = ?
WHERE (packagename = 'CRM-TO-TRANSORGDB') AND ID = ? ;
This is the query we have used to update the log table.
In the Data Flow we simply fetch all the records and put them into the destination table.
This is all I have done.
But my questions are:
1) How to compare the total row counts of the source table and the destination table in the SSIS package?
2) If they don't match, how do I restart my task automatically?
@thomas, as per your instructions I have done the following:
1) I have made the Execute SQL Tasks for source and destination.
2) Added the Execute Package Task and added the condition for when the counts do not match, with the expression row_count_src != row_count_dest.
In Source_table_count I have used the query below:
select count(SubOrderID) as row_count_src from fact_suborder_journey
WHERE Suborderdate between '2016-06-01' and GETDATE()-1 ;
In dest_table_count I have used the query below:
select count(SubOrderID) as row_count_dest from fact_suborder_journey
WHERE Suborderdate between '2016-06-01' and GETDATE()-1 ;
I have added the two variables as Int64 in the SSIS package and mapped them in the result set; below you can find a picture of what I have done.
But after doing all this I am getting this error:
[Execute SQL Task] Error: An error occurred while assigning a value to variable "row_count_src": "The type of the value being assigned to variable "User::row_count_src" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.".
I haven't tested this completely, but you might be able to do something like this. It creates a loop of your package and will keep executing as long as your count variables are different from each other.
What have I done?
First I have a Data Flow Task which moves data from source to destination.
Then I have an Execute SQL Task which counts all rows from TableA and maps the count to variable Count1 (the source table).
Then I have an Execute SQL Task which counts all rows from TableB and maps the count to variable Count2 (the destination table).
Then I create an Execute Package Task which references the package itself, and I make a precedence constraint with an expression saying Count1 != Count2.
If they are different, you want to restart the task. If they are equal, the last Execute Package Task will never be executed.
Hope that is something like what you need?
If I understand your challenge correctly...
In the Data Flow Task, use a RowCount transformation between source and destination to capture the rows written to the destination. This count will be stored in a variable.
In the control flow, get the max row count available from the log table and store that in a variable (a sketch of this query follows below).
Create an Execute Package Task that executes this same package, and put a precedence constraint before it that checks whether the variable from Step 1 <> the variable from Step 2.
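A minimal sketch of that log-table query, assuming the audit_log table from the question and that its Record_processed column holds the row count written by the last run:

-- Hypothetical query for Step 2: fetch the most recently logged row count for this package.
SELECT TOP 1 Record_processed AS row_count_logged
FROM audit_log
WHERE Packagename = 'CRM-TO-TRANSORGDB'
ORDER BY ID DESC;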

Common Table Expressions -- Using a Variable in the Predicate

I've written a common table expression to return hierarchical information and it seems to work without issue if I hard code a value into the WHERE statement. If I use a variable (even if the variable contains the same information as the hard coded value), I get the error The maximum recursion 100 has been exhausted before statement completion.
This is easier shown with a simple example (note, I haven't included the actual code for the CTE just to keep things clearer. If you think it's useful, I can certainly add it).
This Works
WITH Blder
AS
(-- CODE IS HERE )
SELECT
*
FROM Blder as b
WHERE b.PartNo = 'ABCDE';
This throws the Max Recursion Error
DECLARE @part CHAR(25);
SET @part = 'ABCDE'
WITH Blder
AS
(-- CODE IS HERE )
SELECT
*
FROM Blder as b
WHERE b.PartNo = @part;
Am I missing something silly? Or does the SQL engine handle hardcoded values and parameter values differently in this type of scenario?
Kindly put a semicolon at the end of your variable assignment statement:
SET @part = 'ABCDE';
Your SELECT statement isn't the problem: the SQL Server Query Optimizer is able to optimize away the potential cycle when fed the literal string, but not when it's fed a variable, in which case it uses the plan developed from the statistics.
SQL Server 2016 improved on the Query Optimizer, so if you could migrate your DB to SQL Server 2016 or newer, either with the DB compatibility level set to 130 or higher (for SQL Server 2016 and up), or have it kept at 100 (for SQL Server 2008) but with OPTION (USE HINT ('ENABLE_QUERY_OPTIMIZER_HOTFIXES')) added to the bottom of your SELECT statement, you should get the desired result without the max recursion error.
If you are stuck on SQL Server 2008, you could also add OPTION (RECOMPILE) to the bottom of your SELECT statement to create an ad hoc query plan that would be similar to the one that worked correctly.
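As a sketch of that last suggestion, keeping the same placeholder CTE as above and simply appending the hint to the variable version:

DECLARE @part CHAR(25);
SET @part = 'ABCDE';
WITH Blder
AS
(-- CODE IS HERE )
SELECT
*
FROM Blder as b
WHERE b.PartNo = @part
OPTION (RECOMPILE); -- build an ad hoc plan using the current value of @part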

SSIS Execute SQL Task error no rows returned

I am a bit new to SSIS and was given a task to send mail to particular stores based on Purchase Orders -> PONumber.
The steps should be as follows:
1) Take an XML file from a particular folder
2) Get the PONumber from that file
3) Write a query to fetch all the store email addresses for the PONumbers
4) Send a mail to the particular restaurant
The screenshot below shows the package I had created. The only thing I am getting an issue with is the Execute SQL Task; I'm not sure what the exact cause is.
Could you please help on how I can debug this? This was working fine before, but suddenly it started showing errors.
The Execute SQL Task is expecting results from the query, but is not getting any. Maybe you could use SQL Server Profiler to catch the exact SQL that is executed on SQL Server. Then you can use that SQL in a query window to troubleshoot what it returns or why it is not giving any results.
Edit.
With your current additional information, the interesting place is the "Parameter Mapping" page, which you did not include. You should link the SSIS variable to the query parameter there, as Matt explained. SSIS does NOT link your SSIS variables and query parameters automatically, even if they have the same names.
@dvlpr is correct: your problem is that you are getting NO results when Execute SQL Task 1 needs a single result.
The code you pasted is a little unclear as to which code is where, but I will assume the first part is the code you use in the SSIS Execute SQL Task and the latter is an example in SSMS. If that is the case, the problem is that you are assigning the variable a value of 0 in the script itself, and I assume there is no PONUMBER that is 0:
Declare @POID as Varchar(50)
Set @POID = 0
WHERE (BizTalk_POA_HEADER.PONUMBER = @POID)
If you want to pass in the PONUMBER from your first Data Flow Task, you need to load it into a variable, use that variable in your Execute SQL Task, and make sure you set up the parameter mapping correctly when doing so. Here is one SO question on parameters that will help: How to pass variable as a parameter in Execute SQL Task SSIS? And here is a use of an expression task in a Data Flow Task to set the variable's value: SSIS set result set from data flow to variable (note: use the non-accepted answer; it was added later and is for 2012+, while the original was for 2008).
Next, unless you are guaranteed only one result, you will also need to add TOP 1 to your SELECT statement, because if you get more than one result you will get a different error again.
EDIT Per all of the comments:
So it looks like you are using an ADO.NET connection, which allows you to use named parameters. There are restrictions if you don't use that (https://msdn.microsoft.com/en-us/library/cc280502.aspx). The parameter mapping looks correct, and the result set should be fine. As for your error, I can't say, because you haven't posted the exact error, so I cannot know what the problem is. If you use ADO.NET with your current Execute SQL Task configuration in the images, you do have a couple of problems: you are trying to declare the variable that you want to pass as a parameter, and that doesn't work; you need to remove that DECLARE statement. I suspect all you really need to do is modify your SQL input to be:
SELECT DISTINCT
    BizTalk_POA_HEADER.PONUMBER,
    FAN_Suppliers.SupplierName,
    FAN_Company_Details.CompanyName,
    FAN_Company_Details.[PrimaryEmail],
    BizTalk_POA_HEADER.[DeliveryDate]
FROM BizTalk_POA_HEADER
INNER JOIN FAN_PO_Details
    ON BizTalk_POA_HEADER.PONUMBER = CONCAT('PO', FAN_PO_Details.PoNumber)
INNER JOIN FAN_PO
    ON FAN_PO_Details.PurchaseOrderID = FAN_PO.PurchaseOrderID
INNER JOIN FAN_SupplierDetails
    ON FAN_PO.SupplierDetailsID = FAN_SupplierDetails.SuppliersDetailsID
INNER JOIN FAN_Suppliers
    ON FAN_SupplierDetails.SupplierID = FAN_Suppliers.SupplierID
INNER JOIN FAN_Company_Details
    ON FAN_PO.CompanyID = FAN_Company_Details.CompanyDetailsID
WHERE (BizTalk_POA_HEADER.PONUMBER = @POID)
Just get rid of the DECLARE @POID and SET @POID = 0 for a couple of reasons: 1) it is redundant when you have set up parameter mapping, 2) SSIS doesn't like it and will throw an error, and 3) you are setting a value of 0 to it, which means it would always be 0.

Table valued parameters for SSRS 2008

We have a requirement to generate SSRS reports where we need to convert multi-valued string and integer parameters to a datatable and pass it to a stored procedure. The stored procedure contains multiple table-type parameters. Earlier we used varchar(8000), but we were exceeding that data type's limit. Then we thought of introducing the datatable concept, but we were not aware of how to pass the values from SSRS.
We found a solution from GruffCode on Using Table-Valued Parameters With SQL Server Reporting Services.
The solution solved my problem, and we're able to generate reports. However, sometimes SSRS returns the two following errors:
An error has occurred during report processing.
Query execution failed for dataset 'DSOutput'.
String or binary data would be truncated. The statement has been terminated.
And
An unexpected error occurred in Report Processing.
Exception of type 'System.OutOfMemoryException' was thrown.
I'm not sure when and where it's causing the issue.
The approach outlined in that blog post relies on building an enormous string in memory in order to load all of the selected parameter values into the table-valued parameter instance. If you are selecting a very large number of values to pass into the query I could see it potentially causing the 'System.OutOfMemoryException' while trying to build the string containing the insert statements that will load the parameter.
As for the 'string or binary data would be truncated' error that sounds like it's originating within the query or stored procedure that the report is using to gather its data. Without seeing what that t-sql looks like I couldn't say why that's happening, but I'd guess that it's also somehow related to selecting a very large number of parameter values.
Unfortunately I'm not sure that there's a workaround for this, other than trying to see if you could figure out a way to select fewer parameter values. Here's a couple of rough ideas:
If you have a situation where users might select a handful of parameter values or all parameter values then you could have the query simply take a very simple boolean value indicating that all values were selected rather than making the report send all of the values in through a parameter.
You could also consider "zooming out" of your parameter values a bit and grouping them together somehow if they lend themselves to that. That way users would be selecting from a smaller number of parameter values that represent a group of the individual values all rolled up.
I'm not a fan of using a Text parameter and EXEC in the SQL statement as the article you referenced describes, since doing so is subject to SQL injection. The default SSRS behavior with a multi-value parameter substitutes a comma-separated list of the values directly in place of the parameter when the query is sent to the SQL Server. That works great for simple IN queries, but can be undesirable elsewhere. This behavior can be bypassed by setting the Parameter Value on the DataSet to an expression of =Join(Parameters!CustomerIDs.Value, ", "). Once you have done that, you can get a table variable loaded by using the following SQL:
DECLARE @CustomerIDsTable TABLE (CustomerID int NOT NULL PRIMARY KEY)
INSERT INTO @CustomerIDsTable (CustomerID)
SELECT DISTINCT TextNodes.Node.value(N'.', N'int') AS CustomerID
FROM (
SELECT CONVERT(XML, N'<A>' + COALESCE(N'<e>' + REPLACE(@CustomerIDs, N',', N'</e><e>') + N'</e>', '') + N'</A>') AS pNode
) AS xmlDocs
CROSS APPLY pNode.nodes(N'/A/e') AS TextNodes(Node)
-- Do whatever with the resulting table variable, i.e.,
EXEC rpt_CustomerTransactionSummary @StartDate, @EndDate, @CustomerIDsTable
If using text instead of integers then a couple of lines get changed like so:
DECLARE @CustomerIDsTable TABLE (CustomerID nvarchar(MAX) NOT NULL)  -- nvarchar(MAX) cannot be an index key column, so no PRIMARY KEY here
INSERT INTO @CustomerIDsTable (CustomerID)
SELECT DISTINCT TextNodes.Node.value(N'.', N'nvarchar(MAX)') AS CustomerID
FROM (
SELECT CONVERT(XML, N'<A>' + COALESCE(N'<e>' + REPLACE(@CustomerIDs, N',', N'</e><e>') + N'</e>', '') + N'</A>') AS pNode
) AS xmlDocs
CROSS APPLY pNode.nodes(N'/A/e') AS TextNodes(Node)
-- Do whatever with the resulting table variable, i.e.,
EXEC rpt_CustomerTransactionSummary @StartDate, @EndDate, @CustomerIDsTable
This approach also works well for handling user-entered strings of comma-separated items.
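For example, to try the split logic outside of SSRS, a literal can stand in for the report parameter (the value below is purely illustrative):

-- Hypothetical test harness for the split above, run directly in SSMS.
DECLARE @CustomerIDs nvarchar(MAX);
SET @CustomerIDs = N'3, 7, 12';  -- roughly what =Join(Parameters!CustomerIDs.Value, ", ") produces

DECLARE @CustomerIDsTable TABLE (CustomerID int NOT NULL PRIMARY KEY)
INSERT INTO @CustomerIDsTable (CustomerID)
SELECT DISTINCT TextNodes.Node.value(N'.', N'int') AS CustomerID
FROM (
SELECT CONVERT(XML, N'<A>' + COALESCE(N'<e>' + REPLACE(@CustomerIDs, N',', N'</e><e>') + N'</e>', '') + N'</A>') AS pNode
) AS xmlDocs
CROSS APPLY pNode.nodes(N'/A/e') AS TextNodes(Node)

SELECT CustomerID FROM @CustomerIDsTable;  -- expect 3, 7, 12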