SSIS 2012: How do I change the project's environment variables?

I want to change 2 date project variables, StartDateTime and EndDateTime. I have another variable called RunType. Very simply, I want to read the value of RunType first. If it is set to "Incremental" I want to change the StartDate from whatever it is to 12:00:00 AM yesterday and the EndDate to 11:59:59 PM yesterday. The errors I get from trying to write back the values seem to indicate that writes aren't allowed on project-level variables. Is this true--or is there something I need to do differently when dealing with these project-level variables? I thought of creating package-level variables, a control table, blah blah... It seems like overkill for this.
When I test the package manually by changing the parameter values under the Integration Services catalog/environments--I get the range I expect. This package will run via a SQL Agent job. Is there a pre-SSIS step I can create and execute to do this simple task outside of SSIS?

While the values may be stored in internal.object_parameters, resist the temptation to edit the values in those tables directly. Instead use the supplied methods for manipulation. In this case, the stored procedure catalog.set_environment_variable_value.
Below is a sample of how you can programmatically change the value of an environment variable. I would see this as being a T-SQL job step in the SQL Agent job that runs prior to your package launching, to ensure the correct values for StartDateTime and EndDateTime are set as expected.
DECLARE @var sql_variant;
DECLARE
    @StartDateTime date
,   @EndDateTime datetime
,   @RunType bit = 1;

-- Yesterday at 00:00:00 and yesterday at 23:59:59
SELECT
    @StartDateTime = CAST(DATEADD(d, -1, CURRENT_TIMESTAMP) AS date)
,   @EndDateTime = DATEADD(s, -1, CAST(CAST(CURRENT_TIMESTAMP AS date) AS datetime));

SELECT @StartDateTime, @EndDateTime;

IF (@RunType = 1)
BEGIN
    SET @var = @StartDateTime;
    EXECUTE [SSISDB].[catalog].[set_environment_variable_value]
        @variable_name = N'StartDateTime'
    ,   @environment_name = N'MyEnvironmentName'
    ,   @folder_name = N'MyFolder'
    ,   @value = @var;

    SET @var = @EndDateTime;
    EXECUTE [SSISDB].[catalog].[set_environment_variable_value]
        @variable_name = N'EndDateTime'
    ,   @environment_name = N'MyEnvironmentName'
    ,   @folder_name = N'MyFolder'
    ,   @value = @var;
END
ELSE
BEGIN
    PRINT 'Logic goes here to handle the other conditions for RunType';
END
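If you want to confirm the write took effect before the package step runs, you can query the catalog views. A minimal sketch, assuming the same folder and environment names as above (catalog.folders, catalog.environments, and catalog.environment_variables are the standard SSISDB views; the value column is NULL for sensitive variables):
-- Verify the environment variable values after the update
SELECT
    f.name AS folder_name
,   e.name AS environment_name
,   ev.name AS variable_name
,   ev.value
FROM
    SSISDB.catalog.folders AS f
    INNER JOIN SSISDB.catalog.environments AS e
        ON e.folder_id = f.folder_id
    INNER JOIN SSISDB.catalog.environment_variables AS ev
        ON ev.environment_id = e.environment_id
WHERE
    f.name = N'MyFolder'
    AND e.name = N'MyEnvironmentName'
    AND ev.name IN (N'StartDateTime', N'EndDateTime');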

Related

How do I run an SSIS package from a stored proc using a SQL login, without a job

How do I set up a SQL login to run a stored proc that executes an SSIS, without using a SQL job?
I have a working process (SQL Server 2016) where I build & run an SSIS execution directly from a stored procedure (using a Windows service account). I am using the [catalog].[create_execution] method (so no SQL job).
Works great, dynamic parameters are set for the SSIS and it starts running while control returns to the service account.
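For context, a minimal sketch of the catalog.create_execution pattern being described (the folder, project, package, and parameter names here are placeholders, not the asker's actual values):
DECLARE @execution_id bigint;

-- Create an execution for the deployed package
EXEC SSISDB.catalog.create_execution
    @folder_name = N'MyFolder'
,   @project_name = N'MyProject'
,   @package_name = N'Package.dtsx'
,   @use32bitruntime = 0
,   @execution_id = @execution_id OUTPUT;

-- Set a dynamic parameter (object_type 30 = package parameter, 20 = project parameter)
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id
,   @object_type = 30
,   @parameter_name = N'SomeParameter'
,   @parameter_value = N'SomeValue';

-- Start it; SYNCHRONIZED defaults to off, so control returns immediately
EXEC SSISDB.catalog.start_execution @execution_id;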
Now we have a non-sysadmin user with a SQL login service account who wants to be able to call the same stored proc and run the SSIS.
I know a SQL login cannot run IS Catalog stored procs, so it can't run the stored proc with the SSIS.
The SQL login is used elsewhere so the user does not want to switch this connection to a Windows login, plus we want to keep this login with minimal permissions.
The async nature of the SSIS call is exactly why I want this. The user only waits for the stored proc to kick off the SSIS and does not have to wait for the SSIS to complete before control is returned to the user.
I do NOT want to use a SQL Agent job for the SSIS piece as I know I will sometimes call the stored proc/SSIS while the previous call is still running (runtime is <10 seconds). In this case I need them to run in parallel (the code has no issues running in parallel), and a SQL job can't start (fails) if the previous job is still executing.
I did create a Windows login specifically for this purpose. I thought I could have the SQL login "impersonate" the Windows login but ended up elevating both logins to sysadmin before it ran successfully. It works, but the elevated permissions will not be allowed in Production.
I've read more than a dozen similar questions that all say "create a job", but I don't want to (as described above) and there "should" be a way to do this without a job.... right? :)
Any suggestions/solutions are greatly appreciated!
You have succinctly identified why assorted approaches won't work (+1). I'm still working through my tea so better (bad) ideas might come to me, but what if you were able to have multiple jobs running concurrently? I think that'd solve your issue of allowing the SQL user to run an SSIS package, because you'd control "who" is presented to the SSISDB as the running user.
We can address the concurrent job execution by creating one-time, self-deleting jobs.
Allow the SQL user to execute the "JobMaker" procedure defined below.
Job Maker
The first thing the procedure does is generate the new job's name. In this case, it'll take the form of SubJob_2021-12-11_DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF
Give it a good string to start the job name so things "sort nicely" in the GUI. If you need to use this technique to run multiple packages, I'd embed the package name into the job name.
I add the date so if a job exists beyond today, that'd be my cue to see if it's still running or whether it errored out.
The GUID is a uniquely generated sequence so even if you and I both called the JobMaker proc at the exact same time, we'd still get a unique job created.
The rest of the procedure is me using the job creation wizard to run my SSIS package "Package.dtsx" in the project "JustWait" in the folder "So" and assigning values to the package parameters. I'd expect you'd do a similar thing based on your specific requirements.
I specify @delete_level = 1, which deletes the job on success, so I don't clog my job list with one-time jobs.
The final step is to start the job with sp_start_job.
USE msdb;
GO
CREATE PROCEDURE dbo.JobMaker
AS
BEGIN
    -- "Magic" here - we build out a dynamic name for our job.
    -- It takes the form of SubJob_, today's date as YYYY-MM-DD, and then a unique guid.
    -- It creates the job and then, as the last step, runs the new job.
    DECLARE @jobNameDynamic sysname = CONCAT(N'SubJob_', CONVERT(char(10), GETDATE(), 121), '_', NEWID());

    SELECT @jobNameDynamic;

    DECLARE @jobId binary(16);

    -- Create the one-time job; delete_level = 1 removes it on success
    EXEC msdb.dbo.sp_add_job
        @job_name = @jobNameDynamic
    ,   @enabled = 1
    ,   @notify_level_eventlog = 0
    ,   @notify_level_email = 2
    ,   @notify_level_page = 2
    ,   @delete_level = 1
    ,   @category_name = N'[Uncategorized (Local)]'
    ,   @owner_login_name = N'sa'
    ,   @job_id = @jobId OUTPUT;

    EXEC msdb.dbo.sp_add_jobserver
        @job_name = @jobNameDynamic
    ,   @server_name = N'ERECH\DEV2017';

    -- Single SSIS step that runs the deployed package with its parameters
    EXEC msdb.dbo.sp_add_jobstep
        @job_name = @jobNameDynamic
    ,   @step_name = N'JorbStep'
    ,   @step_id = 1
    ,   @cmdexec_success_code = 0
    ,   @on_success_action = 1
    ,   @on_fail_action = 2
    ,   @retry_attempts = 0
    ,   @retry_interval = 0
    ,   @os_run_priority = 0
    ,   @subsystem = N'SSIS'
    ,   @command = N'/ISSERVER "\"\SSISDB\So\JustWait\Package.dtsx\"" /SERVER "\"ERECH\dev2017\"" /Par "\"$Project::aDateTime(DateTime)\"";"\"1/1/2022 12:00:00 AM\"" /Par "\"$Project::aWideString\"";x /Par "\"$Project::anInt(Int32)\"";0 /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E'
    ,   @database_name = N'master'
    ,   @flags = 0;

    EXEC msdb.dbo.sp_update_job
        @job_name = @jobNameDynamic
    ,   @enabled = 1
    ,   @start_step_id = 1
    ,   @notify_level_eventlog = 0
    ,   @notify_level_email = 2
    ,   @notify_level_page = 2
    ,   @delete_level = 1
    ,   @description = N''
    ,   @category_name = N'[Uncategorized (Local)]'
    ,   @owner_login_name = N'sa'
    ,   @notify_email_operator_name = N''
    ,   @notify_page_operator_name = N'';

    -- Kick off the freshly created job
    EXECUTE msdb.dbo.sp_start_job
        @job_name = @jobNameDynamic;
END;
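A hedged sketch of how the SQL login might be wired up to call it (the login name is a placeholder; depending on your security policy you may instead need WITH EXECUTE AS on the procedure or membership in one of the SQLAgent* msdb roles, since sp_add_job and friends are normally restricted):
USE msdb;
GO
-- Map the existing SQL login into msdb (name is illustrative)
CREATE USER [MyAppSqlLogin] FOR LOGIN [MyAppSqlLogin];

-- Let it run only the wrapper procedure
GRANT EXECUTE ON dbo.JobMaker TO [MyAppSqlLogin];
GO
-- The caller then just does:
EXEC msdb.dbo.JobMaker;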
You might have additional requirements, like having the job step run as a proxy user, but this approach should address the big-ticket items.
Otherwise, the impersonation that happens in the SSISDB for the package execution conflicts with a SQL-defined login (until SQL Server 2019?). The ways to start a package would be:
agent job
SSISDB stored CLR procedures
xp_cmdshell calls - and even then, I think your dtutil calls will need a Windows user to work; you might be able to change your context, but that's going to be an even uglier hack.

How to use the same SSIS Data Flow with different Date Values?

I have a very straightforward SSIS package containing one data flow which is comprised of an OLEDB source and a flat file destination. The OLEDB source calls a query that takes 2 sets of parameters. I've mapped the parameters to Date/Time variables.
I would like to know how best to pass 4 different sets of dates to the variables and use those values in my query?
I've experimented with the Foreach Loop Container using an Item Enumerator. However, that does not seem to work, and the package throws a System.IO.IOException error.
My container is configured as shown in the (omitted) screenshot. Note that both variables are of the Date/Time data type.
How can I pass 4 separate value sets to the same variables and use each variable pair to run my data flow?
Setup
I created a table and populated it with contiguous data for your sample set
DROP TABLE IF EXISTS dbo.SO_67439692;
CREATE TABLE dbo.SO_67439692
(
SurrogateKey int IDENTITY(1,1) NOT NULL
, ActionDate date
);
INSERT INTO
dbo.SO_67439692
(
ActionDate
)
SELECT
TOP (DATEDIFF(DAY, '2017-12-31', '2021-04-30'))
DATEADD(DAY, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), '2017-12-31') AS ActionDate
FROM
sys.all_columns AS AC;
In my SSIS package, I added two variables, startDate and endDate, both of type DateTime. I added an OLE DB connection manager pointed to the database where I made the above table.
I added a Foreach Loop Container, configured it to use the Foreach Item Enumerator, and defined the columns there as DateTime as well.
I populated it (what a clunky editor) with the full-year ranges for 2018 through 2020 plus 2021-01-01 to 2021-04-30.
I wired the variables up as shown in the problem definition and ran it as is. No IO error reported.
Once I knew my foreach container was working, the data flow was trivial.
I added a data flow inside the foreach loop with an OLE DB Source using a parameterized query like so
DECLARE @StartDate date, @EndDate date;
SELECT @StartDate = ?, @EndDate = ?;

SELECT *
FROM
    dbo.SO_67439692 AS S
WHERE
    S.ActionDate >= @StartDate AND S.ActionDate <= @EndDate;
I mapped my two variables in as parameter names of 0 and 1 and ran it.
The setup you described works great. Either there is more to your problem than stated or there's something else misaligned. Follow along with my repro and compare it to what you've built and you should see where things are "off".
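If it helps to double-check the data side independently of SSIS, here is a small sketch against the sample table created above; the four ranges are assumed to mirror the ones entered in the Item Enumerator:
-- Row counts per date range, using the same sample table
SELECT
    R.StartDate
,   R.EndDate
,   COUNT(S.SurrogateKey) AS RowsInRange
FROM
    (VALUES
        (CAST('2018-01-01' AS date), CAST('2018-12-31' AS date))
    ,   ('2019-01-01', '2019-12-31')
    ,   ('2020-01-01', '2020-12-31')
    ,   ('2021-01-01', '2021-04-30')
    ) AS R(StartDate, EndDate)
    LEFT OUTER JOIN dbo.SO_67439692 AS S
        ON S.ActionDate >= R.StartDate AND S.ActionDate <= R.EndDate
GROUP BY
    R.StartDate
,   R.EndDate;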

Reporting parameter saving

In SSRS, a user wants his report parameter selections to be remembered. That is, the first time, the user will select parameters, and from the next time onward the report should somehow remember those parameters.
Is it possible to achieve this in any way? Does anybody have any thoughts on this?
The My Reports feature allows users to build and save their own reports on the server. It also allows them to create linked reports that have parameter defaults set up the way they want. That's the only built-in feature that might get you what you want. It is pretty limited, though, to be honest.
On the other hand, you could build a user parameter storage feature of your own. Here's a rudimentary example.
Start with a table to store your user report parameter values.
create table UserReportParameters (
UserName nvarchar(50),
ParameterSet nvarchar(50),
ParameterName nvarchar(50),
ParameterValue nvarchar(max)
)
Since you're seeking a way to store a user's parameter values, I'll assume you're using some sort of authentication on your reporting service, such as Windows Authentication. We'll be storing the UserName provided by that authentication service along with an arbitrarily named ParameterSet. This ParameterSet value could be the url to the report or some other unique identifier for the report or perhaps a logical name for a set of reports that all use common parameters such as "Sales Reports".
We'll need a way to save these parameter values. A simple stored procedure will do the trick.
create proc SaveUserReportParameter (
    @UserName nvarchar(50),
    @ParameterSet nvarchar(50),
    @ParameterName nvarchar(50),
    @ParameterValue nvarchar(max)
)
as
delete UserReportParameters where UserName = @UserName and ParameterSet = @ParameterSet and ParameterName = @ParameterName
insert UserReportParameters select @UserName, @ParameterSet, @ParameterName, @ParameterValue
Now in your report's main dataset query or stored procedure (somewhere you can be sure the code runs once per report execution) you just need to call that stored procedure to store each value.
exec SaveUserReportParameter @UserName, 'Sales Reports', 'StartDate', @StartDate
exec SaveUserReportParameter @UserName, 'Sales Reports', 'EndDate', @EndDate
exec SaveUserReportParameter @UserName, 'Sales Reports', 'DepartmentId', @DepartmentId
exec SaveUserReportParameter @UserName, 'Sales Reports', 'PromoCode', @PromoCode
Note that the table stores everything as nvarchar. I'm being lazy here and letting implicit conversion happen. If you want to store values such as datetime in a specific format, you'll need to convert them when inserting them into the table. The @UserName report parameter used here and below should be an internal parameter whose default value is =User!UserId.
Now that we're storing parameters, let's start using them. We'll need another stored procedure. This one's a bit bigger.
create proc GetUserReportParameters (
    @UserName nvarchar(50),
    @ParameterSet nvarchar(50),
    @Columns nvarchar(max)
) as
declare @sql nvarchar(max)
set @sql = '
select * from
(
    select
        p.ParameterName,
        p.ParameterValue
    from
        (select @UserName UserName, @ParameterSet ParameterSet) stub
        left join UserReportParameters p on p.UserName = stub.UserName and p.ParameterSet = stub.ParameterSet
) v
pivot (
    min(ParameterValue)
    for ParameterName in (' + @Columns + ')
) as pvt'
exec sp_executesql @sql, N'@UserName nvarchar(50), @ParameterSet nvarchar(50)', @UserName, @ParameterSet
And then to call it
exec GetUserReportParameters @UserName, 'Sales Reports', 'StartDate,EndDate,DepartmentId,PromoCode'
As you can see, you provide the UserName and ParameterSet values. The same as you used when you called the save procedure. Here, though, you're also providing a string that's a simple comma separated list of column names. These column names are used by a pivot query to ensure your result set includes columns by those names. You should be aware those columns can and will contain nulls, especially when a user first accesses the report. You should also be aware that all values are nvarchar(max). If you need to parse or convert any values or provide your own defaults when the value is null, you'll need to do some extra work.
In an embedded dataset named UserReportParameters I call the procedure, store the values locally and then do my conversions and null-swapping as necessary.
declare @Parameters table (
    StartDate datetime,
    EndDate datetime,
    DepartmentId int,
    PromoCode nvarchar(50)
)
insert @Parameters
exec GetUserReportParameters @UserName, 'Sales Reports', 'StartDate,EndDate,DepartmentId,PromoCode'
select
    isnull(cast(StartDate as datetime), dateadd(day,datediff(day,0,getdate()),0)) StartDate,
    isnull(cast(EndDate as datetime), dateadd(day,datediff(day,0,getdate()),0)) EndDate,
    isnull(cast(DepartmentId as int),15) DepartmentId,
    isnull(PromoCode,'FAKESALE') PromoCode
from @Parameters
Now, every time you run the report (more specifically, every time the dataset that contains your call to the save procedure is executed) the parameters you opt to save will be saved. When you leave and come back to the report page, the parameters will be populated with the last values you chose. Note that you don't have to save every parameter value, just the ones you want to save per user. You also don't have to use every parameter value that's saved in a given ParameterSet. If you have two sales reports, one that uses PromoCode and one that uses ProductCategory, you can save both their parameter values in the 'Sales Reports' parameter set without worrying that they'll interfere with one another. Additionally, you could easily create two separate datasets in your report, each pulling a different parameter set - for instance, if PromoCode actually gets saved in a 'Marketing' parameter set and DepartmentId comes from a 'Products' parameter set. Once you have this framework, you have a lot of flexibility in how your user parameter defaults get saved.
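For instance, the two report datasets in that scenario might each call the procedure with their own set name (the set names here are just illustrative):
-- Dataset 1: marketing-related defaults
exec GetUserReportParameters @UserName, 'Marketing', 'PromoCode'

-- Dataset 2: product-related defaults
exec GetUserReportParameters @UserName, 'Products', 'DepartmentId'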
It is generally considered bad practice to allow reports to alter data. I agree with this conventional wisdom when it comes to domain data. However, this parameter-saving feature is really more of an extension of SSRS functionality, akin to report execution logging. I do not believe it violates the principle.
SCALABILITY -- This will work great for a small number of reports for a relatively small number of users. There could easily be performance issues in larger enterprise environments. (That's why I called this a "rudimentary example" at the start.) You could address this by redesigning the value storage mechanism. For instance, use pre-defined tables for each parameter set instead of dumping them all into a single table so you can avoid pivoting. I'll leave that decision and work to you.
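As a sketch of that redesign (the table, column, and procedure names are assumptions): one strongly typed table per parameter set means no pivot on read and a simple upsert on write.
-- One row per user, typed columns, keyed on the authenticated user name
create table SalesReportParameters (
    UserName nvarchar(50) not null primary key,
    StartDate datetime null,
    EndDate datetime null,
    DepartmentId int null,
    PromoCode nvarchar(50) null
)
go
create proc SaveSalesReportParameters (
    @UserName nvarchar(50),
    @StartDate datetime,
    @EndDate datetime,
    @DepartmentId int,
    @PromoCode nvarchar(50)
)
as
-- Upsert the single row for this user; reads become a plain SELECT, no pivot required
merge SalesReportParameters as t
using (select @UserName, @StartDate, @EndDate, @DepartmentId, @PromoCode)
      as s (UserName, StartDate, EndDate, DepartmentId, PromoCode)
   on t.UserName = s.UserName
when matched then
    update set StartDate = s.StartDate, EndDate = s.EndDate,
               DepartmentId = s.DepartmentId, PromoCode = s.PromoCode
when not matched then
    insert (UserName, StartDate, EndDate, DepartmentId, PromoCode)
    values (s.UserName, s.StartDate, s.EndDate, s.DepartmentId, s.PromoCode);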
Try this one. It's a bit long, so I will just attach the document link here.
Report Parameter Saving Solution

Parameters in SQL Server 2008

I have a stored procedure that pulls data for a report. I'm having a problem with the parameters. I have a couple temp tables and some joins that work so I have omitted them below. The problem is this line:
WHERE
SeminarDivision = @SeminarDivision AND SeminarType = @SeminarType
When I put this WHERE clause in to use my seminar parameters, the stored proc returns nothing. But I need to generate a report based on those two parameters. So where do the parameters go? Can anyone help?
@StartDate DateTime,
@EndDate DateTime,
@SeminarDivision VARCHAR(50),
@SeminarType VARCHAR(50)
)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
... OMITTED
SELECT
WL.PID,
CONVERT(varchar(20), upper(substring(FirstName,1,1))+
LOWER(substring(FirstName,2,19))) AS FirstName,
CONVERT(varchar(20), upper(substring(LastName,1,1))+
LOWER(substring(LastName,2,19))) AS LastName,
S.SeminarDivision,
S.SeminarType,
S.StartDate,
S.SeminarLocation
FROM
#tblWaitList WL
INNER JOIN #tblSeminar S ON WL.SeminarGuid=S.SeminarGuid
WHERE
SeminarDivision = @SeminarDivision AND SeminarType = @SeminarType
ORDER BY
LastName,FirstName,StartDate
First and foremost, there is nothing wrong with your code; when you ask where the parameters go, they go exactly where you put them. The question is: is the data coming in for SeminarDivision and SeminarType the right data? As a test,
copy the code into a new SQL query window in the editor. Run the command without the WHERE; if you get values, great. Now change the WHERE to
WHERE
SeminarDivision = 'Possible_Value'
where 'Possible_Value' should be an actual possible value... If it returns rows, good... now add the second condition, also hardcoding a value:
WHERE SeminarDivision = 'Possible_Value' AND SeminarType = 'Possible_Value_2'
Getting any data? Is it possible you want OR rather than AND?
There's nothing wrong with the 'location' of your params.
If you're getting no data back, it's either because you've not populated #tblWaitList or #tblSeminar, or because the records simply don't match your WHERE clause.
Check that your params have the values you think they do by executing PRINT @SeminarDivision etc.
SELECT * FROM #tblSeminar may give you a clue too.
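A small sketch of that kind of check (run it where the temp tables exist, e.g. inside the proc while debugging), comparing what the caller passed in against what is actually stored - watch for case, trailing spaces, or NULLs:
-- What did the caller actually pass in?
PRINT @SeminarDivision;
PRINT @SeminarType;

-- What values exist in the data?
SELECT DISTINCT S.SeminarDivision, S.SeminarType
FROM #tblSeminar AS S;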
You are not setting parameters correctly for the call.
Try this in SSMS, change values accordingly
EXEC Proc '20110101', '20111101', 'PossibleDivision', 'PossibleType'
If this fails, then show us the "OMITTED" code.
If this works, show us how you are calling this from the client code.

Unable to retrieve column information: No column information was returned by the SQL command

I am using Microsoft SQL Server 2008. I tried all three solutions, but every time I get the same error:
Error at Data Flow Task [OLEDB Source [449]]: No column information was returned by the SQL command
I am using the following batch of SQL statements to retrieve the server-level configuration of all the servers in my company. The table variable @tb1_SvrStng has 83 columns, and it is populated from different sources.
I have summarized the SQL script below. I cannot use it as a stored procedure because this script is going to run against 14 servers (once for each server); if I store the procedure on one server, the other servers cannot execute that procedure in their own context.
I would highly appreciate your help. I am not using any temporary tables in my script.
declare @tb1_SvrStng table
(
srvProp_MachineName varchar(50),
srvProp_BldClrVer varchar(50),
srvProp_Collation varchar(50),
srvProp_CNPNB varchar(100),
...
xpmsver_ProdVer varchar(50),
..... .
syscnfg_UsrCon_cnfgVal int,
.....
);
insert into @tb1_SvrStng
(
srvProp_BldClrVer,
srvProp_Collation,
srvProp_CNPNB , ........
........ .
)
select convert(varchar, serverproperty('BuildClrVer')),
convert(varchar, serverproperty('Collation'))
........
.......
declare @temp_msver1 table
(
id int, name varchar(100),
...........
);
insert into @temp_msver1 exec xp_msver
Update @tb1_SvrStng
set xpmsver_ProdVer =
(
select value from @temp_msver1 where name = 'ProductVersion'
),
xpmsver_Platform =
(
select value from @temp_msver1 where name = 'Platform'
),
.....
......
select
srvProp_SerName as srvProp_SerName,
getdate() as reportDateTime,
srvProp_BldClrVer as srvProp_BldClrVer,
srvProp_Collation as srvProp_Collation,
.....
.....
from @tb1_SvrStng
From what I can gather from your code and question, the query's metadata cannot be determined outside of runtime, either because you're doing something dynamic in it or because it won't process since it's doing something funky.
One trick would be to point the source component in the data flow task at something "dummy", which you can fake with a query like this:
SELECT
CONVERT(DATATYPE,NULL) AS srvProp_SerName,
CONVERT(DATETIME,NULL) AS reportDateTime,
CONVERT(DATATYPE,NULL) AS srvProp_BldClrVer,
CONVERT(DATATYPE,NULL) AS srvProp_Collation
This way the source component should be able to read the metadata. You can then put your proper query (as long as it's within the limits of the length of the query text) into a variable, and then assign this as an expression to the source component.
At runtime it will then use the expression query and, hopefully, won't mind the metadata issue too much.
This may or may not work but it should be worth a try since it won't take long to confirm.
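For this specific result set, a hedged sketch of what the dummy query might look like (the varchar lengths are assumptions based on the table variable declaration above; substitute the real type for every column you return):
SELECT
    CONVERT(varchar(50), NULL) AS srvProp_SerName
,   CONVERT(datetime, NULL) AS reportDateTime
,   CONVERT(varchar(50), NULL) AS srvProp_BldClrVer
,   CONVERT(varchar(50), NULL) AS srvProp_Collation
-- ... one CONVERT per remaining output column, matching name and data type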