I have written an SSIS package that essentially takes data from multiple sources and writes it to an Excel file (it is a bit more complicated than this, but I do not think the specifics really matter at this point).
Now I need to run this DTSX package every week (on a Monday) and every month (on the 1st), saving the Excel file to a name specified by a variable within the package. Before the export, the package runs several simple SQL stored procedures that take either 'Weekly' or 'Monthly' as a parameter to work out the dates needed to get the right data.
The initial plan was to copy the DTSX package and have a SQL Job run the first package every Monday and the second package on the 1st of each month.
Is there a way I can use the same package to do both things (for example, can I pass 'Monthly' or 'Weekly' into the DTSX package from the SQL Job somehow) and if so, how do I do this?
Thanks,
Bob
Create a variable in the package called ExecutionMode. Use this variable as a parameter to the appropriate stored procedures. Set ExecutionMode to "Weekly" or "Monthly" and run your package. Make sure that all procs run correctly.
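For example, each date-window procedure could accept that value directly; a minimal sketch with an assumed procedure name and illustrative date logic (not taken from the original package):

CREATE PROCEDURE dbo.GetReportDateRange
    @ExecutionMode varchar(10)   -- 'Weekly' or 'Monthly'
AS
BEGIN
    DECLARE @EndDate date = CAST(GETDATE() AS date);
    DECLARE @StartDate date =
        CASE @ExecutionMode
            WHEN 'Weekly'  THEN DATEADD(DAY, -7, @EndDate)
            WHEN 'Monthly' THEN DATEADD(MONTH, -1, @EndDate)
        END;
    -- return the window the package should query
    SELECT @StartDate AS StartDate, @EndDate AS EndDate;
END;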
Use Package Configurations and put ExecutionMode in the config file for the package. Now, the ExecutionMode can be passed as a parameter.
Create two jobs for the SSIS package of type "SQL Server Integration Services". In each one, specify the package and the configuration file. On the SET VALUES tab, choose the ExecutionMode variable and set it to "Weekly" or "Monthly" depending on the schedule.
Here is how to run it from the command line (including setting variables):
http://www.sqlservercentral.com/articles/SQL+Server+2005+-+SSIS/2999/
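For example (a sketch; the file path is hypothetical, and the /SET syntax is the same one used in the job step script further down):

dtexec /FILE "C:\Packages\TestPackage.dtsx" /SET "\package.Variables[ExecutionMode].Value";Weekly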
Thanks again for the help Raj.
My final answer was to create the job of type SQL Server Integration Services, then set a variable ('TimeScale') within the SSIS package.
Once I set the job type to SSIS, I could then use 'Set Values' as follows (note that it is exactly as below: 'package' really is the literal word 'package', NOT the name of your package!):
Property Path: \package.Variables[TimeScale].Value
Value: Monthly
Full sample code to create the job, if you need an example. :)
USE [msdb]
GO
/****** Object: Job [Sample] Script Date: 10/28/2009 16:04:22 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]]] Script Date: 10/28/2009 16:04:22 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Sample',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'NTWK\FrostbiteXIII', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Sample Step] Script Date: 10/28/2009 16:04:22 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Sample Step',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'SSIS',
@command=N'/DTS "TestPackage.dtsx" /SERVER MyServer /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /SET "\package.Variables[TimeScale].Value";Monthly',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Weekly Step',
@enabled=1,
@freq_type=8,
@freq_interval=3,
@freq_subday_type=1,
@freq_subday_interval=0,
@freq_relative_interval=0,
@freq_recurrence_factor=1,
@active_start_date=20091028,
@active_end_date=99991231,
@active_start_time=80000,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
I am using project deployment. I have several project parameters. My packages only use project-level parameters, and no package-level ones. I have programmatically deployed my project and set an environment reference.
I call each package from a SQL Agent job. I am unable to link my environment variables to the package when it runs, even though I have successfully linked the project to the environment.
But now when I run my agent job, it fails. When I look at the SSISDB reports, it says it "created execution", but shows no variables.
Do I actually have to explicitly link every variable in each package to the environment variable? Why even bother to group them by environment?
I have created my environment references like this (SQLCMD):
EXEC [SSISDB].[catalog].[create_environment_reference] @environment_name='$(ChooseEnvironment)', @reference_id=@reference_id OUTPUT, @project_name='$(ProjectName)', @folder_name='$(folderName)', @reference_type='R'
EXEC SSISDB.catalog.set_object_parameter_value @parameter_name=N'EmailFrom', @parameter_value='EmailFrom', @project_name='$(ProjectName)', @object_type=20, @folder_name='$(FolderName)', @value_type=N'R'
Additional info: I have created a SQL Agent job that calls each package with a job step like this:
SET @cmd = N'/ISSERVER "\"\SSISDB\CHAT\SSISPackages\Chat_Load_RMS_InputFiles.dtsx\"" /SERVER "\"' + @TargetDBServer + '\"" /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E'
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId,
@step_name=N'PACKAGE: Chat_Load_RMS_InputFiles.dtsx',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'SSIS',
@command=@cmd,
@database_name=N'master',
@flags=0
Do I need to add a reference id to my SSIS #cmd variable? Also, if I address this in the job, can I remove my code above to set each project-level variable to an environment, or do I still need that? It seems like for cleanliness, I should just be able to say: this project uses this environment. Done. Otherwise, it's almost like using package-level variables and all the tinkering those require.
If you are running a package as a "direct" SSIS step in a SQL Agent job, you have to select the environment on the Configuration tab of the step configuration dialog.
If you are running it using a T-SQL script, you need to provide a reference_id when calling catalog.create_execution:
-- @EnvironmentName, @FolderName, @ProjectName, @PackageName and @execution_id
-- are assumed to be declared earlier in the script.
DECLARE
@reference_id bigint,
@FullPackageName NVARCHAR(100);
SELECT @reference_id = reference_id
FROM [$(SSISDB)].catalog.environment_references er
INNER JOIN [$(SSISDB)].catalog.projects AS p
ON p.project_id = er.project_id
INNER JOIN [$(SSISDB)].catalog.folders AS f
ON f.folder_id = p.folder_id
WHERE er.environment_folder_name IS NULL
AND er.environment_name = @EnvironmentName
AND p.name = @ProjectName
AND f.name = @FolderName;
IF @@ROWCOUNT = 0
BEGIN
DECLARE
@msg NVARCHAR(100);
SET @msg = N'Could not find a reference for a local (.) ''' + @EnvironmentName + N''' environment.';
THROW 50000, @msg, 1;
END;
SET @FullPackageName = @PackageName + N'.dtsx';
EXEC [$(SSISDB)].catalog.create_execution
@package_name = @FullPackageName,
@execution_id = @execution_id OUTPUT,
@folder_name = @FolderName,
@project_name = @ProjectName,
@use32bitruntime = False,
@reference_id = @reference_id;
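Once the execution has been created with the environment reference attached, it still has to be started; a minimal continuation of the snippet above (the logging-level setting is optional and only illustrative):

-- optionally set server options on the execution before starting it
EXEC [$(SSISDB)].catalog.set_execution_parameter_value
@execution_id = @execution_id,
@object_type = 50,                  -- 50 = server option
@parameter_name = N'LOGGING_LEVEL',
@parameter_value = 1;               -- 1 = Basic
-- start the execution; environment-bound parameters are resolved at run time
EXEC [$(SSISDB)].catalog.start_execution @execution_id = @execution_id;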
Here I am using the following R external script to write a SQL database table to a CSV file.
I know how to export the data using the Import and Export Wizard, but I have to export the data using scripts.
declare @file_path varchar(300)
select @file_path = 'C:/NB/DATA/DB/arima.csv'
EXEC sp_execute_external_script
@language = N'R'
,@script = N'
write.csv(data, file = file_path, row.names = FALSE);'
,@input_data_1_name = N'data'
,@input_data_1 = N'select * from [dbo].[fcst_model]'
,@params = N'@file_path varchar(300)'
,@file_path = @file_path;
Note: fcst_model is a database table with two columns.
While executing the script, I got the following error:
Msg 39004, Level 16, State 20, Line 305
A 'R' script error occurred during execution of 'sp_execute_external_script'
with HRESULT 0x80004004.
Msg 39019, Level 16, State 2, Line 305
An external script error occurred:
Error in file(file, ifelse(append, "a", "w")) :
cannot open the connection
Calls: source ... write.csv -> eval.parent -> eval -> eval -> write.table ->
file
In addition: Warning message:
In file(file, ifelse(append, "a", "w")) :
cannot open file 'C:/NB/DATA/DB/arima.csv': Permission denied
Can anyone help me to solve this issue?
Thanks in advance.
-- Note: the original "Permission denied" error usually means the account that
-- runs the external R runtime has no write access to the target folder, so make
-- sure that folder grants it write permission.
drop proc if exists to_csv;
go
create proc to_csv(@param1 varchar(MAX))
as
begin
exec sp_execute_external_script
@language = N'R'
,@script = N'
write.csv(df, output_path)'
,@input_data_1 = N'select * from fcst_data1'
,@input_data_1_name = N'df'
,@params = N'@output_path varchar(MAX)'
,@output_path = @param1;
end;
go
exec to_csv 'C:\NB\DB\SQL_R\data\data.csv';
I use a batch file to copy a database from server1 to server2.
Step 1: call a stored procedure that runs FLUSH TABLES table1, table2, ..., table1000 FOR EXPORT;
Step 2: copy the .ibd and .cfg files to a temp directory and archive them
Step 3: UNLOCK TABLES;
The problem is in the first step: the .cfg files are created and then removed, even though UNLOCK TABLES is never called. Why? The .cfg files appear and immediately disappear, so I do not have time to copy them.
.bat file command:
mysql -u %db_user% -p%db_password% %db_name% --default-character-set=utf8 < stored_proc_flush_tables.sql
file stored_proc_flush_tables.sql:
DROP PROCEDURE IF EXISTS stored_proc_flush_tables;
DELIMITER //
CREATE PROCEDURE stored_proc_flush_tables
(
)
BEGIN
DECLARE t_name BLOB;
DECLARE tmp_query BLOB;
DECLARE done_tables INT DEFAULT 0;
DECLARE cursor_tables CURSOR FOR
SELECT table_name FROM information_schema.tables WHERE table_schema=DB_NAME;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done_tables = 1;
SET @table_name = '';
SET @tmp_query = '';
OPEN cursor_tables;
tables_loop: LOOP
FETCH cursor_tables INTO t_name;
IF done_tables = 1 THEN
LEAVE tables_loop;
END IF;
SET @tmp_query = CONCAT_WS('', @tmp_query, ',', t_name);
END LOOP;
CLOSE cursor_tables;
SET @tmp_query = TRIM(LEADING ',' FROM @tmp_query);
SET @tmp_query = CONCAT_WS('', 'FLUSH TABLES', ' ', @tmp_query, ' ', 'FOR EXPORT');
PREPARE stmt FROM @tmp_query;
EXECUTE stmt;
END //
DELIMITER ;
call stored_proc_flush_tables();
The problem is that you end the mysql session that issues FLUSH TABLES ... FOR EXPORT
before you try to copy the files.
When the mysql session/connection ends, all locks are released and the *.cfg files, which are treated as temporary, are deleted.
So you need a program that issues FLUSH ... FOR EXPORT, keeps the session
open while the files are copied, and only then releases the table locks (or ends the session).
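For example, the session that takes the locks has to stay open for the whole copy; a minimal sketch of the sequence, with illustrative table names:

-- in one mysql session, which must stay open:
FLUSH TABLES table1, table2 FOR EXPORT;
-- ... copy the .ibd and .cfg files from the datadir in another process
--     (e.g. the batch file) while this session still holds the locks ...
UNLOCK TABLES;   -- only after the copy has finished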
I am trying to connect to a MySQL server using LuaSQL via mysql-proxy. I am executing a simple program (db.lua):
require("luasql.mysql")
local _sqlEnv = assert(luasql.mysql())
local _con = nil
function read_auth(auth)
local host, port = string.match(proxy.backends[1].address, "(.*):(.*)")
_con = assert(_sqlEnv:connect( "db_name", "username", "password", "hostname", "3306"))
end
function disconnect_client()
assert(_con:close())
end
function read_query(packet)
local cur = con:execute("select * from t1")
myTable = {}
row = cur:fetch(myTable, "a")
print(myTable.id,myTable.user)
end
This code executes well when I execute it without mysql-proxy. When I am connecting with mysql-proxy, the error-log displays these errors:
mysql.lua:8: bad argument #1 to 'insert' (table expected, got nil)
db.lua:1: loop or previous error loading module 'luasql.mysql'
mysql.lua is a default file shipped with LuaSQL:
---------------------------------------------------------------------
-- MySQL specific tests and configurations.
-- $Id: mysql.lua,v 1.4 2006/01/25 20:28:30 tomas Exp $
---------------------------------------------------------------------
QUERYING_STRING_TYPE_NAME = "binary(65535)"
table.insert (CUR_METHODS, "numrows")
table.insert (EXTENSIONS, numrows)
---------------------------------------------------------------------
-- Build SQL command to create the test table.
---------------------------------------------------------------------
local _define_table = define_table
function define_table (n)
return _define_table(n) .. " TYPE = InnoDB;"
end
---------------------------------------------------------------------
-- MySQL versions 4.0.x do not implement rollback.
---------------------------------------------------------------------
local _rollback = rollback
function rollback ()
if luasql._MYSQLVERSION and string.sub(luasql._MYSQLVERSION, 1, 3) == "4.0" then
io.write("skipping rollback test (mysql version 4.0.x)")
return
else
_rollback ()
end
end
As stated in my previous comment, the error indicates that table.insert (CUR_METHODS, ...) is getting nil as its first argument. Since the first argument is CUR_METHODS, it means that this object CUR_METHODS has not been defined yet. Since this happens near the top of the luasql.mysql module, my guess is that the luasql initialization was incomplete, maybe because the mysql DLL was not found. My guess is that LUA_CPATH does not find the MySQL DLL for luasql, but I'm surprised that you wouldn't get a package error, so something odd is going on. You'll have to dig into the luasql module and C file to figure out why it is not being created.
Update: alternatively, update your post to show the output of print("LUA path:", package.path) and print("LUA cpath:", package.cpath) from your mysql-proxy script, and also show the path of the folder where luasql is installed and the contents of that folder.
I am trying to shrink my database log file. I have tried to run:
USE databasename
BACKUP log databasename
WITH truncate_only
DBCC shrinkfile (databasename_log, 1)
I get the error message:
Msg 155, Level 15, State 1, Line 3
'truncate_only' is not a recognized
BACKUP option.
Am I missing something?
SQL Server 2008 no longer allows the NO_LOG / TRUNCATE_ONLY options.
To truncate your transaction log, you either have to back it up (for real) or switch the database's Recovery Model to Simple. The latter is probably what you really want here. You don't need Full recovery unless you are making regular transaction log backups to be able to restore to some point mid-day.
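If switching to the Simple recovery model is acceptable, a minimal sketch looks like this (the database and log file names are illustrative):

ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;
GO
USE [MyDatabase];
GO
DBCC SHRINKFILE (N'MyDatabase_log', 1);
GO
-- switch back to FULL only if you actually take regular log backups:
-- ALTER DATABASE [MyDatabase] SET RECOVERY FULL;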
I think the best way is to use a script like this:
USE AdventureWorks
GO
-- Use some dynamic SQL so you only write the database name once,
-- or so you can drop this snippet into a loop over all your databases...
DECLARE @dbname varchar(50) = 'AdventureWorks';
DECLARE @logFileName varchar(50) = @dbname + '_log';
DECLARE @SQL nvarchar(max);
SET @SQL = REPLACE('ALTER DATABASE {dbname} SET RECOVERY FULL;', '{dbname}', @dbname);
EXECUTE(@SQL);
DECLARE @path nvarchar(255) = N'F:\BCK_DB\logBCK' + CONVERT(CHAR(8), GETDATE(), 112) + '_'
+ REPLACE(CONVERT(CHAR(8), GETDATE(), 108),':','') + '.trn';
BACKUP LOG @dbname TO DISK = @path WITH INIT, COMPRESSION;
DBCC SHRINKFILE(@logFileName);
-- determine here the new file size and growth rate:
SET @SQL = REPLACE('ALTER DATABASE {dbname} MODIFY FILE (NAME = ' + @logFileName + ', SIZE = 32000MB, FILEGROWTH = 10%);',
'{dbname}', @dbname);
EXECUTE(@SQL);
GO
http://www.snip2code.com/Snippet/12913/How-to-correctly-Shrink-Log-File-for-SQL