PROBLEM
Let's say I have a Jenkins/Hudson job (for example, free-style) that takes two parameters, PARAM_ONE and PARAM_TWO. Now, I do not know the values of those parameters up front, but I can run some script (Perl/shell) to find them, and then I want the user to select a value from a dropdown list, after which I can start the build.
Is there any way of doing that?
Sounds like you've found a plug-in that does what you need; it's pretty similar to the built-in parameterized builds functionality.
To answer your second question: when you define parameterized builds, the parameters are typically passed to your job as environment variables. So you'd access them however you access environment variables in your language. For instance, if you defined a parameter PARAM_ONE, you'd access it as:
In bash:
$PARAM_ONE
In Windows batch:
%PARAM_ONE%
In Python:
import os
os.getenv('PARAM_ONE')
etc.
I imagine this would be the same for the Extended Choice Parameter plugin you are using.
Just install the plugin, and pass the parameters to your build script like:
Windows
"your build script" %PARAMONE% %PARAMTWO%
In Java (e.g. inside a Jenkins plugin), you can access these parameters off the run object:
EnvVars envVars = run.getEnvironment(listener); // run is a hudson.model.Run, listener a hudson.model.TaskListener
for (String envName : envVars.keySet()) {
    listener.getLogger().println(envName + " = " + envVars.get(envName));
}
My runsettings file contains a few connection strings which I want to be able to override in VSTS depending on the environment.
I don't want a specific runsettings file for each environment; I want to use environment variables in order to be consistent with how our other deployment releases are configured.
However, I'm facing an issue when I want to forward to my unit test a connection string (or any parameter) which includes a semicolon (;): it's being truncated. I've tested transmitting other values without ";" successfully.
settings.runsettings
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="CRM_CONNECTIONSTRING" value="Url = https://MYCRM.crm4.dynamics.com; Username=login@email.com; Password=mypassword;" />
  </TestRunParameters>
</RunSettings>
However, when executing (and displaying the actual value received in the unit test), the value is truncated after the first ";".
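For reference, the test reads the value roughly like this (a minimal MSTest sketch, assuming the standard TestContext.Properties lookup; class and method names are illustrative):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CrmConnectionTests
{
    // MSTest injects TestRunParameters from the .runsettings file here.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void ReadConnectionString()
    {
        var cs = (string)TestContext.Properties["CRM_CONNECTIONSTRING"];
        Console.WriteLine(cs); // arrives truncated after the first ';' when overridden in VSTS
    }
}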
Is there a way to protect the value?
Ending up answering myself with a workaround, after contacting Microsoft directly.
The issue came up as well on the official vsts-tasks GitHub: https://github.com/Microsoft/vsts-tasks/issues/2567
Workaround: before the test assembly task, run a PowerShell script that takes the path to the runsettings file as a parameter, reads the VSTS environment variables, and replaces the XML values directly in the runsettings file.
I've provided my powershell script here : https://github.com/camous/vsts-powershell/blob/master/Set-RunSettings.ps1
(parameters have to be prefixed by "__")
and I wrote a more complete "how to" here: https://stuffandtacos.azurewebsites.net/2016/09/28/override-runsettings-parameters-in-visual-studio-team-service-when-value-contains-semi-colons/
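The linked script is PowerShell; purely as an illustration of the idea, here is a minimal C# sketch of the same replacement (class and variable names are mine, not from the actual script):
using System;
using System.Collections;
using System.Xml.Linq;

class RunSettingsPatcher
{
    static void Main(string[] args)
    {
        string path = args[0]; // path to the .runsettings file
        XDocument doc = XDocument.Load(path);
        // Copy every "__"-prefixed environment variable into the matching Parameter element.
        foreach (DictionaryEntry env in Environment.GetEnvironmentVariables())
        {
            string key = (string)env.Key;
            if (!key.StartsWith("__")) continue;
            string name = key.Substring(2);
            foreach (XElement p in doc.Descendants("Parameter"))
            {
                if ((string)p.Attribute("name") == name)
                    p.SetAttributeValue("value", (string)env.Value); // semicolons survive inside the file
            }
        }
        doc.Save(path);
    }
}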
Adding the following (within double quotes) in "Override test run parameters" will also preserve the required format:
-key "$(PipelineVariableName)" instead of -key $(PipelineVariableName)
Note: I tested the above with a pipeline variable value containing "-" (hyphen) and " " (space).
I am implementing a custom auditing framework, logging ETL events such as start, end, error, insertrows etc.
As well as logging at a package level, I'm implementing "session logging" where a sequence of package executions, i.e. a controller package that executes several packages, is a session. In order to keep track of the "session", the stored procedures always return a SessionLogID.
I was hoping I could map this result set to a project parameter as otherwise, I will have to save it to a user var and then pass it around between packages via parameters. This will mean every single package will have a Package Parameter and User Variable called SessionLogID. I don't want to do this if I don't need to.
Open to other suggestions.
Thanks,
Adam
Parameters cannot change at runtime; they are a set-once kind of deal, whereas variables can change at any time. You can set the variable once in the parent package and map that variable to each child package using a parameter.
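For what it's worth, once the parent's variable is bound to a child package parameter (via the Execute Package Task's parameter bindings), a Script Task in the child can read it like this (a sketch; it assumes a package parameter named SessionLogID that is listed in the task's ReadOnlyVariables):
public void Main()
{
    // Package parameters show up in the Variables collection with the $Package:: prefix.
    int sessionLogId = (int)Dts.Variables["$Package::SessionLogID"].Value;
    bool fireAgain = false;
    Dts.Events.FireInformation(0, "SCR Session", "SessionLogID = " + sessionLogId, string.Empty, 0, ref fireAgain);
    Dts.TaskResult = (int)ScriptResults.Success;
}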
It seems like a very common issue with SSIS packages is releasing a package to Production that ends up running with the wrong connection string parameters. This could happen through any one of many mistakes or omissions. As a result, I find it helpful to dump all ConnectionString values to a log file. This helps me understand which connection strings were actually applied to the package at run time.
Now, I am considering having my packages check whether every connection object in the package had its connection string overridden by an entry in the config file, and if not, return a warning or even fail the package. This is to allow easier configuration by extracting all environment variables to a config file. If a connection string is never overridden, there is a risk that a package run in production may use development settings, or that a package run in a non-production setting for testing may accidentally be run against production.
I'd like to borrow from anyone who may have tried to do this. I'd also be interested in suggestions on how to accomplish this with minimal work.
Thx
Technical question 1 - what are my connection strings?
This is an easy question to answer. In your package, add a Script Task and enumerate through the Connections collection. I fire the OnInformation event, and if I had this scheduled, I'd be sure to have the /rep iew option in my dtexec call to ensure I record Information, Errors and Warnings.
namespace TurnDownForWhat
{
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;

    /// <summary>
    /// ScriptMain is the entry point class of the script. Do not change the name, attributes,
    /// or parent of this class.
    /// </summary>
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            bool fireAgain = false;
            foreach (var item in Dts.Connections)
            {
                Dts.Events.FireInformation(0, "SCR Enumerate Connections", string.Format("{0}->{1}", item.Name, item.ConnectionString), string.Empty, 0, ref fireAgain);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Running that on my package, I can see I had two Connection managers, CM_FF and CM_OLE along with their connection strings.
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_FF->C:\ssisdata\dba_72929.csv
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_OLE->Data Source=localhost\dev2012;Initial Catalog=tempdb;Provider=SQLNCLI11;Integrated Security=SSPI;
Add that to ... your OnPreExecute event for all the packages and no one sees it, but every package reports back.
Technical question 2 - Missed configurations
I'm not aware of anything that will let a package know it's under configuration. I'm sure there's an event, as you will see in your Information/Warning messages, indicating that a package attempted to apply a configuration, didn't find one, and is going to retain its design-time value. Information: I'm configuring X via Y. Warning: tried to configure X but didn't find Y. But how to have a package inspect itself to find that out, I have no idea.
That said, I've seen reference to a property that fails a package on a missed configuration. I'm not seeing it now, but I'm certain it exists in some crevice. You can supply the /w parameter to dtexec, which treats warnings as errors, and really, warnings are just errors that haven't grown up yet.
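For example, an invocation along these lines (the package name is illustrative; check the options your dtexec version supports):
dtexec /File MyPackage.dtsx /WarnAsError /Reporting EWI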
Unspoken issue 1 - Permissions
I had a friend who botched an XML config file as part of their production deploy. Their production server started consuming data from a dev server. Bad things happened. It sounds like you have had a similar situation. The resolution is easy, insulate your environments. Are you using the same service account for your production class SQL Server boxes and dev/test/uat/qa/load/etc? STOP. Make a new one. Don't allow prod to talk to any boxes that aren't in their tier of service. Someone bones a package and doesn't set a configuration? First of all, you'll catch it when it goes from dev to something-before-production because that tier wouldn't be able to talk to anything else that's not that level. But if you're in the ultra cheap shop and you've only got dev and prod, so be it. Non-configured package goes to prod. Prod SQL Agent fires off the package. Package uses default connection manager and fails validation because it can't talk to the dev sales database.
Unspoken issue 2 - template
What's your process when you have a new package to build? Does your team really start from scratch? There are so many ways to solve this problem but the core concept is to define your best practices for Configuration, Logging, Package Protection Level, Transaction levels, etc into some easily consumable form. Maybe that's 3 starter packages: one for raw acquisition, maybe one stages and conforms the data and the last one moves data from conformed into the final destination. Teammates then simply have to pick one to start from and fill in the spots that need it. If they choose to do their own thing, that's the stick you beat them with when their package fails to run in production because they didn't follow the standard path.
There are other approaches here. If you're a strong .NET crew, you can gen your template packages that way. At this point, I create my templates with Biml and use that to drive basic package creation.
If I am understanding you correctly, the below solution should work.
My suggestion is to turn on the DontSaveSensitive option for the ProtectionLevel property at the top level of the package.
This will require you to use package configurations for every connection; otherwise the package will not have the credentials to make a connection.
I installed the SetEnv plugin and it works fine for getting the variables during a task.
Unfortunately, when I try to use the env variable in the resulting status email, I have no luck at all. Is this supposed to work?
I've tried both $VARNAME and ${VARNAME}, neither of which gets replaced correctly in the email.
The simplest way to use environment variables (or any variables) in your email notifications is by using the Email-ext plugin.
Check their "Content token reference" for specifics, but in short you get much more sophisticated substitution. Here are a few I use regularly:
${ENV, var} - Displays an environment variable.
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines} - Displays lines from the build log that match the regular expression.
${CHANGES_SINCE_LAST_SUCCESS, reverse, format, showPaths, changesFormat, pathFormat} - Displays the changes since the last successful build.
${FAILED_TESTS} - Displays failing unit test information, if any tests have failed.
The plugin makes it easy to define a base "global" template in the Hudson configuration and then sort of "extend" that template in your job configuration, adding additional detail. It also allows you to route notifications more granularly based on the build status/outcome.
This is possible already. It looks like you're using the wrong syntax. As mentioned previously, the email-ext plugin has a specific method for accessing environment variables. Try putting this in the email body instead:
${ENV, var=VARNAME}
An alternative method would be to use Hudson's execute shell feature to echo the environment variable during the build and parse for it using BUILD_LOG_REGEX.
For example, you could have this in the Execute Shell part:
echo "Output: ${VARNAME}"
and parse it in the email using
${BUILD_LOG_REGEX, regex="^Output:", showTruncatedLines=false, substText=""}
It looks like I will have to wait for this:
http://wiki.hudson-ci.org/display/HUDSON/The+new+EMailer
Scenario: I am using Managed Extensibility Framework to load plugins (exports) at runtime based on an interface contract defined in a separate dll. In my Visual Studio solution, I have 3 different projects: The host application, a class library (defining the interface - "IPlugin") and another class library implementing the interface (the export - "MyPlugin.dll").
The host looks for exports in its own root directory, so during testing, I build the whole solution and copy Plugin.dll from the Plugin class library bin/release folder to the host's debug directory so that the host's DirectoryCatalog will find it and be able to add it to the CompositionContainer. Plugin.dll is not automatically copied after each rebuild, so I do that manually each time I've made changes to the contract/implementation.
However, a couple of times I've run the host application without having copied (an updated) Plugin.dll first, and it has thrown an exception during composition:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
This is of course due to the fact that the Plugin.dll it's trying to import from implements a different version of IPlugin, where the property/method signatures don't match. Although it's easy to avoid this in a controlled and monitored environment, by simply avoiding (duh) obsolete IPlugin implementations in the plugin folder, I cannot rely on such assumptions in the production environment, where legacy plugins could be encountered.
The problem is that this exception effectively botches the whole Compose action and no exports are imported. I would have preferred that the mismatching IPlugin implementations are simply ignored, so that other exports in the catalog(s), implementing the correct version of IPlugin, are still imported.
Is there a way to accomplish this? I'm thinking either of several potential options:
There is a flag to set on the CompositionContainer ("ignore failing imports") prior to or when calling Compose
There is a similar flag to specify on the <ImportMany()> attribute
There is a way to "hook" on to the iteration process underlying Compose(), and be able to deal with each (failed) import individually
Using strong name signing to somehow only look for imports implementing the current version of IPlugin
Ideas?
I have also run into a similar problem.
If you are sure that you want to ignore such "bad" assemblies, then the solution is to call AssemblyCatalog.Parts.ToArray() right after creating each assembly catalog. This will trigger the ReflectionTypeLoadException which you mention. You then have a chance to catch the exception and ignore the bad assembly.
When you have created AssemblyCatalog objects for all the "good" assemblies, you can aggregate them in an AggregateCatalog and pass that to the CompositionContainer constructor.
This issue can be caused by several factors (any exception in the loaded assemblies); like the exception says, look at the LoaderExceptions property to (hopefully) get some idea.
Another problem/solution that I found: when using DirectoryCatalog, if you don't specify the second parameter, searchPattern, MEF will load ALL the DLLs in that folder (including third-party ones) and start looking for export types, which can also cause this issue. A solution is to have a naming convention for all the assemblies that export types and to specify that in the DirectoryCatalog constructor. I use *_Plugin.dll, so that MEF will only load assemblies that contain exported types.
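For instance (a sketch; the folder path is illustrative):
// Only assemblies matching the naming convention are loaded and scanned for exports.
var catalog = new DirectoryCatalog(@"C:\MyApp\Plugins", "*_Plugin.dll");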
In my case, MEF was loading an NHibernate DLL and throwing an assembly version error in the LoaderExceptions (this error can happen with any of the DLLs in the directory); this approach solved the problem.
Here is an example of the above-mentioned methods:
// Requires System.ComponentModel.Composition.Hosting, System.Reflection and System.IO.
var di = new DirectoryInfo(Server.MapPath("../../bin/"));
if (!di.Exists) throw new Exception("Folder does not exist: " + di.FullName);
var dlls = di.GetFileSystemInfos("*.dll");
AggregateCatalog agc = new AggregateCatalog();
foreach (var fi in dlls)
{
    try
    {
        var ac = new AssemblyCatalog(Assembly.LoadFile(fi.FullName));
        var parts = ac.Parts.ToArray(); // throws ReflectionTypeLoadException for a bad assembly
        agc.Catalogs.Add(ac);
    }
    catch (ReflectionTypeLoadException ex)
    {
        // Log and skip the assembly that failed to load (Elmah is the logger used here).
        Elmah.ErrorSignal.FromCurrentContext().Raise(ex);
    }
}
CompositionContainer cc = new CompositionContainer(agc);
_providers = cc.GetExports<IDataExchangeProvider>();