I have been trying out Service Factory and have run into some problems with long filenames that exceed the path-length limit in Vista/XP. When generating code from the models, Service Factory prefixes everything with the specified namespace, which makes the folder structure huge. For example, starting in
c:\work\sftest\MyWebService
I create each of the models with moderately long names for the data contracts and service interface, and I set the namespace to MyCompany.SFTest.MyWebservice.
After generating code I end up with
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Business Logic
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Resource Access
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.DataContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.FaultContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.MessageContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Tests
Under each of the folders is a project file with the same prefix
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation\MyCompany.SFTest.MyWebService.ServiceImplementation.proj
This blows up the recipe, as Windows can't accept paths exceeding a certain length.
Is it necessary to explicitly include the namespace in each of the folder names?
Obviously, at some point I might want to branch a service to another location, but for the same reason as above I might be unable to.
Is there a workaround for this?
I don't know Service Factory, so I am not sure whether this will help. Anyway, maybe the MSDN article Naming a File or Directory can help.
The Windows API has a maximum length for paths (MAX_PATH = 260). If you want to use longer path names you will have to use the Unicode versions of the API functions and prefix your paths with "\\?\", i.e. use
"\\?\C:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation\MyCompany.SFTest.MyWebService.ServiceImplementation.proj"
instead of
"C:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation\MyCompany.SFTest.MyWebService.ServiceImplementation.proj"
Does Service Factory allow that notation?
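For what it's worth, the managed System.IO classes generally reject the "\\?\" prefix, so from .NET you typically have to call the Unicode Win32 API directly to touch such paths. Below is a minimal, purely illustrative sketch (the file name comes from the command line, the fallback path is hypothetical; it only probes whether an over-long path can be opened):

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class LongPathProbe
{
    // Unicode Win32 entry point; the "\\?\" prefix lifts the MAX_PATH (260) limit.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFileW(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_READ = 0x00000001;
    const uint OPEN_EXISTING = 3;

    static void Main(string[] args)
    {
        // Pass an absolute path as the first argument; the fallback is hypothetical.
        string input = args.Length > 0 ? args[0] : @"C:\temp\example.txt";
        string path = @"\\?\" + input;   // extended-length form of the same path

        using (SafeFileHandle handle = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ,
                                                   IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            Console.WriteLine(handle.IsInvalid
                ? "Open failed, Win32 error " + Marshal.GetLastWin32Error()
                : "Opened via extended-length path");
        }
    }
}

Whether the Service Factory recipes themselves can be pointed at such paths is a separate question; the sketch only shows the notation in use.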
We had exactly this problem and got around it by making our service factory a very thin wrapper around a normal library (that has been marked up with the WCF stuff). This gave us a normally deep project (the factory) and then a stunningly deep wrapper factory (without all that extracted interface and logic and whatnot).
We still have some problems - but mainly in the client side - our servers are for the most part trouble free.
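For what it's worth, the shape is roughly the sketch below (all names are made up for illustration, not our actual code): the contract and the real logic sit in an ordinary class library marked up with the WCF attributes, and the deeply nested generated project only holds a thin forwarder.

using System.ServiceModel;

// Ordinary class library: contract plus real implementation (illustrative names).
[ServiceContract(Namespace = "http://mycompany.example/sftest")]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

public class OrderService : IOrderService
{
    public string GetStatus(int orderId)
    {
        // The actual business/resource-access logic lives here, in the library.
        return orderId > 0 ? "Shipped" : "Unknown";
    }
}

// Thin wrapper in the generated service project: it only delegates to the library,
// so the deep folder tree contains almost no code.
public class OrderServiceWrapper : IOrderService
{
    private readonly IOrderService _inner = new OrderService();

    public string GetStatus(int orderId)
    {
        return _inner.GetStatus(orderId);
    }
}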
I want to track pipeline changes in source control, and I'm looking for a way to programmatically retrieve the JSON representation from ADF.
The .NET routines return the objects, but sadly ToString() does not return JSON (wouldn't THAT be convenient?), so right now I'm looking at copying the JSON down by hand (shoot me now!) or possibly trying to recreate the JSON from the .NET objects (shoot me later!).
Please tell me I'm being dense and there is an obvious way to do this.
You can serialize the object using Newtonsoft Json.
See (https://azure.microsoft.com/en-us/documentation/articles/data-factory-create-data-factories-programmatically/) for how to connect via the ADF SDK
// Authenticate and create the Data Factory management client
var aadTokenCredentials = new TokenCloudCredentials(ConfigurationManager.AppSettings["SubscriptionId"], GetAuthorizationHeader());
var resourceManagerUri = new Uri(ConfigurationManager.AppSettings["ResourceManagerEndpoint"]);
var manager = new DataFactoryManagementClient(aadTokenCredentials, resourceManagerUri);

// Fetch the pipeline and serialize its definition to indented JSON
var pipeline = manager.Pipelines.Get(resourceGroupName, dataFactoryName, pipelineName);
var pipelineAsJson = JsonConvert.SerializeObject(pipeline.Pipeline, Formatting.Indented);
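Since the goal is to track the definitions in source control, you can then just write the serialized JSON to a file inside your local repository; a small follow-on sketch (the repository path below is a placeholder, and this needs using System.IO):

// Hypothetical local repository path; commit the file afterwards as usual.
var outputPath = Path.Combine(@"C:\repos\adf-definitions\pipelines", pipelineName + ".json");
Directory.CreateDirectory(Path.GetDirectoryName(outputPath));
File.WriteAllText(outputPath, pipelineAsJson);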
I was expecting something more complex, but looking at the SDK source on GitHub it is not doing anything special.
Our team has a deployment tool that takes git changes and deploys them appropriately. Everything is done asynchronously and is controlled and versioned through git.
In a nutshell, our deployment has the following flow:
Any completed git merge request triggers a VSO build. This simply builds the whole solution via MSBuild.
Every successful build gets a Git tag for tracking the Last Known Good.
Next (if the build succeeded), our .NET ADFPublisher takes only the changed data factory files and asynchronously publishes them based on their git operation (modified, add, delete, etc.); a rough sketch of the change-detection step follows this list.
For some failure cases our ADFPublisher will perform a retry.
This whole process (build + publish) takes ~65 seconds and has already saved us from several bugs. It also allows us to move definitions from one environment to another very easily.
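For illustration only, the change-detection step can be as simple as diffing against the Last Known Good tag and parsing the status letters. A rough sketch (the "LKG" tag name, the datafactory/ folder and how the results are consumed are assumptions, not our actual tool; it needs using System.Diagnostics):

var psi = new ProcessStartInfo("git", "diff --name-status LKG HEAD -- datafactory/")
{
    RedirectStandardOutput = true,
    UseShellExecute = false
};

using (var git = Process.Start(psi))
{
    string line;
    while ((line = git.StandardOutput.ReadLine()) != null)
    {
        var parts = line.Split('\t');   // e.g. "M<tab>datafactory/pipelines/CopySales.json"
        var operation = parts[0];       // A = added, M = modified, D = deleted
        var file = parts[1];
        // Hand (operation, file) to the publisher, which creates/updates or deletes
        // the corresponding Data Factory object through the SDK.
    }
    git.WaitForExit();
}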
Let me know if you think this is something you would be interested in and I will set up a way to share it with you.
It seems like a very common issue with SSIS packages is releasing a package to production that ends up running with the wrong connection string parameters. This could happen through any one of many mistakes or omissions. As a result, I find it helpful to dump all ConnectionString values to a log file, which helps me understand which connection strings were actually applied to the package at run time.
Now I am considering having my packages check whether every connection object in the package had its connection string overridden by an entry in the config file and, if not, raise a warning or even fail the package. This is to allow easier configuration by extracting all environment variables to a config file. If a connection string is never overridden, there is a risk that a package run in production may use development settings, or that a package run in a non-production setting during testing may accidentally be run against production.
I'd like to borrow from anyone who may have tried to do this. I'd also be interested in suggestions on how to accomplish this with minimal work.
Thx
Technical question 1 - what are my connection strings
This is an easy question to answer. In your package, add a Script Task and enumerate through the Connections collection. I fire the OnInformation event, and if I had this scheduled, I'd be sure to use the /Rep IEW option in my dtexec call to ensure I record Information, Errors and Warnings.
namespace TurnDownForWhat
{
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;

    /// <summary>
    /// ScriptMain is the entry point class of the script. Do not change the name, attributes,
    /// or parent of this class.
    /// </summary>
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            bool fireAgain = false;
            // Log the name and connection string of every connection manager in the package
            foreach (var item in Dts.Connections)
            {
                Dts.Events.FireInformation(0, "SCR Enumerate Connections", string.Format("{0}->{1}", item.Name, item.ConnectionString), string.Empty, 0, ref fireAgain);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Running that on my package, I can see I had two Connection managers, CM_FF and CM_OLE along with their connection strings.
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_FF->C:\ssisdata\dba_72929.csv
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_OLE->Data Source=localhost\dev2012;Initial Catalog=tempdb;Provider=SQLNCLI11;Integrated Security=SSPI;
Add that to ... your OnPreExecute event for all the packages and no one sees it, but every package reports back.
Technical question 2 - Missed configurations
I'm not aware of anything that will allow a package to know it's under configuration. I'm sure there's an event, as you will see in your Information/Warning messages that a package attempted to apply a configuration, didn't find one, and is going to retain its design-time value. Information: I'm configuring X via Y. Warning: tried to configure X but didn't find Y. But how to have a package inspect itself to find that out, I have no idea.
That said, I've seen reference to a property that fails a package on a missed configuration. I'm not seeing it now, but I'm certain it exists in some crevice. You can supply the /W (WarnAsError) parameter to dtexec, which treats warnings as errors, and really, warnings are just errors that haven't grown up yet.
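For example, an invocation that records Information, Errors and Warnings and promotes warnings to failures could look something like this (the package path is just a placeholder):

dtexec /File "C:\jobs\MyPackage.dtsx" /Reporting IEW /WarnAsError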
Unspoken issue 1 - Permissions
I had a friend who botched an XML config file as part of their production deploy. Their production server started consuming data from a dev server. Bad things happened. It sounds like you have had a similar situation. The resolution is easy, insulate your environments. Are you using the same service account for your production class SQL Server boxes and dev/test/uat/qa/load/etc? STOP. Make a new one. Don't allow prod to talk to any boxes that aren't in their tier of service. Someone bones a package and doesn't set a configuration? First of all, you'll catch it when it goes from dev to something-before-production because that tier wouldn't be able to talk to anything else that's not that level. But if you're in the ultra cheap shop and you've only got dev and prod, so be it. Non-configured package goes to prod. Prod SQL Agent fires off the package. Package uses default connection manager and fails validation because it can't talk to the dev sales database.
Unspoken issue 2 - template
What's your process when you have a new package to build? Does your team really start from scratch? There are so many ways to solve this problem, but the core concept is to define your best practices for Configuration, Logging, Package Protection Level, Transaction levels, etc. in some easily consumable form. Maybe that's 3 starter packages: one for raw acquisition, one that stages and conforms the data, and a last one that moves data from the conformed layer into the final destination. Teammates then simply have to pick one to start from and fill in the spots that need it. If they choose to do their own thing, that's the stick you beat them with when their package fails to run in production because they didn't follow the standard path.
There are other approaches here. If you're a strong .NET crew, you can gen your template packages that way. At this point, I create my templates with Biml and use that to drive basic package creation.
If I am understanding you correctly, the solution below should work.
My suggestion is to turn on the DontSaveSensitive option for the ProtectionLevel property at the top level of the package.
This will require you to use package configurations for every connection; otherwise the package will not have the credentials to make a connection.
I'm looking for a way to realize the following use-case:
1. I have many modules and each one of them has a wire-spec that exposes its components.
2. To assemble an application, I select the modules and use their wire-specs.
3. The wire-spec of the application is the merge of the wire-specs of the used modules: (3.1) I start by 'requiring' the wire-spec of each module as an object. (3.2) Then, I merge the objects. (3.3) Finally, I return the result as the object defining the wire-spec of the application.
Here is a sample of an application context-spec:
define(["jquery", "module1-wire-spec", "module2-wire-spec"], function(jquery, module1WireSpec, module2WireSpec) {
return jquery.extend(true, module1WireSpec, module2WireSpec);
});
I have read the wire documentation several times hoping to find a 'native' way to do the above, but so far I have failed to find one.
A 'native' way would be a factory like the 'wire' factory but instead of creating a child-context for each module, I'm looking to see the components of each module as direct components of the application context.
Spring, for instance, allows importing a context definition into another one and the result is as if the content of the imported context has been inlined with the importing context.
A new feature has been added to cujojs/wire to allow import of contexts.
As of version 0.10.8, the keyword imports accepts:
a string for a single context import,
or an array for a list of context imports.
Check here for more details.
I'm creating a flash campaign which will be loaded into a client's framework, which I have no control over. The framework will already have loaded a few things such as locale, fonts and copy, and will pass these things to my swf upon initialization.
Since the size of my swf (let's call it the shell) is restricted it will in turn display a campaign-specific preloader and then load another swf (let's call this the campaign) with the rest of the site.
The shell and the campaign will both be PureMVC modules. The shell will create a few proxies and populate these with data passed from the framework (locale constants, fonts etc), before loading in the campaign.
When the campaign is loaded it too will need locale and fonts etc. so my question is, what is the best way to pass this data along to the campaign module from the shell module?
I could create the same proxies in the campaign module and load the data again, which will be cached, but this obviously feels like the wrong way to go.
I've investigated the use of the pipes utility, but this seems like a bit of overkill in my case, since the communication will be one-way and will happen just once, during the initialization of the campaign.
Would it be "ok" from a design pattern point of view to pass the proxies to an init method of the campaign module and then register these proxies in the campaign module startup command? This seems wrong since these proxies have references to my shell application facade through notification names. Would it be ok if I move the notification names to some "NotificationConstants" class which both modules can use?
Or I could create similar proxies in the campaign module, but this time populate them with the data objects from my old proxies, passed to the previously mentioned init method? Spontaneously this feels like the best way to do it, since the data objects don't have any references to my shell module but the "old" proxies do.
The solution I usually use is to create an interface:
interface Campaign {
    function set campaignDetails(value:CampaignDetails):void;
    //...
}
The campaign module should implement this interface. In the implementation I recommend using a different proxy in the module, so that you avoid duplicated notifications and references.
When the shell is ready with the loading of the module it just has to:
if (module is Campaign)
{
    (module as Campaign).campaignDetails = ...;
}
I'm sure I'm telling you nothing new. You just need to make sure to keep the acquaintance between the shell and the module only on an interface level. Then you just pass the data and leave the module MVC core to deal with it independently from the shell.
Scenario: I am using Managed Extensibility Framework to load plugins (exports) at runtime based on an interface contract defined in a separate dll. In my Visual Studio solution, I have 3 different projects: The host application, a class library (defining the interface - "IPlugin") and another class library implementing the interface (the export - "MyPlugin.dll").
The host looks for exports in its own root directory, so during testing, I build the whole solution and copy Plugin.dll from the Plugin class library bin/release folder to the host's debug directory so that the host's DirectoryCatalog will find it and be able to add it to the CompositionContainer. Plugin.dll is not automatically copied after each rebuild, so I do that manually each time I've made changes to the contract/implementation.
However, a couple of times I've run the host application without having copied (an updated) Plugin.dll first, and it has thrown an exception during composition:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions for more information
This is of course due to the fact that the Plugin.dll it's trying to import from implements a different version of IPlugin, where the property/method signatures don't match. Although it's easy to avoid this in a controlled and monitored environment, by simply avoiding (duh) obsolete IPlugin implementations in the plugin folder, I cannot rely on such assumptions in the production environment, where legacy plugins could be encountered.
The problem is that this exception effectively botches the whole Compose action and no exports are imported. I would have preferred that the mismatching IPlugin implementations are simply ignored, so that other exports in the catalog(s), implementing the correct version of IPlugin, are still imported.
Is there a way to accomplish this? I'm thinking either of several potential options:
There is a flag to set on the CompositionContainer ("ignore failing imports") prior to or when calling Compose
There is a similar flag to specify on the <ImportMany()> attribute
There is a way to "hook" on to the iteration process underlying Compose(), and be able to deal with each (failed) import individually
Using strong name signing to somehow only look for imports implementing the current version of IPlugin
Ideas?
I have also run into a similar problem.
If you are sure that you want to ignore such "bad" assemblies, then the solution is to call AssemblyCatalog.Parts.ToArray() right after creating each assembly catalog. This will trigger the ReflectionTypeLoadException which you mention. You then have a chance to catch the exception and ignore the bad assembly.
When you have created AssemblyCatalog objects for all the "good" assemblies, you can aggregate them in an AggregateCatalog and pass that to the CompositionContainer constructor.
This issue can be caused by several factors (any exception in the loaded assemblies). As the exception message says, look at the LoaderExceptions property to (hopefully) get some idea of what went wrong.
Another problem/solution that I found: when using DirectoryCatalog, if you don't specify the second parameter, searchPattern, MEF will load ALL the DLLs in that folder (including third-party ones) and start looking for export types, which can also cause this issue. A solution is to have a naming convention for all the assemblies that export types and to specify that in the DirectoryCatalog constructor. I use *_Plugin.dll, so that MEF will only load assemblies that contain exported types.
In my case MEF was loading an NHibernate DLL and throwing an assembly version error in the LoaderExceptions (this error can happen with any of the DLLs in the directory); this approach solved the problem.
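For example (the folder path and naming convention below are just placeholders):

// Only assemblies matching the naming convention are scanned for exports;
// unrelated third-party DLLs in the folder are never loaded by MEF.
var catalog = new DirectoryCatalog(@"C:\MyHost\Plugins", "*_Plugin.dll");
var container = new CompositionContainer(catalog);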
Here is an example of the above-mentioned methods:
var di = new DirectoryInfo(Server.MapPath("../../bin/"));
if (!di.Exists) throw new Exception("Folder does not exist: " + di.FullName);

var dlls = di.GetFileSystemInfos("*.dll");
AggregateCatalog agc = new AggregateCatalog();

foreach (var fi in dlls)
{
    try
    {
        var ac = new AssemblyCatalog(Assembly.LoadFile(fi.FullName));
        var parts = ac.Parts.ToArray(); // throws ReflectionTypeLoadException if the assembly is bad
        agc.Catalogs.Add(ac);
    }
    catch (ReflectionTypeLoadException ex)
    {
        // Log and skip the offending assembly; the remaining catalogs are still composed
        Elmah.ErrorSignal.FromCurrentContext().Raise(ex);
    }
}

CompositionContainer cc = new CompositionContainer(agc);
_providers = cc.GetExports<IDataExchangeProvider>();