EDA based SOA and NServiceBus: Why not just use SSIS packages?

I have been investigating NServiceBus. I liked the idea of the pub-sub model, and that the only real coupling between publisher and subscriber is the semantics of the message. Right now we use SQL replication to sync our data between the databases of different functional areas of our software. I hate this because subscribers couple directly to our private schema, which makes it difficult for us to change our side. I was thinking it would be great to replace this with NServiceBus publications, but the change seems a little drastic. What about just using something like SSIS? Can I accomplish the same decoupling using SSIS instead of NServiceBus?

SSIS is metadata-based and therefore still needs to understand the inner schema of all your data sources and sinks. If the underlying metadata for any source or sink changes, your packages will have to change. You are also connecting via MS technologies, so you are platform-coupled. Since you are moving whole sets of data around, it sounds like you may not be temporally coupled (system A has to wait for something to respond in system B); it's hard to tell without knowing more about the systems. Lastly, SSIS must be aware of the physical locations of all the players in the exchange, so you are spatially coupled as well.
In my opinion, you can't get to the same place as NSB without building a lot of the NSB concepts into the packages yourself. That would mean something like XML messages over SQL Service Broker, which has already been solved in NSB (see the NSB Contrib project on GitHub for the Service Broker transport).
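For comparison, here is a minimal sketch of what the pub/sub side looks like in NServiceBus. The event, handler and property names here are hypothetical and the exact interfaces vary between NServiceBus versions; the point is that both sides depend only on the message type, never on each other's database schemas.

    // Shared message contract -- the only coupling between publisher and subscriber.
    public class CustomerAddressChanged : IEvent
    {
        public Guid CustomerId { get; set; }
        public string NewAddress { get; set; }
    }

    // Publisher side (the functional area that owns the data); bus is an injected IBus.
    bus.Publish(new CustomerAddressChanged { CustomerId = customerId, NewAddress = newAddress });

    // Subscriber side -- keeps its own private schema and updates it however it likes.
    public class CustomerAddressChangedHandler : IHandleMessages<CustomerAddressChanged>
    {
        public void Handle(CustomerAddressChanged message)
        {
            // Update this area's own copy of the data in its own schema.
        }
    }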

Related

Dynamically changing Report's Shared Data Source at Runtime

I'm looking to use SSRS for multi-tenant reporting and I'd like the ability to choose the Shared Data Source for my reports at runtime. What do I mean by this? I'm flexible, but I think the two most likely possibilities are (though I'm also open to others):
The Shared Data Source is dictated by the client's authentication. In my case, the "client" is a .NET application and not the user, so if this is a viable path then I'd like to somehow have the MainDB (that's what I'm calling it) Shared Data Source selected by the Service Account that the client logs in as.
Pass the name of the Shared Data Source as a parameter and let that dictate which one to use. Given that all of my clients are "trusted players", I am comfortable with this approach. While each client will have its own Service Account, that's just for good measure and shouldn't matter here. So instead of just calling the data source MainDB, we could have Client1DB, Client2DB, etc. It's okay if a new data source means a new deployment, but this needs to scale reasonably easily to ~50 different data sources over time.
Why? Because we have multiple/duplicate copies of our production application for multiple customers, but we don't want to duplicate everything, just the web apps and databases. We're fine with some common "back-end" pieces. And for SSRS, because of how expensive licenses are (and how rarely reports are run by our users), we really want a single back-end for all of our customers (I actually have a second one on standby for manual disaster recovery; we don't need to be fancy here, as reports are the least important DR concern we have).
I have seen this question, which points to this post, but I was really hoping there was a better way. Because of all the additional steps/effort/limitations involved, I'd rather use PowerShell to script duplicate deployments of the reports with tweaked hardcoded data sources than standardize on the steps in that post. That solution feels way too hacky to me and doesn't seem to scale well at all.
I've done this a bunch of terrible ways (usually hardcoded in a dynamic script), and then I discovered it's actually quite simple.
Instead of using a Shared Data Source, use an Embedded Connection and build the connection string from report parameters (or any other string-manipulation expression), as shown below.
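A minimal sketch of what such an embedded connection string expression might look like; the ServerName and DatabaseName parameter names are hypothetical, and credentials are configured separately on the data source:

    ="Data Source=" & Parameters!ServerName.Value & ";Initial Catalog=" & Parameters!DatabaseName.Value

The calling application passes those parameters at run time, so each tenant's reports hit the right database without duplicating the report definitions.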

Migration of Database

As a front-end developer I have limited knowledge of databases, but recently we started developing a CRM application.
My question is: how feasible is it to migrate from one database to another? Let's say our application supports MySQL now, but later a customer comes along with IBM's DB2 or SQLite. What do we need to take care of during development to support easy migration?
Would the cloud help solve my problem?
Just keep your data model separate from the actual database calls and you should be good. Use a database abstraction layer in your model to make calls to the database; then you only have to change that bottom layer for each specific database.
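As a rough illustration, sketched in C# with hypothetical names, the application depends only on an interface and each database gets its own implementation behind it:

    using System;

    // The rest of the application only ever sees this interface.
    public interface ICustomerStore
    {
        Customer FindById(int id);
        void Save(Customer customer);
    }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // One implementation per supported database, chosen at composition time.
    public class MySqlCustomerStore : ICustomerStore
    {
        public Customer FindById(int id)
        {
            // MySQL-specific query goes here.
            throw new NotImplementedException();
        }

        public void Save(Customer customer)
        {
            // MySQL-specific insert/update goes here.
            throw new NotImplementedException();
        }
    }

Supporting DB2 or SQLite later then means writing another ICustomerStore implementation rather than touching the rest of the application.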
Some best practices:
Avoid DBMS-specific features, data types and SQL/DDL constructs; stick to the SQL-92 standard. Test against e.g. SQLite, which stays fairly close to the standard.
Use an entity-relationship modeling tool that supports exporting DDL files for all targeted DBMSs, or to standard SQL, or write and maintain your DDL scripts by hand. Vendor-specific tools usually don't do this.
Use the existing SQL abstraction layer that comes with your language/toolkit/environment, or implement one with an eye on portability (though that means reinventing the wheel one more time); see the sketch after this list.
Keep the logic in your application; the DB is for data only. Avoid triggers, stored procedures etc.
Generally apply the KISS principle to your data storage.
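For example, assuming a .NET client, ADO.NET's provider factories already give you a database-agnostic layer; the provider name and connection string below are hypothetical and would live in configuration:

    using System.Data.Common;

    static class PortableDataAccess
    {
        // The provider invariant name is the only vendor-specific piece.
        public static object CountCustomers(string providerName, string connectionString)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
            using (DbConnection connection = factory.CreateConnection())
            {
                connection.ConnectionString = connectionString;
                connection.Open();
                using (DbCommand command = connection.CreateCommand())
                {
                    command.CommandText = "SELECT COUNT(*) FROM customers"; // standard SQL only
                    return command.ExecuteScalar();
                }
            }
        }
    }

Calling this with "MySql.Data.MySqlClient" or "System.Data.SQLite" (whichever providers are installed) exercises the same code path against different databases.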
You may get more help on specific questions about general/abstract issues (not the implementation details, which belong here) over at Programmers.

Code first or Model first (Entity Framework/RIA Services)

We are developing LOB apps for many potential customers that are small or mid-sized, not large.
We will have to install the database for each new customer.
What do you think is the best approach for us: model first (using an edmx and the wizard to develop the model and metadata) or code first?
We like the simplicity of Entity Framework/RIA Services, and we think that optimistic concurrency is enough for our apps, but as the database must be installed from scratch in SQL Server (we won't use another database), we are not sure which approach is best for us.
As far as EF and the choice between edmx and code first go, I believe it's mostly a matter of personal preference. The edmx model seems easier than code first, but the designer is still a bit clunky even in VS2012, especially if you have 50+ entities. I've abandoned edmx since EF code first became usable.
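For reference, a minimal code-first sketch with hypothetical entity and context names; by default EF will create the database on first use if it doesn't exist, which fits having to install a fresh SQL Server database for each customer:

    using System.ComponentModel.DataAnnotations;
    using System.Data.Entity;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

        [Timestamp]   // row-version column used for optimistic concurrency checks
        public byte[] RowVersion { get; set; }
    }

    public class CrmContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }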
About WCF RIA Services: I use it extensively in my LOB applications, even in a large one, and most of the time it saves you a lot of glue code compared to plain WCF. I'm sure you know about the well-publicized features such as
server-side filtering, paging and even grouping
client code generation that eases sharing code between Silverlight and the full CLR
and many others, but maybe you're more interested in its limitations:
you can't query from the client with a nested expression (e.g. Any), though you can always add a parameter to your query and apply the filter server side, as in the sketch after this list; it's not quite the same
you can't directly expose many-to-many relations with Silverlight (take a look at M2M4RIA, however)
you must add the foreign key field to your model (to me this feels like the DB leaking into the model)
WCF RIA Services does most of its work on the main thread (e.g. loading the DomainContext after a Load/SubmitChanges)
if your application gets big and you're thinking of splitting your domain services/domain contexts, be aware that you'll run into serious pain trying to submit the changes of the two DomainContexts in an atomic transaction
proxy generation happens every time you build your client and takes (I think) longer than it should
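The workaround mentioned in the first limitation, sketched with hypothetical service, entity and property names: expose a parameterized query on the domain service and let the nested expression run on the server.

    // Server side: a parameterized query method on the domain service.
    [EnableClientAccess]
    public class CustomerDomainService : LinqToEntitiesDomainService<CrmEntities>
    {
        public IQueryable<Customer> GetCustomersWithOrdersIn(string country)
        {
            // The nested Any() is fine here because it executes server side.
            return ObjectContext.Customers.Where(c => c.Orders.Any(o => o.Country == country));
        }
    }

    // Client side: the generated DomainContext exposes a matching ...Query method.
    var query = context.GetCustomersWithOrdersInQuery("Italy");
    context.Load(query, loadOperation => { /* bind loadOperation.Entities */ }, null);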
Despite that, I believe it's a good technology for RAD, and things may eventually get better: Colin Blair has written on his blog that he is pushing Microsoft to release WCF RIA Services as open source, which could really improve things given that Microsoft has killed Silverlight/WCF RIA development.

What are the limitations of CLR Assemblies for SQL Server 2008?

I am planning some reports with quite a heavy load of calculations, which I thought would be better to move into a custom .NET assembly loaded into Microsoft SQL Server. The companies who will use this will only use SQL Server Enterprise editions, so there's no problem with feature support.
The question is:
Is it actually a good idea?
I want to export this functionality because I want to be able to use features like:
Multi-threading (the number of threads will be the minimum of the number of entities being processed and a maximum set in a configuration file; I don't know of any other upper limit I should specify)
Unmanaged code (C++ libraries for stream processing)
Sometimes even COM Interop or shell commands, though this is less likely.
Will they work fine? Are there any limitations I should know about, in my case?
Everything you list is possible, with some limitations. The host protection attributes allow you to create threads but prohibit access to Thread.Join, etc. Read more about it on MSDN.
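For context, a SQL CLR entry point is just a static method decorated for SQL Server; this is a minimal, hypothetical sketch, and anything beyond safe managed code (spawning threads, calling unmanaged libraries, COM interop) also requires deploying the assembly with a more permissive permission set:

    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public static class ReportCalculations
    {
        [SqlProcedure]
        public static void RunHeavyCalculation(SqlInt32 entityId)
        {
            // ...the heavy computation for the given entity would go here...

            // Send progress or results back to the calling session.
            SqlContext.Pipe.Send("Finished entity " + entityId.Value);
        }
    }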
Now the question is: should you do this? I don't think the approach is very sound, as it will put a big processing load on your database server, which will be very hard to scale if needed. I think a better approach is to add a custom assembly to SQL Server Reporting Services and let the processing happen over there. If you run into scalability issues, an additional Reporting Services machine can be added.
There are also no restrictions on methods & classes in assemblies loaded in Reporting Services.
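A rough sketch of that alternative, with hypothetical names: compile the calculation into a plain class library, reference it from the report, and call it from an expression.

    namespace ReportMath
    {
        // Deployed alongside Reporting Services and referenced from the report definition.
        public static class Calculations
        {
            public static double WeightedScore(double value, double weight)
            {
                return value * weight;
            }
        }
    }

In the report you add the assembly under Report Properties > References and call it from an expression such as =ReportMath.Calculations.WeightedScore(Fields!Value.Value, Fields!Weight.Value).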

Database and logic layer for ASP.NET MVC application

I'm going to start a new project which will be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for various reasons, but I'm worried about a few things.
I'm totally new to LINQ, but it seems easy enough to use once you are familiar with it.
First, accessing data should be easy. So I thought I should use MySQL with LINQ, but somewhere I read that this is not directly supported, although the MySQL .NET connector adds support for Entity Framework. I don't know the pros and cons of that. DbLinq is something else I've heard of. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework?
I'm also concerned about performance. Someone told me that if we use Entity Framework it fetches a lot of data and then filters it. Is that right?
So my questions basically are:
Is MySQL with LINQ possible? If yes, where can I get more details on it?
What are the pros and cons of using Entity Framework or DbLinq with MySQL?
Will it be easy to access data using Entity Framework or DbLinq with MySQL?
Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)?
Does it fetch a huge amount of data from the database and then apply the filter to it?
If that sounds like too many questions, then just telling me what you, as someone experienced in this area, would do in this situation (and why) would answer my question.
As I am a fan of ALT.NET, I would recommend you use NHibernate for your project instead of Entity Framework; you can google its advantages over EF, and I am convinced you'll choose it.
Based on the points you've mentioned, I would seriously consider going with MS SQL instead of MySQL initially and implementing LINQ-to-SQL instead of Entity Framework, and here's why:
The fact that you are anticipating a lot of traffic tells me that you need to think about where you plan to end up rather than where you start. I have considerably more experience with MS SQL than with MySQL, but if you're talking about starting with the community version of MySQL and upgrading later, you're going to incur a significant expense anyway with the Enterprise version.
I have heard there is a version of LINQ that supports MySQL, but, unless things have changed recently, it is still in beta. I am completing an 18-month web-based project that used ASP.NET MVC 1.0, LINQ-to-SQL, JavaScript, jQuery, AJAX, and MS SQL. I implemented the repository pattern, view models, interfaces, unit tests and integration tests using WatiN. The combination of technologies worked very well for me, and I plan to go with the same combination for a personal project I'm developing.
When you get MS SQL with a hosting plan, you typically have the ability to create multiple databases from that single instance. It looks like you get more storage because they give you multiple MySQL databases, but that's only because the architecture supports the creation of just one database per instance.
I won't use Entity Framework for my ASP.NET MVC projects, because I wasn't crazy about ADO.NET in the first place. I don't want to have to open a connection, create a command object, populate a parameter collection, issue the execute method, and then iterate through a one-way reader object to get my data. Once you see how LINQ-to-SQL simplifies the process, you won't want to go back either. In the project I mentioned earlier, I have over 60 tables in the database with about 200 foreign key relationships. Because I used LINQ-to-SQL with the repository pattern in my data layer, I was able to build the application without a single stored procedure. LINQ-to-SQL automatically protects against SQL injection attacks and supports optimistic and pessimistic concurrency checking.
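On the earlier worry about fetching lots of data and filtering it in memory: with LINQ-to-SQL (and EF) the filter is translated into the SQL sent to the server, as long as you keep composing on IQueryable. A small sketch, with a made-up DataContext and table:

    using System.Linq;

    using (var db = new NorthwindDataContext())
    {
        // Composed on IQueryable: the WHERE clause runs in the database,
        // so only matching rows come back over the wire.
        var italian = db.Customers.Where(c => c.Country == "Italy").ToList();

        // By contrast, forcing enumeration first pulls every row and filters in memory.
        var wasteful = db.Customers.AsEnumerable().Where(c => c.Country == "Italy").ToList();
    }

The repository pattern fits on top of this naturally: the repository returns IQueryable (or accepts filter criteria), so the logic layer's filters still execute in the database.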
I don't know what your project is, but you don't want to get into a situation where you're going to have trouble scaling the application later. Code for the end result, not for the starting point, and you'll save yourself a lot of headaches later.