Is NServiceBus (AsA_Server) without DTC possible? - linq-to-sql

I am using NServiceBus for the first time and have a small, simple application where a user submits a form, the form fields are sent to the queue, and the handler collects this data and writes it to the database using LINQ to SQL.
Any changes within Component Services are a complete no-no as far as the DBA is concerned, so I'm now looking for an alternative to DTC (which is not enabled on the DB server), while still using AsA_Server so that messages do not get purged.
I have tried removing AsA_Server after IConfigureThisEndpoint and specifying the configuration myself, but this doesn't seem to work (the console appears and the page loads, but nothing happens; it doesn't even stop at breakpoints). AsA_Client does work, but as I understand it the messages will be purged at startup, which I need to avoid.
Any suggestions?
Thanks,
OMK
EDIT: This has now been resolved by wrapping the call to the database in a suppressed TransactionScope, which allows the database work to be done with no ambient transaction to enlist in:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    // code here
    sc.Complete();
}

When you use AsA_Server, you are specifying that you want durable queues, and you will need to configure transactional queues.
With a transactional send/receive, MSMQ requires the send, transmit, receive, and process steps to be part of one transaction. In reality, however, each of these stages takes place in its own transaction.
For example, the send transaction is complete when the sender hands the message to its local MSMQ subsystem (even if the queue address is remote, the sender still sends to a local queue which acts as a kind of proxy to the remote queue).
The transmit transaction is complete when the MSMQ subsystem on the sender's machine successfully transmits the message to the MSMQ subsystem on the receiver's machine.
Even though this may all happen on one machine, I am guessing that your Handle() method is writing to a database on a different machine.
The problem here is that for the receive operation to complete satisfactorily from a transaction perspective, your call to the database must be successful. Only then will the message be de-queued from your input queue. This prevents any chance that the message is lost during processing failure.
However, in order to enforce that across the network you need to involve DTC to coordinate the distributed transaction to the database.
Bottom line, if you want durable queues in a distributed environment then you will need to use MSDTC.
Hope this helps.
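That said, as noted in the edit above, the asker worked around this by suppressing the ambient transaction around the database call only, keeping the queue itself transactional. A minimal sketch of what that can look like inside a handler, assuming the synchronous handler signature of that NServiceBus version (FormSubmitted, FormDataContext and its members are hypothetical names):
using System.Transactions;
using NServiceBus;

public class FormSubmittedHandler : IHandleMessages<FormSubmitted>
{
    public void Handle(FormSubmitted message)
    {
        // Suppress the ambient MSMQ transaction so the LINQ to SQL call
        // does not try to enlist, and therefore never escalates to DTC.
        using (var sc = new TransactionScope(TransactionScopeOption.Suppress))
        {
            using (var db = new FormDataContext())
            {
                db.FormEntries.InsertOnSubmit(new FormEntry
                {
                    Name = message.Name,
                    Email = message.Email
                });
                db.SubmitChanges();
            }
            sc.Complete();
        }
    }
}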

There is an alternative. In your connection string you can add an option not to enlist in a distributed transaction, which keeps your DB connection out of the DTC.
Of course, if this is set in the config then all database transactions for the application bypass the DTC, rather than just a specific one.
Example:
<add key="DatabaseConnectionString" value="Data Source=SERVERNAME;Initial Catalog=DBNAME;Integrated Security=True;Enlist=False"/>
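For reference, handing that setting to LINQ to SQL could look something like this (a sketch; the key name is taken from the example above, and the plain DataContext stands in for a generated data context):
using System.Configuration;
using System.Data.Linq;

public static class Database
{
    public static DataContext Open()
    {
        // Key name taken from the appSettings example above.
        string connectionString = ConfigurationManager.AppSettings["DatabaseConnectionString"];

        // With Enlist=False in the connection string, this connection will not
        // join an ambient distributed transaction even if one is present.
        return new DataContext(connectionString);
    }
}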

With NServiceBus 4.0 you can now do the following, which finally worked for me:
Configure.Transactions.Advanced(t =>
{
    t.DisableDistributedTransactions();
    t.DoNotWrapHandlersExecutionInATransactionScope();
});

When you use the AsA_* (AsA_Client, AsA_Server) roles, the configuration is applied after Init(), so any settings you make there regarding MsmqTransport and UnicastBus are overridden.
It's possible to override those settings using IWantTheEndpointConfig in an IHandleProfile implementation. You get the configuration after the default roles are applied but before the bus is started.
This way you can change the default profile settings and tailor them to your needs: deactivate transactions, enable impersonation...
Example:
public class DeactivateTransactions : IHandleProfile<Lite>, IWantTheEndpointConfig
{
    private IConfigureThisEndpoint configure;

    public IConfigureThisEndpoint Config
    {
        get { return configure; }
        set
        {
            this.configure = value;

            Configure.Instance.MsmqTransport()
                .PurgeOnStartup(false)
                .IsTransactional(false); // Or other changes
        }
    }

    public void ProfileActivated()
    {
    }
}

Related

akka.net first published message ends up in the dead letter queue, handshake problem

I have an issue with an akka.net message send/Tell that ends up in the dead letter queue.
I developed a cluster-based application using Akka.Cluster.Tools.PublishSubscribe, with two ActorSystems each running in a console application on the same machine.
I start up one actor system with some actors. Then I start up my 2nd application and, immediately after initializing the actor system, I publish the first message, Mediator.Tell(new Publish(Topics.Backend.SomeName, new MyInitialMessage())), to a topic whose receiving actor is hosted in the 1st application.
This message always ends up in the dead letter queue.
Now, if instead of sending the message immediately I put in a delay of e.g. 5 seconds, the message is delivered properly.
This seems to me as a handshake problem.
Question: How do I find out when the 2nd actor system is ready to receive messages?
My current workaround is: I send a MyInitialMessage every second on a scheduler and wait for the first response message from my 2nd application. Then I know my 2nd app is ready and the handshake is done.
But this seems like just a workaround to me. What would be a proper solution to this issue?
chris
Akka.Cluster.Tools.PublishSubscribe works over the cluster. You need to wait for the cluster to initialize before you'll be able to publish any messages. All cluster operations are encapsulated in the Cluster class, which can be obtained from any actor system using Cluster.Get(actorSystem). In order to wait for the cluster to initialize:
You can join the cluster programmatically using await cluster.JoinAsync(address, cancellationToken) - this works both for seed nodes (just make the actor system join itself) and for new nodes. It requires leaving seed-nodes empty in your HOCON configuration.
If you're initializing the cluster from configuration (a HOCON config file), you can register a callback using cluster.RegisterOnMemberUp(callback) to postpone the rest of the processing until the local actor system has successfully joined the cluster.
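For reference, the first two options might look roughly like this inside an async startup method (a sketch; the system name and seed address are placeholders):
var system = ActorSystem.Create("my-system");
var cluster = Akka.Cluster.Cluster.Get(system);

// Option 1: join programmatically and await the join completing
// (leave seed-nodes empty in the HOCON configuration when doing this).
await cluster.JoinAsync(Address.Parse("akka.tcp://my-system@127.0.0.1:4053"));

// Option 2: configuration-driven join; the callback runs once this
// node has reached MemberUp.
cluster.RegisterOnMemberUp(() =>
{
    // safe to start publishing from here
});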
The fastest way (in terms of performance and resource usage) is to subscribe to cluster membership events from within a particular actor. In fact, this is how the other solutions described above are implemented under the hood.
class MyActor : ReceiveActor
{
    readonly Cluster cluster = Akka.Cluster.Cluster.Get(Context.System);

    public MyActor()
    {
        Receive<ClusterEvent.MemberUp>(up =>
        {
            if (up.Member.Address == cluster.SelfAddress)
            {
                Become(Ready);
            }
        });
    }

    protected override void PreStart()
    {
        cluster.Subscribe(Self, new[] { typeof(ClusterEvent.IMemberEvent) });
    }

    protected override void PostStop()
    {
        // remember to unsubscribe once the actor is stopping
        cluster.Unsubscribe(Self);
    }

    void Ready()
    {
        // other receive handlers
    }
}
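Applied to the question, the second application's actor could hold its first publish until it has switched to Ready; a rough sketch reusing the names from the question (requires Akka.Cluster.Tools.PublishSubscribe):
void Ready()
{
    // This node has reached MemberUp, so the mediator publish is no longer
    // racing the cluster join the way the immediate publish was.
    var mediator = DistributedPubSub.Get(Context.System).Mediator;
    mediator.Tell(new Publish(Topics.Backend.SomeName, new MyInitialMessage()));

    // other receive handlers go here
}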

Spring Cloud AWS SQS Deletion Policy

We have a SQS listener, such as:
@MessageMapping("queueName")
void listen(String message) { ... }
This queue has redrive policy configured with an associated dead letter queue.
The problem is that the default Spring Cloud AWS implementation deletes the message as soon as it is polled, internally wires up 3 retries for processing it, and then gives up.
I can see there is a SqsMessageDeletionPolicy enum with ALWAYS and ON_SUCCESS values, among others. I can't find in any documentation how to change the QueueAttributes for that queue in order to change this behaviour.
Does anyone know?
It seems the solution is basically to use the SQS-specific annotation instead of the generic one:
@SqsListener(value = "queueName", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
void listen(String message) { ... }
The accepted answer shows how to configure the deletion policy for a single queue.
If you want a global deletion policy that is used by all @SqsListener methods, it can be set with a property:
cloud.aws.sqs.handler.default-deletion-policy=ON_SUCCESS

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP I've been running into an issue repeatedly. As I try to develop a definition to perform a task, the CEP server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the AuthoringTool to alter the definition bit by bit and then I re-upload it to the server through the authoring tool's export feature.
To restart the Proton engine with the new definition each time I alter it, I use Google's Postman with this single operation:
-PUT (url: http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer)
header: 'Content-Type': 'application/json'; body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By "tailing" tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
-GET (url: http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer)
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
Ssh is still responsive and I can look at tomcat's log this way.
To get over this and continue, I exit ssh connections and restart CEP's instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to developing?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine run, using REST PUT:
PUT http://{host}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action": "ChangeState", "state": "stop"}
Start the engine, using REST PUT:
PUT http://{host}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action": "ChangeState", "state": "start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating "ChangeDefinitions" action, only influences the next run of the CEP, and has no influence on the current run.
This answer your question about how you should update a CEP definition.
Hope it will solve your issue.

InstanceContextMode.Single in WCF wsHttpBinding,webHttpBinding and REST

I have recently started development on a relatively simple WCF REST service which returns JSON formatted results. At first everything worked great, and the service was quickly up and running.
The main function of the service is to return a large chunk of data extracted from a database. This data rarely changes, so I decided to try and setup a caching mechanism to speed things up. To do this I planned to set InstanceContextMode.Single and ConcurrencyMode.Multiple, and then with some thread locks, safely return a static cached result. Every 5 minutes or so, or whenever IIS decides to clear everything, the data would be re-fetched from the database.
My issue is that InstanceContextMode.Single does not behave as expected. My understanding is that a single instance of my WCF service class should be created and maintained. However, the behaviour I'm seeing is that a completely new instance of my class is created per call. This includes re-initialising all static variables.
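For reference, the setup being described would normally be declared like this (a sketch; the service contract, class, member, and helper names are illustrative):
using System;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ReportService : IReportService
{
    private static readonly object CacheLock = new object();
    private static string cachedJson;
    private static DateTime cacheExpiresUtc;

    public string GetReport()
    {
        lock (CacheLock)
        {
            if (cachedJson == null || DateTime.UtcNow > cacheExpiresUtc)
            {
                cachedJson = LoadFromDatabase();          // hypothetical helper
                cacheExpiresUtc = DateTime.UtcNow.AddMinutes(5);
            }
            return cachedJson;
        }
    }

    private static string LoadFromDatabase()
    {
        // placeholder for the expensive database read
        return "{}";
    }
}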
I tried changing the web service from webHttpBinding (used for REST) to wsHttpBinding and using the service as a SOAP config, but this results in exactly the same behaviour.
What am I doing wrong? I have spent way too long trying to figure this out.
Any help would be great!.
Strange, can you try this and tell me what happens then?
ServiceThrottlingBehavior ThrottleBehavior = new ServiceThrottlingBehavior();
ThrottleBehavior.MaxConcurrentSessions = 1;
ThrottleBehavior.MaxConcurrentCalls = 1;
ThrottleBehavior.MaxConcurrentInstances = 1;
ServiceHost Host = ...
Host.Description.Behaviors.Add(ThrottleBehavior);
And how do you know your service instance isn't "Single"? Did you see multiple database connections in the profiler? Is that what suggested to you that your service isn't a single instance? In your service operation implementation, do you do some of the work on a separate thread?

Logging Location

Where should I be logging exceptions? At the data service tier (ExecuteDataSet, etc.), at the data access layer, and/or at the business layer?
At a physical tier boundary.
Also in the top-level exception handler in the client.
I.e. if your business tier is running on a server, log exceptions before propagating them to the client. This is easy if you're exposing your business tier as WCF web services: you can implement an error handler that does the logging before propagating a SOAP fault to the client.
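As a sketch, such a WCF error handler can be as small as this (the logging call is a stand-in for a real logger, and the handler still has to be attached to the service through a behavior):
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        // Log at the service boundary, before the fault leaves the tier.
        Console.Error.WriteLine(error);   // stand-in for a real logger
        return false;                     // false = keep WCF's default abort behaviour
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // Leave the fault as-is, or translate the exception into a FaultException here.
    }
}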
If you are throwing the exception you should log it when it occurs and then bubble it up. Otherwise only the end user should log an exception (you may have lots of tracing on of course in which case it may get logged quite a bit).
The end user may be a UI component or a Service or something...
If you handle an exception in your code somewhere - then that is the end user and you should log it there. In most apps and in most cases it should be logged by the UI when it displays the error message to the user.
I usually allow exceptions to propagate up and log them when they reach the very top level. For example:
static void Main()
{
    try
    {
        // application code
    }
    catch (Exception ex)
    {
        // perform logging with ex
    }
}
But that only makes sense for fatal exceptions. Other exceptions I usually log in the block that handles the recovery from that exception.
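For instance, a recoverable failure gets logged right where it is handled rather than being rethrown just so something higher up can log it (the store, retry, and logger helpers here are hypothetical):
try
{
    SaveToPrimaryStore(order);              // hypothetical call that may time out
}
catch (TimeoutException ex)
{
    // This block recovers from the failure, so this is where it gets logged.
    logger.Warn(ex, "Primary store timed out; queuing the order for retry");
    EnqueueForRetry(order);                 // hypothetical recovery path
}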