I'm interested in articles that have some concrete information about stateless and stateful design in programming. I'm interested because I want to learn more about it, but I really can't find any good articles on it. I've read dozens of articles on the web which vaguely discuss the subject, or which talk about web servers and sessions - those are also about stateful vs. stateless, but I'm interested in the stateless vs. stateful design of classes and their attributes in code. Example: I've heard that BL (business logic) classes are stateless by design, while entity classes (or at least that's what I call them - like Person(id, name, ..)) are stateful, etc.
I think it's important to know because I believe if I can understand it, I can write better code (e.g. with granularity in mind).
Anyway, really short, here's what I know about stateful vs. stateless:
Stateful (like WinForms): stores the data for further use, but limits the scalability of an application, because it is bound by CPU and memory limits.
Stateless (like ASP.NET - although ASP.NET tries to be stateful with ViewState): after actions are completed, the data gets transferred and the instance gets handed back to the thread pool (amorphous).
As you can see, it's pretty vague and limited information (and quite focused on server interaction), so I'd be really grateful if you could provide me with some more tasty bits of information :)
Stateless means there is no memory of the past. Every transaction is performed as if it were being done for the very first time.
Stateful means that there is memory of the past. Previous transactions are remembered and may affect the current transaction.
Stateless:
// The state is derived entirely from what is passed into the function
int addOne(int number)
{
    return number + 1;
}
Stateful:
// The state is maintained by the object between calls
private int _number = 0; // initially zero

int addOne()
{
    _number++;
    return _number;
}
Source: https://softwareengineering.stackexchange.com/questions/101337/whats-the-difference-between-stateful-and-stateless
A stateful app is one that stores information about what has happened or changed since it started running. Any public info about what "mode" it is in, or how many records it has processed, or whatever, makes it stateful.
Stateless apps don't expose any of that information. They give the same response to the same request, function or method call, every time. HTTP is stateless in its raw form - if you do a GET to a particular URL, you get (theoretically) the same response every time. The exception of course is when we start adding statefulness on top, e.g. with ASP.NET web apps :) But if you think of a static website with only HTML files and images, you'll know what I mean.
I suggest that you start from a question on Stack Overflow that discusses the advantages of stateless programming. This is more in the context of functional programming, but what you will read also applies to other programming paradigms.
Stateless programming is related to the mathematical notion of a function, which, when called with the same arguments, always returns the same result. This is a key concept of the functional programming paradigm, and I expect that you will be able to find many relevant articles in that area.
Another area that you could research in order to gain more understanding is RESTful web services. These are by design "stateless", in contrast to other web technologies that try to somehow keep state. (In fact, what you say about ASP.NET being stateless isn't correct - ASP.NET tries hard to keep state using ViewState and is definitely to be characterized as stateful. ASP.NET MVC, on the other hand, is a stateless technology.) There are many places that discuss the "statelessness" of RESTful web services (like this blog post), but you could again start from an SO question.
The adjective stateful or stateless refers only to the state of the conversation; it is not connected with the notion of a function that provides the same output for the same input. If it were, any dynamic web application (with a database behind it) would be a stateful service, which is obviously false.
With this in mind, if I entrust the task of keeping conversational state to the underlying technology (such as a cookie or HTTP session), I'm implementing a stateful service; but if all the necessary information (the context) is passed as parameters, I'm implementing a stateless service.
It should be noted that even if the passed parameter is an "identifier" of the conversational state (e.g. a ticket or a sessionId), we are still operating under a stateless service, because the conversation is stateless (the ticket is continually passed between client and server), and it is the two endpoints that are, so to speak, "stateful".
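To make the contrast concrete, here is a minimal sketch (Python; the cart example and all names are hypothetical):

# Stateful service: conversational state lives on the server, keyed by
# a session that the underlying technology (cookie, HTTP session) maintains.
_sessions = {}  # session_id -> cart contents, held between requests

def add_to_cart_stateful(session_id, item):
    _sessions.setdefault(session_id, []).append(item)
    return _sessions[session_id]

# Stateless service: every call carries the full context as parameters,
# and the updated context travels back to the caller.
def add_to_cart_stateless(cart, item):
    return cart + [item]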
Money transferred online from one account to another account is stateful, because the receiving account has information about the sender.
Handing over cash from one person to another is stateless, because after the cash is received, the identity of the giver is not attached to the cash.
Just to add to others' contributions, another way is to look at it from a web server and concurrency point of view.
HTTP is stateless in nature for a reason. In the case of a web server, being stateful means that it would have to remember a user's "state" from their last connection, and/or keep an open connection to a requester. That would be very expensive and "stressful" in an application with thousands of concurrent connections.
Being stateless in this case makes obviously efficient use of resources, i.e. a connection is supported within a single request/response exchange. There is no overhead of keeping connections open or remembering anything from the last request.
We make web apps stateful by overriding HTTP's stateless behaviour with session objects. When we use session objects, state is carried along, but we still use only HTTP.
I had the same doubt about stateful vs. stateless class design and did some research. I have just completed it, and my findings are posted on my blog:
Entity classes need to be stateful.
Helper/worker classes should not be stateful.
I am trying to figure out how I will manage sessions using json web tokens in a microservice architecture.
Looking at the design in this article what I currently have in mind is that the client will send a request that first goes through a firewall. This request will contain an opaque/reference token which the firewall sends to an authorization server. The authorization server responds with a value token containing all the session information for the user. The firewall then passes the request along with the value token to the API, and the value token will then get propagated to all the different microservices required to fulfill the request.
I have 2 questions:
How should updates to the session information in the value token be handled? To elaborate, when the session info in a token gets updated, it needs to be updated in the authorization server. Should each service that changes the token talk to the authorization server?
Should all the microservices use this single token to store their session info? Or would it be better for each service to have a personalized token? If it's the latter, please explain how to adjust the design.
A very significant "fly in the ointment" of this kind of design, one which requires careful advance thought on your part, is: precisely what is meant by "session" information? In this architecture, everyone is racing with everyone else. If the session information is updated, you do not and basically cannot know which of the agents knows about that change and which does not. To further complicate things, new requests arrive asynchronously and will overlap other requests in unpredictable ways.
Therefore, the Authorization Server must be exactly that, and no more. It validates (authenticates) the opaque token and supplies a trustworthy description of what the request is authorized to do. But the information it harbors basically cannot change, and specifically, it cannot hold "session state" data in the web-server sense of that term.
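For illustration only, a minimal validation sketch using the PyJWT library (an assumed choice; the secret, algorithm, and claim names are illustrative):

import jwt  # PyJWT; an assumed choice of library

SECRET = "shared-secret"  # illustrative; real deployments need proper key management

def validate_token(value_token):
    # Verifies signature and expiry, then returns the trusted claims.
    # The claims describe what the request MAY do; they are not mutable
    # "session state".
    claims = jwt.decode(value_token, SECRET, algorithms=["HS256"])
    return claims  # e.g. {"sub": "user42", "scope": "orders:read"}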
Each microservice provider must maintain its own "tote board" (my term for its own particular subset of what in a web server would be "the session pool"), and it is desirable, but not always feasible, for its board to be independent of the others. Almost certainly, it must use a central database (with transactions) to coordinate with other similarly situated service providers. And still, if the truth is that the content of any of these "totes" is causally related to any other, you now have an out-of-sync issue between them.
Although microservice architecture has a certain academic appeal, IMHO designs must be carefully studied to be certain that they are, in fact, compatible with this approach.
I am seeking Python 2.7 alternatives to ZeroMQ that are released under the BSD or MIT license. I am looking for something that supports request-reply and pub-sub messaging patterns. I can serialize the data myself if necessary. I found Twisted from Twisted Matrix Labs but it appears to require a blocking event loop, i.e. reactor.run(). I need a library that will run in the background and let my application check messages upon certain events. Are there any other alternatives?
Give nanomsg, ZeroMQ's younger sister, a try - same father, same beauty.
Yes, it is licensed under MIT/X11 license.
Yes, REQ/REP - allows you to build clusters of stateless services to process user requests
Yes, PUB/SUB - distributes messages to large sets of interested subscribers
Has several Python bindings available (a short REQ/REP sketch follows the list):
https://github.com/tonysimpson/nanomsg-python (recommended)
https://github.com/sdiehl/pynanomsg
https://github.com/djc/nnpy
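A minimal REQ/REP sketch with the nanomsg-python binding (assuming pip install nanomsg; names follow the binding's documented API):

from nanomsg import Socket, REQ, REP

rep = Socket(REP)
rep.bind('inproc://echo')

req = Socket(REQ)
req.connect('inproc://echo')

req.send(b'ping')   # client sends a request
print(rep.recv())   # server receives b'ping'
rep.send(b'pong')   # server replies
print(req.recv())   # client receives b'pong'

req.close()
rep.close()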
Differences between nanomsg and ZeroMQ
(state as of v0.5-beta, November 2014 - courtesy of nanomsg.org; see the original documentation)
Licensing
nanomsg library is MIT-licensed. What it means is that, unlike with ZeroMQ, you can modify the source code and re-release it under a different license, as a proprietary product, etc. More reasoning about the licensing can be found here.
POSIX Compliance
ZeroMQ API, while modeled on BSD socket API, doesn't match the API fully. nanomsg aims for full POSIX compliance.
Sockets are represented as ints, not void pointers.
Contexts, as known in ZeroMQ, don't exist in nanomsg. This means a simpler API (sockets can be created in a single step) as well as the possibility of using the library for communication between different modules in a single process (think of plugins implemented in different languages speaking to each other). More discussion can be found here.
Sending and receiving functions ( nn_send, nn_sendmsg, nn_recv and nn_recvmsg ) fully match POSIX syntax and semantics.
Implementation Language
The library is implemented in C instead of C++.
From the user's point of view, it means that there's no dependency on the C++ runtime (libstdc++ or similar), which may be handy in constrained and embedded environments.
From the nanomsg developer's point of view, it makes life easier.
Number of memory allocations is drastically reduced as intrusive containers are used instead of C++ STL containers.
The above also means less memory fragmentation, less cache misses, etc.
More discussion on the C vs. C++ topic can be found here and here.
Pluggable Transports and Protocols
In ZeroMQ there was no formal API for plugging in new transports (think WebSockets, DCCP, SCTP) or new protocols (counterparts to REQ/REP, PUB/SUB, etc.). As a consequence, no new transports have been added since 2008, and no new protocols were implemented either. The formal internal transport API (see transport.h and protocol.h) is meant to mitigate the problem and serve as a base for creating and experimenting with new transports and protocols.
Please be aware that the two APIs are still new and may see some tweaking in the future to make them usable in a wide variety of scenarios.
nanomsg implements a new SURVEY protocol. The idea is to send a message ("survey") to multiple peers and wait for responses from all of them. For more details check the article here. Also look here.
In financial services it is quite common to use "deliver messages from anyone to everyone else" kind of messaging. To address this use case, there's a new BUS protocol implemented in nanomsg. Check the details here.
Threading Model
One of the big architectural blunders I made in ZeroMQ is its threading model. Each individual object is managed exclusively by a single thread. That works well for async objects handled by worker threads; however, it becomes a problem for objects managed by user threads. The thread may be used to do unrelated work for an arbitrary time span, e.g. an hour, and during that time the object it manages is completely stuck. Some unfortunate consequences are: the inability to implement request resending in the REQ/REP protocol, PUB/SUB subscriptions not being applied while the application is doing other work, and similar. In nanomsg, objects are not tightly bound to particular threads, so these problems don't exist.
The REQ socket in ZeroMQ cannot really be used in real-world environments, as it gets stuck if a message is lost due to a service failure or similar. Users have to use XREQ instead and implement request re-trying themselves. With nanomsg, the re-try functionality is built into the REQ socket.
In nanomsg, both REQ and REP support cancelling the ongoing processing. Simply send a new request without waiting for a reply (in the case of REQ socket) or grab a new request without replying to the previous one (in the case of REP socket).
In ZeroMQ, due to its threading model, bind-first-then-connect-second scenario doesn't work for inproc transport. It is fixed in nanomsg.
For similar reasons auto-reconnect doesn't work for inproc transport in ZeroMQ. This problem is fixed in nanomsg as well.
Finally, nanomsg attempts to make its sockets thread-safe. While using a single socket from multiple threads in parallel is still discouraged, the way ZeroMQ sockets failed randomly in such circumstances proved painful and hard to debug.
State Machines
Internal interactions inside the nanomsg library are modeled as a set of state machines. The goal is to avoid the incomprehensible shutdown mechanism as seen in ZeroMQ and thus make the development of the library easier.
For more discussion see here and here.
IOCP Support
One of the long-standing problems in ZeroMQ was that internally it uses the BSD socket API even on the Windows platform, where it is a second-class citizen. Using IOCP instead, where appropriate, would require a major rewrite of the codebase and thus, in spite of multiple attempts, was never implemented. IOCP is supposed to have better performance characteristics and, even more importantly, it allows the use of additional transport mechanisms such as named pipes which are not accessible via the BSD socket API. For these reasons, nanomsg uses IOCP internally on Windows platforms.
Level-triggered Polling
One of the aspects of ZeroMQ that proved really confusing for users was the ability to integrate ZeroMQ sockets into external event loops by using the ZMQ_FD file descriptor. The main source of confusion was that the descriptor is edge-triggered, i.e. it signals only when there were no messages before and a new one arrived. nanomsg uses level-triggered file descriptors instead, which simply signal when a message is available, irrespective of whether one was available in the past.
Routing Priorities
nanomsg implements priorities for outbound traffic. You may decide that messages are to be routed to a particular destination by preference, with an alternative destination used only if the primary one is unavailable.
For more discussion see here.
TCP Transport Enhancements
There's a minor enhancement to TCP transport. When connecting, you can optionally specify the local interface to use for the connection, like this:
nn_connect (s, "tcp://eth0;192.168.0.111:5555").
Asynchronous DNS
DNS queries (e.g. converting hostnames to IP addresses) are done in an asynchronous manner. In ZeroMQ such queries were done synchronously, which meant that when DNS was unavailable, the whole library, including sockets that didn't use DNS, simply hung.
Zero-Copy
While ZeroMQ offers a "zero-copy" API, it's not true zero-copy. Rather it's "zero-copy till the message gets to the kernel boundary". From that point on data is copied as with standard TCP. nanomsg, on the other hand, aims at supporting true zero-copy mechanisms such as RDMA (CPU bypass, direct memory-to-memory copying) and shmem (transfer of data between processes on the same box by using shared memory). The API entry points for zero-copy messaging are nn_allocmsg and nn_freemsg functions in combination with NN_MSG option passed to send/recv functions.
Efficient Subscription Matching
In ZeroMQ, simple tries are used to store and match PUB/SUB subscriptions. The subscription mechanism was intended for up to 10,000 subscriptions, where a simple trie works well. However, there are users with as many as 150,000,000 subscriptions. In such cases there's a need for a more efficient data structure. Thus, nanomsg uses a memory-efficient version of a Patricia trie instead of a simple trie.
For more details check this article.
Unified Buffer Model
ZeroMQ has a strange double-buffering behaviour. Both the outgoing and incoming data is stored in a message queue and in TCP's tx/rx buffers. What it means, for example, is that if you want to limit the amount of outgoing data, you have to set both ZMQ_SNDBUF and ZMQ_SNDHWM socket options. Given that there's no semantic difference between the two, nanomsg uses only TCP's (or equivalent's) buffers to store the data.
Scalability Protocols
Finally, on philosophical level, nanomsg aims at implementing different "scalability protocols" rather than being a generic networking library. Specifically:
Different protocols are fully separated; you cannot connect a REQ socket to a SUB socket, for example.
Each protocol embodies a distributed algorithm with well-defined prerequisites (e.g. "the service has to be stateless" in the case of REQ/REP) and guarantees (if the REQ socket stays alive, the request will ultimately be processed).
Partial failure is handled by the protocol, not by the user. In fact, it is transparent to the user.
The specifications of the protocols are in /rfc subdirectory.
The goal is to standardise the protocols via IETF.
There's no generic UDP-like socket (ZMQ_ROUTER); use L4 protocols for that kind of functionality.
How do you sync data between two processes (say client and server) in real time over network?
I have various documents/datasets constructed on the server, which are downloaded and displayed by clients. Once downloaded, the document receives continuous updates in order to remain fresh.
It seems to be a simple and commonly occurring concept, but I cannot find any tools that provide this level of abstraction. I am not even sure what I am looking for. Perhaps there is a similar concept with solid tool support? Perhaps there is a chain of different tools that must be put together? Here's what I have considered so far:
I am required to propagate every change in a single hop (0.5 RTT), which rules out polling (typically >10 RTT) and cache invalidation techniques (1.5 RTT).
Data replication and simple notification broadcasts are not an option, because there is too much data and too many changes. Clients must be able to select specific documents to download and monitor for changes.
I am currently using the message passing pattern, which does the job, but it is hopelessly unproductive. It works at far too low a level of abstraction. It is laborious, error-prone, and it doesn't scale well with increasing application complexity.
HTTP and other RPC-like techniques are good for the initial fetch, but they encourage polling for subsequent synchronization. When performing reverse requests (from data source to data consumer), change notifications are possible, but it's even more complicated than message passing.
Combining RPC (for the initial fetch) with message passing (for updates) turned out to be a nightmare due to the complexity involved in coordinating communication over the two parallel connections as well as due to the impedance mismatch between the two paradigms. I need something unified.
WebSocket & Comet are popular methods to implement change notification, but they need additional libraries to be productive and I am not aware of any libraries suitable for my application.
Message queues merely put an intermediary on the network while maintaining the basic message passing pattern. Custom message filters/routers allow me to get closer to the live document concept, but I feel like I am implementing custom middleware layer on top of the MQ.
I have tons of additional requirements (native observable data structure API on both ends, incremental updates, custom message filters, custom connection routing, cross-platform, robustness & scalability), but before considering those requirements, I need to find some tools that at least attempt to do what I need. I am trying to avoid in-house frameworks for the standard reasons - cost, time to market, long-term maintenance, and keeping developers happy.
My conclusion at the moment is that there is no such live document synchronization framework. An in-house solution is the way to go, but many existing components can be used as part of it.
It is pretty simple to layer live document logic on top of WebSocket or any other message passing platform: the server sends the full document as a separate message when the connection is initiated, and again after every change (see the sketch below). Automated reconnection and some connection monitoring must be added to handle network failures.
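A minimal sketch of that idea, assuming the Python websockets library (version 10+; all names are illustrative):

import asyncio
import json
import websockets

document = {"title": "Live doc", "version": 1}  # the server-side master copy
subscribers = set()

async def handler(ws):
    # Initial fetch: push the full document as soon as the client connects.
    await ws.send(json.dumps(document))
    subscribers.add(ws)
    try:
        await ws.wait_closed()  # keep the connection registered until it drops
    finally:
        subscribers.discard(ws)

async def broadcast_change():
    # Call this after every server-side change so clients stay fresh.
    document["version"] += 1
    payload = json.dumps(document)
    for ws in list(subscribers):
        await ws.send(payload)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever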
Serialization at both ends is a separate problem targeted by many existing libraries. Detecting changes in server-side data structures (needed to initiate push) is yet another separate problem that has its own set of patterns and tools. Incremental updates and many other issues can be solved by intermediaries intercepting the connection.
This approach will work with current technology at the cost of extensive in-house glue code. It can be incrementally substituted with standard components as they become available.
WebSocket already includes resource URIs, routing, and a few other nice features. Useful intermediaries and libraries will likely emerge in the future. HTTP with text/event-stream MIME type is a possible future alternative to WebSocket. The advantage of HTTP is that existing tools can be reused with little modification.
I've completely thrown away the pattern of combining RPC pull with separate push channel despite rich tool support. Pushing everything in 0.5 RTT requires the push channel to use exactly the same technology as the pull channel, i.e. reverse RPC. Reverse RPC is like message passing except it introduces redundant returns, throws away useful connection semantics, and makes it hard to insert content-agnostic intermediaries into the stream.
I'm thinking about system that will notify multiple consumers about events happening to a population of objects. Every subscriber should be able to subscribe to events happening to zero or more of the objects, multiple subscribers should be able to receive information about events happening to a single object.
I think that some message queuing system will be appropriate in this case, but I'm not sure how to handle the fact that I'll have millions of objects - using a separate topic for every object does not sound good [or is it just fine?].
Can you please suggest an approach I should take, and maybe even some open source message queuing system that would be reasonable?
A few more details:
there will be thousands of subscribers [i.e., not that many],
subscribers will subscribe to tens or hundreds of objects each,
there will be ~5-20 million objects,
events themselves don't have to carry any payload; just the information that the object was changed is enough,
the vast majority of objects will never be subscribed to,
events occur at a maximum rate of a few hundred per second,
ideally the server should run under Linux and be able to integrate with the rest of the ecosystem via HTTP long-poll [using node.js? continuations under Jetty?].
Thanks in advance for your feedback and sorry for somewhat vague question!
I can highly recommend RabbitMQ. I have used it in a couple of projects before and, from my experience, I think it is very reliable and offers a wide range of configurations. Basically, RabbitMQ is an open-source (Mozilla Public License (MPL)) message broker that implements the Advanced Message Queuing Protocol (AMQP) standard.
As documented on the RabbitMQ web-site:
RabbitMQ can potentially run on any platform that Erlang supports, from embedded systems to multi-core clusters and cloud-based servers.
... meaning that an operating system like Linux is supported.
There is a library for node.js here: https://github.com/squaremo/rabbit.js
It comes with an HTTP based API for management and monitoring of the RabbitMQ server - including a command-line tool and a browser-based user-interface as well - see: http://www.rabbitmq.com/management.html.
In the projects I have worked on, I have communicated with RabbitMQ using C# and two different wrappers, EasyNetQ and Burrow.NET. Both are excellent wrappers for RabbitMQ, but I ended up liking Burrow.NET the most, as it is easier and more obvious to work with (it doesn't do a lot of magic under the hood) and provides good flexibility to inject loggers, serializers, etc.
I have never worked with the number of objects that you are going to work with - I have worked with thousands (not millions). However, no matter how many objects I have been playing around with, RabbitMQ has always been really stable and has never been the source of errors in the system.
So to sum up - RabbitMQ is simple to use and setup, supports AMQP, can be managed via HTTP and what I like the most - it's rock solid.
Break up the topics to carry specific events, e.g. an "object updated" topic, an "object deleted" topic, and so on. That way clients only have to subscribe to the finite number of event-based topics they are interested in.
Inject headers into your messages when you publish them, and put intelligence into the clients to use these headers as message selectors. For example, the client knows the list of objects it is interested in - say you identify each object by an "id" - the id can be a header, and the client can use the "id" header to determine whether it is interested in the message, as in the sketch below.
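A minimal sketch of that header-based selection with the pika RabbitMQ client (the exchange name, header key, and ids are illustrative):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="object-events", exchange_type="fanout")

# Publisher side: stamp each event with the object's id as a header;
# the body can stay empty, since the event needs no payload.
ch.basic_publish(
    exchange="object-events",
    routing_key="",
    body=b"",
    properties=pika.BasicProperties(headers={"id": "42"}),
)

# Consumer side: client-side selection logic lives in the callback.
interesting_ids = {"42", "1337"}

def on_message(channel, method, properties, body):
    if properties.headers.get("id") in interesting_ids:
        print("object changed:", properties.headers["id"])
    channel.basic_ack(delivery_tag=method.delivery_tag)

q = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="object-events", queue=q)
ch.basic_consume(queue=q, on_message_callback=on_message)
ch.start_consuming()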
Depending on your needs, you may also want to consider guaranteed delivery, to make sure that the client receives the message even if it goes offline and comes back later.
Off the top of my head, the options I would recommend are ActiveMQ, RabbitMQ, and Redis Pub/Sub (I haven't really worked with Redis pub/sub, so please do your due diligence).
Finally here are some performance benchmarks for RabbitMQ and Redis
I just saw that you only have a few hundred messages being pushed out per second; this is not a big deal for ActiveMQ. I have been using AMQ on a system that processes 240 messages per second, and it works just fine. I use a thread pool of workers to process the messages asynchronously, though. Look at a framework like Akka if you are in Java land; if not, stick with node.js and the cool ecosystem around it.
If it has to be open source, I'd go for ActiveMQ, plus an application server to provide the JMS functionality for topics; it also has Ajax support, so you can access the topics from your client.
So you would use the JMS infrastructure to publish topics for the objects, and you can create topics as you need them.
Besides, by using a Java application server you may be able to take advantage of clustering, load balancing, and other high-availability features (obviously depending on the selected product).
Hope that helps!
Since your messages are very small, you might want to consider MQTT, which is designed for small devices, although it works fine on powerful devices as well. The key consideration is the low overhead - basically a 2-byte header for a small message. You probably can't use just any simple or open source MQTT server, due to your volume; you may need a heavy-duty dedicated appliance like MessageSight to handle it.
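A minimal sketch using the Eclipse paho-mqtt client (paho-mqtt 1.x style, an assumed choice; the broker host and topic layout are illustrative):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print("object changed:", msg.topic)  # the topic alone carries the signal

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883, 60)
client.subscribe("objects/42/changed")  # one topic per watched object
client.loop_start()

# Publisher side: an empty payload is enough to say "object changed".
client.publish("objects/42/changed", payload=None, qos=1)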
Some more details on your application would certainly help. Also you don't mention security at all. I assume you must have some needs in this area.
I'm not sure about your environment, but here are my two cents. Can you identify each object with a unique ID in your system? If so, you can have a topic per event type - e.g. one topic for object deletion events, one for object update events, and so on. These topics would be published with the IDs of objects whenever the corresponding event happens to an object. This limits the number of topics you need.
The second part of your problem is that different subscribers want to subscribe to different objects, so not all subscribers are interested in the events of all objects. This maps to the message selector (filtering) mechanism provided by the messaging framework. Basically, you need to work out on what basis a subscriber is interested in a particular object, and use that basis as the message filtering criterion. It could be anything: object type, object state, etc. Ultimately, your system would consist of one topic per event type, with publishers sending event messages containing {object-type:object-id} information. Subscribers could subscribe to any topic with a filtering criterion.
If the above approach works for you, you can use any messaging solution: ActiveMQ, WMQ, RabbitMQ.
I've been evaluating messaging technologies for my company but I've become very confused by the conceptual differences between a few terms:
Pub/Sub vs Multicast vs Fan Out
I am working with the following definitions:
Pub/Sub has publishers delivering a separate copy of each message to each subscriber, which means that the opportunity to guarantee delivery exists.
Fan Out has a single queue pushing to all listening clients.
Multicast just spams out data; if someone is listening then fine, if not, it doesn't matter. There is no possibility of guaranteeing that a client definitely gets a message.
Are these definitions right? Or is Pub/Sub the pattern, and multicast, direct, fanout, etc. ways to achieve the pattern?
I'm trying to work the out-of-the-box RabbitMQ definitions into our architecture but I'm just going around in circles at the moment trying to write the specs for our app.
Please could someone advise me whether I am right?
I'm confused by your choice of three terms to compare. Within RabbitMQ, Fanout and Direct are exchange types. Pub-Sub is a generic messaging pattern but not an exchange type. And you didn't even mention the 3rd and most important Exchange type, namely Topic. In fact, you can implement Fanout behavior on a Topic exchange just by declaring multiple queues with the same binding key. And you can define Direct behavior on a Topic exchange by declaring a Queue with * as the wildcard binding key.
Pub-Sub is generally understood as a pattern in which an application publishes messages which are consumed by several subscribers.
With RabbitMQ/AMQP it is important to remember that messages are always published to exchanges. Then exchanges route to queues. And queues deliver messages to subscribers. The behavior of the exchange is important. In Topic exchanges, the routing key from the publisher is matched up to the binding key from the subscriber in order to make the routing decision. Binding keys can have wildcard patterns which further influence the routing decision. More complicated routing can be done based on the content of message headers using a headers exchange type.
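A minimal sketch of topic-exchange routing with the pika client (exchange, queue, and key names are illustrative):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="events", exchange_type="topic")

# Subscriber side: the binding key may use wildcards
# ("*" matches exactly one word, "#" matches zero or more words).
q = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="events", queue=q, routing_key="order.*")

# Publisher side: the routing key is matched against binding keys.
ch.basic_publish(exchange="events",
                 routing_key="order.created",
                 body=b"order 42 created")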
RabbitMQ doesn't guarantee delivery of messages, but you can get guaranteed delivery by choosing the right options (delivery mode = 2 for persistent messages), and by declaring exchanges and queues in advance of running your application so that messages are not discarded.
Your definitions are pretty much correct. Note that guaranteed delivery is not limited to pub/sub only, and it can be done with fanout too. And yes, pub/sub is a very basic description which can be realized with specific methods like fanout, direct and so on.
There are more messaging patterns which you might find useful. Have a look at Enterprise Integration Patterns for more details.
From an electronic exchange point of view, the term “Multicast” means “the message is placed on the wire once” and all client applications that are listening can read the message off the “wire”. Any solution that makes N copies of the message for the N clients is not multicast. In addition to examining the source code, one can also use a “sniffer” to determine how many copies of the message are sent over the wire by the messaging system. And yes, multicast messages are a form of UDP message. See: http://en.wikipedia.org/wiki/Multicast for a general description. About ten years ago, we used the messaging system from TIBCO that supported multicast. See: https://docs.tibco.com/pub/ems_openvms_c_client/8.0.0-june-2013/docs/html/tib_ems_users_guide/wwhelp/wwhimpl/common/html/wwhelp.htm#context=tib_ems_users_guide&file=EMS.5.091.htm