How to implement Event-carried State Transfer?

I watched Mr. Martin Fowler's seminar on Event-Driven Architecture. I see the benefits of Event-carried State Transfer but still haven't found a way to do it as he described. How can I copy data from one database to another continuously, and can this copying cause errors?

Copying directly from one database to another is usually a bad idea, as it creates coupling. A better approach is for one service to publish events about the changes, events that others can then subscribe to.
The publishing of events can be implemented in many different ways. For example:
The publisher can publish an ATOM feed that the subscribers can poll and traverse for changes. For example, EventStoreDB publishes ATOM feeds to support this.
The publisher can publish its events to Kafka, which subscribers can then consume the events from.
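As a concrete illustration of the Kafka option, here is a minimal sketch of event-carried state transfer with the kafka-python client; the topic name, event name, and field layout are illustrative assumptions, not a fixed scheme:

    import json
    from kafka import KafkaProducer, KafkaConsumer

    # Publisher side: the owning service emits the full new state with
    # each event, so subscribers never need to query it back.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("customer-events", {
        "event": "CustomerAddressChanged",  # hypothetical event name
        "customer_id": 42,
        "address": {"street": "1 Main St", "city": "Springfield"},
    })
    producer.flush()

    # Subscriber side: each consuming service maintains its own copy of
    # the data, updated from the event stream instead of querying the owner.
    consumer = KafkaConsumer(
        "customer-events",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    local_store = {}  # this service's own replica
    for message in consumer:
        event = message.value
        local_store[event["customer_id"]] = event["address"]

Because each event carries the full state, a subscriber that replays the topic from the beginning can rebuild its copy without ever touching the publisher's database.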

Related

Detect data anomalies in a data pipeline and trigger a scheduled data pipeline

In Foundry, we have a data pipeline where we want to insert a code node (repo or workbook) that detects anomalies and then sends an email or some other alert about the problem.
Having trouble finding this in the documentation, can someone point me to it?
Ideally we would love to have the code trigger the Scheduler to do a pipeline run to create a REPORT (maybe even Quiver, to do some timeline analysis). Is this possible? Are there examples in the documentation?
Check out the documentation in the Data Health section of the platform documentation. There are a number of patterns possible, including defining data expectations in your code.
Whether defined as expectations or dataset health checks, failures can be set up to create Issues within the platform. Issues can have default assignees (individuals or groups) and will also send notifications, both in-platform and over email (depending on per-user configuration).
Health check failures will also automatically populate the data health tab in the Project Catalog view, which can serve as a dashboard to view the overall health of the project. You can also surface these in the Data Lineage view with a coloring based on Data Health to understand issues across the breadth of the pipeline.
For a comprehensive approach to pipeline health, review the Pipelines and best practices section in the Code Repositories documentation.
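For illustration, a data expectation in a Python transform can look roughly like the sketch below; the dataset paths are hypothetical, and the exact imports and available checks should be confirmed against the Data Expectations documentation:

    from transforms.api import transform_df, Input, Output, Check
    from transforms import expectations as E

    @transform_df(
        Output(
            "/Project/datasets/clean_output",  # hypothetical path
            checks=Check(E.primary_key("id"), "id is unique", on_error="FAIL"),
        ),
        source=Input("/Project/datasets/raw_input"),  # hypothetical path
    )
    def compute(source):
        # A failing check can then raise an Issue and notify its
        # assignees, as described above.
        return source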

Syncing File Name for Drive Realtime Document

My real-time document allows the user to edit the file name within the editor (much like Google's own apps). I represent this as a collaborative string so all collaborators see the file renames as soon as possible.
I'm trying to determine the best and most efficient way to keep this collaborative string in sync with the actual file name. There are two scenarios to consider:
In Editor Changes
If a user edits the document name within the editor, we need to use the Drive API to push that change out to the file on Google Drive. To avoid race conditions, it is best if only one of the collaborators pushes the change out. The easiest way to do this seems to be to check whether the rename event was local.
I also found it best to add a delay so we are not pushing the rename out to the Drive API with every character change. If a few seconds pass with no further name changes, the change is pushed out. This all seems to work well.
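A minimal sketch of that debounce, assuming a Python Drive v3 client built with google-api-python-client (my app is client-side JavaScript, where the same idea works with setTimeout/clearTimeout; the class and parameter names here are just illustrative):

    import threading

    class RenameDebouncer:
        """Pushes a rename to Drive only once the name has been stable."""

        def __init__(self, drive_service, file_id, delay_seconds=3.0):
            self.drive = drive_service  # googleapiclient Drive v3 service
            self.file_id = file_id
            self.delay = delay_seconds
            self.timer = None
            self.pending_name = None

        def on_name_changed(self, new_name):
            # Called on every local edit; restart the countdown each time.
            self.pending_name = new_name
            if self.timer is not None:
                self.timer.cancel()
            self.timer = threading.Timer(self.delay, self._push)
            self.timer.start()

        def _push(self):
            # Runs only after the name has been stable for `delay` seconds.
            self.drive.files().update(
                fileId=self.file_id,
                body={"name": self.pending_name},
            ).execute()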
External Changes
The harder scenario, and the one I am interested in requesting advice on, is the case where the file name is changed externally, for example if the user renamed the file within the Drive interface itself. We want this change to update our collaborative string to match.
My application is entirely client-side, so I can't use webhook push notifications. My only solution, then, is to poll the file name every X seconds (currently set to 10). But this presents the following problems:
It is API intensive. If you have 4 collaborators that keep the screen open for 8 hours, that is 11,520 API calls. If my app has lots of users with lots of documents, I could see how this might push me past my API limits.
To avoid race conditions (and reduce API calls), we only want one collaborator to check for changes and update the collaborative string if the file name has changed. But how do you pick one when collaborators might join or exit at any time? Currently I am having each collaborator check, any time the set of collaborators changes, whether it is the "leader". The "leader" is the collaborator whose session id is the highest. This seems to work, but it all feels fairly hacky. Also, if collaborators join close together, I wonder whether a race condition might cause multiple collaborators to think they are the leader.
Is there an easier way? A Realtime API function I am missing?
It would be ideal if the Realtime API just provided a method that stored the document name. Any time the Realtime API checks for mutations, it could grab the latest document name.
I think you've identified the options. There isn't any built-in functionality currently to sync it via the Realtime API specifically.
Personally, I'd probably back off the poll time a lot. It's probably not critical that the title is always exactly up to date, so polling every few minutes is probably sufficient and would greatly reduce your QPS.
In terms of identifying a "leader", I can't think of anything better than something deterministic based on the session id. So long as each collaborator rechecks on every session join/leave event, I don't think there should be any issues.
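A minimal sketch of that deterministic rule, with hypothetical callback names:

    def is_leader(my_session_id, all_session_ids):
        # Deterministic: every collaborator computes the same answer from
        # the same collaborator list, so no extra coordination is needed.
        return my_session_id == max(all_session_ids)

    def on_collaborators_changed(my_session_id, all_session_ids,
                                 start_polling, stop_polling):
        # Re-evaluate on every join/leave event, as suggested above.
        if is_leader(my_session_id, all_session_ids):
            start_polling()
        else:
            stop_polling()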

Message format advice

I'm new to using a messaging system as middleware between applications, and I'm trying to get a few concepts clear, among which is the format of the message. I assume there is no single right answer here, but could you share your experience of how this can be done, and what the pros and cons are?
The kinds of messages which work best, in my opinion, are Commands and Events.
A command message is a message which is sent from one system directly to another system, and it is an instruction for something to happen. Here are some example commands:
Issue Risk To Coverholder
Process Renewal Request
Begin Employee On Boarding
An event message is broadcast, or published by one system to all interested systems, and is a notification that something has happened. Here are some example events:
Policy Document Received
Quote Decision Completed
Financial Transaction Parked
What you will notice about these commands and events is that they have business meaning, so the messages which represent them are easily understood by name.
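As a rough sketch of what this can look like on the wire (the JSON shape and field names are illustrative, not a standard):

    import datetime
    import json
    import uuid

    # A command is imperative: it tells one specific system to do something.
    command = {
        "type": "ProcessRenewalRequest",
        "messageId": str(uuid.uuid4()),
        "policyId": "POL-1234",
    }

    # An event is past tense: it announces to all interested subscribers
    # that something has already happened.
    event = {
        "type": "PolicyDocumentReceived",
        "messageId": str(uuid.uuid4()),
        "occurredAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "policyId": "POL-1234",
    }

    print(json.dumps(command))
    print(json.dumps(event))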
Try to avoid using CRUDy language (e.g. create, update, delete) in the naming of your commands and events.
I think this is the best policy when it comes to messaging.

Beanstalkd to ZeroMQ: is it possible to distribute work in the same way?

A common beanstalkd workflow would be to have many workers listening for jobs on a queue/tube, locking a job while they process it, and deleting the job so that no other workers can re-process it. If the job fails (e.g. resources are unavailable to complete processing), the job can slip back onto the queue for another worker to pick up.
Is this approach possible with ZeroMQ? E.g., using the pub/sub model, can multiple subscribers receive the same job and process it at the same time? Would push/pull or req/rep provide a similar setup?
I'm certain ZeroMQ can provide this for you. However, keep in mind that ZeroMQ is not really a queue; it's an advanced networking library. Naturally, with the provided primitives, you can do what you describe.
Your specific case seems like it could be implemented as a pub/sub system, if you don't mind having the same work done many times over. I recommend reading the ZeroMQ guide and especially chapter 5.
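On the push/pull part of your question: PUSH/PULL sockets hand each job to exactly one connected worker, which is closer to the beanstalkd pattern. A minimal pyzmq sketch (ports and job bodies are illustrative):

    import zmq

    context = zmq.Context()

    # Ventilator process: distributes jobs round-robin among all
    # connected workers, so each job goes to exactly one worker.
    sender = context.socket(zmq.PUSH)
    sender.bind("tcp://*:5557")
    for i in range(10):
        sender.send_string(f"job-{i}")

    # Worker process (shown inline for brevity; normally run separately,
    # many times over): pulls the next available job.
    receiver = context.socket(zmq.PULL)
    receiver.connect("tcp://localhost:5557")
    while True:
        job = receiver.recv_string()
        # ... process the job ...
        # Note: ZeroMQ provides no ack/retry here; if a worker dies
        # mid-job the job is lost unless you build re-queueing on top,
        # unlike beanstalkd.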
Although I'm certain you can do what you describe with ZeroMQ, I would first search for a queue which does this already.

Message queuing solution for millions of topics

I'm thinking about a system that will notify multiple consumers about events happening to a population of objects. Every subscriber should be able to subscribe to events happening to zero or more of the objects, and multiple subscribers should be able to receive information about events happening to a single object.
I think some message queuing system will be appropriate in this case, but I'm not sure how to handle the fact that I'll have millions of objects - using a separate topic for every object does not sound good [or is it just fine?].
Can you please suggest an approach I should take, and maybe even an open-source message queuing system that would be reasonable?
A few more details:
there will be thousands of subscribers [meaning not that many],
subscribers will subscribe to tens or hundreds of objects each,
there will be ~5-20 million objects,
the events themselves don't have to carry any payload; just the information that the object was changed is enough,
the vast majority of objects will never be subscribed to,
events occur at a maximum rate of a few hundred per second,
ideally the server should run under Linux and be able to integrate with the rest of the ecosystem via HTTP long-poll [using Node.js? continuations under Jetty?].
Thanks in advance for your feedback, and sorry for the somewhat vague question!
I can highly recommend RabbitMQ. I have used it in a couple of projects before, and from my experience I think it is very reliable and offers a wide range of configurations. Basically, RabbitMQ is an open-source (Mozilla Public License (MPL)) message broker that implements the Advanced Message Queuing Protocol (AMQP) standard.
As documented on the RabbitMQ website:
RabbitMQ can potentially run on any platform that Erlang supports, from embedded systems to multi-core clusters and cloud-based servers.
... meaning that an operating system like Linux is supported.
There is a library for node.js here: https://github.com/squaremo/rabbit.js
It comes with an HTTP-based API for management and monitoring of the RabbitMQ server - including a command-line tool and a browser-based user interface as well - see: http://www.rabbitmq.com/management.html.
In the projects I have been working on, I have communicated with RabbitMQ using C# and two different wrappers, EasyNetQ and Burrow.NET. Both are excellent wrappers for RabbitMQ, but I ended up liking Burrow.NET the most, as it is easier and more obvious to work with (it doesn't do a lot of magic under the hood) and provides good flexibility to inject loggers, serializers, etc.
I have never worked with the number of objects that you are going to work with - I have worked with thousands (not millions). However, no matter how many objects I have been playing around with, RabbitMQ has always been really stable and has never been the source of errors in the system.
So to sum up - RabbitMQ is simple to use and set up, supports AMQP, can be managed via HTTP, and, what I like the most, it's rock solid.
Break the topics up to carry specific events, e.g. an "object updated" topic, an "object deleted" topic, and so on. Clients then only have to subscribe to the finite number of event-based topics they are interested in.
Inject headers into your messages when you publish them, and put intelligence into the clients to use these headers as message selectors. For example, a client knows the list of objects it is interested in - and say you identify each object by an "id" - the id can be a header, and the client will use the "id" header to determine whether it is interested in the message.
Depending on your requirements, you may also want to consider ensuring guaranteed delivery, to make sure that a client will receive the message even if it goes offline and comes back later.
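Here is a rough sketch of that pattern with RabbitMQ and the pika client; the exchange, queue, routing-key, and header names are illustrative assumptions:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.exchange_declare(exchange="object-events", exchange_type="topic")

    # Publisher: one routing key per event type, object id in a header.
    ch.basic_publish(
        exchange="object-events",
        routing_key="object.updated",  # the event-type "topic"
        body=b"",                      # no payload needed per the question
        properties=pika.BasicProperties(headers={"object_id": "1234567"}),
    )

    # Subscriber: binds to the event types it cares about, then filters
    # client-side against its own (small) set of interesting object ids.
    interesting_ids = {"1234567", "7654321"}
    result = ch.queue_declare(queue="", exclusive=True)
    ch.queue_bind(queue=result.method.queue, exchange="object-events",
                  routing_key="object.updated")

    def on_message(channel, method, properties, body):
        if properties.headers.get("object_id") in interesting_ids:
            print("object changed:", properties.headers["object_id"])

    ch.basic_consume(queue=result.method.queue,
                     on_message_callback=on_message, auto_ack=True)
    ch.start_consuming()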
The options that I would recommend off the top of my head are ActiveMQ, RabbitMQ and Redis Pub/Sub (I haven't really worked with Redis pub/sub, so please do your due diligence).
Finally, here are some performance benchmarks for RabbitMQ and Redis.
I just saw that you only have a few hundred messages per second getting pushed out; this is not a big deal for ActiveMQ. I have been using AMQ on a system that processes 240 messages per second, and it works just fine. I do use a thread pool of workers to process the messages asynchronously, though. Look at a framework like Akka if you are in Java land; if not, stick with Node.js and the cool ecosystem around it.
If it has to be open source, I'd go for ActiveMQ, plus an application server to provide the JMS functionality for topics; it also has Ajax support, so you can access the topics from your client.
So, you would use the JMS infrastructure to publish the topics for the objects, and you can create topics as you need them.
Besides, by using a Java application server you may be able to take advantage of clustering, load balancing and other high-availability features (obviously depending on the selected product).
Hope that helps!
Since your messages are very small, you might want to consider MQTT, which is designed for small devices, although it works fine on powerful devices as well. The key consideration is the low overhead - basically a 2-byte header for a small message. You probably can't use just any simple or open-source MQTT server, due to your volume; you would probably need a heavy-duty dedicated appliance like MessageSight to handle it.
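One thing MQTT does give you here is cheap, dynamic topics with wildcards, so signalling per object is viable. A minimal sketch with the paho-mqtt client (1.x callback API; the broker address and topic layout are illustrative assumptions):

    import paho.mqtt.client as mqtt

    BROKER = "localhost"  # assumption: a reachable MQTT broker

    # Publisher: one topic per object; an empty payload is enough, since
    # the event only needs to signal "this object changed".
    pub = mqtt.Client()
    pub.connect(BROKER, 1883)
    pub.publish("objects/1234567/changed", payload=None, qos=1)

    # Subscriber: subscribes only to the handful of objects it cares
    # about; nothing has to be pre-created for the millions of others.
    def on_message(client, userdata, msg):
        print("changed:", msg.topic)

    sub = mqtt.Client()
    sub.on_message = on_message
    sub.connect(BROKER, 1883)
    sub.subscribe("objects/1234567/changed")
    sub.subscribe("objects/7654321/changed")
    sub.loop_forever()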
Some more details on your application would certainly help. Also you don't mention security at all. I assume you must have some needs in this area.
Though I'm not sure about your environment, here are my two cents. Can you identify each object with a unique ID in your system? If so, you can have a topic per event type - e.g. if you want to track object-deletion events, object-update events and so on, you can have one topic for each event type. Messages on these topics would be published with the IDs of the objects whenever the corresponding event happens. This will limit the number of topics you need.
The second part of your problem is that different subscribers want to subscribe to different objects, so not all subscribers are interested in the events of all objects. This part maps to the message-selector (filtering) mechanism provided by the messaging framework. Basically, you need to work out on what basis a subscriber is interested in a particular object, and use that basis as the message-filtering criterion. It could be anything: object type, object state, etc. Ultimately, your system would consist of one topic per event type, with publishers attaching {object-type: object-id} information to the event messages, and subscribers subscribing to any topic with a filtering criterion.
If the above solution satisfies your requirements, you can use any messaging solution: ActiveMQ, WMQ, RabbitMQ.