I'm learning about Google Cloud Functions and I'm setting them up to be triggered by messages placed on a queue. I think I'm really failing to grasp some concepts here, as I have a bunch of questions and can't find answers anywhere. There are a lot of examples explaining functions and clients, but I haven't found examples merging the two.
Functions get triggered by the topic and not by the subscription. This is weird, because a single topic can have multiple subscriptions, and even multiple subscribers per subscription, which would mean the function doesn't acknowledge messages, as it doesn't know which message to acknowledge.
Building on the first question, when a message arrives on the topic, do all the subscriber functions get executed? What about the functions that are in the process of doing some work? What about multiple subscribers on a single subscription?
Can a real pull subscription even be implemented in a function, then? That would mean the function runs constantly because of the need to pull items, which is costly and the wrong thing to do.
Can a message be nacked from the function? It seems functions are retried only if they are deployed with retries enabled, but then they rerun immediately, and for as long as the retry period is set (the default is 7 days), which can cause extreme costs if a function is buggy, and is a totally crap pattern.
All of this makes me think that:
It would be a much better design to trigger functions from subscriptions, and for subscriptions to be able to ack/nack messages, than to have them listen to topics
I should choose push subscriptions alongside HTTP functions, which seem much more controllable (I might be wrong; I haven't tried it)
Can anyone shed some light on this? Can I control the messages easily from the function and can I expect the function to be rerun if a message is nacked or resent?
Perhaps the key piece of information is that when you hook a Cloud Pub/Sub topic to a Cloud Function, the system creates a push subscription in order to send messages to that Cloud Function.
Every Cloud Function you tie to a topic will have its own subscription and will receive all messages published to the topic. If an instance of the function is already doing work, then another instance may be created to handle the load (or the message will just be load-balanced among instances that are already running). Push subscriptions don't really have a notion of multiple subscribers for the same subscription; from Cloud Pub/Sub's perspective, there is a single endpoint to push messages to. Cloud Functions receives those messages and distributes them among the instances of your function that the service is running.
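As a concrete illustration (the function, runtime, and topic names here are placeholders), deploying a function with a topic trigger creates that subscription for you, and it should then show up alongside your own subscriptions:

```sh
# Deploy a background function triggered by a Pub/Sub topic.
gcloud functions deploy myFunction --runtime python39 --trigger-topic my-topic

# The system-created push subscription should then be visible here.
gcloud pubsub subscriptions list
```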
It would be very tough to implement a pull subscription as a Cloud Function. You would need a trigger to start the Function and it would have to do all of its work in the time allotted for it to run.
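To illustrate why, here is a minimal sketch (not a recommended pattern) of a function that does one bounded pull pass per invocation, e.g. fired by Cloud Scheduler. It assumes the google-cloud-pubsub client library; the project id, subscription name, and process() handler are placeholders:

```python
# Minimal sketch: drain a pull subscription in one bounded pass.
from google.cloud import pubsub_v1

def process(data: bytes) -> None:
    # Hypothetical per-message work; everything must fit in the timeout.
    print(data)

def drain_subscription(event, context):
    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path("my-project", "my-sub")
    response = subscriber.pull(
        request={"subscription": sub_path, "max_messages": 50}
    )
    ack_ids = []
    for received in response.received_messages:
        process(received.message.data)
        ack_ids.append(received.ack_id)
    if ack_ids:  # ack only what was actually handled
        subscriber.acknowledge(
            request={"subscription": sub_path, "ack_ids": ack_ids}
        )
```

Each invocation can only drain as many messages as fit inside the function's timeout, which is exactly the limitation described above.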
It sounds like you want to nack with a backoff before retrying the message. That is not currently a supported feature, but we are aware of the limitation and are looking to make improvements here soon.
Cannot find a clean way to set Stackdriver alert notifications on errors in cloud functions
I am using a Cloud Function to process data into Cloud Datastore. There are two types of errors that I want to be alerted on:
Technical exceptions which might cause the function to 'crash'
Custom errors that we are logging from the Cloud Function
I have done the following:
Created a log metric searching for specific errors (although this will not work for 'crash' as the error message can be different each time)
Created an alert for this metric in Stackdriver Monitoring, with the parameters as in the section below
This is done as per the answer to the question,
how to create alert per error in stackdriver
For the first trigger of the condition I receive an email. However, on subsequent triggers, let's say on the next day, I don't. Also, the incident is in the 'opened' state.
Resource type: cloud function
Metric: from point 2 above
Aggregation: Aligner: count, Reducer: none, Alignment period: 1m
Configuration: Condition triggers if: Any time series violates; Condition: is above; Threshold: 0.001; For: 1 min
So I have three questions:
Is this the right way to satisfy my requirement of creating alerts?
How can I still receive alert notifications for subsequent errors?
How can I set the incident to 'resolved', either automatically or manually?
I was having a similar problem and managed to at least get a mail every time. The "trick" seems to be to use sum instead of count, in combination with 'most recent value' in the condition's For field.
This causes Stackdriver to send a mail every time a matching log entry is found, and to close the issue a minute later.
Normally, alerts resolve themselves once the alerting policy stops firing. The problem you're having with your alerts not resolving is because your metric only writes non-zero points - if there are no errors, it doesn't write zero. That means that the policy never gets an unambiguous signal that everything is fine, so the alerts just sit there (they'll automatically close after 7 days, but I imagine that's not all that useful for you).
This is a common problem and it's a tricky one to solve. One possibility is to write your policy as a ratio of errors to something non-zero, like request count. As long as the request count is non-zero, the ratio will compute zero if there are no errors, and so an alert on the ratio will automatically resolve. You need to be a bit careful about rounding errors, though - if your request count is high enough, you might potentially miss a single error because the ratio could round to zero.
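As a rough sketch of that ratio idea (unverified; the user metric name is a placeholder, and the exact required fields should be checked against the Cloud Monitoring API), a threshold condition can divide a log-based error metric by the function's built-in execution count via denominatorFilter. A file like this can be applied with something like gcloud alpha monitoring policies create --policy-from-file=policy.json:

```json
{
  "displayName": "Function error ratio above zero",
  "combiner": "OR",
  "conditions": [{
    "displayName": "errors / executions > 0",
    "conditionThreshold": {
      "filter": "metric.type=\"logging.googleapis.com/user/my_error_metric\" resource.type=\"cloud_function\"",
      "denominatorFilter": "metric.type=\"cloudfunctions.googleapis.com/function/execution_count\" resource.type=\"cloud_function\"",
      "aggregations": [{"alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_SUM"}],
      "denominatorAggregations": [{"alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_SUM"}],
      "comparison": "COMPARISON_GT",
      "thresholdValue": 0,
      "duration": "60s"
    }
  }]
}
```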
Aaron Sher, Stackdriver engineer
We got around this issue by adding the insertId as a label of the log-based metric we created for every log record we get from the pods running our services.
In the alerting policy, this label helped in two ways:
We grouped by it (named record_id), which made each incident unique, so it got reported without waiting for other incidents to be resolved, and at the same time it got resolved instantly.
We used it in the documentation of the notification to include a direct link to the issue (the log record) itself, which was a nice and essential feature to have. https://console.cloud.google.com/logs/viewer?project=MY_PROJECT&advancedFilter=insertId%3D%22${metric.label.record_id}%22
As Aaron Sher mentioned in his answer, it is a tricky problem. We might have done something not recommended or inefficient, but it works fine, and of course we are open to improvement recommendations.
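For anyone wanting to reproduce this, here is a rough sketch of creating such a metric, assuming the google-cloud-logging client library; the project id, metric name, and filter are placeholders:

```python
# Hypothetical sketch: create a log-based metric that extracts each log
# entry's insertId into a "record_id" label so alerts can group by it.
from google.api import label_pb2, metric_pb2
from google.cloud.logging_v2.services.metrics_service_v2 import (
    MetricsServiceV2Client,
)
from google.cloud.logging_v2.types import LogMetric

client = MetricsServiceV2Client()
metric = LogMetric(
    name="error_by_record",
    filter='resource.type="cloud_function" AND severity>=ERROR',
    # Pull the insertId of the matching entry into the "record_id" label.
    label_extractors={"record_id": "EXTRACT(insertId)"},
    metric_descriptor=metric_pb2.MetricDescriptor(
        metric_kind=metric_pb2.MetricDescriptor.MetricKind.DELTA,
        value_type=metric_pb2.MetricDescriptor.ValueType.INT64,
        labels=[label_pb2.LabelDescriptor(key="record_id")],
    ),
)
client.create_log_metric(parent="projects/MY_PROJECT", metric=metric)
```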
I have been fighting the same very simple problem with NServiceBus all day today. The problem is that there is lots of documentation on how to change the configuration, but almost nothing that helps me know what configuration I need.
There are sample applications, and they work, but there is nothing explaining how they work, what limitations they have, or how to do something just a little bit different from the samples. The sample applications also have a "Hello world" level of simplicity, and any real application needs something different from the sample, but again there is no help on how to make these changes, or on the implications of configuration choices.
Of all the things that are very difficult to guess from the documentation, it is the relationship between the endpoint name, the UnicastBusConfig mappings, and pub/sub persistence that is causing the most frustration right now.
Is the endpoint name the name of the MSMQ queue? Does that mean that every application has only one input queue for all message types? Does adding a mapping in UnicastBusConfig cause a subscription message to be sent to the publisher, or does it add a subscription record in subscription DB? Why can't you add the same message type more than once to UnicastBusConfig? Why can't I just subscribe to messages of a certain type without having to know which server they come from?
For someone who understands NServiceBus this probably seems so simple that it wasn't worth documenting, but for someone coming to this for the first time, it's the very simple stuff that's the most difficult to infer from the morass of low-level detail.
Is the endpoint name the name of the MSMQ queue?
Yes.
Does that mean that every application has only one input queue for all message types?
Yes. Each endpoint has a single queue associated with it, so all messages for that endpoint go through the same queue.
Does adding a mapping in UnicastBusConfig cause a subscription message to be sent to the publisher, or does it add a subscription record in subscription DB?
Neither really. The UnicastBusConfig section is for setting up the relationship between types (or assemblies) and endpoints. So it doesn't actually cause a subscription to be set up (per se), but it tells the framework where the messages will be coming from (and therefore how to subscribe to them).
The actual subscription gets created when the system starts up and NSB finds a handler for a particular type of message that matches a section in the UnicastBusConfig (assuming auto-subscribing is turned on).
This also works for sending Commands: the config section lets the framework know to which endpoint to Send() a Command.
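For illustration, here is a sketch of such a mapping in the XML configuration the question refers to (the assembly and endpoint names are placeholders):

```xml
<!-- App.config: messages in the MyMessages assembly belong to the
     PublisherEndpoint queue; NSB subscribes to events there and
     sends Commands there. -->
<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Messages="MyMessages" Endpoint="PublisherEndpoint" />
  </MessageEndpointMappings>
</UnicastBusConfig>
```

With auto-subscribing turned on, having a handler for an event type from MyMessages is what triggers the subscription message to PublisherEndpoint at startup.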
Why can't you add the same message type more than once to UnicastBusConfig?
Because a Command can have only one (logical) endpoint that handles it, and an Event can have only one (logical) endpoint that publishes it.
Why can't I just subscribe to messages of a certain type without having to know which server they come from?
This question is a bit more difficult to answer definitively, as it gets into the philosophy of having a central broker (hub and spoke) vs. bus-style architecture.
But in a nutshell, something, somewhere needs to know how to find the publisher in order to subscribe to it. Because NServiceBus does not have a central broker or routing table, it is left to the client to be configured with knowledge of the endpoints it consumes.
You might want to check out the NServiceBus documentation at http://docs.particular.net/nservicebus/; it's quite comprehensive and should provide answers to most of your questions.
I'm thinking about a system that will notify multiple consumers about events happening to a population of objects. Every subscriber should be able to subscribe to events happening to zero or more of the objects, and multiple subscribers should be able to receive information about events happening to a single object.
I think that some message queuing system will be appropriate in this case, but I'm not sure how to handle the fact that I'll have millions of objects - using a separate topic for every one of the objects does not sound good [or is it just fine?].
Can you please suggest an approach I should take, and maybe even some open source message queuing system that would be reasonable?
A few more details:
there will be thousands of subscribers [meaning not plenty of them],
subscribers will subscribe to tens or hundreds of objects each,
there will be ~5-20 million objects,
the events themselves don't have to carry any message; just the information that the object was changed is enough,
the vast majority of objects will never be subscribed to,
events occur at a maximum rate of a few hundred per second,
ideally the server should run under Linux and be able to integrate with the rest of the ecosystem via HTTP long-poll [using Node.js? continuations under Jetty?].
Thanks in advance for your feedback and sorry for somewhat vague question!
I can highly recommend RabbitMQ. I have used it in a couple of projects before and, from my experience, I think it is very reliable and offers a wide range of configurations. Basically, RabbitMQ is an open-source (Mozilla Public License (MPL)) message broker that implements the Advanced Message Queuing Protocol (AMQP) standard.
As documented on the RabbitMQ website:
RabbitMQ can potentially run on any platform that Erlang supports, from embedded systems to multi-core clusters and cloud-based servers.
... meaning that an operating system like Linux is supported.
There is a library for node.js here: https://github.com/squaremo/rabbit.js
It comes with an HTTP-based API for management and monitoring of the RabbitMQ server, including a command-line tool and a browser-based user interface as well - see: http://www.rabbitmq.com/management.html.
In the projects I have been working on, I have communicated with RabbitMQ using C# and two different wrappers, EasyNetQ and Burrow.NET. Both are excellent wrappers for RabbitMQ, but I ended up liking Burrow.NET the most, as it is easier and more obvious to work with (it doesn't do a lot of magic under the hood) and provides good flexibility to inject loggers, serializers, etc.
I have never worked with the number of objects that you are going to work with - I have worked with thousands (not millions). However, no matter how many objects I have been playing around with, RabbitMQ has always been really stable and has never been the source of errors in the system.
So to sum up - RabbitMQ is simple to use and set up, supports AMQP, can be managed via HTTP, and, what I like the most, it's rock solid.
Break up the topics to carry specific events, e.g. an "Object updated" topic, an "Object deleted" topic, and so on. That way clients only have to subscribe to the finite number of event-based topics they are interested in.
Inject headers into your messages when you publish them, and put intelligence into the clients to use these headers as message selectors. For example, the client knows the list of objects it is interested in - say you identify an object by an "id" - so the id can be a header, and the client will use the "id" header to determine whether it is interested in the message.
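For instance, here is a rough sketch of that header idea using the pika client for RabbitMQ; the exchange name, queue handling, ids, and notify_subscriber() are placeholders, and the filtering happens client-side as described above:

```python
# Rough sketch of client-side filtering on an "id" header using pika.
import pika

INTERESTING_IDS = {"42", "1337"}  # objects this client cares about

def notify_subscriber(object_id):
    # Hypothetical downstream notification, e.g. resolving an HTTP long-poll.
    print(f"object {object_id} changed")

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="events", exchange_type="fanout")
queue = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(queue=queue, exchange="events")

# Publisher side: stamp each event with the object's id as a header.
ch.basic_publish(
    exchange="events",
    routing_key="",
    body=b"",  # the event needs no payload, per the question
    properties=pika.BasicProperties(headers={"id": "42"}),
)

def on_event(ch, method, properties, body):
    # Client-side selector: skip objects we are not subscribed to.
    object_id = (properties.headers or {}).get("id")
    if object_id in INTERESTING_IDS:
        notify_subscriber(object_id)
    ch.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue=queue, on_message_callback=on_event)
ch.start_consuming()
```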
Depending on your requirements, you may also want to consider guaranteed delivery, to make sure that the client will receive the message even if it goes offline and comes back later.
The options that I would recommend off the top of my head are ActiveMQ, RabbitMQ and Redis PUB/SUB (I haven't really worked with Redis pub/sub, so please do your due diligence).
Finally, here are some performance benchmarks for RabbitMQ and Redis.
Just saw that you only have a few hundred messages being pushed out per second; this is not a big deal for ActiveMQ. I have been using AMQ on a system that processes 240 messages per second, and it works just fine. I use a thread pool of workers to process the messages asynchronously, though. Look at a framework like Akka if you are in Java land; if not, stick with Node.js and the cool ecosystem around it.
If it has to be open source I'd go for ActiveMQ, plus an application server to provide the JMS functionality for topics; it has Ajax support, so you can access the topics from your client.
So you would use the JMS infrastructure to publish the topics for the objects, and you can create topics as you need them.
Besides, by using a Java application server you may be able to take advantage of clustering, load balancing and other high-availability features (obviously depending on the selected product).
Hope that helps!!!
Since your messages are very small, you might want to consider MQTT, which is designed for small devices, although it works fine on powerful devices as well. The key consideration is the low overhead - basically a 2-byte header for a small message. You probably can't use just any simple or open source MQTT server, due to your volume; you probably need a heavy-duty dedicated appliance like MessageSight to handle it.
Some more details on your application would certainly help. Also, you don't mention security at all; I assume you must have some needs in this area.
Though I'm not sure about your work environment, here are my two bits. Can you identify each object with a unique ID in your system? If so, you can have a topic per event type. For example, if you want to track object-deletion events, object-update events and so on, you can have a topic for each event type. These topics would be published with the IDs of objects whenever the corresponding event happens to an object. This limits the number of topics you need.
The second part of your problem is that different subscribers want to subscribe to different objects, so not all subscribers are interested in knowing about the events of all objects. This problem is scoped to the message-selector (filtering) mechanism provided by the messaging framework. So basically you need to work out on what basis a subscriber is interested in a particular object, and use that basis as the message-filtering mechanism. It could be anything: object type, object state, etc. So ultimately your system would consist of one topic per event type, with someone publishing event messages carrying {object-type: object-id} information. Subscribers could subscribe to any topic, with a filtering criterion.
If the above solution is satisfactory, you can use any messaging solution: ActiveMQ, WMQ, RabbitMQ.
I'm trying to use RabbitMQ in a more unconventional way (though at this point I can pick any other message queue implementation if needed). Instead of having Rabbit push messages to my consumers, a consumer connects to a queue and fetches a batch of N messages (during which it consumes some and possibly rejects some), after which it jumps to another queue, and so on. This is done for redundancy: if some consumers crash, all messages are guaranteed to be consumed by some other consumer.
The problem is that I have multiple consumers and I don't want them to compete over the same queue. Is there a way to guarantee a lock on a queue? If not, can I at least make sure that if two consumers are connected to the same queue, they don't read the same message? Transactions might help me to some degree, but I've heard talk that they'll be removed from RabbitMQ.
Other architectural suggestions are welcomed too.
Thanks!
EDIT:
As pointed out in the comments, there's a particularity in how I need to process the messages. They only make sense taken in groups, and there's a high probability that related messages are clumped together in a queue. If, for example, I pull a batch of 100 messages, there's a high probability that I'll be able to do something with messages 1-3, 4-5, 6-10, etc. If I fail to find a group for some messages, I'll resubmit them to the queue. A work queue wouldn't work, because it would spread messages from the same group across multiple workers that wouldn't know what to do with them.
Have you had a look at this free online book on Enterprise Integration Patterns?
It sounds like you really need a workflow where you have a batcher component before the messages get to your workers. With RabbitMQ there are two ways to do that. Either use an exchange type (and message format) that can do the batching for you, or have one queue, and a worker that sorts out batches and places each batch on its own queue. The batcher should probably also send a "batch ready" message to a control queue so that a worker can discover the existence of the new batch queue. Once the batch is processed the worker could delete the batch queue.
If you have control over the message format, you might be able to get RabbitMQ to do the batching implicitly in a couple of ways. With a topic exchange, you could make sure that the routing key on each message is of the format work.batchid.something and then a worker that learns of the existence of batch xxyzz would use a binding key like #.xxyzz.# to only consume those messages. No republishing needed.
The other way is to include a batch id in a header and use the newer headers exchange type. Of course you can also implement your own custom exchange types if you are willing to write a small amount of Erlang code.
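Here is a minimal sketch of the topic-exchange variant using the pika client; the exchange name, batch id "xxyzz", and payload are placeholders:

```python
# Minimal sketch of the topic-exchange batching idea with pika.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="work", exchange_type="topic")

# Worker side: once a worker learns of batch "xxyzz" (e.g. via a
# "batch ready" control message), it binds a queue that receives only
# that batch's messages.
batch_queue = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(queue=batch_queue, exchange="work", routing_key="#.xxyzz.#")

# Publisher side: the routing key encodes the batch id, so no
# republishing is needed.
ch.basic_publish(
    exchange="work",
    routing_key="work.xxyzz.item1",
    body=b"payload",
)
```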
I do recommend checking the book though, because it gives a better overview of messaging architecture than the typical worker queue concept that most people start with.
Have your consumers pull from just one queue. They will be guaranteed not to share messages (Rabbit will round-robin the messages among the currently connected consumers), and it's heavily optimized for that exact usage pattern.
It's ready to use, out of the box. In the RabbitMQ docs it's called the Work Queue model: one queue, multiple consumers, with none of them sharing anything. It sounds like what you need.
You can set a channel/consumer-level prefetch count to consume messages in batches. In order to re-submit messages, you should use the basic.reject AMQP method, and those messages can be requeued or forwarded to a dead-letter queue. Multiple consumers trying to pull messages from the same queue is not an issue, as the AMQP basic.get method is synchronized to handle concurrent consumers.
https://groups.google.com/forum/#!topic/rabbitmq-users/hJ8f5du-GCA
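Here is a short sketch of that combination using the pika client; the queue name, prefetch size, and the grouping check are placeholders:

```python
# Batch consumption with a prefetch window and basic.reject, using pika.
import pika

def belongs_to_known_group(body):
    # Hypothetical check: does this message complete a group we can process?
    return True

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="tasks", durable=True)

# At most 100 unacknowledged deliveries in flight: an effective batch window.
ch.basic_qos(prefetch_count=100)

def handle(ch, method, properties, body):
    if belongs_to_known_group(body):
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Put the message back on the queue for a later pass.
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

ch.basic_consume(queue="tasks", on_message_callback=handle)
ch.start_consuming()
```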
We are doing some optimization of our app that heavily uses EWS, and one point is about cleaning up subscriptions that are no longer needed. We are using the PullSubscription type, so naturally the first thing I did was to make sure there is an Unsubscribe method call for each subscription that should be removed.
To my surprise, according to the Exchange performance counters, the number of subscriptions after the Unsubscribe calls decreases only by several subscriptions, not to 0 (for testing purposes I call Unsubscribe on all open subscriptions). Say we have 200 mailboxes, each with 3 subscriptions (one for each kind of item: emails, appointments, etc.), which equals 600 active subscriptions. After Unsubscribe calls for all 600 of them, the counters show that only 10 or so subscriptions were removed. If we run our app a few times, the number of subscriptions grows each time.
So does Exchange somehow buffer or delay or do whatever with those subscriptions? Is an Unsubscribe call enough, or should I do something additional to be sure that a subscription is removed and not left hanging on the server eating resources? Or maybe it is something about the configuration of the server and how the EWS service works?
Of course, the EWS documentation is as vocal about this as in most other cases (which means only a basic class reference, no solutions to possible problems, nothing useful for solving issues), so I hope somebody here will throw me some hints.
You cannot do more than unsubscribe from all subscriptions. Exchange should handle that and discard old subscriptions over time...