Google Pubsub - Receive delivery attempt for push subscription - google-cloud-functions

I have a Google cloud function that is being triggered by a Pubsub push subscription.
I wish to know the current delivery attempt of the given message.
With a pull subscription this works by setting a dead-letter topic; however, I am not able to get the delivery attempt from a push subscription's message attributes. I tried configuring a dead-letter topic, but the delivery_attempt attribute does not appear in the message attributes.
Is there a way to get the delivery attempt parameter in a push subscription?

For push subscriptions, use deliveryAttempt, not delivery_attempt. The documentation calls this out:
"When Pub/Sub forwards undeliverable messages from a push subscription, every message you receive from the subscription includes the deliveryAttempt field."
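To make this concrete, here is a minimal sketch (in Python) of a push endpoint that reads the field. It assumes the standard push envelope, in which deliveryAttempt sits next to the message object rather than inside message.attributes, and it only appears once a dead-letter topic is configured on the subscription; the function name is made up.

def handle_push(request):
    # The Pub/Sub push envelope looks like:
    # {"deliveryAttempt": N, "message": {...}, "subscription": "..."}
    envelope = request.get_json()
    delivery_attempt = envelope.get("deliveryAttempt")  # None until a dead-letter topic is set
    message = envelope.get("message", {})
    print("messageId=%s deliveryAttempt=%s" % (message.get("messageId"), delivery_attempt))
    # Any 2xx response acknowledges the message; non-2xx triggers redelivery.
    return ("", 204)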

I am not sure it is possible to get that data explicitly... but I can suggest a workaround for consideration.
Every time the cloud function is invoked, you can (in your code) create or update a Firestore document with some unique ID (either the message event ID, or some unique business-related identifier). That document can have attributes, one of which is the time and/or the attempt number.
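A minimal sketch of that idea in Python, assuming the google-cloud-firestore client; the collection name and the record_attempt helper are made up for illustration:

from google.cloud import firestore

db = firestore.Client()

def record_attempt(message_id: str) -> int:
    # Atomically bump a per-message counter and remember when we last saw it.
    ref = db.collection("delivery-attempts").document(message_id)
    ref.set(
        {"attempts": firestore.Increment(1),
         "last_seen": firestore.SERVER_TIMESTAMP},
        merge=True,
    )
    # Read the counter back; this is our own count of invocations.
    return ref.get().get("attempts")

Keep in mind this counts the invocations your code has observed, which is close to, but not guaranteed to equal, Pub/Sub's internal delivery attempt count.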

In Java you can use a static method of the Subscriber class.
For a PubsubMessage message:
com.google.cloud.pubsub.v1.Subscriber.getDeliveryAttempt(message);
As the documentation says, it "Returns the delivery attempt count for a received PubsubMessage":
private static final int MAX_ATTEMPTS = 5;

private void killMessage(Event<String> event, Exception en,
        BasicAcknowledgeablePubsubMessage acknowledgeable) {
    // getDeliveryAttempt returns null if the subscription has no dead-letter policy.
    Integer attempts = Subscriber.getDeliveryAttempt(acknowledgeable.getPubsubMessage());
    if (attempts != null && attempts.intValue() < MAX_ATTEMPTS) {
        // Nack so that Pub/Sub redelivers the message.
        acknowledgeable.nack();
        return;
    }
    // Attempts exhausted: hand the message off to the error queue.
    sendToErrorQueue(event, en, acknowledgeable);
}
https://googleapis.dev/java/google-cloud-pubsub/latest/com/google/cloud/pubsub/v1/Subscriber.html

Related

My messages are delivered out of flow sequence order and how do I compensate?

I wish to use Twilio in the context of an adventure game. As the gamer (Geocacher) progresses on an actual treasure (cache) hunt, clues are given by text when certain code words or numbers are entered as part of the thread. I am very comfortable creating the flow in Studio and have a working flow that is debugged and ready to go.
Not so fast, grasshopper! Strange things started to happen when beta testing the flow. Basically, texts that show as being sent arrive to the user out of sequence in the thread. The SMS logs show everything working correctly (message sent), but what I call Zombie messages arrive to the user after a previous message has arrived. The Zombies are legitimate messages from the flow, but out of the correct sequence, and that makes the thread unusable for my purposes.
I learned too late in my "programming" that Twilio states, "If you send multiple SMS messages to the same user in a short time, Twilio cannot guarantee that those messages will arrive in the order that you sent them." Ugh!
So, I started with the help techs at Twilio, and one solution is to create a subflow that is inserted after a Send Message widget. This subflow fetches the message via the SMS SID to check its status. If the status is "delivered", we can safely say the message has been received by the recipient, and then permit the next message in the flow.
That sounds great, but I am not a programmer and will never be able to integrate the suggested code, much less debug it when things don't work. There might be many other approaches that you can suggest. The task is: 1) send a message, 2) run a subflow that checks for message delivery, 3) send the next message in the sequence.
I need to move on to implementation, and this type of subflow is out of my wheelhouse. I am willing to pay for programming help.
I can post the JSON code that was created as a straw man, but I have no idea how to use it or whether it is the optimal solution. It would seem that a lot of folks experience this issue and would like a solution. A nice, tight JSON subflow with directions on how to insert it would seem to be a necessary part of the widget toolkit provided by Twilio in Studio.
Please Help Me! =)
As you stated, the delivery of the message cannot be guaranteed. Checking the status of the sent message is the most reliable approach, using a subflow, a Twilio Function, or a combination. Just keep in mind that Twilio Functions have a 10s execution time limit. I don't expect delivering the SMS will take longer than 10s in most cases, but if you're worried about edge cases, you'd have to loop the status check multiple times. I'll share a proof of concept for this later in this answer.
An easier way, but it still doesn't guarantee delivery order, would be to add some delay between each message. There's no built-in delay widget, but here's code on how to create a Twilio Function to add delays, up to 10s.
A more hacky way to implement delays, without having to use this Twilio Function, is to use the Send & Wait For Reply widget and configure its "Stop Gathering After" property to the amount of delay you'd like to add. If the user responds, connect to the next widget; if they don't, also connect to the next widget.
As mentioned earlier, here's the Subflow + Function proof of concept I hacked together:
First, create a Twilio Functions Service, in the service create two functions:
/delay:
// Helper function for quickly adding await-able "pauses" to JavaScript
const sleep = (delay) => new Promise((resolve) => setTimeout(resolve, delay));

exports.handler = async (context, event, callback) => {
  // A custom delay value could be passed to the Function, either via
  // request parameters or by the Run Function widget.
  // Default to a 5 second delay.
  const delay = event.delay || 5000;

  // Pause the Function for the specified number of ms.
  await sleep(delay);

  // Once the delay has passed, return a success message, TwiML, or
  // any other content to whatever invoked this Function.
  return callback(null, `Timer up: ${delay}ms`);
};
/get-message:
exports.handler = function (context, event, callback) {
  const messageSid = event.message_sid,
    client = context.getTwilioClient();

  if (!event.message_sid) throw "message_sid parameter is required.";

  // Fetch the message by SID so the caller can inspect its delivery status.
  client.messages(messageSid)
    .fetch()
    .then((message) => callback(null, message))
    .catch((error) => {
      console.error(error);
      return callback(error);
    });
};
Then, create a Studio Flow named something like "Send and Wait until Delivered".
In this flow, you send the message, grabbing the message body passed in from the parent flow, {{trigger.parent.parameters.message_body}}.
Then, you run the /get-message Function, and check the message status.
If it is delivered, set the status variable to delivered. This variable will be passed back to the parent flow. If the status is any of accepted, queued, sending, or sent, then the message is still en route, so wait a second using the /delay function, then loop back to the /get-message function.
If it is any other status, it is assumed something is wrong, and status is set to error.
Now you can create your parent flow, where you call the subflow, specifying the message_body parameter. Then you can check the status variable returned by the subflow, whether it is delivered or error.
You can find the export for the subflow and the parent flow in this GitHub Gist. You can import it, and it could be useful as a reference.
Personally, I'd add the /delay function and use it after every message, adding a couple of seconds of delay. I'd assume that delay adds enough buffer for no zombie messages to appear.
Note: The code, proof of concept, and advice are offered as-is, without liability to me or Twilio. None of it has been tested against a production workload, so make sure you test it thoroughly for your use case!

Google pubsub into HTTP triggered cloud function?

Is it possible to trigger an HTTP cloud function in response to a pubsub message?
When editing a subscription, Google makes it possible to push the message to an HTTPS endpoint, but for abuse reasons you have to be able to prove that you own the domain in order to do this, and of course you can't prove that you own Google's own *.cloudfunctions.net domain, which is where the functions get deployed.
The particular topic I'm trying to subscribe to is a public one, projects/pubsub-public-data/topics/taxirides-realtime. The answer might be to use a background function rather than an HTTP-triggered one, but that doesn't work for different reasons:
gcloud functions deploy echo --trigger-resource projects/pubsub-public-data/topics/taxirides-realtime --trigger-event google.pubsub.topic.publish
ERROR: gcloud crashed (ArgumentTypeError): Invalid value 'projects/pubsub-public-data/topics/taxirides-realtime': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long.
This seems to indicate this is only permitted on topics I own, which is a strange limitation.
It is possible to push messages from a Pub/Sub topic to a cloud function. I was looking for a way to publish messages from a topic in project A to a function in project B. This was not possible with a regular topic trigger, but it is possible with an HTTP trigger. Overall steps to follow:
Create an HTTP-triggered function in project B.
Create a topic in project A.
Create a push subscription on that topic in project A.
Do the domain verification.
Push subscription
Here we have to fill in three things: the endpoint, the audience, and the service account under which the function runs.
Push endpoint: https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME/ (slash at the end)
Audience: https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME (no slash at the end)
Service account: choose the service account under which you want to send the actual message. Be sure the service account has the roles/cloudfunctions.invoker role on the cloud function that you are sending the messages to. Since November 2019, HTTP-triggered functions are secured by default because allUsers is no longer set automatically. Do not set this property unless you want your HTTP function to be public!
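For reference, here is a sketch of creating such a push subscription with the Python Pub/Sub client; the project, topic, subscription, and service account names below are placeholders:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("project-a", "my-topic")
subscription_path = subscriber.subscription_path("project-a", "my-push-sub")

# Pub/Sub authenticates each push request as this service account,
# minting an OIDC token with the audience the function expects.
push_config = pubsub_v1.types.PushConfig(
    push_endpoint="https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME/",
    oidc_token=pubsub_v1.types.PushConfig.OidcToken(
        service_account_email="pubsub-invoker@project-a.iam.gserviceaccount.com",
        audience="https://REGION-PROJECT_ID.cloudfunctions.net/FUNC_NAME",
    ),
)

subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "push_config": push_config,
    }
)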
Domain verification
Now you probably can't save your subscription because of an error; that is because the endpoint has not been validated by Google. Therefore you need to whitelist the function URL at: https://console.cloud.google.com/apis/credentials/domainverification?project=PROJECT_NAME.
Following this step will also bring you to the Google Search Console, where you also need to verify that you own the endpoint. Sadly, at the time of writing, this process cannot be automated.
Next, add something along the lines of the following (Python example) to your cloud function so Google can verify it:
# Serve the Search Console verification token on GET requests.
if request.method == 'GET':
    return '''
        <html>
          <head>
            <meta name="google-site-verification" content="{token}" />
          </head>
          <body>
          </body>
        </html>
    '''.format(token=config.SITE_VERIFICATION_CODE)
Et voila! This should be working now.
Sources:
https://github.com/googleapis/nodejs-pubsub/issues/118#issuecomment-379823198,
https://cloud.google.com/functions/docs/calling/http
Currently, Cloud Functions does not allow one to create a function that receives messages for a topic in a different project. Therefore, specifying the full path including "projects/pubsub-public-data" does not work. The gcloud command to deploy a Cloud Function for a topic expects the topic name only (and not the full resource path). Since the full resource path contains the "/" character, it is not a valid specification and results in the error you see.
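For a topic in your own project, the deploy command would take just the short topic name, presumably something like this (assuming a topic you own named taxirides-realtime):
gcloud functions deploy echo --trigger-topic taxirides-realtime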
The error you are getting suggests that something is misspelled in the gcloud command you are issuing:
ERROR: gcloud crashed (ArgumentTypeError): Invalid value 'projects/pubsub-public-data/topics/taxirides-realtime': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long
Are you putting a newline character in the middle of the command?

NATS - just one subscriber to take action for a published event in a microservice architecture

I'm new to NATS and have read all the examples for:
https://nats.io/documentation/concepts/nats-messaging/
I'm working in a microservice architecture where microservice Y (MSY) needs to store some information published by another microservice X (MSX). I have 2-10 instances of MSY, so when changes are made in MSX and an MSX instance publishes an event, I want only one instance of MSY to save the information, so that they don't all save the same data.
I have read about Request-Reply:
https://nats.io/documentation/concepts/nats-req-rep/
but there it seems that all instances receive the message (and will handle it), even though it is point-to-point and the reply is handled just by the one instance that is quickest to reply.
Is this correct, or have I misunderstood the example?
If I only need one instance of MSY to handle a given message (store data in the db), what can I do to achieve this?
Use queue groups. If you have multiple subscriptions on the same subject with the same queue group, only one of the members of the group will receive the message.
Check this out: https://nats.io/documentation/concepts/nats-queueing/
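For illustration, here is a minimal queue-group subscriber using the nats-py Python client; the subject and queue group names are made up. Run several copies of this, and each published message is handled by only one of them:

import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handle(msg):
        # Only one member of the "msy-workers" queue group receives each message.
        print(f"{msg.subject}: {msg.data.decode()}")

    # Every MSY instance subscribes to the same subject with the same queue group.
    await nc.subscribe("msx.events", queue="msy-workers", cb=handle)

    # Keep the subscriber alive for the demo, then shut down cleanly.
    await asyncio.sleep(30)
    await nc.drain()

asyncio.run(main())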

Mosquitto auth plug in ACL check wont get called for subscriptions

I am using mosquitto 1.4.5 build.
I am trying to have a separate plug-in do the ACL check for the mosquitto broker's topic subscription and publish, using the provided header.
Just to test the auth plug-in's integration, I have printed a message as follows, using the provided header for the mosquitto auth plug-in (mosquitto_plugin.h).
int mosquitto_auth_acl_check(void *user_data, const char *clientid, const char *username, const char *topic, int access)
{
    /* Log on every ACL check so we can see when the broker calls us. */
    mosquitto_log_printf(MOSQ_LOG_INFO, "ACL Check called");
    return MOSQ_ERR_SUCCESS;
}
After building the shared object and changing the config file's auth_plugin attribute, I ran a client simulation to see whether Subscribe and Publish would call mosquitto_auth_acl_check.
What I realized is that, despite what the provided header's comments say, it never gets called for subscription.
In the publish scenario, I can see the "ACL Check called" message being logged, so I can assume the function is called.
In the subscribe scenario, the message is not logged, so I assume the function is not called.
What could be the reason for it not being called only for subscription?
It's not currently called on subscription because of the relative difficulty of comparing a wildcard subscription against a wildcard acl.
ACLs are checked at the point when a message is about to be sent to a client, which amounts to the same thing but isn't as efficient.

Different IP while making two requests through UrlFetchApp in the same script

Can we rely on the fact that when a Google Apps Script is executed by a time trigger and makes two subsequent requests using UrlFetchApp, both are made by the same server with the same IP?
I need to ensure this, because in one request I query for an access token for a remote service, and with the other I use this access token. The remote service I'm querying checks whether the access token was requested by a client with the same IP as the requests that use the token.
EDIT
I examined the behavior by time-triggering some dumb scripts with just a few consecutive UrlFetchApp requests in them and checked the server logs. I made two clear observations:
The IP may vary between consecutive calls within one trigger.
There is a clear rotation of the IPs; sometimes there is a group of 7 consecutive calls with the same IP, sometimes 6. But in general there are always groups.
Because I wanted to use only Google infrastructure for my script, and an occasional failure was not a problem, I came up with an ugly but working solution:
function batchRequest(userLogin, userPassword, webapiKey, resource, attributes, values) {
  // requestToken uses UrlFetchApp.fetch.
  var token = requestToken(userLogin, userPassword, webapiKey);
  // request uses UrlFetchApp.fetch with options.muteHttpExceptions set to
  // true so that we can read the response code.
  var result = request(resource, token, attributes, values);
  var i = 0;
  // Retry until the token request and the business request happen to land
  // in the same "IP group" (at most 10 tries).
  while (result.getResponseCode() == 500 && i < 10) {
    token = requestToken(userLogin, userPassword, webapiKey);
    result = request(resource, token, attributes, values);
    i++;
  }
  return result;
}
So I simply retry, at most 10 times, and hope that the two requests (one for the token, another for the business logic) land in the same "IP group".
I put more detailed description here: https://medium.com/p/dd0746642d7
Within the same trigger call, yes. From another trigger, no. This is based on experience, but I haven't seen it documented.