DynamoDB: handling throttling with boto

According to the DynamoDB docs, requests that cause throttling are automatically retried when using one of the supported SDKs. However, I was unable to find any mention of how boto handles throttling. Does boto automatically retry throttled requests, or should I start catching ProvisionedThroughputExceededException myself?

Boto does automatically retry ProvisionedThroughputExceededException errors. There is a special retry handler in the boto.dynamodb.layer1 module that deals with this case. It uses shorter wait intervals than the normal retry logic and retries a maximum of 10 times, after which it raises a DynamoDBThroughputExceededError exception. Boto also keeps a running count of the throttling errors it has caught in the throughput_exceeded_events attribute of the Layer1 object.
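As a rough sketch of what this looks like from application code (the table and key names are made up, and reaching the Layer1 object through the layer1 attribute assumes the Layer2 connection returned by boto.connect_dynamodb):

import boto
from boto.dynamodb.exceptions import DynamoDBThroughputExceededError

conn = boto.connect_dynamodb()        # Layer2 connection wrapping a Layer1 instance
table = conn.get_table('my-table')    # hypothetical table name

try:
    item = table.get_item(hash_key='some-key')
except DynamoDBThroughputExceededError:
    # Only raised after boto's built-in retry handler has given up.
    print('Still throttled after boto exhausted its retries')

# Running count of throttling errors boto has absorbed so far.
print(conn.layer1.throughput_exceeded_events)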

Related

Get CALL_EXCEPTION details

I am running a private geth node and I am wondering if there is any way to find the root cause of a transaction exception. When I send the transaction, all I can see is:
transaction failed [ See: https://links.ethers.org/v5-errors-CALL_EXCEPTION ]
And when I run the same transaction in the Hardhat network, I get more details:
VM Exception while processing transaction: reverted with panic code 0x11 (Arithmetic operation underflowed or overflowed outside of an unchecked block)
Is it possible to get the same info from my geth node?
The revert reason is extracted by replaying the transaction (see the example implementation). This sets requirements on what data your node must store in order to be able to replay the transaction, so check your node configuration and your exact use case when diagnosing further. A rough sketch of the replay approach follows the points below.
Your node must support EIP-140 and EIP-838. This has been the case for many years now, so it is unlikely that your node does not support them.
Unless a smart contract explicitly reverts, the JSON-RPC error messages for default reverts (a non-payable function called with value, math errors) depend on the node type and may vary across different nodes.
Hardhat internally uses its own simulated EVM node (Hardhat Network), not Go Ethereum, which is why it can show richer error details.
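For illustration, here is a rough sketch of that replay technique using web3.py against a geth JSON-RPC endpoint (the endpoint URL and transaction hash are placeholders, and the node must still hold the historical state needed for the call):

from web3 import Web3
from web3.exceptions import ContractLogicError

w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))   # your geth node

tx_hash = '0x...'                       # hash of the failed transaction
tx = w3.eth.get_transaction(tx_hash)

# Re-run the transaction as an eth_call against the state at its block.
# If other transactions in the same block touched the relevant state,
# the replayed result may differ from the original execution.
call = {
    'from': tx['from'],
    'to': tx['to'],
    'data': tx['input'],
    'value': tx['value'],
    'gas': tx['gas'],
}
try:
    w3.eth.call(call, block_identifier=tx['blockNumber'])
    print('call did not revert on replay')
except ContractLogicError as err:
    # web3.py surfaces the EIP-838 revert data as the exception message.
    print('revert reason:', err)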

Is Exponential Backoff inbuilt in AWS SDK Boto3?

I am using Mechanical Turk with the Boto3 SDK.
As per the general documentation (https://docs.aws.amazon.com/general/latest/gr/api-retries.html), “each AWS SDK implements exponential backoff algorithm” – so why do we need to implement it again in our code?
(I am also referring to AWS' answer here: https://forums.aws.amazon.com/thread.jspa?threadID=307015)
Default client back-off might not be enough for every use case.
I'm not familiar with this particular service client, but you can generally detect retries by enabling logging at the logging.DEBUG level. Retry attempts are logged, so you can check how often they happen and how many there are.
Some services have very specific rate limits in terms of N attempts in M time, so you can override the default back-off by using the retries property of botocore.config.Config and constructing the service client with the config keyword argument.
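A minimal sketch of both ideas, assuming a Mechanical Turk client (the retry numbers are illustrative, not a recommendation):

import logging
import boto3
from botocore.config import Config

# Surface botocore's retry attempts in the log output.
boto3.set_stream_logger('botocore', logging.DEBUG)

config = Config(
    retries={
        'max_attempts': 10,    # total attempts, including the initial call
        'mode': 'standard',    # or 'adaptive' for client-side rate limiting
    }
)

mturk = boto3.client('mturk', config=config)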

Using pika.BlockingConnection I would like to consume messages from a queue then exit

I have written a consumer using pika.BlockingConnection and channel.start_consuming() that consumes messages from a specific queue. When the queue is empty, the consumer waits indefinitely for the next message.
Is there a way to specify some sort of timeout so that start_consuming() exits gracefully if no message has been fetched from the queue within that period?
I am using Python 3.7.4 and pika 1.1.0 to consume from RabbitMQ 3.7.12.
Use the "consume generator" with a timeout.
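A minimal sketch of that approach (the queue name and timeout are placeholders):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# The consume generator yields (None, None, None) once inactivity_timeout
# seconds pass without a delivery.
for method, properties, body in channel.consume('my-queue', inactivity_timeout=30):
    if method is None:
        break   # queue idle for 30 seconds, stop consuming
    print(body)
    channel.basic_ack(method.delivery_tag)

channel.cancel()      # cancel the consumer cleanly
connection.close()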
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Different Retry policies for different messages in MSMQ

I'm putting together an API that allows asynchronous updates via an MSMQ message queue, and the requirement is to let the developer consuming the API specify different retry policies per message. For example, a high-priority client system (e.g. for sales) would submit all its messages with 5 delivery attempts (retries) and 15 minutes between each attempt, whereas a low-priority client system (e.g. a back-end mail-shot system that lets users update their marketing preferences) would submit messages with 3 retries and an hour between each attempt.
Is there a way in the System.Messaging MSMQ (version 3 or 4) implementation to specify the number of retries, the retry delay, and whether failed messages are sent to a dead-letter queue or just deleted? (And if so, how?)
I would be open to using other messaging frameworks if they fulfilled this requirement.
Is there a way in the System.Messaging MSMQ (version 3 or 4) implementation to specify number of retries
Depending on which operating system/MSMQ version you're using, WCF gives you quite sophisticated control over retry semantics. The following applies to Windows Server 2008 and MSMQ 4 using a transactional queue.
The main setting on the binding is called MaxRetryCycles. One retry cycle is an attempt to successfully read a message from the queue and process it inside the handling method. This "attempt" can actually be made up of multiple attempts, as defined by the MSMQ binding property ReceiveRetryCount: the number of times an application will try to read the message and process it before rolling back the de-queue transaction. That rollback marks the end of one retry cycle.
You can also introduce a delay in between cycles with the RetryCycleDelay property.
A more complicated consideration is what to do with the messages which fail even after multiple retry cycles.
allow the developer consuming the API to specify different retry policies per message
I am not sure how you could do this with MSMQ - as far as I'm aware it's only possible to set retry semantics on a per-endpoint basis. If you're using transactions then you can't even allow API users to set the priority of the messages being sent (transactional queues guarantee delivery in order).
The only thing you could do is host another instance of your API for high-priority traffic and one for low-priority traffic. These could be hosted in different environments, and this has the added benefit that low-priority messages won't be competing for system resources with high-priority messages.
Hope this helps.

Google Drive SDK - 500: Internal Server error: File uploads successfully most of the time

The Google Drive REST API sometimes returns a 500: Internal Server Error when attempting to upload a file. Most of these errors actually correspond to a successful upload. We retry the upload as per Google's recommendations only to see duplicates later on.
What is the recommended way of handling these errors?
Google's documentation seems to indicate that this is an internal error of theirs, and not a specific error that you can fix. They suggest using exponential backoff, which is basically re-attempting the function at increasing intervals.
For example, the function fails. Wait 2 seconds and try again. If that fails, wait 4 seconds. Then 8 seconds, 16, 32, and so on. The bigger gaps mean that you're giving the service more and more time to right itself. Depending on your needs, though, you may want to cap this so that it stops retrying after a maximum of, say, 10 minutes.
The retrying package has a very good setup for this. You can just do from retrying import retry and then use retry as a decorator on any function that should be re-attempted. Here's an example of mine:
from retrying import retry

@retry(wait_exponential_multiplier=1000, wait_exponential_max=60*1000,
       stop_max_delay=10*60*1000)
def find_file(name, parent=''):
    ...
To use the decorator you just need to put @retry before the function declaration. You could just use retry() with no arguments, but there are optional parameters you can pass to adjust how the timing works. I use wait_exponential_multiplier to control how quickly the wait grows between tries, wait_exponential_max is the maximum time it will wait between attempts, and stop_max_delay is the total time it will spend retrying before it re-raises the exception. All of these values are in milliseconds.
Standard error handling is described here: https://developers.google.com/drive/handle-errors
However, 500 errors should never happen, so please add log information, and Google can look to debug this issue for you. Thanks.