In APIM we currently have throttling at the product subscription key level. But obviously, if we have multiple APIs within the same product, one API could consume more quota than expected and prevent the others from being usable. As per the MS documentation (https://learn.microsoft.com/en-us/azure/api-management/api-management-sample-flexible-throttling), we can combine policies.
The question is whether, with that approach, we can do the following:
API-1 300 calls per 60 seconds where product subscription key =123
API-2 200 calls per 60 seconds where product subscription key =123
API-3 200 calls per 60 seconds where product subscription key =123
If so, what would be the total number of calls for the product subscription key, if that makes sense?
I took the approach below to combine policies, but APIM doesn't like it.
<rate-limit-by-key calls="50" renewal-period="60" counter-key="#("somevalue" + context.Request.Headers.GetValueOrDefault("Ocp-Apim-Subscription-Key"))" />
<rate-limit calls="10" renewal-period="30">
<api name="AddressSearch API dev" calls="5" renewal-period="30" />
<operation name="Search_GetAddressSuggestions" calls="3" renewal-period="30" />
</rate-limit>
It's important to understand that the counters of rate-limit-by-key and rate-limit are independent.
When rate-limit-by-key allows a request to pass, it increments its counter; when rate-limit allows a request to pass, it increments its counters. In your configuration, when rate-limit-by-key throttles a request, rate-limit will not be executed and will not count that request.
What that means is that in most cases the lower limit wins. Your configuration would allow one subscription to make 50 calls per minute, but that is unlikely to make any difference, because the second rate-limit policy will throttle after 10 calls to the same product, so the first one never gets a chance to do anything.
If you want limits as in your sample, you could use a configuration like this:
<rate-limit calls="0" renewal-period="0">
    <api name="API-1" calls="300" renewal-period="60" />
    <api name="API-2" calls="200" renewal-period="60" />
    <api name="API-3" calls="200" renewal-period="60" />
</rate-limit>
So, to apply rate limiting at the API level, I came up with the following, which addressed my requirement.
<choose>
    <when condition="@(context.Operation.Id.Equals("End point name1"))">
        <rate-limit-by-key calls="40" renewal-period="30" counter-key="@(context.Api.Name + context.Operation.Name + context.Request.Headers.GetValueOrDefault("Ocp-Apim-Subscription-Key"))" />
    </when>
    <when condition="@(context.Operation.Id.Equals("End point name2"))">
        <rate-limit-by-key calls="20" renewal-period="30" counter-key="@(context.Api.Name + context.Operation.Name + context.Request.Headers.GetValueOrDefault("Ocp-Apim-Subscription-Key"))" />
    </when>
    <otherwise>
        <rate-limit-by-key calls="15" renewal-period="30" counter-key="@(context.Api.Name + context.Operation.Name + context.Request.Headers.GetValueOrDefault("Ocp-Apim-Subscription-Key"))" />
    </otherwise>
</choose>
Hope this helps.
Just to confirm - you are setting three throttling policies at the API level, based on the subscription key:
API-1: 300 calls per 60 seconds
API-2: 200 calls per 60 seconds
API-3: 200 calls per 60 seconds
In this case, if these are your only APIs, the maximum number of requests per subscription key per 60 seconds is:
300 + 200 + 200 = 700.
If you have more APIs, they will not be throttled unless you specify a policy for them as well.
I want my consumers to process large batches, so I aim to have the consumer listener "wake up", say, at 1800 MB of data or every 5 min, whichever comes first.
Mine is a Kafka Spring Boot application; the topic has 28 partitions, and this is the configuration I explicitly change:
| Parameter | Value I set | Default value | Why I set it this way |
| --- | --- | --- | --- |
| fetch.max.bytes | 1801 MB | 50 MB | fetch.min.bytes + 1 MB |
| fetch.min.bytes | 1800 MB | 1 B | desired batch size |
| fetch.max.wait.ms | 5 min | 500 ms | desired cadence |
| max.partition.fetch.bytes | 1801 MB | 1 MB | unbalanced partitions |
| request.timeout.ms | 5 min + 1 sec | 30 sec | fetch.max.wait.ms + 1 sec |
| max.poll.records | 10000 | 500 | 1500 found too low |
| max.poll.interval.ms | 5 min + 1 sec | 5 min | fetch.max.wait.ms + 1 sec |
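For reference, a minimal sketch of how these values might be set on a plain consumer config map (the bootstrap server and group id are placeholders; in a Spring Boot app the same map would be handed to the consumer factory behind the batch listener):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class BatchConsumerProps {
    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "large-batch-consumer");    // placeholder
        // Values from the table above, converted to the units Kafka expects
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 1801 * 1024 * 1024);           // 1801 MB
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1800 * 1024 * 1024);           // 1800 MB
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 5 * 60 * 1000);              // 5 min
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1801 * 1024 * 1024); // 1801 MB
        props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5 * 60 * 1000 + 1000);      // 5 min + 1 s
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10000);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 5 * 60 * 1000 + 1000);    // 5 min + 1 s
        return props;
    }
}
```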
Nevertheless, I produce ~2 GB of data to the topic, and I see the consumer listener (a batch listener) being called many times per second -- far more often than the desired rate.
I logged the serialized size of the ConsumerRecords<?,?> argument and found that it is never more than 55 MB.
This hints that I was not able to raise fetch.max.bytes above the default 50 MB.
Any idea how I can troubleshoot this?
Edit:
I found this question: Kafka MSK - a configuration of high fetch.max.wait.ms and fetch.min.bytes is behaving unexpectedly
Is it really impossible as stated?
Finally found the cause.
There is a broker-side fetch.max.bytes setting, and it defaults to 55 MB. I had only changed the consumer properties, unaware of the broker-side limit.
See also the Kafka KIP and the actual commit.
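In case it helps anyone troubleshoot the same thing, here is a rough sketch of reading the broker-side value with the Kafka AdminClient (the bootstrap address and broker id are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerFetchMaxBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Broker id "0" assumed; use the ids of your actual brokers
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                    .all().get().get(broker);
            // The broker-side cap that silently limits the consumer's fetch.max.bytes
            System.out.println("broker fetch.max.bytes = " + config.get("fetch.max.bytes").value());
        }
    }
}
```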
An address consumes 20,000 gas via SSTORE.
Given is a gas price of 35 Gwei.
If I store 10,000 addresses in a map, it will cost me:
20,000 gas * 10,000 = 200,000,000 gas
200,000,000 gas * 35 Gwei = 7,000,000,000,000,000,000 wei = 7 ether.
Is the calculation correct?
If I do the same on a layer-2 chain, does the whole thing cost me 7 MATIC, for example, or is there something else I need to consider?
Your calculation is correct.
I'm assuming you want to store the values in an array instead of 10k separate storage variables. If it's a dynamic-length array, you should also consider the cost of the SSTORE that updates the (non-zero to non-zero) slot holding the array length (currently 2,900 gas for each .push() that resizes the array).
You should also consider the block gas limit - a transaction costing 200M gas is not going to fit into a block on probably any network, so no miner will mine it.
So, based on your use case, you might want to change the approach. For example, if the addresses are used for validation, you might be able to store just the merkle tree root (1 value instead of the 10k) and then validate against it using the address and its merkle proof.
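To make that last suggestion concrete, here is a rough off-chain sketch of the proof check. It assumes the sorted-pair convention used by OpenZeppelin's MerkleProof, and SHA-256 stands in for Ethereum's keccak256 so the example stays dependency-free:

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

public class MerkleSketch {
    // Pair hash using the sorted-pair convention (smaller node first), as
    // OpenZeppelin's MerkleProof does; SHA-256 stands in for keccak256 here.
    static byte[] hashPair(byte[] a, byte[] b) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        if (Arrays.compareUnsigned(a, b) <= 0) { md.update(a); md.update(b); }
        else { md.update(b); md.update(a); }
        return md.digest();
    }

    // Recompute the root from a leaf and its sibling path, then compare.
    static boolean verify(byte[] leaf, List<byte[]> proof, byte[] root) throws Exception {
        byte[] computed = leaf;
        for (byte[] sibling : proof) {
            computed = hashPair(computed, sibling);
        }
        return MessageDigest.isEqual(computed, root);
    }
}
```

On-chain, the contract would store only the root and run the same loop with keccak256, so a membership check costs a handful of hashes instead of 10k SSTOREs.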
Since there is a free quota of 2500, I am wondering if there's anything I could do to optimize the number of requests I make to the API.
If I make a single request with 1 origin address & 2 destination addresses, does that count as 2 requests in terms of the quota?
Thank you
The answer to your question is no - the quota is counted in elements, not requests, so 1 origin and 2 destinations consume 2 elements. In the Distance Matrix API you have a usage limit of 2,500 free elements per day (Standard Plan) [2],
where:
Nº of elements = Nº of origins x Nº of destinations [1]
and you can have:
A maximum of 25 origins or 25 destinations per request.
A maximum of 100 elements per request.
A maximum of 100 elements per second*, calculated as the sum of client-side and server-side queries. [2]
[1] https://developers.google.com/maps/faq#usage_quotacalc
[2] https://developers.google.com/maps/documentation/javascript/distancematrix#UsageLimits
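To make the quota arithmetic concrete, a trivial helper (the names are illustrative, not part of the API):

```java
public class DistanceMatrixQuota {
    // The quota is counted in elements, not requests:
    // elements = origins x destinations.
    static int elementsConsumed(int origins, int destinations) {
        return origins * destinations;
    }

    public static void main(String[] args) {
        // The asker's example: 1 origin, 2 destinations
        // -> 2 of the 2,500 daily free elements, in a single request.
        System.out.println(elementsConsumed(1, 2)); // 2
    }
}
```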
I have created 2 method invocation measures in Dynatrace for 2 method calls in the backend.
I want to create an Incident in Dynatrace if method 1 is called less than 80% of the times method 2 is called.
Is there a way to do this in Dynatrace?
When I open the dialog in Dynatrace to create an incident, I see that we can add multiple measures to the condition. But I couldn't find a way to set the threshold for method invocation measure 1 to be 80% of the total number of calls for method invocation measure 2 for a given timeframe.
I think the "rate measure" can do what you are looking for. I.e. with this you define one base measure and a fraction measure. The counts of these two compared to each other are the rate computed by that new measure.
See e.g. https://answers.dynatrace.com/storage/attachments/4622-capture.jpg and https://answers.dynatrace.com/spaces/148/uem-open-q-a_2/questions/186446/how-to-setup-a-rate-measure-correctly.html for sample configurations.
I have a simple Spark Streaming process (1.6.1) which receives data from Azure Event Hub. I am experimenting with back pressure and maxRate settings. This is my configuration:
spark.streaming.backpressure.enabled = true
spark.streaming.backpressure.pid.minRate = 900
spark.streaming.receiver.maxRate = 1000
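(For context, a minimal sketch of how this configuration might be applied when building the streaming context - the app name and 1-second batch interval are placeholders matching my expectation below:)

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class BackpressureSetup {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("eventhub-backpressure-test") // placeholder
                .set("spark.streaming.backpressure.enabled", "true")
                .set("spark.streaming.backpressure.pid.minRate", "900")
                .set("spark.streaming.receiver.maxRate", "1000");
        // 1-second microbatches: 2 receivers x 1000 msg/s -> at most ~2000 events per batch
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));
        // ... attach the Event Hub receivers and the processing logic here
        ssc.start();
        ssc.awaitTermination();
    }
}
```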
I use two receivers, therefore per microbatch I would expect 2000 messages in total. Most of the time this works fine (the total event count is below or equal to the maxRate value). However, sometimes there are spikes which violate the maxRate value.
My test case is as follows:
- send 10k events to azure event hub
- mock job/cluster downtime (no streaming job is running) 60sec delay
- start streaming job
- process events and assert events number smaller or equal to 2000
In that test I can observe that the total number of events is sometimes higher than 2000, for example: 2075, 2530, 2040. It is not significantly higher and the processing is not time-consuming, but I would still expect the total number of events per microbatch to obey the maxRate value. Furthermore, sometimes the total number of events is smaller than backpressure.pid.minRate, for example: 811 or 631.
Am I doing something wrong?