Max duration for Azure API Management cache - azure-api-management

I'm trying to find somewhere that explicitly states the maximum number of seconds allowed for the duration in the cache-lookup policy for Azure API Management, but I can't find anything.
I have seen a sample where it's set to match the max-age value from the header the backend service sends, so I assume it is at least the number of seconds in a year. At the same time, I have seen that it's stored as an int, so maybe the maximum is the max int value in .NET (2,147,483,647)?
Thanks
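For reference, a minimal sketch of the policy pair in question; note that the duration attribute (in seconds) actually lives on cache-store, and 3600 below is just an illustrative value, not a documented maximum:

<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <!-- duration is in seconds; 3600 is illustrative, not a documented maximum -->
        <cache-store duration="3600" />
    </outbound>
</policies>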

Related

User-based rate limit implementation

I have an express.js backend server, and I am using MySQL to store the users' information. I want to implement an API rate limit that limits a certain number of requests a user can make in a minute; however, if I store the request count per minute in a database and get the count every time an API request is made, this system can easily be abused. Is there a better way to do this?
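A common pattern that avoids both the database round-trip and the abuse problem is to keep the counter in Redis instead of MySQL: INCR is atomic, so concurrent requests can't race the count, and EXPIRE makes each per-minute window clean itself up. A minimal sketch as Express middleware, assuming Redis and the ioredis client (the x-user-id header and the limit of 60 are illustrative assumptions):

import express from "express";
import Redis from "ioredis";

const redis = new Redis();   // assumes a local Redis instance
const app = express();
const LIMIT = 60;            // illustrative: max requests per user per minute

async function rateLimit(
  req: express.Request,
  res: express.Response,
  next: express.NextFunction
) {
  // However you identify users; the x-user-id header is an assumption here.
  const userId = req.header("x-user-id") ?? req.ip;
  // One key per user per minute-long window.
  const key = `ratelimit:${userId}:${Math.floor(Date.now() / 60_000)}`;

  const count = await redis.incr(key);   // atomic, safe under concurrency
  if (count === 1) {
    await redis.expire(key, 60);         // the window evicts itself
  }
  if (count > LIMIT) {
    res.status(429).send("Too many requests");
    return;
  }
  next();
}

app.use(rateLimit);
app.get("/", (_req, res) => res.send("ok"));
app.listen(3000);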

Trying to create a local copy of our Google Drive with rclone, bringing down all the files, constantly hitting rate limits

As the title states, I'm trying to create a local copy of our entire Google Drive. We currently use it as a file storage service, which is obviously not the best use case, but to migrate elsewhere I of course need to get all the files; the entire drive is around 800 GB.
I am using rclone, specifically the copy command, to copy the files FROM Google Drive TO the local server; however, I am constantly running into user rate limit errors.
I am also authenticating with a Google service account, which I believe should provide higher usage limits.
2021/11/22 07:39:50 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=, userRateLimitExceeded)
But I don't really understand, since according to my usage I am not even coming close to the quota. What exactly can I do to either increase my rate limit (even if that means paying), or is there some other solution to this issue? Thanks
Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota.
User rate limit is the number of requests your user is making per second; it's basically flood protection. You are flooding the server. It is unclear how Google calculates this, beyond the 100 requests per user per second. If you are getting the error, there is really nothing you can do besides slow down your code. It's also unclear from your question how you are running these requests.
If you could include the code, we could see how the requests are being performed. However, as you state, you are using something called rclone, so there is no way of knowing how that works.
Your only option would be to slow your code down, if you have any control over that through this third-party application. If not, you may want to contact the owner of the product for direction on how to fix it.
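For rclone specifically, there is such a control: it exposes pacing flags for exactly this situation (see rclone's documentation for --tpslimit and the drive backend's pacer options). A hedged example, where gdrive: is an assumed remote name and the values are only starting points to tune:

rclone copy gdrive: /mnt/drive-backup --tpslimit 5 --drive-pacer-min-sleep 200ms --drive-pacer-burst 10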

How much time does ManageIQ take to reflect data after adding a provider (Azure, AWS, etc.)?

I added an Azure provider and an AWS provider to ManageIQ more than a day ago, and no data about instances has been reflected in ManageIQ yet. The providers are authenticated, but still no report about instances is displayed for either the Azure or the AWS provider.
The time it takes for the initial refresh to complete depends on both the provider type and the amount of inventory in the provider. For most typically sized Azure or Amazon providers, it usually takes on the order of tens of minutes or less. If you are not seeing anything after a day, there is very likely a problem. For more information, take a look at evm.log, aws.log, and/or azure.log in /var/www/miq/vmdb/log.

Long-term access token in API Management REST API

I use the API Management REST API to create and manage users, and I have to regenerate the access token every 30 days. It's a bit of a hassle to update my web applications every month.
Is there a better way?
I know I can do it programmatically, too, but I am not sure it's a healthy practice to do it every time.
Whatever you can do via an APIM SAS token you can also do via ARM. With ARM you can create a service identity (service principal) and use it to manage your resources: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal?view=azure-cli-latest
There are many approaches to handling the access token.
The simplest is to put the token in a cache and set an expiry time (for example, using SETEX in Redis), which in your case would be 30 days out. After 30 days, the access token is removed automatically by the cache.
When you need the access token but find it missing, it means the token has expired or was never generated. Either way, you can generate a new access token in your program and put it in the cache; for the next 30 days you can keep using it.
In addition, because of clock differences between your machine and the servers, it's recommended to set the cache expiry a little earlier than the real expiry time. In your case you could set it to 29 days, which guarantees the cached token is still valid on the remote server.
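A minimal sketch of that pattern in TypeScript, assuming Redis via the ioredis client; generateApimToken is a hypothetical stand-in for however you create the APIM SAS token:

import Redis from "ioredis";

const redis = new Redis();
const TTL_SECONDS = 29 * 24 * 60 * 60; // 29 days: expire a bit before the real 30

// Hypothetical stand-in for generating the APIM SAS token.
async function generateApimToken(): Promise<string> {
  return "placeholder-sas-token";
}

async function getAccessToken(): Promise<string> {
  const cached = await redis.get("apim:access-token");
  if (cached !== null) {
    return cached; // still inside the 30-day window
  }
  // Missing means expired or never generated; either way, make a new one.
  const token = await generateApimToken();
  // SETEX semantics: store with a TTL so the cache evicts it automatically.
  await redis.setex("apim:access-token", TTL_SECONDS, token);
  return token;
}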

Amazon API submitting requests too quickly

I am creating a games comparison website and would like to get Amazon prices included within it. The problem I am facing is using their API to get the prices for the 25,000 products I already have.
I am currently using the ItemLookup operation from Amazon's API and have it working to retrieve the price; however, after about 10 results I get an error saying 'You are submitting requests too quickly. Please retry your requests at a slower rate'.
What is the best way to slow down the request rate?
Thanks,
If your application is trying to submit requests that exceed the maximum request limit for your account, you may receive error messages from Product Advertising API. The request limit for each account is calculated based on revenue performance. Each account used to access the Product Advertising API is allowed an initial usage limit of 1 request per second. Each account will receive an additional 1 request per second (up to a maximum of 10) for every $4,600 of shipped item revenue driven in a trailing 30-day period (about $0.11 per minute).
From Amazon API Docs
If you're just planning on running this once, then simply sleep for a second in between requests.
If this is something you're planning on running more frequently it'd probably be worth optimising it more by making sure that the length of time it takes the query to return is taken off that sleep (so, if my API query takes 200ms to come back, we only sleep for 800ms)
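A sketch of that timing logic in TypeScript, where fetchPrice is a hypothetical stand-in for the actual ItemLookup call:

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Hypothetical stand-in for the real ItemLookup request.
async function fetchPrice(asin: string): Promise<void> {
  // ... call the Product Advertising API here ...
}

async function lookupAll(asins: string[]): Promise<void> {
  for (const asin of asins) {
    const start = Date.now();
    await fetchPrice(asin);
    const elapsed = Date.now() - start;
    // Top the cycle up to a full second: a 200ms query sleeps only 800ms more.
    await sleep(Math.max(0, 1000 - elapsed));
  }
}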
Since the error only appears after about 10 results, you should check how many requests you can make before hitting it. If it always appears after 10 fast requests, you could add a wait(500) (or a few more milliseconds). If it really only happens every 10th time, you could build a loop and wait on every 9th request.
If your requests repeat a lot, you can create a cache and clear it once a day.
Or contact AWS about purchasing higher request authorization.
I went through the same problem even when I put in a delay of 1 second or more.
I believe that when you start to make too many requests with only a one-second delay, Amazon doesn't like it and decides you're a spammer.
You'll have to generate another key pair (and use it when making further requests) and put in a delay of 1.1 seconds to be able to make fast requests again.
This worked for me.