I use the free tier of OpenShift for the backend of my application.
The example given on the page https://www.openshift.com/products/pricing describes a load with the following characteristics:
15 pages / second
Hundreds of articles
~ 50k visitors per month
But does that mean the application will be disabled until the next month if the number of requests to it exceeds the allowable number? And if so, what is that number?
That does not mean the application will be disabled. That is just letting you know about the amount of traffic that a small gear can handle.
Related
I'm trying to calculate the total number of gigabytes that my website will need for outbound data transfer before deciding whether to host on AWS or with another hosting company, since AWS only provides a monthly free limit of 15 GB outbound. I need to know the size of a page visit, and then I will multiply this by the total number of visits per month. My website is still running on localhost, so I couldn't use the online tools yet. I tried "Save as", which downloaded the page along with all CSS files and images; could this be an accurate size of a page?
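For a rough estimate, you can multiply the size of one full page load by the expected monthly visits. A minimal sketch of that arithmetic in Python, where the page size and visit count are hypothetical placeholders to substitute with your own numbers:

# Rough outbound-transfer estimate: page size x monthly visits.
# Both inputs below are hypothetical; use your measured "Save as" size
# and your own traffic forecast.
page_size_mb = 2.0          # full page weight (HTML + CSS + images), in MB
visits_per_month = 50_000   # expected page views per month

total_gb = page_size_mb * visits_per_month / 1024
print(f"Estimated outbound transfer: {total_gb:.1f} GB/month")
# With these placeholder numbers: 2.0 * 50000 / 1024 ~= 97.7 GB,
# which would be well over a 15 GB free limit.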
I am planning on using a data set that's available on the Socrata platform, and on hitting the REST endpoints instead of downloading and managing the data on my own.
I have the following questions:
Is there a guaranteed uptime?
Is 1,000 requests per hour a hard limit?
Do you have any metrics on response times?
Any help is appreciated
Thanks
Ravi
Per your questions:
Is there a guaranteed uptime? - You will want to check Socrata's maintenance windows to time any downloads.
Is 1,000 requests per hour a hard limit? - The 1,000-record cap is per request, and it only applies to version 1.0 of their API. Version 2.0 has a maximum of 50,000 records per request, and version 2.1 has no limit. See how you can determine the API version for the dataset you are using, and the paging sketch after this list.
Do you have any metrics on response times? - In my experience it's highly variable, usually depending on your local ISP and network activity. Overnight and weekend jobs are usually faster, while mid-day jobs are a bit slower. I'd recommend running some tests.
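As a rough sketch, here is how you could page through a SODA endpoint in Python using the standard $limit/$offset parameters; the resource URL below is a placeholder to replace with your dataset's URL:

import requests

# Placeholder SODA resource URL; substitute your dataset's endpoint.
URL = "https://soda.demo.socrata.com/resource/xxxx-xxxx.json"
PAGE = 50_000  # version 2.0 caps a single request at 50,000 records

def fetch_all():
    rows, offset = [], 0
    while True:
        # Request one page of records at the current offset.
        batch = requests.get(URL, params={"$limit": PAGE, "$offset": offset}).json()
        if not batch:          # an empty page means we've read everything
            break
        rows.extend(batch)
        offset += PAGE
    return rows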
I am creating a games comparison website and would like to include Amazon prices in it. The problem I am facing is using their API to get the prices for the 25,000 products I already have.
I am currently using the ItemLookup operation from Amazon's API and have it working to retrieve the price; however, after about 10 results I get an error saying 'You are submitting requests too quickly. Please retry your requests at a slower rate'.
What is the best way to slow down the request rate?
Thanks,
If your application is trying to submit requests that exceed the maximum request limit for your account, you may receive error messages from Product Advertising API. The request limit for each account is calculated based on revenue performance. Each account used to access the Product Advertising API is allowed an initial usage limit of 1 request per second. Each account will receive an additional 1 request per second (up to a maximum of 10) for every $4,600 of shipped item revenue driven in a trailing 30-day period (about $0.11 per minute).
From Amazon API Docs
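As a quick illustration of how that policy scales (this just restates the numbers quoted above; it is not an official formula):

def request_limit_per_second(shipped_revenue_30d):
    # 1 req/s base, +1 req/s per $4,600 of trailing-30-day revenue, capped at 10.
    return min(10, 1 + int(shipped_revenue_30d // 4600))

print(request_limit_per_second(0))       # 1
print(request_limit_per_second(9200))    # 3
print(request_limit_per_second(100000))  # 10 (the cap)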
If you're just planning on running this once, then simply sleep for a second in between requests.
If this is something you're planning on running more frequently, it'd probably be worth optimising further by subtracting the time the query takes from that sleep (so, if my API query takes 200 ms to come back, we only sleep for 800 ms), as in the sketch below.
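A minimal version of that idea in Python, assuming a hypothetical lookup_price() helper that performs one ItemLookup call:

import time

MIN_INTERVAL = 1.0  # target rate: one request per second

def rate_limited_lookup(items):
    for item in items:
        start = time.monotonic()
        price = lookup_price(item)   # hypothetical: one ItemLookup request
        elapsed = time.monotonic() - start
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)  # only sleep the remainder
        yield item, price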
Since the error only appears after about 10 results, you should first check how many requests you can make before it triggers. If it always appears after 10 fast requests, you could add a delay between requests, such as
time.sleep(0.5)  # wait 500 ms, or a bit more
If the error only shows up after every 10 requests, you could build a loop and pause on every 9th request instead; see the sketch below.
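For instance (lookup_price() and items are again hypothetical stand-ins for one ItemLookup call and your product list):

import time

for i, item in enumerate(items):
    price = lookup_price(item)   # hypothetical: one ItemLookup request
    if (i + 1) % 9 == 0:         # pause after every 9th request
        time.sleep(0.5)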
If your requests contain a lot of repetition, you can create a cache and clear it every day.
Alternatively, contact AWS about authorization for a higher Product Advertising API request limit.
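A toy version of that daily cache, again with a hypothetical lookup_price() standing in for the real API call:

import time

CACHE = {}
CACHE_DAY = None

def cached_price(item):
    global CACHE_DAY
    today = time.strftime("%Y-%m-%d")
    if today != CACHE_DAY:     # clear the cache once per day
        CACHE.clear()
        CACHE_DAY = today
    if item not in CACHE:
        CACHE[item] = lookup_price(item)  # hypothetical API call
    return CACHE[item]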
I ran into the same problem even when I put in a delay of 1 second or more.
I believe that when you begin to make too many requests with only a one-second delay, Amazon doesn't like it and decides you're a spammer.
You'll have to generate another key pair (and use it when making further requests) and switch to a delay of 1.1 seconds to be able to make fast requests again.
This worked for me.
I created a program that downloads a user's entire drive. To improve performance, it's a multi-threaded .NET application, and I increased the value of System.Net.ServicePointManager.DefaultConnectionLimit to raise the limit on simultaneous connections. I can confirm that if the application asks for 50 concurrent connections, they are correctly opened and used.
What I have observed is that increasing the number of threads improves the number of files processed per second, but after a certain number of threads there is no further gain (throttling?).
I have profiled the bandwidth and it seems to be capped at around 1.5 MB/s.
The application can download as many files as the bandwidth allows, and past a certain threshold the download threads each slow down.
Does Google limit the number of concurrent connections or the amount of bandwidth? In the documentation, I only saw a limit on API calls per day.
Thanks for your help.
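One way to find the ceiling empirically, regardless of language: fix the number of in-flight downloads, measure aggregate throughput, and raise the cap until throughput stops growing. A rough Python sketch of that experiment, where download_file() and sample_files are hypothetical stand-ins for your Drive download call and file list:

import concurrent.futures
import time

def measure_throughput(files, workers):
    # Download all files with a fixed worker cap and return aggregate MB/s.
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        sizes = list(pool.map(download_file, files))  # hypothetical; returns bytes written
    return sum(sizes) / (time.monotonic() - start) / 1e6

for workers in (5, 10, 25, 50):
    print(workers, "workers:", round(measure_throughput(sample_files, workers), 2), "MB/s")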
I'm working on a SaaS application. Each user will buy a plan and be given a certain amount of storage corresponding to the amount of information they can keep in the app. For example, a Free user gets 1 GB of storage and a Basic user gets 5 GB.
Currently, all information is stored as plain text in a MySQL database, with no binary data such as images or videos on disk.
Imagine Gmail without attachments as an example of this application.
How can I implement this feature in my application? Do we need a method that somehow calculates the amount of information a specific user has in the database and validates writes against it?
Thank you in advance!
You should keep a running tally of how much space each user has consumed, which is then updated every time a write is made against their quota. Continually computing it is not going to be very efficient.
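A minimal sketch of that approach, assuming a MySQL table like user_quota(user_id, used_bytes, quota_bytes) and a DB-API connection such as PyMySQL (all table and column names here are made up for illustration): the write and the tally update happen in one transaction, and the write is rejected when it would push the tally past the quota.

def try_write(conn, user_id, payload):
    size = len(payload.encode("utf-8"))
    with conn.cursor() as cur:
        # Atomically add to the tally only if the write still fits the quota.
        cur.execute(
            "UPDATE user_quota SET used_bytes = used_bytes + %s "
            "WHERE user_id = %s AND used_bytes + %s <= quota_bytes",
            (size, user_id, size),
        )
        if cur.rowcount == 0:      # over quota: reject the write
            conn.rollback()
            return False
        cur.execute(
            "INSERT INTO messages (user_id, body) VALUES (%s, %s)",  # hypothetical content table
            (user_id, payload),
        )
        conn.commit()
        return True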