Delete Network processes hung up - google-compute-engine

My delete-network request hung and it's not stopping. It's causing my rate limit to be exceeded, and I can't even see the list of operations.

The hung API requests finally stopped after about 550k requests, roughly 150 requests per second. Not sure if this is a bug...

Related

Drop dangling queries on the Laravel/PHP side when the HTTP request times out

I have a system where some queries are executed on the webapp server and others on worker machines that consume jobs from a queue.
Some clients make atypical database requests that exceed the 60-second limit for the HTTP transaction, so they receive a timeout error on the frontend.
Meanwhile, the database keeps running those long queries. The users usually end up hitting the refresh button, which leads to an overload on the database.
Is there a way to close those queries that were started by the web server once the request has timed out? I understand I could place some limits on the MySQL server, but I do want the workers to be able to run long queries.

Amazon API submitting requests too quickly

I am creating a games comparison website and would like to get Amazon prices included within it. The problem I am facing is using their API to get the prices for the 25,000 products I already have.
I am currently using the ItemLookup from Amazon's API and have it working to retrieve the price, however after about 10 results I get an error saying 'You are submitting requests too quickly. Please retry your requests at a slower rate'.
What is the best way to slow down the request rate?
Thanks,
If your application is trying to submit requests that exceed the maximum request limit for your account, you may receive error messages from Product Advertising API. The request limit for each account is calculated based on revenue performance. Each account used to access the Product Advertising API is allowed an initial usage limit of 1 request per second. Each account will receive an additional 1 request per second (up to a maximum of 10) for every $4,600 of shipped item revenue driven in a trailing 30-day period (about $0.11 per minute).
From Amazon API Docs
If you're just planning on running this once, then simply sleep for a second in between requests.
If this is something you're planning on running more frequently, it'd probably be worth optimising it by subtracting the time the query takes to return from that sleep (so, if my API query takes 200ms to come back, we only sleep for 800ms).
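A rough sketch of that idea in Python, assuming a hypothetical lookup_price() in place of the real ItemLookup call and the documented one-request-per-second default:

import time

MIN_INTERVAL = 1.0  # the documented default of 1 request per second

def lookup_price(asin):
    # hypothetical stand-in for the real ItemLookup call
    return None

asins = ["B000EXAMPLE1", "B000EXAMPLE2"]  # placeholder ASINs

for asin in asins:
    started = time.time()
    price = lookup_price(asin)
    elapsed = time.time() - started
    if elapsed < MIN_INTERVAL:
        time.sleep(MIN_INTERVAL - elapsed)  # only sleep for whatever is left of the second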
Since the error only shows up after about 10 results, you should check how many requests you can make before it appears. If it always appears after 10 fast requests, you could add a delay such as
wait(500)
or a few hundred milliseconds more. If it really is only every 10th request that fails, you could build a loop and sleep on every 9th request.
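A minimal sketch of that loop in Python, with do_request() standing in for the actual API call:

import time

def do_request(item):
    # hypothetical stand-in for the actual API call
    pass

items = range(100)  # whatever collection you are iterating over

for i, item in enumerate(items, start=1):
    do_request(item)
    if i % 9 == 0:
        time.sleep(0.5)  # pause roughly every 9th request, per the suggestion above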
If your requests contain a lot of repetition, you can build a cache and clear it once a day, as in the sketch below.
Alternatively, contact Amazon about raising your limit, since it is tied to your purchase/shipped-revenue performance.
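A minimal sketch of such a daily-cleared cache, assuming an in-memory dict is acceptable for your setup (fetch stands in for the real API call):

import time

CACHE = {}
CACHE_DAY = None

def cached_lookup(asin, fetch):
    # fetch is whatever function actually calls the Amazon API
    global CACHE, CACHE_DAY
    today = time.strftime("%Y-%m-%d")
    if today != CACHE_DAY:  # clear the cache once per day
        CACHE = {}
        CACHE_DAY = today
    if asin not in CACHE:
        CACHE[asin] = fetch(asin)
    return CACHE[asin]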
I went through the same problem even with a delay of 1 second or more.
I believe that when you start making too many requests with only a one-second delay, Amazon doesn't like it and thinks you're a spammer.
You'll have to generate another key pair (and use it for further requests) and use a delay of 1.1 seconds to be able to make fast requests again.
This worked for me.

How to increase the time limit of an App Engine request handler, as it aborts each request after 60 sec?

I have an application deployed on GAE that exposes endpoints. Each endpoint opens a database connection, gets data, closes the connection, and returns the data. Normally everything works fine, but when there is a spike in requests, handlers start taking more than 60 seconds and requests get aborted. Because of this the database connections are never closed, MySQL ends up with 1000+ open connections, and then every request aborts with a deadline exceeded error. Is there any solution for this?
You could wrap the "get data" portion in a try... finally... statement and move the "close connection" portion into the finally section. Then start an "about to exceed deadline" timer before "get data" (say, 45 seconds) and raise an exception if the timer expires, allowing you to close the connection in the finally portion. That should take care of the orphaned open connections (but would not prevent errors in those requests).
If your application tolerates it, you could also look into using task queues, which have a 10 minute deadline; that could help reduce or eliminate the errors in the requests as well.
You can also find some general advice for addressing deadline exceeded errors here: https://cloud.google.com/appengine/articles/deadlineexceedederrors, though I don't know if it's applicable to your app.
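For the task queue route, a rough sketch on the Python standard environment could use the deferred library (assuming the deferred builtin is enabled in app.yaml; get_data_and_store() is a hypothetical function holding the current DB work):

import webapp2
from google.appengine.ext import deferred

def get_data_and_store(params):
    # hypothetical: open the DB connection, fetch the data, store the result, close
    pass

class SlowEndpoint(webapp2.RequestHandler):
    def get(self):
        # hand the slow work to the task queue; tasks get a 10 minute deadline
        deferred.defer(get_data_and_store, dict(self.request.params))
        self.response.write('queued')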
EDIT: actually the suggestion in the first paragraph above doesn't work on GAE as the Python sandbox doesn't allow installing a custom signal handler:
signal.signal(signal.SIGALRM, timer_expired)
AttributeError: 'module' object has no attribute 'signal'
After seeing your code, a somewhat equivalent solution would be to replace your cursor.fetchall() with a loop of cursor.fetchone() or cursor.fetchmany() calls to split your operation into smaller pieces:
http://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchone.html. You'd take a start timestamp (with time.time(), for example) when entering your request handler. Then inside the loop you'd take another timestamp to measure the time elapsed since the start, and you'd break out of the loop and close the DB connection when deadline expiration nears. Again, this won't help with actually replying successfully to the requests if preparing the replies takes that much time.
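A rough sketch of that loop, assuming cursor and cnx are the existing mysql-connector cursor and connection and a roughly 50-second budget before the 60-second deadline:

import time

DEADLINE_MARGIN = 50  # seconds to spend before bailing out; an assumed safety margin

def fetch_with_deadline(cursor, cnx):
    start = time.time()
    rows = []
    while True:
        batch = cursor.fetchmany(size=100)  # instead of one big fetchall()
        if not batch:
            break
        rows.extend(batch)
        if time.time() - start > DEADLINE_MARGIN:
            break  # stop early so the connection can still be closed in time
    cnx.close()  # the connection gets closed whether or not we read everything
    return rows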
You can use this solution to close connections when deadlines are exceeded:
Dealing with DeadlineExceededErrors
This way you won't have any open connections hanging there forever.
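A minimal sketch of that pattern on the Python standard environment, assuming cnx is the open MySQL connection and run_long_query() stands in for the existing query code:

from google.appengine.runtime import DeadlineExceededError

def run_long_query(cnx):
    # the existing "get data" code would go here
    pass

def handle_request(cnx):
    try:
        return run_long_query(cnx)
    except DeadlineExceededError:
        # the request is about to be aborted; give up on the data but still clean up
        return None
    finally:
        cnx.close()  # the connection is closed on success, failure, or deadline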
Think about the design of your application:
1. Relying on deadline exception handling is a design smell.
There will be situations where a DB operation takes more than 60 seconds. If it's a simple query then that's fine, but otherwise reconsider the design of the application: user experience is going to be hurt.
2. Change the design to use endpoints:
https://cloud.google.com/appengine/docs/java/endpoints/
The way to go, future proof.
3. Use back-ends or task queues, as described in this post:
Max Time for computation on Google App Engine
You can also set the MySQL timeouts
interactive_timeout
and / or
wait_timeout
Depending on the connection type, the server uses one or the other.
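For example, a rough sketch of lowering the session timeout from the application side with mysql-connector (the credentials and the 120-second value are placeholders):

import mysql.connector

# placeholder credentials; adjust for your setup
cnx = mysql.connector.connect(user='app', password='secret', host='127.0.0.1', database='mydb')
cur = cnx.cursor()
# non-interactive clients are governed by wait_timeout, interactive ones by interactive_timeout
cur.execute("SET SESSION wait_timeout = 120")  # 120 seconds is just an illustrative value
cnx.close()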

Railo websites can't connect during MySQL backup

I have a weekly backup that runs for one of my MySQL databases for one of my websites (ccms). This backup is about 1.2GB and takes about 30 min to run.
When this database backup runs, all my other Railo websites cannot connect and go "down" for the duration of the backup.
One of the errors I have managed to catch was:
"[show] railo.runtime.exp.RequestTimeoutException: request (:119) is run into a
timeout (1200 seconds) and has been stopped. open locks at this time (c:/railo/webapps/root/ccms/parsed/photo.view.cfm,
c:/railo/webapps/root/ccms/parsed/profile.view.cfm, c:/railo/webapps/root/ccms/parsed/album.view.cfm,
c:/railo/webapps/root/ccms/parsed/public.dologin.cfm)."
What I believe is happening is that the tables required for those pages (the "ccms" website) are being locked due to the backup, which is fair enough.
But, why is that causing the other railo websites to time out? For example, the error I pasted above was actually taken from a different website, not the "ccms" website that it references in the error. Any website I try and run fails and throws an error that references the "ccms" website, which is the one being backed up. How do I avoid this?
Any insight would be greatly appreciated.
Thanks
One possibility, given that your timeout appears to be 20 minutes, is that each time a request comes in to the site which IS being backed up, that thread blocks waiting for the DB.
Railo has a pool of worker threads to handle requests and now one of them is tied up. As requests continue to come in, any requests to the affected site tie up another thread. Eventually there are no more workers in the pool and all subsequent requests are queued up to be processed once workers become available.
I'm not an expert on debugging Railo, but the above seems plausible to me. You could consider running different Railo processes for different sites, which would isolate them or drastically lowering your DB timeout (if acceptable).

403 rate limit after only 1 insert per second

My app is occasionally (once per day) running a bulk insert of around 1,000 files. After a handful of inserts I start getting 403 rate limit responses. Since my app does the inserts sequentially, my attempted insert rate is never higher than 1 per second.
I've checked that I have billing enabled and that my quota limits are 100+ per second, so I don't understand why I'm getting throttled so aggressively. The consequence is that the insert is taking over an hour which isn't a great advert for Drive :-(
Seems the answer is that Drive will allow up to 30 or so inserts before rejecting with 403 errors. The precise figure, and the rate at which the limit is relaxed, are not made public. See also 403 rate limit on insert sometimes succeeds
You need to implement exponential backoff as Google describes in their documentation.
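A minimal sketch of exponential backoff in Python, with insert_with_backoff() wrapping a hypothetical insert_file() call that raises on a 403 rate-limit response:

import random
import time

def insert_with_backoff(insert_file, metadata, max_retries=5):
    # insert_file is a hypothetical wrapper around the Drive insert call
    for attempt in range(max_retries):
        try:
            return insert_file(metadata)
        except Exception:  # ideally catch only the 403 rate-limit error here
            if attempt == max_retries - 1:
                raise
            # wait 1, 2, 4, 8, ... seconds plus some random jitter before retrying
            time.sleep((2 ** attempt) + random.random())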
You can change the rate in the API console. Set it to a larger value like 10000/sec.