I am currently running a Django site on EC2. The site sends a CSV back to the client. The CSV varies in size. If it is small, the site works fine and the client is able to download the file. However, if the file gets large, I get an ERR_EMPTY_RESPONSE. I am guessing this is because the connection is aborting without giving the process adequate time to run fully. Is there a way to increase this time span?
Here's what my site is returning to the client.
with open('//home/ubuntu/Fantasy-Fire/website/optimizer/lineups.csv') as myfile:
    response = HttpResponse(myfile, content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=lineups.csv'
    return response
Is there some other argument that can allow me to ignore this error and keep generating the file even if it is taking a while or is large?
I believe you have some sort of proxy server in front of the Django backend which resets the connection and returns ERR_EMPTY_RESPONSE in this case. You should re-configure the timeouts on that proxy. Usually it is nginx or Apache acting as a reverse proxy server.
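If nginx is the reverse proxy, the relevant directives are the proxy timeouts. A minimal sketch, assuming gunicorn/Django is listening on 127.0.0.1:8000 (adjust the upstream address and values to your setup):

location / {
    proxy_pass http://127.0.0.1:8000;   # gunicorn/Django upstream (placeholder address)
    proxy_connect_timeout 120s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;  # usually the one that matters for slow responses
    send_timeout          120s;
}

Reload nginx after changing the config (e.g. nginx -s reload).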
What is a Reverse Proxy Server?
A reverse proxy server is an intermediate connection point positioned at a network’s edge. It receives initial HTTP connection requests, acting like the actual endpoint.
Essentially your network’s traffic cop, the reverse proxy serves as a gateway between users and your application origin server. In so doing it handles all policy management and traffic routing.
A reverse proxy operates by:
Receiving a user connection request
Completing a TCP three-way handshake, terminating the initial connection
Connecting with the origin server and forwarding the original request
More info at https://www.imperva.com/learn/performance/reverse-proxy/
One more possible cause: your reverse proxy server doesn't have enough free disk space to buffer the response from Django and aborts the request. Check the free space on your reverse proxy / load balancer as well.
Within gunicorn, there is an argument for the timeout, -t (--timeout). When you run gunicorn, the default timeout is 30 seconds. Increase that to something you're comfortable with, like 90 or 120 seconds, whatever you think fits your application.
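For example (the WSGI module name here is just a placeholder):

gunicorn --timeout 120 myproject.wsgi:application

-t 120 is the equivalent short form.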
I am trying to hit an HTTPS client API which works fine in Postman (gives a response in 800 ms) and in my local Mule flow, but it is not working on CloudHub. I am getting a Connect Timeout error. It tries connecting for 30 seconds (as per the logs) and then gives an HTTP:CONNECTIVITY error.
failed: Connect timeout.
errorType=HTTP:CONNECTIVITY
cause=org.mule.extension.http.api.error.HttpRequestFailedException
The Response Timeout that I have set is 5 minutes.
The flow was working fine when deployed on CloudHub before. It stopped working a few days ago, though I didn't make any changes to my code. I am unable to debug this issue as it is not reproducible in my local environment (it works perfectly there). Any help would be appreciated.
There are four different types of general timeouts that Mule HTTP calls offer, each with its own behaviour:
Connection Idle Timeout
Response Timeout
Max Idle Timeout
Query or Transaction Timeout (applies to DB connectors)
Since you are getting an HTTP:CONNECTIVITY error, applying a 5-minute Response Timeout doesn't help.
The Response Timeout (taking too long to respond) only comes into play after the connection handshake has been established.
Your problem is with the connection itself.
The only thing you can really try here is applying a Connection Idle Timeout and a Reconnection Strategy with some frequency gaps.
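For illustration only, a rough Mule 4 sketch of what that could look like; the host, port and timing values are placeholders, so check the HTTP connector documentation for your runtime:

<http:request-config name="HTTPS_Request_config">
    <http:request-connection protocol="HTTPS" host="api.example.com" port="443"
                             connectionIdleTimeout="30000">
        <!-- retry the connection a few times with a gap between attempts -->
        <reconnection>
            <reconnect frequency="3000" count="3"/>
        </reconnection>
    </http:request-connection>
</http:request-config>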
Since you are so sure about your local tests, I suggest the two steps below:
1. Try using the same HTTP connector configuration in a separate new Mule app, with a simple listener and the failing requester. Also add a call to a freely available online REST service in another extra flow. Now test both and see which one works and which fails.
This will tell you whether it is a real HTTP connectivity problem or something else, such as a Mule bug.
2. Check your configurations once again and make sure you are hitting the same endpoint in the CloudHub version.
Finally, I hope you did not accidentally put any proxy configuration in the local version.
If it was working before, probably there was a networking change on the other side that prevents access from the CloudHub application. You didn't share the URL, so it is not clear whether it is an internal host or a public host. We also don't know if there is some kind of whitelisting on the server side.
You can test connectivity to the HTTP host and port using the Network Tools application, to see if it is accessible from your CloudHub environment.
How do I prevent automatically repeated HTTP requests from a client machine to a Tomcat server?
I have a problem in my live web application, which runs over a local LAN. When a network failure occurs, sometimes multiple insertions take place in my MySQL database.
Does anyone know about this type of issue?
My application uses Tomcat as the web server and runs on CentOS.
I have an ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with HTTP connections. But I still need to know whether the connection is using SSL/TLS, so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the Proxy Protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers carrying the client information; in the case of TCP, however, AWS ELB simply passes the bytes from the client through without any modification. This causes the backend server to lose the client connection information, as is happening in your case.
To enable the Proxy Protocol for your ELB, you will have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this; I don't want to paste the same here, as that information might change over time.
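For orientation only, the commands for a Classic ELB look roughly like this (names are placeholders; defer to the doc above if anything has changed):

aws elb create-load-balancer-policy \
    --load-balancer-name my-loadbalancer \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-loadbalancer \
    --instance-port 80 \
    --policy-names EnableProxyProtocol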
EDIT:
As it turns out, Amazon implements only Version 1 of the proxy protocol, which does not give away SSL information. It does, however, give the port number the client requested, so a process can be built around a rule like "if the request came in over port 443 then it was SSL". I don't like it, as it is indirect and requires hardcoding and coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting Version 2 of the proxy protocol, which does carry SSL info.
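A rough Node.js sketch of that workaround, assuming proxy protocol v1 is enabled on the ELB and that the preamble arrives in the first chunk (which is usually, but not always, the case):

const net = require('net');

const server = net.createServer((socket) => {
  socket.once('data', (chunk) => {
    // v1 preamble: "PROXY TCP4 <srcIP> <dstIP> <srcPort> <dstPort>\r\n"
    const end = chunk.indexOf('\r\n');
    const parts = chunk.slice(0, end).toString('ascii').split(' ');
    const wasTls = parts[5] === '443';    // the hardcoded port convention described above
    const appData = chunk.slice(end + 2); // the client's actual first bytes
    // hand wasTls and appData to your application logic here
    console.log('client used TLS:', wasTls);
  });
});

server.listen(8080);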
I have a server which contains the data to be served via API requests from mobile clients. The data is kind of persistent and the update frequency is very low (say once a week). But the table design is pretty heavy, which makes API requests slow to serve.
The Web Service is implemented with Yii + PostgreSQL.
Is using memcached a way to solve this problem? If yes, how do I manage the case where the cached data becomes dirty?
Any alternative solutions? Does Postgres have any built-in mechanism like MEMORY tables in MySQL?
How about Redis?
You could use memcached, but again everybody would hit your database server. In your case, you are saying the query results are kind of persistent, so it might make more sense to cache the JSON responses from your Web Service.
This could be done using a Reverse Proxy with a built-in cache. I guess an example of how we do it with Jetty (Java) and NGINX might help you the most:
In our setup, we have a Jetty (Java) instance serving an API for our mobile clients. The API is listening on localhost:8080/api and returning JSON results fetched from some queries on a local MySQL database.
At this point, we could serve the API directly to our clients, but here comes the Reverse Proxy:
In front of the API sits an NGINX webserver listening on 0.0.0.0:80 (everywhere, port 80)
When a mobile client connects to 0.0.0.0:80/api the built-in Reverse Proxy tries to fetch the exact query string from its cache. If this fails, it fetches it from localhost:8080/api, puts it in its cache and serves the new value found in the cache.
Benefits:
You can use other NGINX goodies: automatic GZIP compression of the cached JSON files
SSL endpoint termination at NGINX.
NGINX workers might benefit you when you have a lot more connections, all requesting data from the cache.
You can consolidate your service endpoints
Think about cache-invalidation:
You have to think about cache invalidation. You can tell NGINX to hold on to its cache, say, for a week for all HTTP 200 responses for localhost:8080/api, and for 1 minute for all other HTTP status codes. But if the time comes when you want to update the API in under a week, the cache is stale, so you have to delete it somehow or turn the caching time down to an hour or a day (so that most people will still hit the cache).
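A minimal sketch of such an NGINX configuration; the cache path, zone name and numbers are made up, so tune them to your needs:

proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=1g;

server {
    listen 80;

    location /api {
        proxy_pass http://localhost:8080/api;
        proxy_cache api_cache;
        proxy_cache_key $scheme$request_uri;
        proxy_cache_valid 200 7d;   # keep successful responses for a week
        proxy_cache_valid any 1m;   # everything else for a minute
        gzip on;                    # compress the cached JSON on the way out
    }
}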
This is what we do: we chose to delete the cache when it is dirty. We have another job running on the server, listening for an Update-API event triggered via Puppet. That job takes care of clearing the NGINX cache for us.
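One simple way such a job can clear the cache, assuming the cache path from the sketch above, is to wipe the cache directory and let NGINX repopulate it on the next requests:

rm -rf /var/cache/nginx/api/*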
Another idea would be to add the cache-clearing function inside your Web Service. The reason we decided against this solution is that the Web Service would have to know it runs behind a reverse proxy, which breaks separation of concerns. But I would say it depends on what you are planning.
Another thing which would make your Web Service more correct would be to serve proper ETag and cache-expires headers with each JSON file. Again, we did not do that, because we have one big update event instead of small ones for each file.
Side notes:
You do not have to use NGINX, but it really is easy to configure
NGINX and Apache have SSL support
There is also the famous Varnish Reverse Proxy (https://www.varnish-cache.org), but to my knowledge it does not do SSL (yet?)
So, if you were to use Varnish in front of your Web Service + SSL, you would use a configuration like:
NGINX -> Varnish -> Web Service.
References:
- NGINX server: http://nginx.com
- Varnish Reverse Proxy: https://www.varnish-cache.org
- Puppet IT Automation: https://puppetlabs.com
- NGINX reverse proxy tutorials: http://www.cyberciti.biz/faq/howto-linux-unix-setup-nginx-ssl-proxy/ and http://www.cyberciti.biz/tips/using-nginx-as-reverse-proxy.html
I have this project for my classes that I'm currently working on. Here it is:
WebPage client for Telnet not on standard ports, with ability to choose a port and connect
I have machines with telnet servers on them, just waiting for connection.
So my idea was to set up Node.js with an Express server on a dedicated machine. This would handle the telnet connections and host a page for clients, which would use socket.io to exchange information with the server side.
But as I'm new to such technologies (I'm a telecommunications student), I wonder if it is possible. I spotted something like this, jsterm.com by Peter Nitsch, but I see there are some massive gaps in the code and the demo does not really work, so I don't know if it actually works. Did anyone try this?
My other problem is: when I send information to the Node.js server through WebSockets, which seems achievable to me, what do I do with this information? Do I just set up another socket to pass the same data I got from the client WebSocket directly to the telnet port?
Can WebSockets connect directly to a specific port, without any WebSocket server waiting on the other side?
If my idea is wrong, could anyone help me? Maybe some nice solution already exists. I was thinking about Anyterm, for example, but I see that it requires an Apache server and runs on completely different technologies...
Just to be clear, WebSocket connections are not raw TCP socket connections. They have extra header information in each packet, browser to server data is masked using a running XOR, etc.
In order for the browser to communicate with a normal TCP server (e.g. a telnet server) you will need some sort of bridge service. It just so happens that such a thing already exists. websockify is a server that accepts WebSocket connections and bridges them to a raw TCP server.
In fact, the websockify project already includes a working telnet client as an example application. However, note that one limitation of websockify (for security reasons) is that the client cannot pick an arbitrary server address/port to connect to. The target address(es) must be predefined, either as a single target specified on the command line for websockify, or as multiple targets specified in a configuration file (and selected via a token in the WebSocket connect string).
There are multiple implementations of websockify in different languages (Python, C, Node, Ruby, Clojure); however, only the Python version currently supports multiple targets via a configuration file.
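For example, with the Python version the multi-target setup described above could look something like this; the file contents, tokens and addresses are placeholders:

# targets.conf maps a token from the WebSocket URL to a host:port, one per line:
#   lab1: 192.168.1.10:23
#   lab2: 192.168.1.11:2323
websockify --target-config=targets.conf 0.0.0.0:8080
# the browser then connects to ws://yourserver:8080/?token=lab1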
Disclaimer: I created websockify.