Websocket timeout in aws-sdk-go-v2 does not appear to have a configuration option to change the timeout interval

When starting a session using the aws-sdk-go-v2 library, I have tried various settings to change the websocket timeout value (the current timeout I see is 30 seconds). The AWS documentation suggests the following:
ctx, cancel := context.WithTimeout(context.Background(), time.Hour)
defer cancel()
ses, err := ssm.NewFromConfig(cfg).StartSession(ctx, in)
But this timeout applies to the request processing, not to the steady-state communications link. The SDK is effectively setting up a websocket that is then meant to be maintained. I was able to work around this while setting up an RDP session, since I could generate layer-7 application traffic to keep the socket alive for an hour or more. But in another case I want to use the same SDK to connect to a database, and there I have no layer 7 that I can access.
The questions I have are:
Is there a configuration parameter for keeping the SDK websocket alive for more than 30 seconds?
Is this SDK websocket only for transient data exchanges and is therefore designed to terminate within 30 seconds?
Are there any workarounds for a database connection scenario, where I might inject layer-7 traffic to keep the websocket alive? (Something like the ping sketch below is the kind of thing I have in mind.)
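For illustration, a rough sketch of that kind of keepalive, assuming I could dial the session's stream URL myself with a websocket library such as gorilla/websocket (the SDK does not expose the socket, and the URL below is a placeholder):

package main

import (
    "log"
    "time"

    "github.com/gorilla/websocket"
)

// keepAlive sends a websocket ping control frame every interval, so the
// connection carries traffic even when the application has nothing to send.
func keepAlive(conn *websocket.Conn, interval time.Duration, done <-chan struct{}) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            deadline := time.Now().Add(10 * time.Second)
            if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
                log.Printf("keepalive ping failed: %v", err)
                return
            }
        case <-done:
            return
        }
    }
}

func main() {
    // Placeholder URL: for SSM this would be the StreamUrl returned by
    // StartSession, plus whatever handshake the service expects.
    conn, _, err := websocket.DefaultDialer.Dial("wss://example.com/stream", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    done := make(chan struct{})
    go keepAlive(conn, 15*time.Second, done)

    // ... application reads and writes would happen here ...
    close(done)
}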

Related

Connect Timeout Error on CloudHub: Mule version 4.2.2

I am trying to hit an HTTPS client API which works fine in Postman (it responds in 800 ms) and in my local Mule flow, but it is not working on CloudHub. I am getting a Connect Timeout error: it tries connecting for 30 seconds (as per the logs) and then gives an HTTP:CONNECTIVITY error.
failed: Connect timeout.
errorType=HTTP:CONNECTIVITY
cause=org.mule.extension.http.api.error.HttpRequestFailedException
The Response Timeout I have set is 5 minutes.
The flow was working fine when deployed on CloudHub before. It stopped working a few days ago, though I didn't make any changes to my code. I am unable to debug this issue as it is not reproducible in my local environment (where it works perfectly). Any help would be appreciated.
Mule HTTP calls offer 4 different types of general timeouts, each with its own behavior:
Connection Idle Timeout
Response Timeout
Max Idle Timeout
Query or Transaction Timeout (applies to DB connectors)
Since you are getting an HTTP:CONNECTIVITY error, applying a 5-minute Response Timeout doesn't help. A Response Timeout (the server taking too long to respond) only becomes a concern after the connection handshake has been established. Your problem is with the connection itself.
The only thing you can really try is applying a Connection Idle Timeout together with a Reconnection Strategy that has some frequency gap between attempts.
Since you are so sure about the tests in your local environment, I suggest the following two steps:
1. Use the same HTTP connector configuration in a separate, new Mule app, with a simple listener and the failing requester. Also add a call to a freely available online REST service in an extra flow. Test both and see which one works and which one fails. This will tell you whether it's a real HTTP connectivity problem or something else, such as a Mule bug.
2. Check your configuration once again and make sure you are hitting the same endpoint in the CloudHub version. Finally, I hope you did not accidentally leave any proxy configuration in the local version.
If it was working before, probably there was a networking change on the other side that now prevents access from the CloudHub application. You didn't share the URL, so it is not clear whether it is an internal host or a public host. We also don't know if there is some kind of whitelisting on the server side.
You can test connectivity to the HTTP host and port using the Network Tools application, to see if it is accessible from your CloudHub environment.

Django ERR_EMPTY_RESPONSE

I am currently running a Django site on EC2. The site sends a CSV back to the client, and the CSV varies in size. If it is small, the site works fine and the client is able to download the file. However, if the file gets large, I get an ERR_EMPTY_RESPONSE. I am guessing this is because the connection is aborted before the process can finish. Is there a way to increase this time span?
Here's what my site is returning to the client.
with open('//home/ubuntu/Fantasy-Fire/website/optimizer/lineups.csv') as myfile:
    response = HttpResponse(myfile, content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=lineups.csv'
    return response
Is there some other argument that would let me avoid this error and keep generating the file, even if it takes a while or the file is large?
I believe you have some sort of backend proxy server that resets the connection to the Django backend and returns ERR_EMPTY_RESPONSE in that case. You should re-configure the timeouts on your backend proxy. Usually that is nginx or Apache used as a reverse proxy server.
What is Reverse Proxy Server
A reverse proxy server is an intermediate connection point positioned at a network’s edge. It receives initial HTTP connection requests, acting like the actual endpoint.
Essentially your network’s traffic cop, the reverse proxy serves as a gateway between users and your application origin server. In so doing it handles all policy management and traffic routing.
A reverse proxy operates by:
Receiving a user connection request
Completing a TCP three-way handshake, terminating the initial connection
Connecting with the origin server and forwarding the original request
More info at https://www.imperva.com/learn/performance/reverse-proxy/
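As a concrete illustration of the flow described above, here is a minimal reverse proxy using only Go's standard library (the origin address is a placeholder for wherever your Django app listens):

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Placeholder origin: e.g. the gunicorn/Django backend.
    origin, err := url.Parse("http://127.0.0.1:8000")
    if err != nil {
        log.Fatal(err)
    }
    // Terminates client connections and forwards each request to the origin.
    proxy := httputil.NewSingleHostReverseProxy(origin)
    log.Fatal(http.ListenAndServe(":8080", proxy))
}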
One more possible case: your reverse proxy server doesn't have enough free space to buffer the response from Django, and aborts the request. You can also check the free space on your reverse proxy balancer.
Within gunicorn there is a timeout argument, -t. When you run gunicorn, the default timeout is 30 seconds. Increase it to something you're comfortable with, like 90 or 120 seconds, whatever you think fits your application.
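For example (myproject.wsgi is a placeholder for your own WSGI entry point):

gunicorn --timeout 120 myproject.wsgi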

Apache HTTPClient doesn't allow more than 1500 reusable connections

I'm using Apache HttpClient (4.2.2) / Java 7 to open many reusable connections to a Tomcat 7 server (to simulate many users repeatedly hitting the service). Both client and server run on Ubuntu 12 (but on different machines). I made sure that sysctl.conf and limits.conf allow this scenario.
This works well up to about 1500 simulated users / connections, and the connections get reused as expected. Somewhere between 1500 and 1600 simulated users, however, connections are no longer reused and are closed and re-opened all the time. Why might this be the case?
I don't think the problem is on the server side: when I start multiple simulation clients on different machines against the same server, the server has no problem reusing the connections, as long as each individual client stays below 1500 connections.
There can be various reasons why connections are no longer being re-used, depending on the configuration of the connection manager or the server-side configuration. The easiest way to find out the reason is to run HttpClient with context logging turned on, as described in the 'context logging for connection management / request execution' example in the Logging Guide.
You might need to increase the number of available workers, or at least check whether there are free workers when you run out of connections, by looking at server-status.

HTML 5 - Web Sockets on Browser close

How does a server know to close a WebSocket connection in HTML5 in the scenarios below, and in other cases?
Browser closed abruptly
Browser refresh (is a new socket connection created, or is the existing one reused?)
Abrupt system power-off
In case the client quits without being able to notify the server, the basic characteristics of the TCP implementation define the behavior.
As long as your application (and the host system itself) does not attempt to send any data over this broken connection, the host will not realize that something is wrong. Hence, the connection can stay 'open' for a long while and allocate resources, from the server's point of view.
However, the moment data is sent to the remote end, the remote end will not acknowledge it and TCP retransmission comes into play. This involves a certain number of retries with implementation-specific timeouts (depending on the operating system in use). When retransmission finally fails, the TCP connection is closed and its resources are freed on the server side. So you can:
rely on the fact that at some point your application might want to write to the missing remote end and while doing so trigger the detection of the dead connection or
detect missing remote ends yourself by using something like pings on the application level or
use something like pings on the operating system level, via TCP keepalive techniques (see the sketch after this list).
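A minimal sketch of that last option, shown here in Go (the address is a placeholder; equivalent socket options exist on most platforms):

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    conn, err := net.Dial("tcp", "example.com:80") // placeholder address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Enable OS-level TCP keepalive so a dead peer is eventually detected
    // even if the application never writes.
    tcp := conn.(*net.TCPConn)
    if err := tcp.SetKeepAlive(true); err != nil {
        log.Fatal(err)
    }
    if err := tcp.SetKeepAlivePeriod(30 * time.Second); err != nil {
        log.Fatal(err)
    }
}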
The easiest part of your question is the browser refresh: IE, FF and Chrome will close the opened connection and open a new one. I would guess that any other browser does the same.
For points 1 and 3 I can only guess: if the client can still close the TCP connection cleanly, the server will immediately recognize that the connection has been closed. If you are using Tomcat, the onClose method of the MessageInbound instance will be called.
If the client could not close the TCP connection cleanly, the server will wait for some kind of timeout. It will definitely time out quickly when it tries to write something to the socket, so you could implement a heartbeat mechanism to detect dead clients (see the sketch below). The WebSocket protocol has an optional automatic heartbeat (ping/pong control frames), but not all browsers and servers seem to support it.
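A minimal sketch of such a heartbeat, in Go with the gorilla/websocket package purely for illustration (the answer above concerns Tomcat, but the pattern is the same): the server pings on a timer, each pong extends a read deadline, and a silent client eventually fails the read.

package main

import (
    "log"
    "net/http"
    "time"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func handler(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Print(err)
        return
    }
    defer conn.Close()

    const interval = 30 * time.Second
    conn.SetReadDeadline(time.Now().Add(2 * interval))
    conn.SetPongHandler(func(string) error {
        // Every pong from the client extends the read deadline.
        return conn.SetReadDeadline(time.Now().Add(2 * interval))
    })

    go func() {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for range ticker.C {
            if conn.WriteControl(websocket.PingMessage, nil,
                time.Now().Add(5*time.Second)) != nil {
                return // write failed: client is gone
            }
        }
    }()

    for {
        // Reads fail once the deadline passes without traffic or pongs.
        if _, _, err := conn.ReadMessage(); err != nil {
            log.Printf("client disconnected: %v", err)
            return
        }
    }
}

func main() {
    http.HandleFunc("/ws", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}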
If a user closes a browser tab with an open web socket, the server will not know right away that it has been closed. However, as Jan-Philip says, if you attempt to write, the operation will fail, and the error you get tells you the current state of the connection.
For example, when using the ws lib for Node.js, if you try to send data to a closed websocket, an exception will be thrown saying something like [Error: not opened]. Then you know the connection no longer exists and you can do any cleanup needed.

Synchronization and timekeeping of multiple applications

How would I implement a system that keeps 20 applications, running on a closed network, synchronized while performing various tasks?
Each application is identical and runs on an identical machine. The machines have a socket connection to the master application, which issues TCP commands to the units, such as Play:"Video1.mp4". It is vital that these videos play at the same time and keep time with each other.
The only difference between the units is that each window is offset on the desktop, so that each one has a different viewport on the application, as this will be used in a multi-projector setup.
Any solutions/ideas would be greatly appreciated.
I did this some years ago: 5 computers running 5 instances of the same Flash app. Every app displayed a "slice" of the same huge app, and everything needed to be synchronized to within fractions of a second.
I used a simple Python script (running on a sixth machine) that sent OSC messages over the local network. The Flash apps listened to these packets through FLOSC, and sent messages about their own status back to the Python script.
The setup ran at the Whitney Museum (NY) and at the Palais de Tokyo (Paris), so I'm quite confident about the solution :) I hope it helps you.
You have to keep track of the latest updated data in your master application and broadcast every update to all connected clients, so the new data is delivered everywhere. After any update from any client, send the updated data to all connected clients.
In FMS, a remote shared object is used to maintain data centrally across the applications connected via FMS. When any client sends an update, an OnSync event is fired in all client applications and the data is synced with the FMS remote shared object. You have to build this kind of flow for proper synchronization of data across the network.
You can also use an RPC system to sync data between all applications connected to the master: a client sends its update to the master via RPC, and the master then sends an RPC to all the other clients connected to it (a rough sketch of this fan-out pattern follows below).
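A rough sketch of that master-side fan-out, in Go purely for illustration (the original setups used Flash/FMS; the port and command format here are assumptions):

package main

import (
    "fmt"
    "log"
    "net"
    "sync"
)

type master struct {
    mu    sync.Mutex
    units []net.Conn
}

func (m *master) add(c net.Conn) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.units = append(m.units, c)
}

// broadcast sends one newline-terminated command, e.g. Play:"Video1.mp4",
// to every connected unit so they all act on it at (nearly) the same time.
func (m *master) broadcast(cmd string) {
    m.mu.Lock()
    defer m.mu.Unlock()
    for _, c := range m.units {
        fmt.Fprintf(c, "%s\n", cmd)
    }
}

func main() {
    ln, err := net.Listen("tcp", ":9000") // assumed port
    if err != nil {
        log.Fatal(err)
    }
    m := &master{}
    go func() {
        for {
            c, err := ln.Accept()
            if err != nil {
                return
            }
            m.add(c)
        }
    }()

    // Example command; real code would wait for units to connect and
    // schedule commands (ideally with a timestamp for tighter sync).
    m.broadcast(`Play:"Video1.mp4"`)
    select {} // keep serving
}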