OpenShift time-out error (configure timeout client)

I have an app hosted on OpenShift. We have a functionality that lets users upload a file onto $OPENSHIFT_DATA_DIR; a Node.js function is then called to insert the data into our DB. For big tables this operation may take 5-7 minutes to complete.
BUT, before the server completes the operation, the client side gets disconnected and a Gateway Time-out error appears at 120000 ms. The server-side process continues and completes after some time, but the client side is left with this horrible error.
I need to know where I can edit those 120000 ms. I edited the HAProxy configuration with different values, but the timeout is still 120 s. Is there another file somewhere?
retries 6
timeout http-request 8m
timeout queue 8m
timeout connect 8m
timeout client 8m
timeout server 8m
timeout http-keep-alive 8m
I found two haproxy.cfg files:
haproxy/conf/haproxy/haproxy.cfg
haproxy/versions/1.4/configuration/haproxy.cfg
Both are edited.
I guess there are multiple timeouts involved, but I need to know where they are, or how to change the client-side timeout.
The app (Gears: 3):
haproxy-1.4 (Web Load Balancer)
  Gears: located with nodejs-0.10
nodejs-0.10 (Node.js 0.10)
postgresql-9.2 (PostgreSQL 9.2)
  Gears: 1 small
smarterclayton-redis-2.6 (Redis)

5-7 minutes is an awfully long time for a web request. This sounds like the perfect opportunity to explore background tasks: upload the data from the client, return a response immediately, and process the file in the background with something similar to delayed_job in Rails.
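A minimal sketch of that pattern with delayed_job, since that is the example given (ImportsController, TableImport, and process! are hypothetical names; the same flow applies with any Node.js job queue):

class ImportsController < ApplicationController
  # POST /imports: accept the upload and enqueue the slow DB insert
  def create
    import = TableImport.create!(file: params[:file], status: 'queued')
    import.delay.process!   # delayed_job runs process! in a background worker
    render json: { id: import.id, status: import.status }, status: :accepted
  end

  # GET /imports/:id: the client polls this until status becomes 'done'
  def show
    import = TableImport.find(params[:id])
    render json: { id: import.id, status: import.status }
  end
end

The upload request now returns in milliseconds, so nothing ever runs into the proxy's 120 s limit.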

Related

MariaDB stops working without reason on a small CentOS 8 server

I have a small web/mail server with Apache/MariaDB. Last week we changed some of the WWW code, and to make it work I changed this line in php.ini:
max_input_vars to 5000 (now 4000, it was 1000 at the start)
And it seems that changed something, because our MariaDB 10.3.28 started making problems.
It just stops receiving any information.
A restart of mysql (and httpd) helps for about 24 h now...
Log:
2022-10-05 14:28:58 2796199 [Warning] Aborted connection 2796199 to db: 'ACTIVEDB' user: 'USER' host: 'localhost' (Got an error reading communication packets)
This kind of warning showed up occasionally before, but now we get dozens every hour.
In PHP I decreased max_input_vars; in my.cnf I added:
max_allowed_packet = 124M
max_connections = 400
log_warnings = 3
Everything was at default values before.
The log level was at 4 for some time, but the log started to get too big before the crash had any time to occur.
The disk is a 500 GB Intel NVMe and shows no problems.
I would like to hear:
how to check/connect to MariaDB when it looks inactive
what to check, and how (step by step)
Thanks all
This is not an answer, but too long for a comment.
The error "Aborted connection ... (Got an error reading communication packets)" occurs if a client disconnected without sending a COM_CLOSE notification to the server before.
This behavior is easily reproducible, e.g. by starting the command line client and killing the command line client from another session. Depending on the log_warning level, the server will write a log entry and increase the server status variable aborted_clients (or aborted_connects if this happens during connection handshake).
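For example, a sketch of that reproduction (with log_warnings >= 2 the server then logs the warning quoted above):

# session 1: open a connection with the command line client
mysql -u USER -p ACTIVEDB
# session 2: kill that client so it cannot send COM_CLOSE
kill -9 $(pidof mysql)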
Here are only a few possible reasons:
Before 10.2.4 the default log_warnings level was 1 (no logging of aborted connections); since 10.2.4 the default value is 2 (log aborted connections). If the server was recently upgraded from 10.2.3 or lower, the problem may have already existed in the previous installation but was not written to the log file.
The PHP script(s) don't close the connection: as soon as a script has finished its work, make sure that all transactions were committed, memory (result sets) was freed, and the connection was closed properly.
A timeout occurred, e.g. wait_timeout was set too low and exceeded, or PHP's max_execution_time was exceeded (and the script was killed), or net_read_timeout/net_write_timeout was hit.
DNS problems: in this case enable skip-name-resolve, use IPs, and verify against IPs (see the my.cnf sketch after this list).
Network or firewall problems.
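A sketch of how the server-side settings from this list look in my.cnf (values are placeholders, not recommendations):

[mysqld]
log_warnings = 2        # log aborted connections (the default since 10.2.4)
skip-name-resolve       # avoid DNS lookups on connect; grant privileges by IP
wait_timeout = 600      # raise only if clients legitimately stay idle this long
net_read_timeout = 60
net_write_timeout = 120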
I would like to thank you for the suggestions and your time.
It looks like there was a problem with one table; I don't know how or why, but after some time it was set to READ-ONLY. There was no information about it in the log up to level 3 (level 4 generated too much information for me :( ).
The DB works fine for some time (except for that one table), and over time it looks like it just hangs the whole DB.
The case is still "under investigation".
About "damaged" table :
listed and read-only related thinks works fine
insert/change hangs the whole DB for 2-3 minutes and then gets back to work
after few "hangs" the DB just freezes
there was nothing strange in logs level 3
copy table to new and then change names to switch tables works (I hope so)
If anyone have any idea how to check table would be great (standard operatinos like check, analyse did nothing).
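For reference, the copy-and-swap mentioned in the list above would look roughly like this (mytable is a placeholder name):

CREATE TABLE mytable_new LIKE mytable;
INSERT INTO mytable_new SELECT * FROM mytable;
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;

RENAME TABLE swaps both names in one atomic statement, so clients never see a missing table.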
Thanks again.

Unexpected MySql PoolExhaustedException

I am running two EC2 instances on AWS to serve my application, one application per instance. Each application can open up to 100 connections to MySQL by default. For the database I use RDS with a t2.medium instance, which can handle 312 connections at a time.
In general, my connection count does not get larger than 20. When I start sending notifications to users to come to the application, the connection count increases a lot (which is expected). In some cases, the MySQL connections increase unexpectedly and my application starts to throw a PoolExhaustedException:
PoolExhaustedException: [pool-7-thread-92] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:100; busy:100; idle:0; lastwait:30000].
When I check the database connections from Navicat, I see that there are about 200 connections, and all of them are sleeping. I do not understand why the open connections are not used. I use standard Spring Data JPA to save and read my entities, which means I do not open or close connections manually.
Unless I shut down one of the instances so that the MySQL connections are released, neither instance responds at all.
You can see the graph of the MySQL connection count here, and a piece of the log here.
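For context, the PoolExhaustedException and the [size:100; busy:100; idle:0] fields suggest the Tomcat JDBC pool. With Spring Boot, its limits map to properties like the following; a sketch, and the property names assume Spring Boot 1.x with the Tomcat pool:

# application.properties
spring.datasource.tomcat.max-active=100   # matches size:100 in the exception
spring.datasource.tomcat.max-wait=30000   # 30 s, matches the 'in 30 seconds' timeout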

Increase the number of connections on my MySQL server

I have applications that connect to a remote server (MySQL 5.5 on Windows Server 2012). At first I started receiving a "too many connections" message, which I solved by increasing the max_connections value in my.ini to 500. Then I started getting a "can't create new thread" message, so I decreased timeouts to avoid idle connections holding a socket, which didn't completely work. Now I get odd messages like "file not found"; as soon as I restart the service the messages stop and everything works correctly.
The problem occurs when the server reaches around 170 connections at the same time.
Is there some configuration I'm missing? I really don't know what info you need to give me a hint to fix this. I mean, there are servers that accept a lot more simultaneous connections, right? What am I missing?
RAM and CPU of the system don't exceed 35-40% at the maximum connection count (170).
Edit: the error occurs at two "places": when running a query, or at the connection attempt; it's like the MySQL service rejects the attempt. VB6 is the language used in the client app (ODBC connector). The app opens, executes, and closes the connection.
Note: I have full control over the client app and the server config.

Sidekiq returns "ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)" on AWS RDS

I am creating PDF documents on an AWS server, with Sidekiq processing the job in the background.
While the PDF file is being created, the [Rails] application is polling the database to check whether the PDF file has been created yet (interval: 2 seconds).
This morning I got this error message on the Sidekiq side:
ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)
I am using Amazon RDS with MySQL on it.
As a temporary solution, I increased the pool parameter from 10 to 30 in database.yml; however, I realize this is just a temporary patch.
How do I fix it properly?
Thank you
I think that your solution is actually the correct one.
ActiveRecord::ConnectionPool is thread-based, i.e. it tries to obtain a separate connection for each thread that wants to work with the database. If there are more threads wanting to access the database than the total size of the connection pool (configured with the pool option in database.yml), ConnectionPool waits up to 5 seconds by default for a connection to be freed by some other thread. After this 5-second timeout, the ActiveRecord::ConnectionTimeoutError exception is raised.
Now, Sidekiq uses 25 worker threads by default. So, under higher load, it is perfectly possible that there will be up to 25 jobs (threads) trying to access the db at the same time. If your pool was set to 10, the excess workers had to wait for the other ones to complete, and probably some thread had to wait too long.
So, either enlarge the connection pool to a value at least a little higher than 25 (the number of Sidekiq workers), just as you did, or run Sidekiq with fewer workers, e.g. sidekiq -c 5. Finally, always ensure that you allow enough incoming connections on the MySQL side (by default it's over 100).
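A sketch of the corresponding database.yml entry (adapter and database names are placeholders):

production:
  adapter: mysql2
  database: myapp_production
  pool: 30   # at least Sidekiq's concurrency (25 by default), plus some headroom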
Polling generally doesn't scale as a solution.
Without knowing more about the structure of your application, I would be tempted to leverage some concurrency constructs from a gem like concurrent-ruby.
Specifically, the creation of a PDF file maps quite closely to the concept of a Future or a Promise.
I would look at rearchitecting your worker:
Your PDF generation code should be a Promise. It should open a DB connection only long enough to write the resulting PDF to the database, not while it is doing the PDF generation as well.
Your main application code should spin up a PDF generation promise on Sidekiq as usual. Instead of polling the database, this code simply waits for the promise to complete or fail; if it completes successfully the PDF is in the database, and if it fails you have an exception trace, etc. (see the sketch below).
As always, ymmv
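A minimal sketch of that promise-based flow with concurrent-ruby (PdfGenerator.render and the document record are hypothetical names):

require 'concurrent'

# Kick off PDF generation off the caller's thread.
pdf_promise = Concurrent::Promise.execute do
  pdf_data = PdfGenerator.render(document)   # no DB connection held during generation
  # Borrow a pooled connection only for the final write:
  ActiveRecord::Base.connection_pool.with_connection do
    document.update!(pdf: pdf_data, status: 'done')
  end
  document.id
end

# Instead of polling every 2 seconds, block until the promise resolves or fails:
pdf_promise.wait
raise pdf_promise.reason if pdf_promise.rejected?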

How can I configure HAProxy to work with Server-Sent Events?

I'm trying to add an endpoint to an existing application that sends Server-Sent Events. There may often be no event for ~5 minutes. I'm hoping to configure that endpoint so that HAProxy doesn't cut off my server even when the response hasn't completed within ~1 min, while all other endpoints still time out if the server fails to respond.
Is there an easy way to support server sent events in HAProxy?
Here is my suggestion for HAProxy and SSE: you have plenty of custom timeout options in HAProxy, and there are two interesting ones for you.
timeout tunnel specifies the timeout for tunnel connections, used for WebSockets, SSE, or CONNECT; it bypasses both the server and client timeouts.
timeout client handles the situation where a client loses its connection (network loss, disappearing before the ACK ending the session, etc.).
In your haproxy.cfg, this is what you should do. First, in your defaults section:
# Set the max time to wait for a connection attempt to a server to succeed
timeout connect 30s
# Set the maximum inactivity time on the client side
timeout client 50s
# Set the maximum inactivity time on the server side
timeout server 50s
Nothing special so far.
Now, still in the defaults section :
# handle the situation where a client suddenly disappears from the net
timeout client-fin 30s
Next, jump to your backend definition and add this:
timeout tunnel 10h
I suggest a high value; 10 hours seems OK.
You should also avoid using the default option http-keep-alive; SSE does not use it. Use option http-server-close instead.
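Putting it together, a sketch of routing only the SSE endpoint to a long-lived backend (the names fe_main, be_sse, be_app, the /events path, and the addresses are hypothetical):

frontend fe_main
    bind *:80
    # send only the SSE endpoint to the backend with the tunnel timeout
    acl is_sse path_beg /events
    use_backend be_sse if is_sse
    default_backend be_app

backend be_sse
    timeout tunnel 10h
    option http-server-close
    server app1 127.0.0.1:8000

backend be_app
    server app1 127.0.0.1:8000

This keeps the short default timeouts for every other endpoint, which is exactly the split the question asks for.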