Configure Timeout in Apache 2.2 Server - configuration

I tried to set the timeout for my application to 1200 seconds in the httpd.conf file, but it seems to time out after 300 seconds.
This is how it appears in my httpd.conf; I have set it globally, outside any <VirtualHost> block:
Timeout 1200
I also made the same change in the httpd-default.conf file. Is there any other way I can configure the timeout?

In my setup, since Apache was the load balancer and WebLogic was my application server, I had to add/modify this directive in the httpd.conf file:
WLIOTimeoutSecs 1200
This change worked for me.
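For reference, a minimal httpd.conf sketch combining both directives might look like this (the WebLogic host and port are placeholders, and the WebLogic proxy plug-in is assumed to be loaded):
# Global Apache connection timeout
Timeout 1200
<IfModule mod_weblogic.c>
    WebLogicHost backend.example.com
    WebLogicPort 7001
    # How long the plug-in waits for a response from WebLogic
    WLIOTimeoutSecs 1200
</IfModule>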

Related

Asterisk Realtime Crashing on load when using HAProxy to Galera Cluster

It works fine under light load on our test bench, but once we move it to production the whole thing crashes and we are unable to get Asterisk to function correctly. It is almost as if there is a lag or delay in accessing the MariaDB cluster.
Our architecture and configs are below:
Asterisk 13 Realtime with HAProxy (1.5.18) --> 6 x MariaDB (10.4.11) in independent datacentres with Galera syncing them (one used only as backup)
Galera sync is working fine and other services are able to read/write via HAProxy without issue.
It only seems to become an issue when we add load, reload the dialplan, restart Asterisk, etc.
[haproxy.cfg]
global
    user haproxy
    group haproxy
defaults
    mode http
    log global
    retries 2
    timeout connect 3000ms
    timeout server 10h
    timeout client 10h
listen stats
    bind *:8404
    stats enable
    stats hide-version
    stats uri /stats
listen mysql-cluster
    bind 127.0.0.1:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server mysql_server1 10.0.0.1:3306 check
    server mysql_server2 10.0.0.2:3306 check
    server mysql_server3 10.0.0.3:3306 check
    server mysql_server4 10.0.0.4:3306 check
    server mysql_server5 10.0.0.5:3306 check
    server mysql_server6 10.0.0.6:3306 check backup
Firstly, we would really like to know whether Asterisk 13 Realtime will work via HAProxy at all, and if so, what config changes we need to make to get it working.
We can provide more info if required.
Try using Realtime -> ODBC -> HAProxy.
If that doesn't help, use debugging, for example gdb traces.
As posted, there is no way to determine what issue you have; more logs and configs are needed.
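For reference, a minimal sketch of pointing Realtime at the local HAProxy listener through ODBC (assuming unixODBC and a MariaDB/MySQL ODBC driver are installed; the DSN name, database, and credentials below are placeholders):
In /etc/odbc.ini (Server points at the 127.0.0.1:3306 listener from haproxy.cfg):
[asterisk-connector]
Driver   = MariaDB
Server   = 127.0.0.1
Port     = 3306
Database = asterisk
In /etc/asterisk/res_odbc.conf:
[asterisk]
enabled => yes
dsn => asterisk-connector
username => asterisk
password => secret
pre-connect => yes
The Realtime families in extconfig.conf would then be mapped to this ODBC connection.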

504 Gateway Timeout error while selecting 300,000 rows from MySQL database

I have a table orders with 30000 rows. I am using a Linode server with 2 GB RAM,
but when I executed my query using phpMyAdmin it gave me a 504 Gateway Timeout error:
SELECT * FROM `orders`
I don't understand what the problem is.
Add the following line to the file /etc/nginx/nginx.conf in the http{} block:
fastcgi_read_timeout 360;
Restart nginx:
sudo service nginx restart
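In context, the relevant part of /etc/nginx/nginx.conf would look something like this (360 seconds is only an example value):
http {
    # ... existing settings ...
    # How long nginx waits for a response from the FastCGI (PHP-FPM) backend
    fastcgi_read_timeout 360;
}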
A 504 Gateway Timeout error appears when the server hosting the website is unable to return an HTTP response within the set time limit.
As a solution, try increasing the value of the PHP max_execution_time parameter.
504 Gateway Timeout is an HTTP error, not a database error.
The database takes too long to collect your data.
You probably have to increase max_execution_time in your php.ini.
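For example, a minimal php.ini sketch (600 seconds is only an illustrative value):
; php.ini
; Maximum time in seconds a script is allowed to run
max_execution_time = 600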
Getting 30k records at once isn't a good idea. In any case, you need to check your SQL server configuration; 30k records shouldn't be enough to exceed the default timeout. By the way, if you only change the timeout in the web server it won't help you, because browsers have a default timeout too. mysqltuner can probably help you find the configuration error.
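For example, a hedged sketch of fetching the table in pages instead of all at once (it assumes the table has an id primary key to order by):
-- first page of 1000 rows
SELECT * FROM `orders` ORDER BY id LIMIT 1000 OFFSET 0;
-- next page
SELECT * FROM `orders` ORDER BY id LIMIT 1000 OFFSET 1000;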
nano /usr/share/phpMyAdmin/libraries/config.default.php
Add or edit:
$cfg['ExecTimeLimit'] = 1800000;
I hope that gets rid of the error for you.

phpMyAdmin Service Unavailable

I have just installed a fresh Fedora 21 with httpd, MySQL, and phpMyAdmin. I get an error if I visit localhost/phpMyAdmin:
Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
MySQL seems to work fine; I can connect to it.
In the file /etc/php.ini, look for the open_basedir parameter and add the path of the app, for example /var/www/html/phpmyadmin. If you have multiple dirs, concatenate them with a colon:
open_basedir=/usr/share/webapps/:/etc/webapps/:/tmp/:/srv/http/arch/cacti/:/var/www/html/phpmyadmin
Don't forget to restart the httpd server.
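On Fedora with systemd that is typically:
sudo systemctl restart httpd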

Openshift time-out error (configure timeout client)

I have an app hosted on OpenShift. We have a feature that lets the user upload a file into $OPENSHIFT_DATA_DIR; a Node.js function is then called to insert the data into our DB. For big tables this operation may take 5-7 minutes to complete.
But before the server completes the operation, the client side gets disconnected and a Gateway Time-out error appears at 120000 ms. The server-side process continues and eventually completes, but the client is left with this horrible error.
I need to know where I can edit those 120000 ms. I edited the haproxy config with different values but the timeout is still 120 s. Is there another file somewhere?
retries 6
timeout http-request 8m
timeout queue 8m
timeout connect 8m
timeout client 8m
timeout server 8m
timeout http-keep-alive 8m
I found 2 haproxy files:
haproxy/conf/haproxy/haproxy.cfg
haproxy/versions/1.4/configuration/haproxy.cfg
Both have been edited.
I guess there are multiple timeouts out there, but I need to know where they are, or how to change the client-side timeout.
The app has 3 gears:
haproxy-1.4 (Web Load Balancer)
  Gears: Located with nodejs-0.10
nodejs-0.10 (Node.js 0.10)
postgresql-9.2 (PostgreSQL 9.2)
  Gears: 1 small
smarterclayton-redis-2.6 (Redis)
5-7 minutes is an awfully long time for a web request. It sounds like this would be the perfect opportunity for you to explore using background tasks. Try uploading your data from the client, returning a response immediately, and processing the data in the background with something similar to delayed_job in Rails.

Timeout error when I download files using FTP Task

I have an SSIS package that uses an FTP Task.
When I use the FTP Task to download a .csv file, I get an operation timeout error.
The file size is 20 MB.
Please help me figure out how to fix this.
Thanks
If you set the timeout period to 0, it will never time out. You should be aware that the remote host could still close your connection, so you will want to handle that situation.
The setting is in the FTP Connection Manager.
If you are just downloading files, check the "Use passive mode" setting in the FTP Connection Manager Editor and make sure it is selected.