I really need help figuring out what is going wrong.
Application background: it is a Node.js application with Socket.IO.
I am load testing my site using the siege 3.0.5 tool on Windows,
with 70 concurrent users for 10 seconds:
siege -c70 -t10s http://localhost:8082
and I get the error messages below:
[error] socket: unable to connect sock.c:230: Address family not supported by protocol
[error] socket: unable to connect sock.c:230: No such file or directory
[error] socket: unable to connect sock.c:230: No such file or directory
[error] socket: unable to connect sock.c:230: No such file or directory
[error] socket: unable to connect sock.c:230: No such file or directory
[error] socket: unable to connect sock.c:230: No such file or directory
[error] socket: unable to connect sock.c:230: No such file or directory
HTTP/1.1 200   8.96 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.96 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.96 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.96 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.96 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.98 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.98 secs:    3415 bytes ==> GET /
HTTP/1.1 200   8.98 secs:    3415 bytes ==> GET /
It works fine if I use
siege -c60 -t50s http://localhost:8082
Does this mean I can only serve 60 concurrent users? What are the error messages above, and how do I resolve them?
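One thing worth trying, as a hedged guess: siege's "Address family not supported by protocol" error can show up when localhost resolves to the IPv6 address ::1 and a socket cannot be opened for that address family on the client. Pointing siege at the IPv4 loopback address rules that out:
siege -c70 -t10s http://127.0.0.1:8082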
I am already using a connection pool with the mysql module and releasing each connection after use.
db.getConnection(function (err, connection) {
  if (err) {
    // Could not get a connection (e.g. the pool is exhausted): report it.
    return callback(err);
  }
  connection.query(query, function (err, rows) {
    // Release the connection back to the pool as soon as the query completes.
    connection.release();
    if (err) {
      // Don't throw inside an async callback -- it would crash the process.
      return callback(err);
    }
    callback(null, { rows: rows, tablename: ticker });
  });
});
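For context, here is a minimal sketch of how the pool referenced above might be created with the mysql module; the connectionLimit, credentials, and database name are assumptions, not values from the original setup:
var mysql = require('mysql');

// Hypothetical pool configuration -- tune connectionLimit to your workload;
// under siege, an exhausted pool makes requests queue behind one another.
var db = mysql.createPool({
  connectionLimit: 10,   // assumed value; the module's default is also 10
  host: 'localhost',
  user: 'app_user',      // placeholder credentials
  password: 'secret',
  database: 'app_db'
});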
Related
I got this error from Nginx after deploying the code to AWS EB. The weird thing is that I can SSH into the EC2 instance, and both the folder and the file exist: /var/www/html/webroot/index.php
2022/11/01 15:45:10 [error] 1456#1456: *36 testing "/var/www/html/webroot" existence failed (2: No such file or directory) while logging request, client: 0x2.21.03.15, server: , request: "GET / HTTP/1.1", host: "0x2.21.03.1"
2022/11/01 15:45:20 [error] 1456#1456: *37 "/var/www/html/webroot/index.php" is not found (2: No such file or directory), client: 0x2.21.03.1, server: , request: "GET / HTTP/1.1", host: "0x2.21.03.1"
UPDATE: I fixed the other error by removing a bespoke Nginx.conf file I was pushing with each deployment, but now I am getting this error:
2022/11/01 12:42:28 [error] 13146#13146: *25 open() "/var/www/html/webroot/.git/config" failed (2: No such file or directory), client: 142.xx.xx.1, server: , request: "GET /.git/config HTTP/1.1", host: "3.xx.xx3.x5"
I do not understand why or where EB is checking for a /.git/config file. I have the same code on a different instance type (t3.micro) and it works fine. I never had these issues before; it started happening when I created a new environment with the instance type t4g.micro.
Any ideas?
Note: both environments run Amazon Linux 2 with Nginx.
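For what it is worth, requests for /.git/config are almost always automated vulnerability scanners probing the public IP, not something EB itself does. As a hedged sketch, assuming you can extend the Nginx config through an .ebextensions or .platform override, a location block like the following refuses dotfile probes (this is an illustration, not EB's stock configuration):
location ~ /\.(git|svn|env) {
    # Hypothetical hardening rule: scanners probing for leaked repos/configs get a 404.
    return 404;
}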
I keep getting the following messages, but there is nothing in my nginx logs indicating that requests were returned with a 5xx status. Also, the app seems to be working as expected. Any pointers on why I might be getting these?
Message:
Environment health has transitioned from Ok to Warning. 50.0 % of the requests are failing with HTTP 5xx. Insufficient request rate (12.0 requests/min) to determine application health. 1 out of 2 instances are impacted. See instance health for details.
eb logs shows the following events around the same time, and they look like a hack attempt. My guess is that these failing POST requests are making EB think the instances are unhealthy. Any advice on how we can prevent this?
2019/02/10 23:49:37 [error] 3263#0: *23308 upstream prematurely closed connection while reading response header from upstream, client: 172.31.35.221, server: , request: "POST /51314.php HTTP/1.1", upstream: "http://172.17.0.2:80/51314.php", host: "xxx.xxx.xxx.xxx"
2019/02/10 23:49:37 [error] 3263#0: *23308 upstream prematurely closed connection while reading response header from upstream, client: 172.31.35.221, server: , request: "POST /fusheng.php HTTP/1.1", upstream: "http://172.17.0.2:80/fusheng.php", host: "xxx.xxx.xxx.xxx"
2019/02/10 23:49:38 [error] 3263#0: *23308 upstream prematurely closed connection while reading response header from upstream, client: 172.31.35.221, server: , request: "POST /repeat.php HTTP/1.1", upstream: "http://172.17.0.2:80/repeat.php", host: "xxx.xxx.xxx.xxx"
2019/02/10 23:49:39 [error] 3263#0: *23308 upstream prematurely closed connection while reading response header from upstream, client: 172.31.35.221, server: , request: "POST /api.php HTTP/1.1", upstream: "http://172.17.0.2:80/api.php", host: "xxx.xxx.xxx.xxx"
2019/02/10 23:49:40 [error] 3263#0: *23308 upstream prematurely closed connection while reading response header from upstream, client: 172.31.35.221, server: , request: "POST /xiaodai.php HTTP/1.1", upstream: "http://172.17.0.2:80/xiaodai.php", host: "xxx.xxx.xxx.xxx"
Thanks.
Example reasons can be:
nginx proxy crashed on the instance
high CPU usage on the instance
high memory usage on the instance
deployment failure on the instance
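Given that the logs above show scanners probing for PHP exploits, one hedged option, assuming the application serves no PHP at all, is to have Nginx close those connections before they reach the upstream and get counted against instance health (the pattern below is an illustration, not EB's stock configuration):
location ~ \.php$ {
    # Hypothetical rule: 444 tells Nginx to drop the connection without a response,
    # so scanner POSTs never reach the app and never produce an upstream 5xx.
    return 444;
}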
I have Ubuntu 14.04, and on the server I have nginx and MySQL.
Everything works fine, but after 5-10 requests to the API, nginx crashes.
The site loads for a long time and then ends up with a 404 Not Found error.
When I restart the service (service nginx restart), my site is up again.
I have a strong server:
64 GB RAM, a 1 Gbit port with 33 TB/month,
a 1 TB disk, and 12 cores / 24 threads.
I don't understand what the error is or how to solve it.
This is the nginx.conf:
https://pastebin.com/raw/eQtMSKAY
The nginx error log:
2017/07/30 06:55:43 [error] 18441#0: *6302 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: XX.XX.XX.XX, server: 4107.150.4.82, request: "GET /panel/ajax/user/tools/server?method=getstatus&port=25565 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "pay2play.co.il", referrer: "http://pay2play.co.il/panel/panel?id=15"
2017/07/30 06:55:43 [error] 18441#0: *6302 open() "/usr/share/nginx/html/50x.html" failed (2: No such file or directory), client: 5.29.8.30, server: 107.150.44.82, request: "GET /panel/ajax/user/tools/server?method=getstatus&port=25565 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "pay2play.co.il", referrer: "http://pay2play.co.il/panel/panel?id=15"
Based on what you pasted, the actual error is on the PHP side: php5-fpm is refusing connections ("Resource temporarily unavailable"). The 404 is just nginx attempting to render a "nice" error page for the resulting 50x, contained in 50x.html, which does not exist at the configured path. While your pasted version doesn't include it, the error_page directive is likely contained in one of the includes (which are more relevant to the question than the top-level configuration shown here).
I expect there is something like this (taken from the nginx docs):
error_page 500 502 503 504 /50x.html;
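Separately, "Resource temporarily unavailable" on the php5-fpm socket usually means the pool has no free worker (or its listen backlog is full). Here is a hedged sketch of the pool settings to review in /etc/php5/fpm/pool.d/www.conf; the numbers are placeholders to size against your RAM and per-worker memory use, not recommendations:
; Hypothetical values -- measure one worker's memory footprint before raising these.
pm = dynamic
pm.max_children = 50        ; hard cap on concurrent PHP workers
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
listen.backlog = 511        ; queue length before connects fail with EAGAIN
After editing, service php5-fpm restart applies the change.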
When I held down the F5 key on my web page (for about 5 seconds, for testing purposes),
my website displayed
"502 Bad Gateway".
I am using Symfony 2.6 and MySQL with PHP-FPM. My VPS has 1 GB RAM and a 1-core CPU. The nginx log displays:
2015/05/12 12:41:17 [error] 9541#0: *661 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /app_dev.php/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"
The connection is lost after the test and comes back after about 30 seconds.
Question: how can I configure Nginx to prevent this? I need my website not to fall over under many HTTP requests.
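One hedged option on the Nginx side is request-rate limiting, so a burst of refreshes is queued or rejected with a 503 instead of exhausting the PHP-FPM worker pool; the zone name and rates below are assumptions to tune:
# In the http {} block: track clients by IP, allowing roughly 10 requests/second each.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# In the server {} or the location that passes to PHP-FPM: absorb short bursts.
limit_req zone=perip burst=20;
This only shields PHP-FPM from bursts; the pool's pm.max_children setting (as in the previous question) is what governs actual capacity.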
I am trying to deploy a simple RoR app on OpenShift. I am using Ruby 1.9 and MySQL 5.1. The app works fine when deployed locally, but it does not work on OpenShift. Previously I had the same issue, and there was an open bug with Phusion Passenger on OpenShift; has anyone fixed this, or is it still an issue? If there is any workaround, please let me know.
I am posting my database.yml config here.
mysql: &mysql
  adapter: mysql2
  database: "<%=ENV['OPENSHIFT_APP_NAME']%>"
  username: "<%=ENV['OPENSHIFT_MYSQL_DB_USERNAME']%>"
  password: "<%=ENV['OPENSHIFT_MYSQL_DB_PASSWORD']%>"
  host: <%=ENV['OPENSHIFT_MYSQL_DB_HOST']%>
  port: <%=ENV['OPENSHIFT_MYSQL_DB_PORT']%>
My rhc tail output is as follows:
==> app-root/logs/ruby.log <==
10.80.227.1 - - [03/Jun/2014:01:11:48 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:01:11:48 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:02:11:28 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:02:11:28 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:03:11:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:03:11:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:04:13:34 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.80.227.1 - - [03/Jun/2014:04:13:34 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
==> app-root/logs/mysql.log <==
140602 21:52:59 mysqld_safe Logging to '/var/lib/openshift/538ce6295973caef290000fd/mysql//stdout.err'.
140602 21:52:59 mysqld_safe Starting mysqld daemon with databases from /var/lib/openshift/538ce6295973caef290000fd/mysql/data/
140602 21:52:59 InnoDB: Initializing buffer pool, size = 32.0M
140602 21:52:59 InnoDB: Completed initialization of buffer pool
140602 21:53:00 InnoDB: Started; log sequence number 0 44233
140602 21:53:01 [Note] Event Scheduler: Loaded 0 events
140602 21:53:01 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.73' socket: '/var/lib/openshift/538ce6295973caef290000fd/mysql//socket/mysql.sock' port: 3306 Source distribution
For testing purposes, I have a welcome controller and index page in Rails that work fine on my local machine.
Can anyone please point me to the right resource or otherwise help? It would be a great help to me.
You have mysql: &mysql in your database.yml file, but Rails looks up settings under the current environment's key (e.g. production), and nothing in your file references that anchor. Try this:
production:
  adapter: mysql2
  encoding: utf8
  database: <%=ENV['OPENSHIFT_APP_NAME']%>
  pool: 30
  timeout: 30000
  checkout_timeout: 30000
  host: <%=ENV['OPENSHIFT_MYSQL_DB_HOST']%>
  port: <%=ENV['OPENSHIFT_MYSQL_DB_PORT']%>
  username: <%=ENV['OPENSHIFT_MYSQL_DB_USERNAME']%>
  password: <%=ENV['OPENSHIFT_MYSQL_DB_PASSWORD']%>
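Alternatively, if the intent of mysql: &mysql was to share settings across environments, the standard Rails environment keys still need to merge the anchor in. A sketch of that pattern, reusing the ENV lookups from the question:
mysql: &mysql
  adapter: mysql2
  database: "<%=ENV['OPENSHIFT_APP_NAME']%>"
  username: "<%=ENV['OPENSHIFT_MYSQL_DB_USERNAME']%>"
  password: "<%=ENV['OPENSHIFT_MYSQL_DB_PASSWORD']%>"
  host: <%=ENV['OPENSHIFT_MYSQL_DB_HOST']%>
  port: <%=ENV['OPENSHIFT_MYSQL_DB_PORT']%>

# Each environment Rails actually looks up merges the shared block via the anchor.
production:
  <<: *mysql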