I am new to nginx. I am migrating my server from Apache to nginx. Most of my projects are plain core PHP sites and they work perfectly, but the CodeIgniter site does not.
My sample URL looks like this:
http://example.com/track/
This redirects to:
http://example.com/track/index.php/sessions/login
but it returns 404 Not Found.
My server is configured like this:
server {
    listen 80;
    server_name 192.168.0.80;

    location / {
        root /usr/share/nginx/html;
        index index.php index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root /usr/share/nginx/html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
My error log looks like this:
2013/05/15 10:21:37 [error] 2474#0: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 192.168.0.11, server: 192.168.0.80, request: "GET /favicon.ico HTTP/1.1", host: "192.168.0.80"
2013/05/15 10:21:37 [error] 2474#0: *1 FastCGI sent in stderr: "Unable to open primary script: /usr/share/nginx/html/index.php (No such file or directory)" while reading response header from upstream, client: 192.168.0.11, server: 192.168.0.80, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.0.80"
2013/05/15 10:22:05 [error] 2474#0: *1 open() "/usr/share/nginx/html/track/index.php/sessions/login" failed (20: Not a directory), client: 192.168.0.11, server: 192.168.0.80, request: "GET /track/index.php/sessions/login HTTP/1.1", host: "192.168.0.80"
2013/05/15 10:26:46 [error] 2474#0: *5 open() "/usr/share/nginx/html/track/index.php/sessions/login" failed (20: Not a directory), client: 192.168.0.11, server: 192.168.0.80, request: "GET /track/index.php/sessions/login HTTP/1.1", host: "192.168.0.80"
2013/05/15 10:28:33 [error] 2474#0: *7 open() "/usr/share/nginx/html/track/index.php/sessions/login" failed (20: Not a directory), client: 192.168.0.11, server: 192.168.0.80, request: "GET /track/index.php/sessions/login HTTP/1.1", host: "192.168.0.80"
2013/05/15 10:29:59 [error] 2497#0: *1 open() "/usr/share/nginx/html/track/index.php/sessions/login" failed (20: Not a directory), client: 192.168.0.11, server: 192.168.0.80, request: "GET /track/index.php/sessions/login HTTP/1.1", host: "192.168.0.80"
What's wrong? I searched on Google but found nothing that works.
Check out this link; it should fix your rewriting issues.
If you have further questions, feel free to ask.
CodeIgniter for nginx
Edit:
OK, there are several ways to fix this, but I'll describe the one closest to your current setup.
If you're going to keep editing the same nginx conf file, as I think you're doing now, then try adding this:
location /track {
    try_files $uri $uri/ /track/index.php;
}
I'm not sure whether the routing needs $request_uri appended to index.php or not.
I also think you should configure CodeIgniter to be aware of this subfolder:
$config['base_url'] = "192.168.0.80/track";
This isn't really the cleanest way to do this configuration; I'd prefer adding a domain name in /etc/hosts and creating a separate server block in nginx.
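For reference, that separate-server-block approach might look roughly like this. This is only a sketch: the `ci.local` hostname is an assumed entry in /etc/hosts, the paths are carried over from the question, and the PATH_INFO handling is the usual pattern for CodeIgniter's index.php/controller/method URLs rather than anything confirmed by the original poster.

```nginx
server {
    listen 80;
    server_name ci.local;   # assumed hostname added to /etc/hosts
    root /usr/share/nginx/html/track;
    index index.php;

    location / {
        # Anything that isn't a real file or directory goes to the front controller
        try_files $uri $uri/ /index.php$uri$is_args$args;
    }

    location ~ \.php(/|$) {
        # Split "index.php/sessions/login" into the script and its PATH_INFO
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
```

Without fastcgi_split_path_info, requests like /track/index.php/sessions/login are treated as a filesystem path, which is exactly the "(20: Not a directory)" error in the log above.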
I ran into a similar problem earlier today: CodeIgniter's URI routing requires that both the REQUEST_URI and SCRIPT_NAME variables be defined. Many nginx/PHP-FPM guides don't define the SCRIPT_NAME variable, so you end up with routing failures. You'll need to add a line that looks like this:
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
Then you can use REQUEST_URI in your config.php file, and the routing should work.
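For context, the matching setting lives in application/config/config.php (assuming a stock CodeIgniter install; 'REQUEST_URI' is one of the values CodeIgniter accepts for this option):

```php
// application/config/config.php
// Tell CodeIgniter's router to build routes from REQUEST_URI,
// which nginx passes through via the standard fastcgi_params include.
$config['uri_protocol'] = 'REQUEST_URI';
```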
I got this error from nginx after deploying the code to AWS Elastic Beanstalk. The weird thing is that I can SSH into the EC2 instance, and both the folder and the file exist: /var/www/html/webroot/index.php
2022/11/01 15:45:10 [error] 1456#1456: *36 testing "/var/www/html/webroot" existence failed (2: No such file or directory) while logging request, client: 0x2.21.03.15, server: , request: "GET / HTTP/1.1", host: "0x2.21.03.1"
2022/11/01 15:45:20 [error] 1456#1456: *37 "/var/www/html/webroot/index.php" is not found (2: No such file or directory), client: 0x2.21.03.1, server: , request: "GET / HTTP/1.1", host: "0x2.21.03.1"
UPDATE: I fixed the error above by removing a bespoke nginx.conf file I was pushing with each deployment, but now I am getting this error:
2022/11/01 12:42:28 [error] 13146#13146: *25 open() "/var/www/html/webroot/.git/config" failed (2: No such file or directory), client: 142.xx.xx.1, server: , request: "GET /.git/config HTTP/1.1", host: "3.xx.xx3.x5"
I do not understand why or where EB is checking for a /.git/config file. I have the same code on a different instance type (t3.micro) and it works fine. I never had these issues before; they started when I created a new environment with the instance type "t4g.micro".
Any ideas?
Note: both environments run Amazon Linux 2 with nginx.
After filling out a form on my live website, I get this message in the browser when I hit submit:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /contact.
Reason: Error reading from remote server
Apache/2.4.18 (Ubuntu) Server at (mysite) Port 80
Ubuntu error log:
[Mon Jan 22 00:12:21.518850 2018] [proxy:error] [pid 18331:tid 139839210997504] [client 24.245.70.116:62084] AH00898: Error reading from remote server returned by /contact, referer: (myurl)
.conf file:
<VirtualHost *:80>
    ServerName www.(mysite).net
    <Location "/">
        ProxyPreserveHost On
        ProxyPass http://localhost:3210/
        ProxyPassReverse http://localhost:3210/
    </Location>
</VirtualHost>
I have Ubuntu 14.04, and on the server I have nginx and MySQL.
Everything works fine, but after 5-10 requests to the API, nginx crashes.
The site then loads for a long time and ends up with a 404 Not Found error.
When I restart the service (service nginx restart), my site is up again.
I have a strong server:
64GB RAM, 1Gbit port, 33TB/month,
1TB disk, 12 cores / 24 threads.
I don't understand what the error is or how to solve it.
This is the nginx.conf:
https://pastebin.com/raw/eQtMSKAY
error log nginx:
2017/07/30 06:55:43 [error] 18441#0: *6302 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: XX.XX.XX.XX, server: 4107.150.4.82, request: "GET /panel/ajax/user/tools/server?method=getstatus&port=25565 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "pay2play.co.il", referrer: "http://pay2play.co.il/panel/panel?id=15"
2017/07/30 06:55:43 [error] 18441#0: *6302 open() "/usr/share/nginx/html/50x.html" failed (2: No such file or directory), client: 5.29.8.30, server: 107.150.44.82, request: "GET /panel/ajax/user/tools/server?method=getstatus&port=25565 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "pay2play.co.il", referrer: "http://pay2play.co.il/panel/panel?id=15"
Based on what you pasted, the actual error is on the PHP side: php5-fpm is refusing connections ("11: Resource temporarily unavailable"). The 404 is just nginx failing to render a "nice" error page (50x.html) for the upstream error, because that file doesn't exist at /usr/share/nginx/html/50x.html. While your pasted configuration doesn't show it, the error_page directive is likely in one of the includes (which are more relevant to the question than the top-level configuration shown here).
I expect there is something like this (from the nginx docs, actually):
error_page 500 502 503 504 /50x.html;
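A common cause of "(11: Resource temporarily unavailable)" on the FPM socket is the pool running out of worker processes under load. Since the pool configuration isn't shown here this is only a guess, but the place to start would be the worker limits in /etc/php5/fpm/pool.d/www.conf:

```ini
; /etc/php5/fpm/pool.d/www.conf -- illustrative values, tune to your workload
pm = dynamic
pm.max_children = 50       ; hard cap on concurrent PHP workers
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500      ; recycle workers periodically to contain leaks
```

With 64GB of RAM the box can afford far more workers than the Debian/Ubuntu defaults allow; once all workers are busy, new connections to the socket are refused exactly as in the log above.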
I have a directory/web app that is located outside of the web root directory of my site.
Say the site is here:
/var/www/site/htdocs/
And the external app is located here:
/var/www/apps/coolapp/
My question is: how can I configure nginx to map/route all requests like www.mysite.com/coolapp/* (asterisk being a wildcard) to the external location /var/www/apps/coolapp/? For example, www.mysite.com/coolapp/test.php should serve /var/www/apps/coolapp/test.php.
I added an alias directive in production.conf, which the main nginx.conf file includes. This worked fine for everything except .php files, because another location block was catching those instead. So I nested a location with the alias to catch .php files, but now nginx tells me it can't find the .php files ("404 File Not Found"). Here is what production.conf currently looks like:
server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /blah/blah/blah;
    ssl_certificate_key /blah/blah/blah;
    ssl_protocols blah blah blah;
    ssl_ciphers blahblahblah;
    ssl_prefer_server_ciphers blahblah;

    access_log /var/log/nginx/www.mysite.com-access.log;
    error_log /var/log/nginx/www.mysite.com-error.log error;

    server_name mysite.com www.mysite.com;
    root /var/www/site/htdocs;

    include conf/magento_rewrites.conf;
    include conf/magento_security.conf;
    include /var/www/site/nginx/*.conf;

    #-------CODE IN QUESTION-------
    location /coolapp/ {
        alias /var/www/apps/coolapp/;
        location ~ \.php {
            # Copied from "# PHP Handler" below
            fastcgi_param MAGE_RUN_CODE default;
            fastcgi_param MAGE_RUN_TYPE store;
            fastcgi_param HTTPS $fastcgi_https;
            rewrite_log on;
            # By default, only handle fcgi without caching
            include conf/magento_fcgi.conf;
        }
    }

    # PHP handler
    location ~ \.php {
        ## Catch 404s that try_files miss
        if (!-e $request_filename) { rewrite / /index.php last; }
        ## Store code is defined in administration > Configuration > Manage Stores
        fastcgi_param MAGE_RUN_CODE default;
        fastcgi_param MAGE_RUN_TYPE store;
        fastcgi_param HTTPS $fastcgi_https;
        rewrite_log on;
        # By default, only handle fcgi without caching
        include conf/magento_fcgi.conf;
    }

    # 404s are handled by front controller
    location @magefc {
        rewrite ^(.*) /index.php?$query_string last;
    }

    # Last path match hands to magento or sets global cache-control
    location / {
        ## Maintenance page overrides front controller
        index index.html index.php;
        try_files $uri $uri/ @magefc;
        expires 24h;
    }
}
conf/magento_fcgi.conf looks like this:
fastcgi_pass phpfpm;
## Tell the upstream who is making the request
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
# Ensure the admin panels have enough time to complete large requests ie: report generation, product import/export
proxy_read_timeout 1600s;
# Ensure PHP knows when we use HTTPS
fastcgi_param HTTPS $fastcgi_https;
## Fcgi Settings
include fastcgi_params;
fastcgi_connect_timeout 120;
fastcgi_send_timeout 320s;
fastcgi_read_timeout 1600s;
fastcgi_buffer_size 128k;
fastcgi_buffers 512 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors off;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/apps/coolapp$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
# nginx will buffer objects to disk that are too large for the buffers above
fastcgi_temp_path /tmpfs/nginx/tmp 1 2;
#fastcgi_keep_conn on; # NGINX 1.1.14
expires off; ## Do not cache dynamic content
Here are some error messages I pulled from error.log:
2014/02/28 11:10:17 [error] 9215#0: *933 connect() failed (111: Connection refused) while connecting to upstream, ................. client: x.x.x.x, server: mysite.com, request: "GET /coolapp/test.php HTTP/1.1", upstream: "fastcgi://[::1]:9000", host: "www.mysite.com"
2014/02/28 11:10:17 [error] 9215#0: *933 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: x.x.x.x, server: mysite.com, request: "GET /coolapp/test.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mysite.com"
2014/02/28 11:11:59 [error] 9220#0: *1193 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: x.x.x.x, server: mysite.com, request: "GET /coolapp/test.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mysite.com"
Does anyone see what I'm doing wrong?
This appears to have been fixed in a newer version. As long as SCRIPT_FILENAME is set to $request_filename, it should work as expected. (This differs from previous versions, where $request_filename didn't work in all cases.) You also need to omit any try_files directive in the inner location block; re-evaluating $uri appears to throw off $request_filename.
OK, second read after coffee: move SCRIPT_FILENAME out of the included fastcgi configuration and set it in both location blocks. If that doesn't fix things, hardcode the document-root path in the coolapp location and see if that helps.
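Concretely, that suggestion would look something like the sketch below. It assumes the fastcgi_param SCRIPT_FILENAME line has been removed from conf/magento_fcgi.conf; everything else is carried over from the question.

```nginx
location /coolapp/ {
    alias /var/www/apps/coolapp/;
    location ~ \.php {
        # With alias, $request_filename already resolves to the aliased path,
        # so the PHP script is looked up under /var/www/apps/coolapp/
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include conf/magento_fcgi.conf;
    }
}

location ~ \.php {
    # The main site still serves scripts relative to the server root
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include conf/magento_fcgi.conf;
}
```

The "Primary script unknown" errors in the log are php-fpm saying the SCRIPT_FILENAME it received doesn't point at a real file, which is why setting it correctly per location is the fix.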
I set up a Vagrant VirtualBox box for Debian Wheezy following these instructions.
I installed nginx and php5-fpm on this virtual machine. I can access my guest machine via 127.0.0.1:8080 from the host. It can also serve PHP files, and phpinfo() works correctly, too.
However, when I try to access a remote MySQL server from a php file, the request always times out and I get a 504 Gateway Timeout error.
I have noticed the following:
In my nginx conf file, I have this line fastcgi_pass unix:/var/run/php5-fpm.sock;.
In /etc/php5/fpm/pool.d/www.conf, I have listen = /var/run/php5-fpm.sock.
php5-fpm.sock exists in /var/run/.
If I use 127.0.0.1:9000 instead of the socket, I still get the 504 Gateway Timeout error, but I get it immediately, without any waiting.
I added proxy_read_timeout 300; in my nginx conf, but this did not solve the issue.
My Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "wheezy32"
  config.vm.provision :shell, :path => "dev/bootstrap.sh"

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  config.vm.network :forwarded_port, guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network :private_network, ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network :public_network
end
My nginx conf
server {
    root /var/www/sites/mysite/public_html;
    index index.html index.htm index.php;

    # Make site accessible from http://localhost/
    server_name localhost;

    access_log /var/www/logs/mysite/mysite.access_log;
    error_log /var/www/logs/mysite/mysite.error_log;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
        proxy_read_timeout 300;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # With php5-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
/var/www/logs/mysite/mysite.error_log
2013/06/16 23:47:27 [error] 2567#0: *23 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.0.2.2, server: localhost, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "127.0.0.1:8080"
2013/06/16 23:47:27 [error] 2567#0: *23 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 10.0.2.2, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "127.0.0.1:8080"
Here is how I attempt to connect to the remote MySQL server:
require_once('/var/www/sites/mysite/includes/db_constants.php');
try {
    $dsn = 'mysql:host=172.16.0.51;dbname=' . DB_NAME . ';charset=utf8';
    $db = new PDO($dsn, DB_USER, DB_PASS);
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
} catch (PDOException $e) {
    header('HTTP/1.1 500');
    exit;
} catch (Exception $e) {
    header('HTTP/1.1 500');
    exit;
}
What am I missing?
As @cmur2 said, I was using the private IP to connect to the remote server, which is why it did not work. I changed it to a public IP and now it works correctly.