I've spent over six hours on this problem.
I have an nginx/1.2.7 server, with php-fpm listening on 127.0.0.1:9000.
My base nginx config is:
server
{
    listen 80;
    server_name example.org www.example.org;
    index index.php index.html;
    access_log /srv/www/example.org/logs/access.log;
    error_log /srv/www/example.org/logs/error.log;

    location /
    {
        root /var/www/html/example.org/public_html;
        try_files $uri $uri/ /index.php;

        location ~ \.php$
        {
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
}
And it works fine! All PHP files are served just as they should be.
But I also have a separate Yii project, which needs to run from a folder other than the main root.
So I added this configuration at the bottom, where /srv/www/example.org/yiitest is the root of the yiitest project (with the 'protected' folder and everything else inside it):
location /yiitest
{
    root /srv/www/example.org/yiitest;
    try_files $uri $uri/ /index.php?$args;

    location ~ \.php$
    {
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
But it doesn't work: I get 'File not found'.
The most I can get working is this: example.org/yiitest/ serves the main page fine, but if I go to example.org/yiitest/site/contact/ I get 'File not found' again. :(
I can't figure out how to correctly set up a Yii project in a separate subdirectory of a server.
Create a symlink:
cd /var/www/html/example.org/public_html
ln -s ../yiitest/public yiitest
Then configure nginx:
root /var/www/html/example.org/public_html;

location / {
    ...
}

location /yiitest/ {
    index index.php;

    # front end: send anything that is not a real file
    # through the Yii entry script
    if (!-e $request_filename) {
        rewrite ^(.*)$ /yiitest/index.php last;
    }
}
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_script_name;
    include fastcgi_params;
}
Then configure the Yii framework. You should set 'basePath' in your config:
<?php
return array(
    'basePath' => 'yiitest',
    ...
);
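If you'd rather not create the symlink, an alternative sketch is to point root one level above the project, so the /yiitest URI prefix itself maps onto the project folder (this assumes the entry script sits at /srv/www/example.org/yiitest/index.php; untested):

location /yiitest {
    # root (not alias): nginx appends the full URI, so a request for
    # /yiitest/site/contact resolves under /srv/www/example.org/yiitest/
    root /srv/www/example.org;
    try_files $uri $uri/ /yiitest/index.php?$args;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}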
I have a MediaWiki farm and everything works except image display. Image upload works, in the sense that files get put in the correct folder along with thumbs, but the images don't display. I'd like to keep hosting images outside the site root, though.
The Mediawiki installation is in: /var/www/mediawiki
The image folder is in: /var/cats.wiki/images
My nginx config is:
server {
    listen 80;
    server_name cats.wiki; # made-up name for the example
    root /var/www/mediawiki;
    client_max_body_size 100M;

    location /images {
        alias /var/cats.wiki/images; # relevant part
    }

    location / {
        index index.php;
        error_page 404 = @mediawiki;
    }

    location @mediawiki {
        rewrite ^/w([^?]*)(?:\?(.*))? /index.php?title=$1&$2 last;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param MW_INSTALL_PATH /var/www/mediawiki;
        fastcgi_param WIKI_PATH "catwiki.php";
    }

    location ~* \.(js|css|svg|png|jpg|jpeg|gif|ico)$ {
        try_files $uri /index.php;
        expires 365d;
        log_not_found off;
        gzip_static on;
        gzip_comp_level 5;
        access_log off;
        add_header Cache-Control private;
    }
}
And here is the relevant section of my LocalSettings; the logo doesn't display in the browser either:
$wgLogo = "/var/cats.wiki/images/logo.png";
$wgEnableUploads = true;
$wgUseImageMagick = true;
$wgImageMagickConvertCommand = "/usr/bin/convert";
$wgUploadDirectory = "/var/cats.wiki/images";
$wgUploadPath = "/images";
Thanks! :)
It is not clear from your question whether the server directive from your nginx config applies to your whole farm or to only one wiki. You can definitely have one server for all wikis; I do in my own wiki farm setup.
In my wiki farm setup, the section for the image folder says (simplified and adapted to your example):
location ~* ^/images(?<image_subpath>/.+)$ {
    root $images_root;
    try_files $image_subpath @mediawiki;
    # ... (some code to neutralise potentially malicious uploads)
}
where $images_root is set earlier, in the http block, with a map directive (simplified and adapted to your example):
map $host $images_root {
    cats.wiki /var/cats.wiki/images;
    dogs.wiki /var/dogs.wiki/images;
    # ...
}
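Put together, a simplified sketch of how this could look for your case (untested against your setup):

map $host $images_root {
    cats.wiki /var/cats.wiki/images;
    # one line per wiki in the farm
}

server {
    listen 80;
    server_name cats.wiki;
    root /var/www/mediawiki;

    # serve /images/<subpath> from the per-wiki image folder
    location ~* ^/images(?<image_subpath>/.+)$ {
        root $images_root;
        try_files $image_subpath @mediawiki;
    }

    # ... rest of your server block unchanged ...
}

As a side note, $wgLogo takes a URL path rather than a filesystem path, so for the logo to display it should be "/images/logo.png", not "/var/cats.wiki/images/logo.png".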
I have an existing server that is working well, hosting a number of sites using nginx and ISPConfig. However, I have created a new site and wish to use Laravel.
I've installed Laravel successfully via Composer and have got as far as seeing the familiar welcome blade displayed when I visit mywebsite.com/public.
What I want to do next is set up some clean URLs. My experience with vhost files is somewhat limited and I'm having a bit of trouble with the config.
My routes file looks like this:
Route::get('/', function () {
    return view('welcome');
});

Route::get('/test', function () {
    return view('test');
});
and I'd hoped to see mywebsite.com/test display the contents of test.blade.php.
I'm aware I need to do some work on the vhost file before I can expect this to work, but my experience with vhosts is limited and I'm at a bit of a loss.
My current file looks like this:
server {
    listen *:80;
    server_name mywebsite.com;
    root /var/www/mywebsite.com/web;
    index index.html index.htm index.php index.cgi index.pl index.xhtml;

    error_page 400 /error/400.html;
    error_page 401 /error/401.html;
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 405 /error/405.html;
    error_page 500 /error/500.html;
    error_page 502 /error/502.html;
    error_page 503 /error/503.html;
    recursive_error_pages on;

    location = /error/400.html { internal; }
    location = /error/401.html { internal; }
    location = /error/403.html { internal; }
    location = /error/404.html { internal; }
    location = /error/405.html { internal; }
    location = /error/500.html { internal; }
    location = /error/502.html { internal; }
    location = /error/503.html { internal; }

    error_log /var/log/ispconfig/httpd/mywebsite.com/error.log;
    access_log /var/log/ispconfig/httpd/mywebsite.com/access.log combined;

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /stats/ {
        index index.html index.php;
        auth_basic "Members Only";
        auth_basic_user_file /var/www/clients/client1/web5/web/stats/.htpasswd_stats;
    }

    location ^~ /awstats-icon {
        alias /usr/share/awstats/icon;
    }

    location ~ \.php$ {
        try_files /5e26a1d85cb98f7191261e023385e60d.htm @php;
    }

    location @php {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/lib/php5-fpm/web5.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
    }
}
Now, on another server, I have this working with this simple directive:
server {
    root /var/www/public;
    index index.php index.html index.htm;
    server_name localhost;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
}
But I am limited in what I can do with the vhost on the current server, as ISPConfig writes most of it for me and refuses to write the config above that worked elsewhere. I also feel that editing the file directly would be bad practice; I'd always be on edge that ISPConfig might rewrite it, so I'm not really sure how best to proceed.
One option would be to just go ahead and edit the vhost and hope for the best, but if I do that, how do I ensure ISPConfig cannot overwrite the file without resorting to "hacky" methods?
Alternatively, is there a config I can enter via ISPConfig that will allow rewrites to happen properly in a way that suits Laravel? In this instance, any directive entered would need to take precedence over the ~ \.php$ block, as ISPConfig writes that before any directives entered via the control panel.
I just had the same problem recently. Digging through ISPConfig's sources, I understood that it can insert, merge, or delete location blocks in that default vhost file. So I did the following:
Sites menu > choose website > Options
Then I entered the following in the "nginx Directives" field:
# redirect stuff to the public inner folder
location / {
    root {DOCROOT}/public;
    try_files /public/$uri /public/$uri/ /public/index.php?$query_string;
}

# merged the stuff people suggest for Laravel into the php block;
# mind the 'merge' keyword, which did the trick
location ~ \.php$ { ##merge##
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_intercept_errors off;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 4 16k;
}
There is a slight problem with Danilo's answer: PHP ran, but assets like JS/CSS/images stopped loading. Adding the following to the nginx directives inside ISPConfig works for me:
location / {
    root {DOCROOT}/public;
    try_files $uri public/$uri/ /public/index.php?$query_string;
}
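For anyone combining the two answers, a minimal sketch of the directives I would expect to end up with (hedged: {DOCROOT} is ISPConfig's placeholder for the site's document root, and this is untested against ISPConfig's merge logic):

location / {
    root {DOCROOT}/public;
    # serve real files (js/css/images) directly,
    # otherwise fall back to Laravel's front controller
    try_files $uri /public/index.php?$query_string;
}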
I am no expert on this, but from my past experience, if I had two domains I would define server blocks for each in two different files and place them in /etc/nginx/sites-available/site1.com and /etc/nginx/sites-available/site2.com.
But it looks like you already have a website that you access using mywebsite.com, which is located at /var/www/mywebsite.com/web (see the root value of your configuration file).
Now you have installed Laravel in a test folder at /var/www/mywebsite.com/test.
To access this, you can try adding the following at the end of your ISPConfig file.
Note how I used the relative path to Laravel's public folder from the root of the server block.
location /../test/public {
    try_files $uri $uri/ /index.php$is_args$args;
}
For a more detailed tutorial, try Nginx Server Block Setup.
Hope this helps,
K
I have a Zend 1.x application and would like to add a header to one specific JSON request (not to all JSON requests). For example, anything requesting /data.json should have the Access-Control-Allow-Origin header set.
I tried this config, but it is not working. (I tried adding generic headers and those work, so it seems that all the required modules are installed.) How would it be possible to add the header to just the /data.json request?
location /data.json {
    add_header Access-Control-Allow-Origin *;
    add_header Cache-Control "public";
    try_files $uri $uri/ /index.php$is_args$args;
}
# this part actually serves the Zend files
## Parse all .php files in the directory
location ~ \.(php|phtml)$ {
    fastcgi_pass generic-fpm;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
So far, I have been adding the header directive in the PHP script that generates the JSON response:
<?php
header('Access-Control-Allow-Origin: *');
?>
You forgot the ~ in your location expression!
location ~ ^/data\.json {
    add_header Access-Control-Allow-Origin *;
    add_header Cache-Control "public";
    try_files $uri $uri/ /index.php$is_args$args;
}
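A sketch of another approach, in case the regex variant still loses the header on the try_files internal redirect: handle /data.json entirely inside an exact-match location and hand it straight to the front controller, so the add_header directives stay attached to the response (this assumes index.php sits at $document_root and reuses the generic-fpm upstream from your config; untested):

location = /data.json {
    add_header Access-Control-Allow-Origin *;
    add_header Cache-Control "public";

    # pass directly to Zend's front controller; REQUEST_URI
    # (set by fastcgi_params) still carries /data.json, so the
    # Zend router can dispatch the request as usual
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass generic-fpm;
}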
I have Mercurial installed and would like to use hgweb to also show the repository on a web page.
I am using nginx and I can access the page where the repository is listed, but it seems to come out just empty (I can see the header columns [name, description and so on], but I cannot see the contents of the repo).
I am using hgweb.cgi, and in it I set config = "/var/hg/hgweb.config" so it reads the config I defined like this:
[paths]
/repository=/var/hg/myrepository
[extensions]
hgext.highlight =
[web]
style = gitweb
allow_push = *
Note: the directory /var/hg/myrepository/ contains the .hg dir.
UPDATE
I ran more tests and it seems that there are some errors in the nginx config that are preventing the setup from working. Here is what I have:
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /usr/local/nginx/conf/server.crt;
    ssl_certificate_key /usr/local/nginx/conf/server.key;
    ssl_session_timeout 20m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    server_name webhg.server.com *.webhg.server.com;
    root /var/www;

    location / {
        fastcgi_pass hg-fpm;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_hg;
        auth_basic "private!";
        auth_basic_user_file /var/hg/hg.htpasswd;
    }

    location /static/ {
        rewrite /static/(.*) /$1 break;
        root /usr/share/mercurial/templates/static;
        expires 30d;
    }

    location ~ /\. { deny all; }
}
## Redirect for insecure requests
server {
    server_name webhg.server.com;
    listen 80;
    rewrite ^(.*) https://$host$1 permanent;
}
I can access webhg.server.com successfully, and the repository is listed with its last-updated date (so hgweb reads this somehow). But when I click on the repository name or any links on the page (RSS feeds and so on), I just get sent back to the main page.
Found the issue.
It turned out that some fastcgi_param entries were missing from the config.
The params are now:
fastcgi_split_path_info ^(/hg)(.*)$;
fastcgi_param SCRIPT_FILENAME $document_root/hgweb.cgi;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param AUTH_USER $remote_user;
fastcgi_param REMOTE_USER $remote_user;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
And this one is really important:
fastcgi_param REQUEST_METHOD $request_method;
It avoids the error "abort: HTTP Error 405: push requires POST request" when pushing over SSL.
With all this in place, I can browse the Mercurial repository.
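For reference, a sketch of how those params sit inside the location block from the question, trimmed to the essentials (hg-fpm and the /var/www root are as above):

location / {
    auth_basic "private!";
    auth_basic_user_file /var/hg/hg.htpasswd;

    # everything is handled by hgweb.cgi; the URI part captured
    # after the split becomes PATH_INFO for hgweb
    fastcgi_split_path_info ^(/hg)(.*)$;
    fastcgi_param SCRIPT_FILENAME $document_root/hgweb.cgi;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param REQUEST_METHOD $request_method; # needed so push can POST
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REMOTE_USER $remote_user;
    fastcgi_pass hg-fpm;
}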
I have a wildcard DNS entry, so *.mydomain.tld is directed to my server.
I'm using nginx.
I have two conf files, titled:
default
myconf.conf
My conf files look like this:
default:
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/website;
    index index.html index.htm;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
    }
}
myconf.conf:
server {
    listen 80;
    #listen [::]:80 default_server ipv6only=on;

    root /home/me/www/website;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    # orig # server_name localhost;
    server_name me.mydomain.tld;

    access_log /home/me/logs/me.mydomain.tld.access.log;
    error_log /home/me/logs/me.mydomain.tld.error.log warn;

    location / {
        try_files $uri $uri/ $uri.php?$args;
    }

    # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
When I browse to the domains, these are the conf files that serve each request:
me.mydomain.tld loads the root directory defined in myconf.conf
mydomain.tld loads the root directory defined in default
anything.mydomain.tld loads the root directory defined in myconf.conf
What is going wrong, such that default is not acting as the catch-all it should be? anything.mydomain.tld should be loading the root directory from the default conf file.
In your default config file, you have to specify default_server on both listen lines; you also need to remove the server_name line:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/website;
    index index.html index.htm;

    #server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
    }
}
The underscore you are using for the server_name is not actually a wildcard (if that was your intent). From the nginx Server Names documentation:
There is nothing special about this name, it is just one of a myriad of invalid domain names which never intersect with any real name. Other invalid names like “--” and “!@#” may equally be used.
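If a true wildcard match was the intent, nginx does support wildcard server names directly; a sketch:

server {
    listen 80;
    # matches mydomain.tld and any subdomain of it
    server_name mydomain.tld *.mydomain.tld;
    root /var/www/website;
    index index.html index.htm;
}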