I've been having intermittent problems with my application not fully loading its views.
I am running a Debian server with php-fpm and nginx (PHP 5.6.8 and nginx 1.8.0), both compiled from source. On top of that I am running Laravel 4.2.
So far I've had the problem in both Chrome and Firefox (Chrome simply stops loading and shows the error; Firefox does not show an error but renders an incomplete version of the view).
So far I've checked the permissions of both nginx and PHP; they both run as the same user (www-data:www-data).
My php-fpm socket is configured as:
[sitename]
listen = /var/run/php5-fpm/sitename.sock
listen.backlog = -1
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
; Unix user/group of processes
user = folderuser
group = www-data
; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; Pass environment variables
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
Note that I set user to folderuser because the folder where the files for the site are located is owned by folderuser (folderuser:www-data).
Furthermore, permissions inside the Laravel folders are configured as 755 (775 for the cache and upload folders so that www-data can write cache files).
I have disabled any kind of server-side PHP cache (except for Zend OPcache).
I've also tried disabling the "prefetch resources to load pages more quickly" feature in Chrome, which did not solve the problem.
As a last resort I've tried this solution:
/*
|--------------------------------------------------------------------------
| Fix for Chrome / PHP 5.4 issue
| http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
|--------------------------------------------------------------------------
*/
App::after(function($request, $response)
{
    // recompute and set Content-Length explicitly so the client
    // never receives a mismatched chunked response
    $content = $response->getContent();
    $contentLength = strlen($content);
    $response->header('Content-Length', $contentLength);
});
I also tried some variants of this script, but then I got content-length mismatches (more often than the net::ERR_INCOMPLETE_CHUNKED_ENCODING errors).
So to sum up: I've checked permissions and user/group settings server-side, I've disabled server-side caching (except for Zend OPcache), I've messed around with Chrome settings, and I've tried a script for Laravel, none of which solved the issue. Note that it happens at random intervals on random pages of the site.
I really do not know what the next step towards solving my problem would be, as the solutions above are the only ones I've found on the internet.
I would really appreciate some help.
Edit: I have a beta version of the same application running off another server with the exact same configuration (the only difference is in hardware: more memory), and the issue does not occur there.
Also, I forgot to mention that the application does not currently run over HTTPS. The beta version, however, is running with HTTPS.
Edit: The server where the issue is present has 2048 MB of RAM; the beta server has 8192 MB.
Edit: I inspected the response with Fiddler when the error occurred; it simply cuts off the response at some arbitrary point.
You might want to check whether the folder /var/lib/nginx is owned by www-data too. I had this problem where, when the response page was too big, the Nginx worker process tried to use this folder and failed, because it was owned by nginx while the worker process ran as www-data. Doing chown -R www-data:www-data /var/lib/nginx fixed the problem.
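For reference, here is a quick way to confirm that this is what's happening (the /var/lib/nginx path is the Debian default; a from-source build may use a different temp path, so treat these paths as assumptions):
# see who owns the directory Nginx buffers large responses into
ls -ld /var/lib/nginx
# watch the error log for "Permission denied" while reproducing the truncation
tail -f /var/log/nginx/error.log | grep -i denied
# if the owner does not match the worker user, hand the directory over
chown -R www-data:www-data /var/lib/nginx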
If anyone finds this in the future: my net::ERR_INCOMPLETE_CHUNKED_ENCODING errors were the result of having run out of disk space. Have a look at your disk usage and see if that's the cause!
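A minimal sketch of that check (standard coreutils, nothing site-specific):
# free space per filesystem
df -h
# inode usage; a full inode table also makes writes fail even with free space
df -ih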
I've seen a similar problem on my Nginx server running on the latest Debian. I'm running a WordPress site with Advanced Custom Fields installed. The Advanced Custom Fields documentation says the problem could potentially be the max_input_vars value in the php.ini file. I increased my value from 1000 to 3000 and that fixed the issue on one of my sites.
You can check out this link to see if it might help you: http://www.advancedcustomfields.com/faq/limit-number-fields/
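If you want to try the same change, it is a single directive; the value 3000 is just what worked for me, and your php.ini location and service name may differ:
; in php.ini (run "php --ini" to find the loaded file)
max_input_vars = 3000
Then restart PHP-FPM (e.g. service php5-fpm restart) so the new value is picked up.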
Related
Hi, I am running Jellyfin on my censored.de domain.
Last night I tried adding some buttons to the interface and it worked great, but at some point I replaced a file in the web folder and now everything is broken.
(Only the login screen works, but that's NOT because of broken code! I put back the old file.)
Login for you: user "guest", password "guest".
To understand what I did:
I used Samba to connect from my PC to the server with the following permissions:
smb.conf
[*(Censored)*]
path = /
public = yes
writeable = yes
; browseable = yes
valid users = *(Censored)*
force user = root
force group = root
create mask = 0770
directory mask = 0771
force create mode = 0660
force directory mode = 0770
Then I edited a file locally and replaced the one in my website.
I already changed the new file's permissions because they weren't the same as the other files'.
Earlier I had also failed to add my own logo and thought it might be the image type, but now I realize any new file gets rejected. That's why I guessed it might be HTTPS-related, BUT when I connect via HTTP I have the same issue, so that guess is probably wrong.
The server is running behind a reverse proxy and I certified it with certbot --apache.
PS: By the way, this is not a problem specific to this website. My other website running on this server has the same problem: I couldn't make the background show up, and when I replaced the index.html file the page stopped being visible at all, and it still isn't showing. I don't really know what to do, but something about my configuration seems off.
I am pretty new and have no experience with webhosting, so please excuse me for my basic level of understanding :)
Thanks in advance for your help,
Simon Wolf
I finally understood that my setup is actually fine. The jellyfin-web package needs to be rebuilt if you add or replace files, for them to function properly.
Just be careful only to edit files, not to replace them...
The solution for me was simply uninstalling and reinstalling the jellyfin-web apt package, and it worked again!
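For future readers, this is roughly what that reinstall looks like on a Debian-based install (package name taken from the above; sudo may or may not be needed in your setup):
sudo apt remove jellyfin-web
sudo apt install jellyfin-web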
I want to use Dovecot as a local IMAP server to serve my offlineimap-synced mail to Gnus. This is on a NixOS installation. I have installed the dovecot package via my configuration.nix; however, I am having trouble configuring it, seeing where the log files are, etc. I copied the configuration files (dovecot.conf and config.d) from /nix/store/dovecot/share/doc/dovecot/example-config. I can then modify the files slightly to allow plain login (no SSL required), just to test. I can start Dovecot (as root). I see the process running, and the relevant ports, e.g. 143, are open and listening. Everything looks OK, no crashes. However, when I telnet localhost 143 (for IMAP) to test, I am connected and then immediately the connection is closed by the foreign host. This is not what I expect from the Dovecot wiki; I should get a statement that Dovecot is ready …
Additionally, the command doveadm log find responds with:
Looking for log files from /var/log
Debug: Not found
Info: Not found
etc.
So there seem to be no log files. journalctl -u dovecot2.service shows only "logs begin …, end at …".
No entries, so no issues? I cannot find a log file that tells me why the connection on port 143 is immediately closed.
I am at a loss as to what is going on. Does it have to do with users needing to be set up, etc.? I'd appreciate any help. I can post doveconf -n output if needed.
As written in the configuration file for dovecot2 here: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/mail/dovecot.nix#L344 since dovecot2 is the name of the service, journalctl -u dovecot2 should be the right command to run to view its logs. That said, if for some reason there's a bug in the configuration module, plain journalctl will show you the complete log, Dovecot's included.
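In practice that means something like (standard journalctl flags):
# follow the unit's log live while you reproduce the telnet test
journalctl -u dovecot2 -f
# or scan the complete journal if the unit filter shows nothing
journalctl --no-pager | grep -i dovecot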
It would be nice if you had posted your configuration here, because given that the configuration entries for Dovecot are those listed here: https://nixos.org/nixos/options.html#services.dovecot2 it's not clear to me what you mean when you write "... I copy the configuration files (dovecot.conf and config.d) from /nix/store/dovecot/share/doc/dovecot/example-config ...". The configuration in NixOS consists for the most part of Nix source files that specify entries in the NixOS configuration tree I linked before.
I was doing things completely wrong. I now enable the Dovecot service in my configuration.nix file, and it sets up the correct environment with all config files in their correct place. To change options in the config file, e.g. the maildir location, I now specify them in configuration.nix as well. Thanks for your answer.
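For anyone else landing here, a minimal sketch of that declarative setup (option names as in the NixOS dovecot2 module linked above; exact attributes can vary between NixOS releases, and the mail location shown is only an example):
# in configuration.nix
services.dovecot2 = {
  enable = true;
  enableImap = true;
  mailLocation = "maildir:~/Maildir";  # example value, adjust to your synced maildir
};
After a nixos-rebuild switch, the service and its config files end up in the right place automatically.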
I'm having this problem across 2 different machines now.
Version 63.0.3239.84 (Official Build) (64-bit)
OSX 10.12.6
Chrome is persistently loading old cached files on http://localhost:3000. The only way I can stop it is to use incognito. If I use cmd + shift + r it works for a single refresh, then it goes back to the old files on reload.
I typically have the inspector open, so I've tried ticking "Disable cache", but that does nothing. I've also tried deleting my cached files through Chrome's settings; that does nothing either.
Anything I'm missing here?
Finally realised what it was. A previous server I was running on that port had a manifest file and the corresponding service workers, which meant Chrome was loading those cached files by default.
To fix it I went into Developer Tools > Application > Clear storage.
It now works as expected.
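For anyone who prefers the console to the Application panel, the same cleanup can be done with the standard Service Worker API (run this on the affected page):
// unregister every service worker registered for this origin
navigator.serviceWorker.getRegistrations().then(function (registrations) {
  registrations.forEach(function (registration) {
    registration.unregister();
  });
});
A hard refresh afterwards should then fetch everything from the server again.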
Are you sure it is the Chrome cache? Check whether it really is Chrome caching by fetching the page another way, with curl for example:
curl http://localhost:3000
If the content is still stale there, it is not a Chrome problem but something else, such as your web server.
If you are running Nginx, don't worry: it is an Nginx configuration directive called sendfile, and you can change it easily with the following steps.
Step 1 - Open the Nginx configuration with your text editor (vim in my case):
vim /etc/nginx/nginx.conf
Step 2 - Search for the line with this text:
sendfile on;
Step 3 - Change it to off:
sendfile off;
Step 4 - Restart Nginx:
/etc/init.d/nginx restart
Note: if you are using a virtual environment like Vagrant, reload it:
vagrant reload
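Before restarting, it may also be worth syntax-checking the edit and then re-running the curl test from above to confirm fresh content is served:
# validate the config, then restart
nginx -t && /etc/init.d/nginx restart
# fetch the page again, bypassing the browser entirely
curl -s http://localhost:3000 | head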
In my case, Chrome was generating local overrides, for some reason. Before discovering this, I switched ports several times to get past the issue, and Chrome had created a local override of my index.html file for each port. After deleting the local overrides, the actual files from the server started coming through just fine.
I'm using BrowserSync with Gulp to live-reload a site on a local machine when specific kinds of files are changed. I have the following snippet in my gulpfile:
var gulp = require('gulp');
var browsersync = require('browser-sync').create();

gulp.task('browser-sync', function() {
    browsersync.init({
        proxy: "mySite:8080",
        files: ["./*.html", "dev/*.css"]
    });
});
When changing (and saving) any of the above kinds of files, I get an output like this in my terminal:
[BS] Serving files from: ./
[BS] Watching files...
[BS] File changed: index.html
[BS] File changed: dev\styles.css
All the while, the site reloads as expected, but its content does not reflect the changes that were made. I can't figure out what I am doing wrong here. Any hint appreciated!
UPDATE
I forgot to mention that my host machine is running Windows 10 and my guest machine is running Ubuntu 14.04.4 LTS. The VM provider is VirtualBox.
Initially, I was using the default config.vm.synced_folder method. I had this on my vagrantfile:
config.vm.synced_folder "/Path/To/My/Host/Website/Folder/", "/usr/nginx/html/mywebsite/"
I've since tried using NFS, with the following configuration:
config.vm.synced_folder "/Path/To/My/Host/Website/Folder/", "/usr/nginx/html/mywebsite/",
:type => :nfs,
:mount_options => ['nolock,vers=3,udp,noatime,actimeo=1']
Since my host is running Windows, I installed the vagrant-winnfsd plugin, which adds support for NFS. But now Vagrant halts when it tries to mount the NFS shared folder.
In addition, since I was getting the following error on vagrant up: GuestAdditions versions on your host (5.0.16) and guest (4.3.36) do not match, I installed the vagrant-vbguest plugin in order to keep the VirtualBox Guest Additions up to date. To no avail either: Vagrant still freezes when it tries to mount the NFS shared folder.
The title and the tags say you're using Vagrant, even though it's not mentioned in your question.
Make sure your changes are being synced to the VM. Have a look at the vagrant documentation to select the type of synced folders that will work for your situation. Their basic example is as follows:
Vagrant.configure("2") do |config|
# other config here
config.vm.synced_folder "src/", "/srv/website"
end
You can vagrant ssh and check the files manually to make sure they match, and that the synced folders are working as expected.
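Something like this is usually enough to spot a sync problem (the guest path comes from the example above):
# from the host
vagrant ssh
# then, inside the guest, compare against the host copy
ls -la /srv/website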
UPDATE
Based on the new information and comment, I would recommend using rsync as the shared folder method.
config.vm.synced_folder "Host/Path/", "/Guest/Path", type: "rsync",
rsync__exclude: [ '.git', 'logs'],
rsync__args: ["--verbose"],
rsync__chown: false
I have never found a way to make NFS play well (and perform well) if Windows is in the mix.
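One note on that choice: rsync shared folders are a one-shot copy by default, so you typically keep the built-in watcher running while you work (both are standard Vagrant subcommands):
vagrant rsync        # one-off sync from host to guest
vagrant rsync-auto   # watch the host folders and re-sync on change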
It so happens that the problem was related to VirtualBox, as explained here. Since I'm running Nginx on a virtual machine in VirtualBox, the solution to my problem was to comment out the sendfile directive in nginx.conf, or simply set it to off, like this:
sendfile off;
The same issue is reported here, and here. As well as in the Vagrant docs, which state that "There is a VirtualBox bug related to sendfile which can result in corrupted or non-updating files."
I'm getting a 500 Internal Server Error while trying to install Bolt.
First time Bolt user (been using Drupal for a few years).
I'm running on a VPS (with cPanel/WHM).
PHP version 5.4.37, Apache version 2.4.12
PHP memory_limit = 128M
PDO extension, curlssl extension, and GD extension are enabled
Chrome version 40.0.2214.111
mod_rewrite, SQLite, and MySQL 5.6.22
Downloaded the latest version and installed the traditional way (FTP)
Unzipped and updated permissions
.htaccess is there and looks the same as the one referenced on the Bolt installation page
I tried a MySQL database, as well as leaving it as-is to use the SQLite database
Checked host configuration and AllowOverrides is enabled
Tried enabling RewriteBase in .htaccess, as well as the "FallbackResource /index.php" method
Bolt is in the root directory (not a subfolder)
I have PHP compiled as FCGI with suEXEC on and Ruid2 off.
All I get is the 500 Internal Server Error. What am I missing?
Had the same problem with Bolt 3.0.0 today.
It was solved by removing
<IfModule mod_negotiation.c>
Options -MultiViews
</IfModule>
from .htaccess
Looks like I just figured it out.
I disabled "Zend Guard Loader" on my VPS (cPanel/WHM >> EasyApache) and now I'm good to go!
I had a similar problem with Bolt 2.2.20 today. If you're using cPanel, look for something like 'jail php for WordPress'. Disable that and you'll be ready to go.
Here's the description shown:
This plugin will jail anonymous web page requests from users that are not logged in. Jailed requests will be read-only, and Linux will prevent writes to the filesystem. This will prevent almost all hacks.