I'm using BrowserSync with Gulp to live-reload a site on a local machine when specific kinds of files are changed. I have the following snippet in my gulpfile:
var gulp = require('gulp');
var browsersync = require('browser-sync');

gulp.task('browser-sync', function() {
    browsersync.init({
        proxy: "mySite:8080",
        files: ["./*.html", "dev/*.css"]
    });
});
When changing (and saving) any of the above kinds of files, I get an output like this in my terminal:
[BS] Serving files from: ./
[BS] Watching files...
[BS] File changed: index.html
[BS] File changed: dev\styles.css
All the while, the site reloads as expected, but its content does not reflect the changes that were made. I can't figure out what I'm doing wrong here. Any hint appreciated!
UPDATE
I forgot to mention that my host machine is running Windows 10 and my guest machine is running Ubuntu 14.04.4 LTS. The VM provider is VirtualBox.
Initially, I was using the default config.vm.synced_folder method. I had this in my Vagrantfile:
config.vm.synced_folder "/Path/To/My/Host/Website/Folder/", "/usr/nginx/html/mywebsite/"
I've since tried using NFS, with the following configuration:
config.vm.synced_folder "/Path/To/My/Host/Website/Folder/", "/usr/nginx/html/mywebsite/",
:type => :nfs,
:mount_options => ['nolock,vers=3,udp,noatime,actimeo=1']
Since my host is running Windows, I installed the vagrant-winnfsd plugin, which adds NFS support. But now Vagrant halts when it tries to mount the NFS shared folder.
In addition, since I was getting the following error on vagrant up: GuestAdditions versions on your host (5.0.16) and guest (4.3.36) do not match, I installed the vagrant-vbguest plugin to keep the VirtualBox Guest Additions up to date. To no avail either: Vagrant still freezes when it tries to mount the NFS shared folder.
The title and the tags say you're using Vagrant, even though it's not mentioned in your question.
Make sure your changes are being synced to the VM. Have a look at the vagrant documentation to select the type of synced folders that will work for your situation. Their basic example is as follows:
Vagrant.configure("2") do |config|
# other config here
config.vm.synced_folder "src/", "/srv/website"
end
You can vagrant ssh into the guest and check the files manually to make sure they match and that the synced folders are working as expected.
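For example, a quick sanity check might look like this (the guest path here is the one from the example above; substitute the path from your own synced_folder line):
vagrant ssh
# inside the guest: confirm a file you just edited shows the new content
ls -l /srv/website/index.html
md5sum /srv/website/index.html   # compare against the host copy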
UPDATE
Based on the new information and comment, I would recommend using rsync as the shared folder method.
config.vm.synced_folder "Host/Path/", "/Guest/Path", type: "rsync",
rsync__exclude: [ '.git', 'logs'],
rsync__args: ["--verbose"],
rsync__chown: false
I have never found a way to make NFS play well (and perform well) if Windows is in the mix.
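One caveat worth noting: rsync-type folders are only synced when the machine boots (vagrant up / vagrant reload), so while you're editing you'll want the rsync watcher running in a separate terminal:
vagrant rsync        # one-off sync of the shared folders
vagrant rsync-auto   # watch the host folder and re-sync on every change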
It so happens that the problem was related to VirtualBox, as explained here. Since I'm running Nginx on a virtual machine in VirtualBox, the solution to my problem was to comment out the sendfile directive in nginx.conf, or simply set it to off, like this:
sendfile off;
The same issue is reported here and here, as well as in the Vagrant docs, which state that "There is a VirtualBox bug related to sendfile which can result in corrupted or non-updating files."
I'm using Xdebug 3.
I'm able to step over normally in index.php at the beginning of the request, until the request starts going through Laravel's complex routing and middleware system. After that, it breaks at every line and steps into every function.
What hasn't worked
Setting nginx root directory to the actual folder instead of symlink
Disabling the resolve and force break options in the PhpStorm debug settings
Clearing PHPStorm cache and re-indexing
Removing any vendor libraries from "Excluded folders"
Removing the profile option from xdebug.mode in xdebug.ini
Disabling xdebug.start_upon_error
Disabling Clockwork
Debugger validation
Here's my xdebug.ini:
zend_extension=xdebug.so
; https://xdebug.org/docs/install
; xdebug.mode = profile
; Uncomment the following if you want to profile with Clockwork:
; xdebug.mode=debug,profile
xdebug.mode=debug
xdebug.start_with_request = trigger
xdebug.client_host = 127.0.0.1
xdebug.client_port = 9003
; I think it might have problems writing to project folders in WSL so use /var/log
xdebug.log="/var/log/xdebug.log"
xdebug.idekey = PHPSTORM
xdebug.discover_client_host=true
Here are the PhpStorm Debug Settings:
The problem ended up being that I had overridden a Laravel library file with a custom version using the "files" section in composer.json. Apparently, Xdebug/PHPStorm got confused by the path mappings after that file was called.
I did the override long ago and it was never a problem for Xdebug before. I haven't updated PhpStorm or Xdebug, or changed anything else recently, so I'm still not sure why it suddenly started occurring.
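For reference, the kind of override I mean looks roughly like this in composer.json (the path and file name here are purely illustrative, not my actual project):
{
    "autoload": {
        "files": [
            "app/overrides/CustomRouter.php"
        ]
    }
}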
I'm having this problem across 2 different machines now.
Version 63.0.3239.84 (Official Build) (64-bit)
OSX 10.12.6
Chrome is persistently loading old cached files on http://localhost:3000. The only way I can stop it is to use incognito. If I use cmd + shift + r it works for a single refresh, then goes back to the old files on the next reload.
I typically have the inspector up, so I've tried ticking "Disable cache"; that does nothing. I've also tried deleting my cached files through Chrome's settings; that does nothing either.
Anything I'm missing here?
Finally realised what it was. A previous server I was running on that port had a manifest file and the corresponding service workers, which meant Chrome was loading those cached files by default.
To fix it I went into developer tools > application > clear storage.
It now works as expected.
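If you'd rather do it from the console instead of clicking through the UI, something along these lines should also drop the stale workers. Run it on the http://localhost:3000 page; this is just a sketch using the standard Service Worker API:
// unregister every service worker registered for this origin
navigator.serviceWorker.getRegistrations().then(function (registrations) {
    registrations.forEach(function (registration) {
        registration.unregister();
    });
});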
Are you sure it is the Chrome cache? Check whether it is really Chrome caching by fetching the page another way, with curl for example:
curl http://localhost:3000
If the content is still not updated there, it's not a Chrome problem but something else, such as your web server.
If you are running Nginx, don't worry: it's an Nginx configuration setting called sendfile, and you can change it easily with the following steps:
Step 1 - Open the nginx configuration with your text editor (vim in my case)
vim /etc/nginx/nginx.conf
Step 2 - Find the line with this text:
sendfile on;
Step 3 - Change it to off:
sendfile off;
Step 4 - Restart your nginx:
/etc/init.d/nginx restart
Note: if you are using a virtual environment like Vagrant, reload it:
vagrant reload
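To double-check that the change actually took effect after the restart, you can dump the running configuration and look for the directive (run this inside the VM if you're on Vagrant; the -T flag needs a reasonably recent nginx):
sudo nginx -T 2>/dev/null | grep sendfile
# expected output: sendfile off;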
In my case, Chrome was generating local overrides for some reason. Before discovering this, I switched ports several times to get past the issue, and Chrome had created a local override for my index.html file for each port. After deleting the local overrides, the actual files from the server started coming through just fine.
I want to enable the LDAP module on my XAMPP install on Windows 10. Here are the solutions I've tried:
Copy dll files to System and System32 and uncomment extension=php_ldap.dll in php.ini, both development and production.
Copy libsasl.dll to xampp/apache/bin
None of these worked: when I open phpinfo() there is no LDAP section, which means the extension hasn't been loaded. I also added PHP to the Windows PATH with no success, but either my approach is wrong or that wasn't the solution. Any help appreciated.
Make sure the path\to\xampp\php directory has the following files
libeay32.dll
libsasl.dll
ssleay32.dll
Usually, you can find these files in path\to\xampp\sendmail - this library also uses them. But if not, try to search for them inside the xampp directory.
Uncomment or add the ldap extension in the php.ini (path\to\xampp\php\php.ini) file
extension=ldap
Restart the server
Make sure the path\to\xampp\php directory is set in the system environment variable PATH. To know how to do it, see this post.
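Once Apache is restarted, you can also confirm the extension is loaded from the command line (this assumes the php directory is on your PATH, as in the last step):
php -m | findstr /i ldap
If nothing comes back, the CLI isn't loading the extension; php --ini will show you which php.ini it is actually reading.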
I just ran into the same issue and the link you provided How to enable LDAP extension in XAMPP environment ended up being the solution for me.
I copied libeay32.dll and ssleay32.dll from C:/Ampps/php to C:/Windows/System32. I made sure neither of these files was in C:/Windows/System. From there I enabled extension=php_ldap.dll in the php.ini file. Ampps has a list where you can enable php.ini DLLs, and if I remember right so does XAMPP. The last step is to restart Apache and you should be good to go.
I'm using Windows 10 with Ampps instead of XAMPP, but I have to think they are pretty close.
I've been having some problems with my application not loading the views (sometimes).
I am running a Debian server with php-fpm and nginx (PHP 5.6.8 and nginx 1.8.0), both compiled from source. On top of that I am running Laravel 4.2.
So far I've had the problem in both Chrome and Firefox (Chrome simply stops loading and shows the error, Firefox does not show an error but renders an incomplete version of the view).
So far I've checked the permissions of both nginx and PHP, they both run as the same user (www-data:www-data).
My php-fpm socket is configured as:
[sitename]
listen = /var/run/php5-fpm/sitename.sock
listen.backlog = -1
listen.owner = www-data
listen.group = www-data
listen.mode=0660
; Unix user/group of processes
user = folderuser
group = www-data
; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; Pass environment variables
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
Note that I set user to folderuser because the folder where the files for the site are located is owned by folderuser (folderuser:www-data).
Furthermore, permissions inside laravel folders are configured as 755 (775 for cache and upload folders so that www-data can write cache files)
I have disabled any kind of serverside php cache (except for zend opcache).
I've also tried disabling "prefetch resources to load pages more quickly" feature in chrome, which did not solve the problem.
As a last resort I've tried this solution:
/*
|--------------------------------------------------------------------------
| Fix for Chrome / PHP 5.4 issue
| http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
|--------------------------------------------------------------------------
*/
App::after(function($request, $response)
{
$content = $response->getContent();
$contentLength = strlen($content);
$response->header('Content-Length', $contentLength);
});
And some variants of this script, but I got content-length mismatches (even more often than the net::ERR_INCOMPLETE_CHUNKED_ENCODING errors).
So to sum up, I've checked permissions and user/group settings server-side, I've disabled server-side caching (except for Zend OPcache), I've messed around with Chrome settings, and I've tried a script for Laravel; none of these solved the issue. Note that the issue happens at random intervals on random pages of the site.
I really do not know what the next step towards solving my problem would be as the solutions above are the only ones I've found on the internet.
I would really appreciate some help.
Edit: I have a beta version of the same application running off another server with the exact same configuration (the only difference is in hardware; it has more memory), and the issue does not present there.
Also, I forgot to mention that the application does not run with HTTPS (currently). The beta version, however, is running with HTTPS.
Edit The server where the issue is present has 2048 MB RAM, the beta server has 8192 MB RAM.
Edit: I inspected the response with Fiddler when the error occurred; it simply cuts off the response at some point for no apparent reason.
You might want to check if the folder /var/lib/nginx is owned by www-data too. I had this problem that, when the response page was too big, the Nginx worker process tried to use this folder and failed, because it was owned by nginx and the worker process ran under www-data. By doing chown -R www-data:www-data /var/lib/nginx, the problem was fixed.
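For example (Debian-style paths; adjust the user to whatever your worker processes actually run as):
ls -ld /var/lib/nginx                        # check the current owner
chown -R www-data:www-data /var/lib/nginx    # give it to the worker user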
If anyone finds this in the future, my net::ERR_INCOMPLETE_CHUNKED_ENCODING errors were the result of having run out of disk space. Have a look at your disk usage and see if that's the cause!
I've seen a similar problem on my Nginx server running on the latest Debian. I'm running a WordPress site with Advanced Custom Fields installed. The Advanced Custom Fields documentation says the problem could potentially be the max_input_vars value in the php.ini file. I increased my value from 1000 to 3000 and that fixed the issue on one of my sites.
You can check out this link to see if it might help you. http://www.advancedcustomfields.com/faq/limit-number-fields/
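For reference, the relevant php.ini line looks like this (3000 is just the value that worked for me; adjust it to whatever your forms actually need, and restart php-fpm or Apache afterwards):
max_input_vars = 3000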
Newbie question - my first attempt at Coldfusion/MySQL and getting it to run locally.
I'm running Apache web server (2.2). I have imported two .sql files into MySQL Workbench (5.2), forward-engineered a database from these, and set up a working database connection and MySQL server, which is also running. In the ColdFusion 8 Administrator I added my database as a data source.
I thought this would be enough :-)
Still, at http://localhost I only get an index of all the files in my Apache htdocs folder. If I open one of the files it just shows the ColdFusion markup/HTML source code; nothing is parsed.
Thanks for any hints on what I could be missing.
EDIT:
Three questions while trying to implement this:
1. Can I load modules using absolute paths, like D:/Coldfusion8/lib...?
2. My lib/wsconfig folder only contains a dll file named jrunwin32.dll. Should I be using this?
3. The lib/wsconfig folder does not contain a jrunserver.store file. Not sure what to do here
It sounds as if your Apache config is not correct, since the .cfm files are not being handed off to ColdFusion for processing.
First of all, is there a specific reason for using CF8? CF9 has been around for a while, so if going from scratch then I'd advise taking a look at that instead.
That aside, I'd check for the following in your httpd.conf (or whatever your apache config file is named)
Firstly, that index.cfm is accepted as a DirectoryIndex (you can have other indexes as well):
DirectoryIndex index.cfm
Secondly, that the JRun handler is configured properly (again in httpd.conf):
LoadModule jrun_module /opt/coldfusion8/runtime/lib/wsconfig/1/mod_jrun22.so
<IfModule mod_jrun22.c>
JRunConfig Verbose false
JRunConfig Apialloc false
JRunConfig Ignoresuffixmap false
JRunConfig Serverstore /opt/coldfusion8/runtime/lib/wsconfig/1/jrunserver.store
JRunConfig Bootstrap 127.0.0.1:51801
AddHandler jrun-handler .jsp .jws .cfm .cfml .cfc .cfr .cfswf
</IfModule>
This is taken from my development VM, I have CF8 as a single-server install in /opt/coldfusion8/
Once you have those lines in (with the paths/ports etc appropriate for your environment) restart apache and it should work fine.
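Once Apache is back up, a one-line test page is a quick way to confirm CFML is actually being parsed: drop something like this into htdocs as test.cfm and request http://localhost/test.cfm. If you see a timestamp rather than the raw tag, the handler is wired up.
<cfoutput>#now()#</cfoutput>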
If you have installed CF8 as a Multiserver (or similar) install, then please specify and I will adjust my advice accordingly.