I haven't been able to find straightforward instructions on how to avoid caching problems when developing a JS app with Vue CLI.
For example, when we deploy a new version of the app with npm run build, we can see that a new app.xxxxxx.js is deployed, where the xxxxxx part is a new hash that I guess webpack is generating.
We host this on a Windows Server 2012 IIS server.
Now, when I tell my customer we have solved the latest issues found in the app, it seems they have caching problems: they still get the previous version, even if they close and reopen the browser.
Is there any way to avoid this behaviour?
Try the steps below, which disable caching in IIS:
1) Open IIS Manager.
2) Select the site and open the "HTTP Response Headers" feature.
3) Click the "Set Common Headers" link in the Actions pane.
4) Check "Expire Web content" and select the "Immediately" radio button (the equivalent web.config setting is sketched after this list).
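If you prefer to keep this in source control, the same setting can go in web.config. A minimal sketch, assuming the Vue CLI build puts an index.html at the site root; scoping no-cache to the entry page forces browsers to revalidate it, while the hashed app.xxxxxx.js bundles it references stay cacheable:

    <!-- web.config: never cache index.html; the hashed bundles stay cacheable -->
    <location path="index.html">
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="DisableCache" />
        </staticContent>
      </system.webServer>
    </location>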
You could also configure output caching in IIS:
Configure IIS Output Caching
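If you go that route, a caching profile in web.config can explicitly turn output caching off for the bundles. A sketch, assuming you want to target the .js extension:

    <!-- web.config: tell IIS not to output-cache JS responses -->
    <system.webServer>
      <caching>
        <profiles>
          <add extension=".js" policy="DisableCache" kernelCachePolicy="DisableCache" />
        </profiles>
      </caching>
    </system.webServer>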
Is there a way in VS 2015 to start a Browser Link debug session with specific command-line options on Chrome, like "--unsafely-treat-insecure-origin-as-secure=http://example.com"? My objective is to debug a domain & subdomain "combo" in an insecure (http://) environment, but Google now limits "powerful features" to secure environments (https://), and this is one of their preferred workarounds. See more HERE. I know that I can probably start a manual instance of Chrome pointed at the debug server:port, but it would be nicer if it were integrated. I've text-searched the entire project for "chrome.exe" as a start, but nothing was there.
I've tried to set up a domain & subdomain with SSL and I've failed miserably, so this is (hopefully) a workaround for now. :-)
Edit 2: Additional info: I use a machine-name:port setup along with a subdomain embedded in an iframe. Ex: http://MyMachine:50080, and the iframe is at 192.168.48.116.
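In the meantime, launching Chrome by hand with the flag would look something like this (a sketch; the install path is an assumption, and the flag only takes effect when paired with a fresh --user-data-dir):

    rem hypothetical manual launch until an integrated option turns up
    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --user-data-dir=C:\temp\chrome-debug --unsafely-treat-insecure-origin-as-secure=http://MyMachine:50080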
I am trying to get started on debugging my Polymer application. I have hand-crafted it by copying what I think the Polymer CLI's polymer init does.
I am not sure what is loading the service worker. The default one for development just makes a console.info() call saying it has been disabled for development.
When I use polymer serve to serve my application on localhost:8080, I get the console.info message, despite there being nowhere that I actually load the file service-worker.js. Because the application is much more complex (and I am trying to use HTTP/2), I have my own Node-based server as well. When I run that and then fetch my application in the browser, service-worker.js does not get loaded and run.
What is polymer serve doing to enable it?
It could be that a different application served from the same origin (e.g. localhost:8080) registered and installed a service worker.
Open up the Application panel in Chrome Canary to inspect/delete the service worker.
If you can't access Chrome Canary, open chrome://serviceworker-internals, find the scope that matches your app, and click Unregister. There's also an option at the top of serviceworker-internals which lets you open a DevTools window and pause JS on the SW. Enable that option and you'll be able to see which SW is running.
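If you'd rather do it from the console, something like this lists and unregisters everything on the current origin (a sketch using the standard service worker API):

    // run in the DevTools console on localhost:8080
    navigator.serviceWorker.getRegistrations().then(function (registrations) {
      registrations.forEach(function (registration) {
        console.log('unregistering scope:', registration.scope);
        registration.unregister();
      });
    });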
I am trying to follow the instructions from here to enable offline support (service worker) in my Polymer Starter Kit clone.
However, after making the changes in gulpfile.js, index.html and elements.html, I keep getting the following error whenever I refresh the page.
Also if I change the throttling setting to offline in Chrome Developer Tools and refresh, the page comes back with the "Unable to connect to the Internet" message so clearly the caching isn't working.
Is there anything else that I need to do?
Update: I just decoded the URL in the error message (i.e. http://localhost:5000/bower_components/platinum-sw/platinum-sw-register.html&clientsClaim=true&skipWaiting=true&version=1.0) and opened it in Chrome, and got a 404 error. If I remove everything after .html, though, the file can be found.
I was having the same issue, and it turned out it's because platinum-sw-cache is set to disabled in the dev environment, which means the service worker will not work if you run
gulp serve
So to test the PSK offline support, you need to run
gulp serve:dist
You can also ignore that ERR_FILE_EXISTS error, as explained by pirxpilot.
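To confirm the service worker actually took over after gulp serve:dist, you can check from the console (a sketch using the standard service worker API):

    // run in the DevTools console after the page loads
    navigator.serviceWorker.getRegistration().then(function (registration) {
      if (registration && registration.active) {
        console.log('service worker active with scope:', registration.scope);
      } else {
        console.log('no service worker controls this page');
      }
    });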
I have just set up Hudson and have begun playing around with it.
I have downloaded email-ext.hpi into the folder $HUDSON_HOME\plugins.
I have restarted Hudson after step 1. (I am following this manual method because, for proxy-setting reasons, I am unable to use the automatic way of installing plugins via the "Manage Hudson" page.)
I don't see any errors when Hudson starts. In fact, I see the line
INFO: Started all plugins
BUT:
When I open a project configuration page, I do not see the promised option "Editable Email Notification".
FYI:
1. I am able to set up and run a few basic test builds, and they run fine.
2. I am also able to configure and receive the default Hudson emails for failures and subsequent successes. (This confirms the SMTP settings.)
3. I was also able to set up the Subversion tag .hpi in the same way as detailed above, and that works fine as well!
What am I missing? Thanks in advance for any help!
EXTRA INFO:
Hudson version - 1.379 running on Windows XP
OK, I figured out a workaround (although I still need to dig into why this is a problem). Recording it here for anyone else that may face this issue.
The plugin, when copied into $HUDSON_HOME\plugins, was somehow not really being activated/recognized. But when I also copied it to C:\Documents and Settings\mylogin\.hudson\plugins and restarted the Hudson service, voila! It worked.
If anyone knows why this might have occurred, kindly record it here for reference. Thanks.
To install a plugin you should use the easy route. In Hudson, go to 'Manage Hudson' -> 'Manage Plugins' -> 'Advanced' (it's a tab) and use the 'Upload Plugin' option.
Then follow the instructions. Usually you have to restart Hudson for the plugin to actually be picked up.
Way safer than messing around with the file system. In general the approach you took should have been correct, but there seems to be an issue with your $HUDSON_HOME. Have a look at the "Manage Hudson" -> "Configure System" page. What is the Hudson home directory displayed at the top of the page? I don't know what Hudson does if it can't access the home directory. My assumption here is that Hudson runs as a service under a user account rather than the local system account, and that you used a different account to copy the .hpi file.
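That would also square with the workaround above: the account running the service resolves its home to C:\Documents and Settings\mylogin\.hudson rather than the $HUDSON_HOME you copied into. If you ever run Hudson from a console, you can pin the home directory explicitly; a minimal sketch, assuming a hypothetical C:\hudson folder and a hudson.war launch:

    rem point Hudson at a fixed home so plugins load from a known place
    set HUDSON_HOME=C:\hudson
    java -jar hudson.war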
Install the Maven Legacy and Maven3 plugins.
Uploadify works under the Visual Studio dev server but not under IIS 7 (same machine), using Forms authentication. Does anyone have a working Uploadify configuration for IIS 7 where they save to a subfolder?
I'm using the Uploadify jQuery control for client-side uploads.
I think my IIS 7 configuration has issues with it. The Uploadify POST immediately returns an HTTP/1.1 302 Found, redirecting back to my login page.
I've tried to allow anonymous access in web.config to the upload section (subfolder) plus the page (script) that processes the image, using the location node (configuration ... location). It seems like the Uploadify POST is immediately blocked.
Again, this worked fine just using Visual Studio 2008, but when I run the site under IIS on the same machine I get the redirect.
Your thoughts/ideas are very welcome!
It seems that the many nested master pages included Forms Authentication code to ensure the user was authenticated. This was the source of the redirect ...
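For anyone else hitting this: the location-based exemption the question mentions would typically look like the sketch below (Upload.ashx is a hypothetical handler name). Note that Flash-based uploaders can fail Forms Authentication anyway, because the Flash runtime doesn't reliably forward the browser's auth cookie, so exempting just the upload handler is a common workaround:

    <!-- web.config: allow anonymous access to the upload handler only -->
    <location path="Upload.ashx">
      <system.web>
        <authorization>
          <allow users="?" />
        </authorization>
      </system.web>
    </location>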
I had the same problem as you. Check the path of your *.axd script. When you have a path like uploader.axd, the uploader works fine only in the site root. Try a path with a leading slash, like this -> /uploader.axd.
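With Uploadify 2.x that would look something like this; a minimal sketch, assuming hypothetical paths (and note the option names changed in v3):

    // root-relative paths resolve correctly from any page depth
    $('#fileInput').uploadify({
        uploader: '/uploadify/uploadify.swf', // the Flash control
        script: '/uploader.axd',              // leading slash: works outside the root too
        auto: true
    });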