I've set up MediaWiki behind Varnish, and I've figured out a way to get updates to the wiki to pass through to the web server. But of course any edits you make to the wiki don't show up unless you clear the Varnish cache.
I tried putting this into the config in an attempt to allow the site to be updated when you edit the wiki:
# Allows you to edit the wiki
if (req.url ~ "&action=submit($|/)") {
    return (pass);
    ban(req.url);
}
How can I express this in the Varnish VCL so that any time you edit the wiki, the Varnish cache gets updated automatically?
Don't do that in VCL; instead, configure MediaWiki to purge updated pages:
$wgUseSquid = true;
$wgSquidServers = array('varnish IP 1', 'varnish IP 2', ...);
See here for full docs.
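For those purge requests to take effect, Varnish itself has to accept PURGE from the wiki. A minimal sketch of what that can look like in vcl_recv (Varnish 4+ syntax; the ACL addresses are assumptions for your setup):
# Only the wiki's application servers may purge (addresses are examples)
acl purgers {
    "127.0.0.1";
    "192.168.1.0"/24;
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Purging not allowed"));
        }
        return (purge);
    }
}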
SvelteKit does a great job of avoiding full page reloads where possible. However, this can be an issue when the pages should be reloaded, for example when developers have uploaded a new version of the files.
SvelteKit addresses exactly this via pollInterval. Documentation is here.
However, I don't see any effect from setting pollInterval.
(1) Since Vite updates my local files during development, I visited my production-ready test site with a browser (avoiding the effects of Vite).
(2) Then I changed the configuration by setting pollInterval to 3000 ms in svelte.config.js, like this:
const config = {
preprocess: vitePreprocess(),
kit: {
adapter: adapter(),
version: {
pollInterval: 3000
}
}
};
(3) Additionally, I changed the content on some pages.
(4) Finally, I uploaded it all (effectively overwriting the old files).
Result:
The client browser only saw the old (previous) content when browsing the routes, not the new content. Reloading the URLs did not help either. Only after restarting Node.js on the production test site was the new content present in the client's browser.
What am I missing? An example is appreciated.
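For reference, a minimal sketch of how a detected version change is typically acted on client-side once pollInterval is set, following the pattern in the SvelteKit version docs (placing this in a root +layout.svelte is an assumption):
<script>
  import { updated } from '$app/stores';
  import { beforeNavigate } from '$app/navigation';

  // Once the poll has detected a new version, force a full page
  // load on the next navigation instead of a client-side one.
  beforeNavigate(({ willUnload, to }) => {
    if ($updated && !willUnload && to?.url) {
      location.href = to.url.href;
    }
  });
</script>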
I am looking for ways to make some of the posts on my blog visible only to myself, but can't seem to do this in Hugo.
Is there any way around this such as setting a password for certain posts?
Or is the feature supported but I just haven't found it?
Since Hugo just generates static HTML, the question can be widened to: how do you password-protect any static content on the web server?
That's doable.
It depends on which web server you are using: Apache, Nginx, or something else.
In the Apache case, set up password authentication via .htaccess; see tutorials like this.
In the Nginx case, set a password in your server block; see tutorials like this.
For other servers (IIS?), search accordingly. Sketches for the Apache and Nginx cases follow below.
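For illustration, HTTP basic auth for a protected directory can look like this; the paths and realm name are assumptions:
# Apache: .htaccess in the directory to protect
AuthType Basic
AuthName "Private"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
And the Nginx equivalent, inside the server block:
# Nginx
location /private/ {
    auth_basic "Private";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
In both cases the password file is created with the htpasswd tool, e.g. htpasswd -c /etc/apache2/.htpasswd someuser.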
Some people will want to downvote this question, but in my opinion it's very valid: for example, if you want to post portfolios, CVs and whatnot on your personal website and limit public access.
Make a landing page on your Hugo site, password-protect the URL, and give visitors the password. Easy, fast, and still static!
My way to keep posts private is to set the draft flag in the front matter:
TOML
draft = true
A good practice for me is to connect a local instance of Hugo with GitLab/GitHub. If you want to see your website or a specific post as a rendered version, you can toggle the visibility of pages by setting the draft flag to true or false.
When you have finished your tests, you can push the final version, with or without the draft flag active, to the repo and sync it with your server-side installation of Hugo.
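For reference, drafts still render in a local preview if you pass Hugo's -D flag, while the normal build skips them:
hugo server -D   # serve locally, including pages with draft = true
hugo             # production build, drafts are excluded by default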
Posts are displayed in multiple places (RSS feeds, search results, the sitemap, etc.). Here is an article with an updated checklist and a solution for publishing hidden posts with Hugo.
Installation
git clone --recurse-submodules git@github.com:RoneoOrg/hugo-offtherecord-demo.git
cd hugo-offtherecord-demo
hugo serve
Usage
Set offTheRecord to true in the Front Matter of the posts you want to hide. That’s all!
See the source for details
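For illustration, the front matter of a hidden post might then look like this (TOML front matter assumed; the title is a placeholder):
+++
title = "A hidden post"
offTheRecord = true
+++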
I'm experimenting with the HTML5 ServiceWorker API based on this article. In the article it is mentioned that
When the user navigates to your site, the browser tries to redownload
the script file that defined the service worker in the background. If
there is even a byte's difference in the service worker file compared
to what it currently has, it considers it 'new'.
From which I conclude that if I changed anything in the worker's script file, it would prompt the browser to define a new version, which would kick in once all pages referencing the old version of the worker are terminated.
Edit: Apparently the browser is caching the serviceworker.js file itself, which is why new versions aren't picked up. Could anyone tell me how to avoid caching the worker file? I've looked through the available demos online (including those on MDN and W3C Webmob's GitHub).
This is my file structure:
|- index.html
|- serviceworker.js // the actual worker
|- serviceworker-cache-polyfill.js
|- serviceworker-registration.js // contains the registration logic for the worker
|- style.css
I configured my cache to include the following URLs:
"/style.css"
The issue was not the configuration of the ServiceWorker, but the fact that my server cached the file. I can't say I don't feel stupid for not checking this earlier.
For future reference, I am using http-server, which by default caches all files for 1 hour. You can override this by passing the -c parameter; to disable caching altogether, pass -1:
http-server -c-1
Edit: The following article contains a good summary of how to develop with the ServiceWorker:
In order to guarantee that the latest version of your Service Worker script is being used, follow these instructions:
(1) Configure your local server to serve your Service Worker script as non-cacheable (Cache-Control: no-cache).
(2) After you have made changes to your service worker script: close all but one of the tabs pointing to your web application, hit Shift-Reload to bypass the service worker and ensure that the remaining tab isn't under the control of a service worker, then hit Reload to let the newer version of the Service Worker control the page.
That would indeed explain the behavior. The update logic does respect the HTTP cache-control header, but only for up to 24 hours (to avoid being stuck with a broken SW served with a one-year Cache-Control header).
I have a versioned cache manifest:
#version = e5b4271
Every time this version changes, my webapp loads the new manifest, but it never loads the updated files from the server. Even when I clear the browser cache (not the application cache itself), or hit Ctrl+Shift+R to force it to fetch a new version, it still loads the files from the old appcache.
The only way I can get it to update is to clear the browser's application cache in settings, but obviously this is unacceptable because I need it to update for regular users.
Any ideas why this would happen?
Just figured it out. I'm using Flask's development server, and it seems that by default (via Werkzeug) it sends cache headers of 12 hours for static files. Adding the following to my Flask config solved this:
SEND_FILE_MAX_AGE_DEFAULT = -1
If anyone else has this issue, check your server config to make sure cache headers are not sent with static files. You can verify this in the Network tab in Chrome during the first load of the file.
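A quick way to check the same thing from the command line (the URL and file name here are just placeholders for a local Flask app):
curl -I http://localhost:5000/static/manifest.appcache
Look for Cache-Control and Expires in the response headers.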
Occasionally when I try to open a site I will see a page saying something like "This site is offline for maintenance", followed by some comments on how long it will presumably take. Stack Overflow does that too.
How does it work? I mean if the site is shut down who replies to my HTTP request and serves this page?
There is a trick in ASP.NET where you place a file called
App_Offline.htm
in the root of the site. All requests will go to this file until it is deleted.
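The file is plain static HTML, so a minimal example might look like this (the wording is illustrative):
<!-- App_Offline.htm, placed in the root of the web app -->
<html>
<body>
<h1>Down for maintenance</h1>
<p>We are applying an update and should be back shortly.</p>
</body>
</html>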
For other environments you can often just change where the server points, or use a similar approach.
-- Edit
A server-agnostic approach is achieved through something like load balancing.
Under the hood you can send the requests to a given internal server. You may then decide to point all requests at server 'a', which you configure to show the 'downtime' page. Then you make changes to server 'b', confirm they are successful, and point all requests at 'b'. Then you update 'a' and let requests go to both.
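With an Nginx load balancer in front, for example, that switch can be as simple as commenting servers in and out of the upstream block (the names and addresses are assumptions):
# All traffic goes to server 'a' (showing the downtime page) while 'b' is updated;
# swap the comments to flip traffic over.
upstream app_servers {
    server 10.0.0.1;     # server 'a'
    # server 10.0.0.2;   # server 'b', temporarily out of rotation
}
server {
    location / {
        proxy_pass http://app_servers;
    }
}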
In ASP.NET (and ASP.NET MVC, which Stack Overflow uses) this is provided by the app_offline.htm feature. It works simply by forwarding all ASP.NET requests to the app_offline.htm file.
Incidentally, the Copy Web Site tool in ASP.NET performs this process: it places the file in the root of the web app, copies the web site files, and then deletes the file.
Strategies for other technologies are discussed here.
In Apache you may use a .htaccess file with this content:
Order deny,allow
Allow from 192.168.1.151
Deny from all
ErrorDocument 403 /404.html
ErrorDocument 404 /404.html
ErrorDocument 500 /404.html
This will deny access to everyone except one IP and serve a static 404.html file.
This works in the case where you only have one server, without load balancing and the like. It should work with load balancing too, though.
The Apache reverse proxy server can be configured to send that response, if it is being used as part of that architecture.