I am wondering why you get a unique embed code every time you request one for the Font Awesome CDN. Is there any downside to using the same embed code multiple times?
You can create an account on their CDN service, which gives you access to some additional features and management tools, like specifying which Font Awesome version will be served to your users, or cache invalidation when the served version changes.
Since they don't have user accounts on their main page, and they don't do user tracking, they have no way of knowing whether a requested embed code will be used by a new user or by an existing user on a new site. To save you the hassle of creating multiple accounts on the CDN service, and to let you manage all of your installations from one page, they ask you to always give the same email address, but they send you a unique embed code each time.
There is no downside to using one embed code on multiple sites, as long as you want the same configuration on all of them. If, for whatever reason, you want one site to use a different version than another, you should use separate embed codes.
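For reference, the embed code itself is just a script tag with a unique ID baked into the URL, something like the line below (the ID here is a made-up placeholder, not a real embed code):

<!-- Hypothetical example: replace "abc123def4" with the ID Font Awesome emails you -->
<script src="https://use.fontawesome.com/abc123def4.js"></script>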
Browser caching. If new fonts are added, they won't show up on the client unless they do a forced refresh.
I'm getting started with Bootstrap and followed a website that says:
"There are two ways to start using Bootstrap on your own web site: you can download Bootstrap from getbootstrap.com, or include Bootstrap from a CDN."
and also, later:
"If you want to download and host Bootstrap yourself ..."
and then:
"If you don't want to download and host Bootstrap yourself, you can include it from a CDN (Content Delivery Network)."
What does this mean, and what is the process?
Hosting any CSS/JS file yourself means that you put it on your own website/server.
It means people will download it from your website every time they open your site (unless it's cached locally by the browser, but at least the very first time).
A CDN is used so that people may already have the files in their cache from any other website they visited that uses the same CDN (for example, a Google font).
This drastically reduces load times for first-time visitors, but you do risk delays that are out of your control by loading something from an external website (if it's down, yours won't work properly!).
So it's a speed vs. risk thing, basically.
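To make the difference concrete, here is a minimal sketch of the two approaches for a stylesheet (the local path is just an example; the CDN URL is the Bootstrap one quoted later in this thread):

<!-- Self-hosted: the file sits on your own server -->
<link rel="stylesheet" href="/css/bootstrap.min.css">

<!-- CDN-hosted: the file is fetched from the CDN's servers and may already be in the visitor's cache -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">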
Hosting it yourself means you download the file and put it in the same place as your website on your web-hosting server.
Otherwise, you can reference it in your website via a CDN (Content Delivery Network). These networks hold the files for you; you add a reference to them in your website, and you don't have to keep the Bootstrap files on your own server.
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
^ This is an example of a CDN reference. They'll probably have a server keeping the file bootstrap.min.css; they get a domain (bootstrapcdn.com), create a subdomain (maxcdn), and you can request the resource (the bootstrap.min.css file) from it.
Of the two options, you can choose which one is best for YOU.
I'll list out the "goods" and "bads" of both:
Availability: Hosting on your own server means you never have to worry about someone else's downtime: as long as your own server (where your website files are placed) is up, your resources will be available too. Whereas if your vendor resources (jQuery, Bootstrap) come from a CDN, the CDN server being down will affect your visitors too. A good CDN service, however, gives uptime of around 99.9%.
Usability: What do you do when you want to update your jQuery or Bootstrap? If you're hosting it yourself, you go to the jQuery or Bootstrap website, download the file, put it on your server, and update the reference in your HTML. With a CDN, you just update the version in the URL (given that the particular CDN has the updated file); see the example after this list.
Caching: Every unique visitor to your website will download the resources (jQuery, Bootstrap, etc.) if they are hosted on your server. With a CDN, these files might already be cached in their browser if they visited another website that uses the same CDN as you, resulting in faster load times for YOUR page.
Bandwidth: Let's say you're using very cheap hosting that gives you 100 MB of bandwidth every month, and you get 30 unique visitors daily. If your page size with jQuery is 100 KB, your monthly bandwidth usage is around 30 * 100 * 30 / 1000 = 90 MB. With jQuery (~84 KB) on a CDN, the page drops to about 16 KB and usage becomes 16 * 30 * 30 / 1000 = 14.4 MB. (Again, this is a hypothetical case; I don't think you can find hosting as bad as 100 MB a month, but you get the point.)
I'll add more when I remember them. Hope it helps.
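To illustrate the Usability point above: with a CDN, an update is usually just a change to the version segment of the URL (integrity hashes, if you use them, differ per version, so they are left out of this sketch):

<!-- Before -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<!-- After: only the version number in the URL changes -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">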
Is there some means to identify the CMS (Content Management System) which was used for creating a webpage based on its HTML source code?
Sometimes I see webpages and immediately wonder which tool they were developed with. By tool I mean a CMS like WordPress, Drupal, TYPO3, etc. I could imagine some fingerprinting technique that could do that.
It's hard to pinpoint the backend CMS accurately. Almost all CMS systems out there support custom themes, which can produce completely different HTML code.
Your best educated guess would be to try and identify the CMS by:
The robots.txt file in its root directory.
The existence of the CMS admin panel login page.
The folder structure used to serve page resources such as images and CSS files.
The presence of a specific CMS backend file.
The URL structure of default services such as RSS.
For example, if you are to guess if a certain website uses WordPress, you would do the following:
1. Check for the existence of robots.txt; if it contains "Disallow: /wp-admin/", there is a high chance this is a WordPress website (see the example robots.txt below).
2. If you get a response from accessing the default WordPress admin panel at http://domain_name/wp-admin, there is a high chance this is a WordPress website.
3. If the file http://domain_name/wp-mail.php exists, there is a high chance this is a WordPress website.
4. If you get a valid RSS feed at http://domain_name/?feed=rss2, there is a high chance this is a WordPress website.
Now if a site meets 3 out of the 4 detection rules listed above, you can safely say it's a WordPress website.
You need to do the same thing: identify unique detection rules for each CMS you want to detect.
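For illustration, the robots.txt of a typical WordPress site (rule 1 above) often looks something like the lines below. The exact contents vary from site to site, so treat this only as a sketch:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php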
Note that there are existing services such as http://whatcms.org/ and http://guess.scritch.org/ that do what I described in this answer.
Good luck!
Although I know a lot of email clients will pre-fetch or otherwise cache images, I am unaware of any that pre-fetch regular links, like a plain "some link" anchor in the email body.
Is this a practice done by some email clients? If it is, is there a sort of no-follow type of rel attribute that can be added to the link to help prevent this?
As of February 2017, Outlook (https://outlook.live.com/) scans emails arriving in your inbox and sends all URLs it finds to Bing, to be indexed by the Bing crawler.
This effectively makes all one-time-use links like login/password-reset/etc. useless.
(Users of my service were complaining that one-time login links didn't work for some of them, and it turned out that BingPreview/1.0b was hitting the URL before the user even opened the inbox.)
Drupal seems to be experiencing the same problem: https://www.drupal.org/node/2828034
Although I know a lot of email clients will pre-fetch or otherwise cache images.
That is not even a given anymore.
Many email clients – be they web-based or standalone applications – have privacy controls that prevent images from being loaded automatically, precisely to prevent tracking of who read a (specific) email.
On the other hand, there are clients like, for example, Gmail's web interface that try to make downloading all referenced external images the standard, presumably to mitigate/invalidate such attempts at user tracking – if a large majority of Gmail users have those images downloaded automatically, whether they actually opened the email or not, the data that can be gained for analytical purposes becomes watered down.
I am unaware of any that pre-fetch regular links like some link
Let’s stay on gmail for example purposes, but others will behave similarly: Since Google is always interested in “what’s out there on the web”, it is highly likely that their crawlers will follow that link to see what it contains/leads to – for their own indexing purposes.
If it is, is there a sort of no-follow type of rel attribute that can be added to the link to help prevent this?
rel=nofollow concerns ranking rather than crawling, and a noindex (either in robots.txt or via a meta element/rel attribute) also won't keep nosy bots from at least requesting the URL.
Plus, other clients involved – such as a firewall/anti-virus/anti-malware – might also request it for analytical purposes without any user actively triggering it.
If you want to be (relatively) sure that an action is triggered only by a (specific) human user, then use URLs in emails or other kinds of messages over the internet only to lead them to a website where they confirm the action to be taken via a form, using method=POST. Whether some kind of authentication or CSRF protection is also needed goes a little beyond the scope of this question.
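As a rough sketch of that pattern (the URL, field name, and token below are made up for illustration): the link in the email only GETs a confirmation page, and nothing actually happens until the user submits the form with POST:

<!-- Confirmation page reached via the emailed link (a plain GET). -->
<!-- A crawler or pre-fetcher that merely requests the URL triggers nothing. -->
<form method="POST" action="/account/confirm-login">
  <input type="hidden" name="token" value="one-time-token-from-the-url">
  <button type="submit">Yes, log me in</button>
</form>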
Common email clients do not have crawlers that follow or pre-fetch <a> links, if that is what you're asking; trying to pre-build and cache a web location could be an immense task if the page is dynamic or large enough.
Images are stored locally to reduce the load time of the email – a convenience factor and a reduction in network load – but when you open a hyperlink in an email, it loads in your web browser rather than in the email client.
I just ran a test using analytics to report any server traffic: an email containing just
linktomysite
did not trigger any resulting crawls of the site from Outlook 2007, Outlook 2010, Thunderbird, or Apple Mail (Yosemite). You could try using a Wireshark scan to check for network traffic from the client to specific outgoing IPs if you're really interested.
You won't find any native email clients that do that, but you could come across some "web accelerators" that, when you are using web-based email, could try to pre-fetch links. I've never seen anything to prevent it.
Links (GETs) aren't supposed to "do" anything; only a POST is. For example, the "unsubscribe me" link in your email should not directly unsubscribe the subscriber. It should GET a page the subscriber can then POST from.
W3Schools gives a good overview of how you should expect a GET to work (caching, etc.):
http://www.w3schools.com/tags/ref_httpmethods.asp
I'm developing a mobile web app, and I'd like to take advantage of the new HTML5 caching features. The app is a photo manager: the user can create albums, store photos, edit pictures and data, and so on. I use the jQuery Mobile framework, and all data is stored client-side (Web Storage) apart from images, which are uploaded to the server.
I have not added the HTML5 caching yet, but I rely on normal browser caching for images, and when the user edits an image and it is uploaded to the server, I change the query string attached to the image request so I get the updated version (a trick I came to know here on Stack Overflow).
I'd like to use HTML5 caching for everything except images, since this trick works like a charm, but I understand that once I add HTML5 caching, every resource is:
either cached and not updated until a new manifest is detected or I update it programmatically (and I can't choose which resources to update) (CACHE section),
or not cached at all and reloaded every time (NETWORK section).
Is there a way to have the cake and eat it too? :-)
Thank you very much.
Not every resource is cached once you start using the application cache; it depends on what is specified in your manifest file, so you could simply leave the image URLs you don't want cached out of the manifest.
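As a rough sketch (the file names are placeholders): list only the resources you do want in the application cache, and whitelist everything else with a NETWORK wildcard. Note that once a page has a manifest, resources that are neither listed under CACHE nor matched by NETWORK will fail to load, so the wildcard is what lets your images keep using normal browser caching and the query-string trick:

CACHE MANIFEST
# v1 - bump this comment to push a new manifest to clients

CACHE:
index.html
css/app.css
js/app.js

NETWORK:
*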
I have what seems like a typical usage scenario for users downloading, editing and uploading a document from a web page.
User clicks a link to download a document
User edits downloaded file
User saves the file
User goes back to the web page and uploads the new file with the changes
The problem is that downloaded files are typically saved in a temporary directory, so it can be difficult to find the file after it is saved. The application is for very non-technical users, and I can already imagine the problems with saved files being lost or the wrong versions being uploaded.
Is there a better way? Things I've thought about:
Using Google Docs or something similar. Problems: forcing users to use a new application with fewer features, importing legacy content, setting up accounts for everyone to edit a file.
Using WebDAV to serve the files. Not sure how this would work exactly, but it seems like it should be possible.
Some kind of Flash or Java app that manages downloads and uploads. Not sure if these even exist.
User education :)
If it matters, the files will be mostly Word and PowerPoint documents.
Actually, despite the flexibility AJAX gives you in developing applications, the problem of uploading multiple files is still not solved.
To the thoughts you've mentioned in your question:
Google Docs:
Online apps like Google Docs are certainly appealing for certain use cases. However, if you'd like to upload Word and PowerPoint documents, you don't want the content to be changed once you've uploaded them. The problem is that Google Docs uses its own data format and therefore changes some of the formatting. If you go for an online app, I'd go for a document management solution. I'm sure there are plenty (even free ones) out there; however, I haven't used any of them yet.
WebDAV: It is possible and seems to me like the best solution. You can mount WebDAV like any directory. Documents are locked until a user releases the document. Unfortunately, you don't have a web front end to manage the files or administer access restrictions.
Flash or Java app: They do exist, for sure. I'd prefer Flash over Java, since Flash apps still run more smoothly within a browser. I would definitely not use a rich client application, even a Java Web Start app that can be downloaded and opens in a separate window. More and more, users seem to accept browser-based web applications. Which brings me to point 4:
User education: You can educate them, sure. But in the end you want them to want to use the system. Most often, users get used to a tool easily. However, if they don't like the tool, they're not going to use it.
Clear instructions to save to their desktop are a start, followed by clear instructions to go to the desktop and re-upload the file. I've not run across an online Word viewer/editor (or whatever format the file is), but I'm sure they exist now that Google Docs and a few other online versions of MS Office do.
I would make sure that there are easy to follow instructions, plus a tutorial somewhere else (perhaps with a video too) to guide users through the process.