What's the most efficient way to add social media "like" and "+1" buttons to your site?

The task sounds trivial but bear with me.
These are the buttons I'm working with:
Google (+1)
Facebook (Like)
Twitter (Tweet)
LinkedIn (Share)
With a little testing on webpagetest.org I found that it's incredibly inefficient if you grab the snippet from each of these services to place these buttons on your page. In addition to the images themselves you're also effectively downloading several JavaScript files (in some cases multiple JavaScript files for just one button). The total load time for the Facebook Like button and its associated resources can be as long as 2.5 seconds on a DSL connection.
Now it's somewhat better to use a service like ShareThis as you can get multiple buttons from one source. However, they don't have proper support for Google +1. If you get the code from them for the Google +1 button, it's still pulling all those resources from Google.
I have one idea which involves loading all the buttons when a generic looking "Share" button is clicked. That way it's not adding to the page load time. I think this can be accomplished using the code described here as a starting point. This would probably be a good solution but I figured I'd ask here before going down that road.
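Roughly what I have in mind is sketched below (the element ids are made up, and the vendor script URLs are just the ones each service documents for its embed snippet, so treat them as illustrative):

<button id="share-toggle">Share</button>
<div id="share-buttons" style="display:none">
  <!-- the vendors' placeholder markup would go here, e.g. <div class="g-plusone"></div> -->
</div>

<script>
document.getElementById('share-toggle').onclick = function () {
  document.getElementById('share-buttons').style.display = 'block';
  // Only now, on demand, inject each vendor's widget script.
  var scripts = [
    'https://apis.google.com/js/plusone.js',              // assumed vendor URLs, for illustration only
    'https://connect.facebook.net/en_US/all.js#xfbml=1',
    'https://platform.twitter.com/widgets.js',
    'https://platform.linkedin.com/in.js'
  ];
  for (var i = 0; i < scripts.length; i++) {
    var s = document.createElement('script');
    s.src = scripts[i];
    s.async = true;
    document.body.appendChild(s);
  }
  this.onclick = null; // don't inject the scripts twice
};
</script>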

I found one possible solution if you don't care about the dynamic aspect of these buttons. In other words, if you don't care to show how many people have +1'd or liked your page, you can just use these links...
https://plusone.google.com/_/+1/confirm?hl=en&url={URL}
http://www.facebook.com/share.php?u={URL}
http://twitter.com/home/?status={STATUS}
http://www.linkedin.com/shareArticle?mini=true&url={URL}&title={TITLE}&summary={SUMMARY}&source={SOURCE}
You'd just have to insert the appropriate parameters. It doesn't get much simpler or lightweight than that. I'd still use icons for each button of course, but I could actually use CSS sprites in this case for even more savings. I may actually go this route.
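Something along these lines, just to illustrate (example.com, the sprite file name and the pixel offsets are made up; the real parameters only need URL-encoding):

<a class="share gplus"    href="https://plusone.google.com/_/+1/confirm?hl=en&url=http%3A%2F%2Fexample.com%2Fpage">+1</a>
<a class="share facebook" href="http://www.facebook.com/share.php?u=http%3A%2F%2Fexample.com%2Fpage">Like</a>
<a class="share twitter"  href="http://twitter.com/home/?status=Worth%20reading%3A%20http%3A%2F%2Fexample.com%2Fpage">Tweet</a>
<a class="share linkedin" href="http://www.linkedin.com/shareArticle?mini=true&url=http%3A%2F%2Fexample.com%2Fpage&title=My%20Page">Share</a>

<style>
/* One sprite image holds all four icons; only the background offset changes per button. */
.share    { background: url(share-icons.png) no-repeat; padding-left: 20px; }
.facebook { background-position: 0 -16px; }
.twitter  { background-position: 0 -32px; }
.linkedin { background-position: 0 -48px; }
</style>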
UPDATE
I implemented this change and the page load time went from 4.9 seconds to 3.9 seconds on 1.5 Mbps DSL. And the number of requests went from 82 to 63.
I've got a few more front-end optimizations to do but this was a big step in the right direction.

I wouldn't worry about it, and here's why: if the websites in question have managed their resources properly - and, come on, it's Google and Facebook, etc... - the browser should cache them after the first request. You may see the full cost in a testing tool where the cache is empty or disabled, but in all likelihood most of your visitors will already have those resources in their cache before they ever reach your page.
And, just because I was curious, here's another way:
Here's the snippet of relevant code from StackOverflow's facebook share javascript:
facebook:function(c,k,j){k=a(k,"sfb=1");c.click(function(){e("http://www.facebook.com/sharer.php?u="+k+"&ref=fbshare&t="+j,"sharefacebook","toolbar=1,status=1,resizable=1,scrollbars=1,width=626,height=436")})}}}();
Minified, because, hey, I didn't bother to rework the code.
It looks like the StackOverflow engineers are simply calling up the page on click. That means that it's just text until you click it, which dynamically pulls everything in lazily.
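Unminified and lightly annotated, the same idea looks roughly like this (a reconstruction for readability, not StackOverflow's actual source):

// button is a jQuery-wrapped element; url and title describe the current page.
function wireFacebookShare(button, url, title) {
  button.click(function () {
    // Nothing Facebook-related loads until the user actually clicks:
    window.open(
      "http://www.facebook.com/sharer.php?u=" + encodeURIComponent(url) +
        "&ref=fbshare&t=" + encodeURIComponent(title),
      "sharefacebook",
      "toolbar=1,status=1,resizable=1,scrollbars=1,width=626,height=436"
    );
  });
}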

Related

Auditing unused CSS on complex web pages

I know there are several tools available to find unused CSS on a static web page. But in most real-world scenarios I encounter, a lot of the CSS is only used after some interaction on the page, maybe a new modal opening up or an options popup, etc.
In such scenarios, what would you suggest? How do I keep a tab on my ever-growing render blocking CSS?
The only way I can think of is to run regular unused-CSS-detector tools in conjunction with Selenium - script the known interactions and see what's left unused. But a big assumption here is that I'd need to know every interaction on my page that could use new CSS. Is there a way to achieve my goal without making this assumption?
In an ideal world, I'd be able to post-back all CSS used by a visitor's browser on my page to my server. Then I'd collect data over a month, aggregate, and get a pretty accurate idea about actual unused CSS.
Any good ideas?
I am the author of a tool that aims to do what you are describing. Everywhere I have worked, the CSS is this "append-only" thing that is too risky and too time-consuming to clean up. And even when you try, the ROI is so low that it's not worth it.
So I am working on a tool that is very similar to what you are describing. The goal is to bring confidence on what can be removed, and to actually do it automatically by submitting pull requests.
A snippet of JavaScript runs in the browser and sends reports of what is being used to a server. Once enough data has accumulated to build a "confidence score", the tool can create pull requests automatically to remove selectors that are actually not used.
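Conceptually, the in-browser part boils down to something like the following (a deliberately simplified sketch, not the actual implementation; the /css-usage endpoint is a placeholder):

// Simplified sketch of the reporting idea.
var seen = new Set();

function recordUsedSelectors() {
  for (var i = 0; i < document.styleSheets.length; i++) {
    var rules;
    try { rules = document.styleSheets[i].cssRules; } catch (e) { continue; } // cross-origin sheets
    if (!rules) continue;
    for (var j = 0; j < rules.length; j++) {
      var sel = rules[j].selectorText;
      if (!sel || seen.has(sel)) continue;
      try {
        if (document.querySelector(sel)) seen.add(sel); // selector matches something right now
      } catch (e) { /* selector querySelector can't evaluate; skip it */ }
    }
  }
}

// Sample periodically so selectors that only apply after interactions get picked up too.
setInterval(recordUsedSelectors, 5000);

// Report what was seen before the visitor leaves; '/css-usage' is a placeholder endpoint.
window.addEventListener('beforeunload', function () {
  navigator.sendBeacon('/css-usage', JSON.stringify(Array.from(seen)));
});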
It is still at a very early stage, but you are welcome to try it and give some feedback.
https://www.bleachcss.com/

What speaks against single-page apps from a user experience point of view?

I like them more and wonder why they are not more common. Explanations involving caching or SEO make sense to me, but I don't see them as directly driven by user experience considerations. In which way are traditional sites with page reloads better for the user?
Personally, I think the best argument for normal page reloads from a user's perspective is that they make it much harder to break basic browser functions. In general the back/forward buttons work, bookmarking works, copying and pasting links works, history works, page titles work, getting an error page when a server call fails works, everything just works as expected. For free.
I have seen single-page applications implemented in a way that breaks one or more of the above more times than I can count.
It's naturally not a problem if you get it just right (and then it will in general be nicer to use), but not all sites do.
Just as an example, here's a screenshot of how a site that is a SPA, and justifiably so (they have a music player that you don't want to interrupt with page loads), broke a basic browser function in a way they might not even have thought of. I was trying to find a song I had recently listened to but couldn't remember the exact title... and because of the SPA-ness, the page titles weren't properly reflected in my browser history.

Advantages of the html5Mode?

I've read quite a few posts about the AngularJS html5Mode, including this one, and I'm still puzzled:
What is it good for? I can see that I can use http://example.com/home instead of http://example.com/#/home, but the two chars saved are not worth mentioning, are they?
How is this related to HTML5?
The answer links to a page showing how to configure a server. It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't that lead to needlessly increased traffic?
Update after Peter Lyons's answer
I started to reply in a comment, but it grew too long. His long and valuable answer raises some further questions of mine.
option of rendering the actual "/home"
Yes, but that means a lot of work.
crazy escaped fragment hacks
Yes, but this hack is easy to implement (I did it just a few hours ago). I actually don't know what I should do in the case of html5Mode (as I haven't finished reading this SEO article yet).
Here's a demo
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks
But the same holds for the hashbang, too. Moreover, a user following an external link to http://example.com/#!/home and then another link to http://example.com/#!/foreign will always be served the same page via the same URL, while in html5Mode they'll be served the same page (unless the burdensome optimization you mentioned gets done) via a different URL (which means it has to be loaded again).
but the two chars saved are not worth mentioning, are they?
Many people consider the URL without the hash considerably more "pretty" or "user friendly". Also, a very big difference is that when you browse to a URL with a hash (a "fragment"), the browser does NOT include the fragment in its request to the server, which means the server has a lot less information available to deliver the right content immediately. Compare that to a regular URL without any fragment, where the full path "/home" IS included in the HTTP GET request to the server; the server then has the option of rendering the actual "/home" content directly instead of sending the generic "index.html" content and waiting for JavaScript in the browser to update the page once it loads and sees the fragment is "#home".
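Concretely, the request line the server receives looks roughly like this in each case:

# http://example.com/#/home  -> the fragment never leaves the browser:
GET / HTTP/1.1
Host: example.com

# http://example.com/home  -> the full path reaches the server:
GET /home HTTP/1.1
Host: example.com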
HTML5 mode is also better suited for search engine optimization without any crazy escaped fragment hacks. My guess is this is probably the largest contributing factor to the push for HTML5 mode.
How is this related to HTML5?
HTML5 introduced the necessary javascript APIs to change the browser's location bar URL without reloading the page and without just using the fragment portion of the URL. Here's a demo
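In code, those APIs boil down to history.pushState and the popstate event; here is a minimal sketch (the 'home-link' element, loadContent() and the '/home' route are placeholders, not taken from the demo):

// Minimal History API sketch; loadContent() stands in for whatever fills the page (e.g. an AJAX call).
document.getElementById('home-link').onclick = function (event) {
  event.preventDefault();                              // stop the normal full-page navigation
  history.pushState({ path: '/home' }, '', '/home');   // change the URL bar without reloading
  document.title = 'Home';                             // keep history entries and bookmarks readable
  loadContent('/home');
};

// Handle back/forward so those buttons keep working.
window.onpopstate = function (event) {
  if (event.state) loadContent(event.state.path);
};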
It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't that lead to needlessly increased traffic?
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks a link onto the site OR does a manual browser reload. Otherwise, the user can navigate around the app clicking like mad and in some cases the server will see ZERO traffic from that. More commonly, each click will make at least one AJAX request to an API to get JSON data, but overall the approach serves to reduce browser<->server traffic. If you see an app responding instantly to clicks and the URL is changing, you have HTML5 to thank for that, as compared to a traditional app, where every click includes a certain minimum latency, a flicker as the full page reloads, input forms losing focus, etc.
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
A good implementation will use HTML5 when available and fall back to fragments otherwise, but work fine in any browser. But in any case, the web is a moving target. At one point, everything was full page loads. Then there was AJAX and single page apps with fragments. Now HTML5 can do single page apps with fragmentless URLs. These are not overwhelmingly different approaches.
My feeling from this back and forth is that you want someone to declare one of these canonically more appropriate than the other, and it's just not like that. It depends on the app, the users, their devices, etc. Twitter was all about fragments for a good long while and then they realized their mobile users were seeing too much latency and "time to first tweet" was too long, so they went back to server-side rendering of HTML with real data in it.
To your other point about rendering on the server being "a lot of work", it's true, but some consider it the "holy grail" of web app development. Look at what Airbnb has done with their rendr framework. See also Derby JS. My point being, if you decide you want rendering in both the browser and the server, you pick a framework that offers that. Not that you have a lot of options to choose from at the moment, granted, but I wouldn't advise hacking together your own.

How to hide content from File2HD?

There is a website called file2hd.com which can download any type of content from your website, including audio, movies, links, applications, objects and style sheets. Of course this doesn't work for high-profile websites such as Google, but is there a method I can use to cloak content on my website and prevent this?
E.g. using HTML code, or an .htaccess method?
Answers are appreciated. :)
If you hide something from the software, you also hide it from regular users, unless you have a password-protected part of your website. But even then, those users with passwords will be able to fetch all loaded content - HTML is transparent. And since you didn't say what kind of content you are trying to hide, it's hard to give you a more accurate answer.
One thing you can do, though it only works for certain file types, is to serve just small portions of a file. For example, you have a video on your page and you fetch 5-second bits of the video from the server every 5 seconds. That way, in order for someone to download the whole thing, they'd have to get all the bits (by watching the whole thing) and then find a way to join the parts... and it's usually just not worth it. Think of Google Maps... Google uses this or a similar technique on a few other products as well.
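One standard way to do the client side of this is with HTTP Range requests; here's a rough sketch (the URL and chunk size are made up, and actually feeding the pieces to a player is the hard part left out here):

// Rough illustration only: request one byte range at a time instead of the whole file.
// '/media/clip.mp4' and the 1 MB chunk size are made-up values.
function fetchChunk(start, end) {
  return fetch('/media/clip.mp4', {
    headers: { Range: 'bytes=' + start + '-' + end }   // a range-aware server answers 206 Partial Content
  }).then(function (res) { return res.arrayBuffer(); });
}

// Ask for the next chunk only as playback approaches it, never the whole file at once.
fetchChunk(0, 1024 * 1024 - 1).then(function (firstChunk) {
  // feed firstChunk to the player and schedule the next range as needed
});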

How do I code a scrolling window to stay at a certain link per page

On this page, I want to get my scrolling dinosaur name window to specifically keep that dinosaur's name at the top so the person doesn't have to scroll all the way down to the next dinosaur.
I also want to know if there's an easier way to do this window.
My predicament is this....
I have over 30 dinosaurs on here. Each time I add a new one, I have to update each and every one of the dinosaur pages to add that one new dinosaur. It's not really time-effective... Is there a better way without having to use frames?
My code is open, so you can look at it and modify it at your leisure.
Thanks!
Vince
At this point I would suggest you go for server-side code. Since you have 30 dinosaurs, it would be much easier to create and maintain a simple page using server-side scripts such as PHP or ASP.NET to load the dinosaur from a database.
What are server-side scripts?
Server-side scripts allow you to dynamically generate a page on the fly whenever the user requests it. For example, take YouTube's search page. Rather than generate a separate page for every single possible search term, they have one base template and fetch the relevant results based on the search query. The same can be applied to your site: you can have one page for all the dinosaurs, and just load the appropriate dinosaur based on the URL.
Once you do that, putting the current dinosaur at the top of the page would be a trivial task. Since it appears that you already have a fair amount of knowledge in HTML, it should be easy for you to pick up and use some PHP. Codecademy has some excellent tutorials.
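As a rough sketch of what that could look like in PHP (the database name, table, columns and the "dino" query parameter are all made up for illustration):

<?php
// dinosaurs.php - one template for every dinosaur; all names here are illustrative.
$pdo  = new PDO('mysql:host=localhost;dbname=dinosite', 'dbuser', 'dbpassword');
$stmt = $pdo->prepare('SELECT name, description FROM dinosaurs WHERE slug = ?');
$stmt->execute(array(isset($_GET['dino']) ? $_GET['dino'] : 'camarasaurus'));
$dino = $stmt->fetch(PDO::FETCH_ASSOC);
if (!$dino) { http_response_code(404); exit('Dinosaur not found'); }
?>
<h1><?php echo htmlspecialchars($dino['name']); ?></h1>
<p><?php echo htmlspecialchars($dino['description']); ?></p>

Each dinosaur then lives at a URL like dinosaurs.php?dino=camarasaurus, and adding a new one means adding a row to the database instead of editing 30 pages.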
Along the same lines as Kevin's answer, but more specifically, I'd recommend you look into a PHP MVC framework such as CakePHP, Laravel or CodeIgniter.
You've done all the hard work manually building these pages, which is awfully time-consuming.
Once you learn one of these frameworks, you'll be able to rebuild this site in a day.
If your links had id attributes on them, you could scroll the list to a position by linking to #whatever. Here's a quick example of a list item you could link to:
<li id="camarasaurus">Camarasaurus</li>
Here's a small example: http://jsbin.com/ExExEvAB/1/edit?html,css,output
As for making it easier to administrate, I'd look into PHP since it's widely available and there are tons of resources to learn from. What you're basically looking for is <?php include "dinosaur-menu.html" ?>, since you're thinking in terms of frames. You can go further than that, but this alone should make the site a ton easier to update.
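To sketch it out (the file names here are made up): the menu lives in one file, and each dinosaur page, saved as .php so the include runs, just pulls it in:

<!-- dinosaur-menu.html: maintained once, included by every page -->
<ul class="dino-menu">
  <li><a href="camarasaurus.php#top">Camarasaurus</a></li>
  <li><a href="stegosaurus.php#top">Stegosaurus</a></li>
  <!-- one line per dinosaur, added in exactly one place -->
</ul>

<!-- camarasaurus.php: each dinosaur page then reduces to roughly this -->
<?php include "dinosaur-menu.html"; ?>
<h1 id="top">Camarasaurus</h1>
<p>Page-specific content here.</p>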
I really started to enjoy Mixture recently. It's great for prototyping and is, in my opinion, perfect for exactly what you're trying to do here.