Always avoid using <iframe>? - google-maps

A few days ago, some friends of mine told me to avoid using <iframe> for virtually anything, which of course includes Google Maps. That made me do some research and, among other things, find this thread on Quora (http://www.quora.com/Google-Maps/What-are-best-practices-and-recommendations-to-implement-Google-maps-within-an-iframe-on-a-webpage), which I don't think is conclusive, at least in my case. I've made a simple site which includes displaying a Google Map. I used an <iframe> because it is very simple and, as pointed out there, it is the option that Google offers with every map, so I assumed it was the optimal one.
My question is: is using an <iframe> always a bad solution, or is it acceptable in a simple case like mine (only displaying a location map)?
Thank you all, please let me hear your thoughts on this,
João

Using an iframe is like having another page loaded in your browser, which takes resources. I think that is what the suggestion to avoid it is based on. But naturally, the solution is to avoid those who suggest you should always avoid something. Just use it when it makes sense and know where to stop.
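For reference, the embed code Google offers is just an iframe along these lines (a minimal sketch; the size and the src parameters below are placeholders, not the exact snippet Google generates):

<iframe width="425" height="350" frameborder="0" scrolling="no"
        src="https://maps.google.com/maps?q=Lisbon,+Portugal&output=embed">
</iframe>

The browser loads the map page inside that frame, which is where the extra requests come from.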

Related

coordinates in an area tag

I started a web design course about 4 months ago; it's going smoothly and I'm really enjoying it. I'm trying to learn more in my own time and found the <area> tag while searching for something similar.
I don't fully understand it, but I think I get the gist of it, so basically I'm asking whether what I think the coords do is correct.
coords="x1,y1, x2,y2"
Does that create a box, which I can then use as a link so that when it's clicked it goes to another page?
I think you're a bit confused.
That attribute won't do anything on its own. In fact, all of that is just a 'string', meaning that if you refer to coords you will simply find 'x1,y1, x2,y2' as the value.
If you're interested in linking content, use 'a' tags. Also, if you'd like to create a nice box, you're going to need some styling knowledge. Remember, as a web designer you create content for the web developer to put together. If you're looking into building the sites yourself, look for a course in 'Web Development'.
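For context, coords is normally used on an <area> element inside an image map, where the browser treats those numbers as the corners of a clickable rectangle. A minimal sketch (the image, coordinates, and link target are placeholders):

<img src="office-map.png" alt="Office map" usemap="#officemap">
<map name="officemap">
  <area shape="rect" coords="34,44,270,350" href="reception.html" alt="Reception">
</map>

Clicking anywhere inside the rectangle whose opposite corners are (34,44) and (270,350) follows the link to reception.html.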

What's the most efficient way to add social media "like" and "+1" buttons to your site?

The task sounds trivial but bear with me.
These are the buttons I'm working with:
Google (+1)
Facebook (Like)
Twitter (Tweet)
LinkedIn (Share)
With a little testing on webpagetest.org I found that it's incredibly inefficient if you grab the snippet from each of these services to place these buttons on your page. In addition to the images themselves you're also effectively downloading several JavaScript files (in some cases multiple JavaScript files for just one button). The total load time for the Facebook Like button and its associated resources can be as long as 2.5 seconds on a DSL connection.
Now it's somewhat better to use a service like ShareThis as you can get multiple buttons from one source. However, they don't have proper support for Google +1. If you get the code from them for the Google +1 button, it's still pulling all those resources from Google.
I have one idea which involves loading all the buttons when a generic looking "Share" button is clicked. That way it's not adding to the page load time. I think this can be accomplished using the code described here as a starting point. This would probably be a good solution but I figured I'd ask here before going down that road.
I found one possible solution if you don't care about the dynamic aspect of these buttons. In other words, if you don't care to show how many people have +1'd or liked your page, you can just use these links...
https://plusone.google.com/_/+1/confirm?hl=en&url={URL}
http://www.facebook.com/share.php?u={URL}
http://twitter.com/home/?status={STATUS}
http://www.linkedin.com/shareArticle?mini=true&url={URL}&title={TITLE}&summary={SUMMARY}&source={SOURCE}
You'd just have to insert the appropriate parameters. It doesn't get much simpler or lightweight than that. I'd still use icons for each button of course, but I could actually use CSS sprites in this case for even more savings. I may actually go this route.
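A rough sketch of what that markup could look like once the parameters are filled in (the page URL, titles, and sprite class names here are just placeholders):

<a class="share google" href="https://plusone.google.com/_/+1/confirm?hl=en&url=http%3A%2F%2Fexample.com%2F">+1</a>
<a class="share facebook" href="http://www.facebook.com/share.php?u=http%3A%2F%2Fexample.com%2F">Like</a>
<a class="share twitter" href="http://twitter.com/home/?status=Check%20out%20http%3A%2F%2Fexample.com%2F">Tweet</a>
<a class="share linkedin" href="http://www.linkedin.com/shareArticle?mini=true&url=http%3A%2F%2Fexample.com%2F&title=Example&summary=An%20example%20page&source=example.com">Share</a>

Each anchor is plain HTML, so the only extra request is the sprite image for the icons.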
UPDATE
I implemented this change and the page load time went from 4.9 seconds to 3.9 seconds on 1.5 Mbps DSL. And the number of requests went from 82 to 63.
I've got a few more front-end optimizations to do but this was a big step in the right direction.
I wouldn't worry about it, and here's why: if the websites in question have managed their resources properly - and, come on, it's Google and Facebook, etc. - the browser should cache them after the first request. You may see the full cost in a testing service where the cache is small or disabled, but, in all likelihood, all of your visitors will already have those resources in their cache before they ever reach your page.
And, just because I was curious, here's another way:
Here's the snippet of relevant code from StackOverflow's facebook share javascript:
facebook:function(c,k,j){k=a(k,"sfb=1");c.click(function(){e("http://www.facebook.com/sharer.php?u="+k+"&ref=fbshare&t="+j,"sharefacebook","toolbar=1,status=1,resizable=1,scrollbars=1,width=626,height=436")})}}}();
Minified, because, hey, I didn't bother to rework the code.
It looks like the Stack Overflow engineers are simply opening the share page on click. That means it's just text until you click it, and everything is pulled in lazily at that point.
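Un-minified, the same idea looks roughly like this (a sketch, assuming a hypothetical element id and the current page URL rather than Stack Overflow's actual helper functions):

<a href="#" id="share-facebook">Share on Facebook</a>
<script>
document.getElementById('share-facebook').addEventListener('click', function (e) {
  e.preventDefault();
  // Nothing from Facebook is requested until the user actually clicks.
  var url = encodeURIComponent(window.location.href);
  window.open('http://www.facebook.com/sharer.php?u=' + url,
              'sharefacebook',
              'toolbar=1,status=1,resizable=1,scrollbars=1,width=626,height=436');
});
</script>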

Why does the Google homepage use deprecated HTML (i.e. is not valid HTML5)?

I was looking at www.google.com in Firebug and noticed something odd: the Google logo is centered using a center tag.
So I went and checked the page with the W3C validator and it found 48 errors. Now, I know there are times when you can't make a page valid, especially when we're talking about something like www.google.com and you want it to be as small as possible, but can someone please explain why they use the center tag?
I attended a panel at SXSW a few years ago called "F*ck Standards" which was all about breaking from standards when it makes sense. There was a Google engineer on the panel who talked about the Google home page failing validation, using deprecated tags, etc. He said it was all about performance. He specifically mentioned layout rendering with tables beating divs and CSS in this case. As long as the page worked for their users, they favored performance over standards.
This is a very simple page with high traffic, so it makes sense. I imagine that if you're building a complex app, this approach might not scale well.
From the horse's mouth.
Because it's just the easiest, most concise way to get the job done. <center> is deprecated, for sure, but as long as it's still supported, you're likely to still see them using it.
Shorter than margin:0 auto. Quicker to parse. It is valid HTML4. No external dependencies, so fewer HTTP requests.
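Side by side, the two approaches look something like this (logo.png is just a placeholder):

<!-- Deprecated, but short and dependency-free: -->
<center><img src="logo.png" alt="Google"></center>

<!-- The standards-based equivalent: -->
<img src="logo.png" alt="Google" style="display:block;margin:0 auto">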
Usability is NOT validity.
Google Search's biggest achievement has been to build a site which is easy to use, and can be widely used. Now, if Google achieved this with a page which does not validate, well, there's a lesson there to learn.
I think a better question to ask would be "why would Google make it validate if it works fine?" It makes no difference to the user.
There has been speculation and discussion about whether this is intentional; the basic test carried out in the first link does result in a smaller page, and even gzipped, through millions of page views it theoretically stacks up. I doubt that's the reason though: it was created, tested on many browsers at the time, it worked, and continues to work.
Google breaks validation in many ways on their home page. The very likely real reason: they are all about speed and bandwidth costs. Look at the size of the home page HTML at the packet level, particularly after gzip is applied. They are clearly trying to avoid packet fragmentation (which would mean more bandwidth) and are willing to do whatever it takes to get there (identifier shortening, quote removal, deprecated tags, white space removal, etc.).
If you look at this just as a validity question, fine, but they break the rules on purpose; if you don't assume that, you may of course jump to a negative conclusion. By the way, you could optimize their pages even further, in both positive and negative directions, but once the page fits inside the typical packet size it is somewhat pointless.
They also use other deprecated presentational tags like font and u. My guess is that it makes the page quicker to load than using an external stylesheet, and allows it to work on more platforms.
It's deprecated, sure, but I think simplicity is the answer to your question.

Scraping hidden HTML (when visible = false) using Hpricot (Ruby on Rails)

I've come across an issue which, unfortunately, I can't seem to get past. I'm also a complete newcomer to Ruby on Rails, hence the number of questions.
I am attempting to scrape a webpage such as the following:
http://www.yellowpages.com.mt/Malta/Grocers-Mini-Markets-Retail-In-Malta-Gozo.aspx
I would like to scrape the addresses, phone numbers and the URL of the next page, which in this case is
http://www.yellowpages.com.mt/Malta/Grocers-Mini-Markets-Retail-In-Malta-Gozo+Ismol.aspx
I've been trying just about everything I could think of, but nothing seems to work, apparently because the elements are set to invisible or something similar.
The address is within an h3 tag, but it does not appear to be scrapable. I've also been looking into ScRUBYt from the following URL http://www.rubyrailways.com/ajax-scraping-with-scrubyt-linkedin-google-analytics-yahoo-suggestions/, but I really can't make head or tail of how to apply it in this case.
I would really appreciate any pointers, as this is an obstacle I really need to get past in order to move forward on my assignment. Thanks in advance for any help.
In the particular example you have given, the elements are not hidden but loaded via AJAX after the page load. So basically what you need is an HTTP client which can run JavaScript (a web browser?) to see those addresses and the other content.
If you really want to automate the process and scrape data that is fetched through AJAX or JavaScript, you can try Selenium. Even though it was not developed for that purpose, it serves your needs.
I don't have an answer to your specific question, but I thought I'd point to Ryan Bates' Railscast episode on screen scraping with ruby: http://railscasts.com/episodes/173-screen-scraping-with-scrapi
He uses a library called scrAPI instead of ScRUBYt, since he couldn't get ScRUBYt working. scrAPI seems to be a bit easier maybe?
I hope this helps somewhat, good luck with your assignment! :)
-John
There is a good script posted on the Google group; it seems to extract the address, etc. You may want to look at the code for the script, page.txt.

How do I provide info to Google about interesting/important pages on my website?

For an example of what I mean, search on Google for "Last.fm". The first result will be www.last.fm, and 8 additional links are listed: "Listen", "Log in", "Music", "Download", "Charts", "Sign up", "Jazz music", and "Users". I looked around in their HTML but couldn't figure out where this information was supplied to Google.
Any help? Thanks :)
You can try looking at Google Webmaster Tools and provide Google with a sitemap of your site.
Write semantic markup.
Google works out the important links from that; they aren't told explicitly.
Google's documentation explains the process.
In your sitemap you can specify priority for pages.
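For example, a single entry in an XML sitemap with a priority set (the URL and values are just placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/charts</loc>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>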
The above answers are all good.
You might also try nofollowing (rel="nofollow") unimportant links on your homepage or other pages. Google will then give more weight to the followed links.
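For instance (the link target is just an illustration):

<a href="/terms" rel="nofollow">Terms of service</a>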
It used to be that you needed to be PageRank 4 or higher to get the sitelinks to show up if you were the top result (and then you could edit them via Webmaster Tools).
But it seems like Google is currently changing things around; apparently the sitelinks were not clicked enough to warrant taking up valuable space on the results page.
Use XML sitemaps. However, be warned that sitemaps must not be misused. There is a big debate on whether sitemaps are good or not.
I've run into this kind of thing before.
What I did was submit new, accurate site pages to Google.
I also took a close look at the content, as well as the meta tags, to see whether they were accurate and descriptive. In my case I reorganized the whole content.
Most importantly, I got back on track with SEO and refreshed the content frequently. Shame on me, I had not refreshed the content for a long time.
I do not know which of these did the trick, but things work pretty well now. I hope it is worthwhile for you as a reference.