I prefer to keep my HTML readable, so I use empty lines here and there, for example:
<div>

    <!-- Seasons -->
    <table class="giantTable">
        ...
    </table>

    <!-- Prices -->
    <table class="giantTable">
        ...
    </table>

</div>
Today my new workmate told me that this is bad for SEO, because Google would need more time to parse the site and might abort with a timeout. I've never heard of this; do I really have to write spaghetti code again?
Google does use page-load and rendering time as one metric (of over 200!) for determining your page rank, so to an extent your colleague is right (although timeouts are not the issue; he is wrong on that).
However, you can have the best of both worlds :) Write your HTML as you normally do, and then minify it before deployment.
Note that there are a number of tools for analysing your site performance (both online, and as browser plugins - e.g. YSlow), and it's a very sensible thing to do. You can have numerous bottlenecks in your web-site, and can often get some quick wins that significantly improve the responsiveness of your site.
As always with optimisation though - measure first! Don't just randomly implement supposed improvements until you have measured the bottlenecks, and then confirmed the improvement is an improvement.
The sentiment isn't entirely off. Google does now consider the speed of your pages as a factor, and excessive white-space in code can increase payload size. Google themselves recommend minifying your code ( https://developers.google.com/speed/docs/best-practices/payload#MinifyHTML ), and this can be done without too much overhead on the web server.
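To make the minification idea concrete, here is a deliberately naive whitespace-collapsing sketch in Python. It is an illustration only: real minifiers (like the ones Google's guide points to) also handle `<pre>`, `<textarea>`, and inline `<script>` content, which this sketch ignores.

```python
import re

def naive_minify_html(html: str) -> str:
    """Collapse whitespace in an HTML string.

    Toy sketch only: it does NOT respect <pre>, <textarea>,
    or inline <script> blocks, so don't run it on real pages as-is.
    """
    # Drop whitespace that sits purely between two tags.
    html = re.sub(r">\s+<", "><", html)
    # Collapse any remaining runs of whitespace to a single space.
    html = re.sub(r"\s{2,}", " ", html)
    return html.strip()

page = """
<div>
    <table class="giantTable">
        <tr><td>...</td></tr>
    </table>
</div>
"""
print(naive_minify_html(page))
# → <div><table class="giantTable"><tr><td>...</td></tr></table></div>
```

The point is that the readable, blank-line-rich source stays in your repository; only the deployed output is collapsed.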
I find the biggest culprit in dynamic websites comes from using loads of space in the middle of for/while loops, so cutting down on that can make a big difference. Also, try using tabs instead of spaces and you'll cut your white-space big-time.
Even if this were true (which I've never heard before, though RB's answer above makes a good point), there are many other things that contribute to your page ranking far more than that would.
Google made an awesome SEO guide which I always refer to; it's really pretty and easy to read as well, what with all the pictures of robots. It's definitely worth checking out - Google SEO Guide
It isn't bad at all; search engines ignore white space. Otherwise everyone would be trying to write code all on one line.
http://jamesmartell.com/matt-cutts/is-excessive-whitespace-in-the-html-source-bad/
This document describes how to do SEO for Google (it is quite extensive). A first glance over all the pages doesn't say anything about compressing your HTML.
Is there a hard (or somewhat-hard) limit on the number of HTML elements (no css, no scripts) a webpage can load on a browser?
Kind of a dumb set of questions. Good Code influences speed a lot. Better hardware lowers the layman's ability to distinguish Good from Bad.
A case could be made that we've hit the point in hardware where only extreme cases of Bad code are distinguishable, but that's a discussion for some other time.
Write Good Code.
Don't feed the Dissonance.
As for your specific questions...
1) Based on the memory available, yes. As to how many, "yes" is also a good answer, because it's dependent on a lot of things but usually is higher than you'd reasonably hit.
2) Pulling this out of my ass, but generally, the reason your page is slow is (in order of likelihood):
Total File Size is Too Damn Big for Available Bandwidth
JS hooks on spammy events (such as scroll; never hook scroll directly)
JS that causes repaint and reflow
Rendering in general
General use of JS
JS fetch/parse delay (async/defer and bottom-of-page-load are your friends)
CSS parsing & application
HTML parsing (assuming not well-formed)
HTML parsing (assuming well-formed)
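The "never hook scroll directly" advice boils down to throttling: coalesce a noisy event stream so the expensive handler runs at most once per interval. A language-agnostic sketch in Python (the same pattern applies to a JS scroll handler, typically via requestAnimationFrame):

```python
import time

def throttle(fn, interval=0.1):
    """Return a wrapper that invokes fn at most once per `interval` seconds.

    Simplified sketch: calls arriving inside the window are dropped
    (a real scroll handler would usually also schedule one trailing call).
    """
    last = [0.0]  # timestamp of the last allowed call
    def wrapper(*args, **kwargs):
        now = time.monotonic()
        if now - last[0] >= interval:
            last[0] = now
            return fn(*args, **kwargs)
        return None
    return wrapper

calls = []
noisy = throttle(calls.append, interval=0.05)
for i in range(5):
    noisy(i)       # only the first call in the window gets through
print(calls)       # → [0]
```

The browser can fire scroll dozens of times per frame; gating the handler like this keeps the per-event work trivial.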
3) Inline styles really don't add much in the grand scheme. Their scope is absurdly small. They honestly contribute more to file size than to parse/apply.
4) Not by much. CSS is declarative and static. Microsoft tried dynamic for a while but very quickly realized how bad of an idea that was and killed it. JS is the real bottleneck, not CSS.
5) Depends on hardware and usage. 10 mil blank <meta /> tags in the head is probably going to do very little. 10 mil <div> tags with a ton of supporting CSS and JS, on the other hand, will chug real bad if the machine isn't beefy (and 64bit).
One of the pages at work that is in desperate need of a refactor currently has around 16,800 elements, as well as a lot of JS. The page chugs fairly badly at the start, but is reasonably responsive once it's finished loading. Meanwhile, another is sitting around 89.7k elements. In spite of being orders of magnitude larger, it doesn't have a ton of JS backing it up and many of the elements are (usually) display: none, so it's actually a much faster and more responsive page (until you hit Ctrl+F).
6) Not significantly. Some stuff can chug extra hard on one browser rather than another, but that's usually due to an implementation bug in the browser being prodded by some really odd combination or usage of features the page hits.
7) Better hardware can handle more complex stuff faster, assuming it can be taken advantage of. A 128-core computer can't magically multithread your serial O((n^7)!) script- though the JIT optimizer sure as hell might try.
8) Server hardware does very little to affect client-side speed unless it's swamped and thus running into the server-side bandwidth limit. Server hardware does do a lot to affect the speed at which it can produce pages with a heavy server-side backend to them. Again, having the server do something that's O((n^7)!) will be slower the worse its hardware is, and that gets compounded significantly by the number of users requesting that operation (assuming you can't/don't lock and return a cached copy of the first request's result).
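The "return a cached copy of the first request's result" idea in 8) is plain memoization. A minimal server-side sketch in Python (the `expensive_report` name and workload are made up for illustration; note that `lru_cache` alone doesn't prevent two concurrent first requests from both computing, so a real server would add a lock):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def expensive_report(n: int) -> int:
    """Stand-in for an expensive server-side computation.

    The first call for a given n does the work; repeat calls
    with the same argument are served straight from the cache.
    """
    print("computing...")  # runs once per distinct n
    return sum(i * i for i in range(n))

expensive_report(1000)   # computes
expensive_report(1000)   # served from cache, no recompute
```

This is why a heavy backend operation gets cheaper per user as request volume grows, provided the result is shareable.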
A bit of further reading on Repaints and Reflows... The title of the article itself is absurdly bad, but the content is good. In-depth explanations (and even further readings) on repaints and reflows, which, while not quite the same today (8 years later), are still a key concept in page efficiency. Most of these are also still applicable due to how the browser has to render some elements (such as tables without table-layout: fixed;).
My girlfriend is making a website for her exam.
She is not good at HTML - at all! (Me neither.) But to make it easy for her, I told her to use frames. They're easy and manageable.
But the problem is that the website has to live up to the HTML5 standard. That's a requirement. I don't think frames do, because they're deprecated. Am I right or wrong?
So... what to do? It has to be as simple as possible. I don't think the other solutions I could find are something she can do herself :( Any ideas?
(Sorry, if my english sucks :) )
You should definitely try not to use deprecated elements. They will behave inconsistently, and probably won't work in the future. There's also a very good reason why frames are deprecated.
You probably need to rethink how you're structuring your data. An HTML page should be as simple and to-the-point as possible. If you have to pull in a significant amount of content from other web-pages using frames, you're doing it wrong. In most cases it's better to simply link to the extra content.
If you're trying to use frames to pull in the <header> or the <nav> so you don't have to copy them across multiple pages: although I applaud you for trying to be DRY, this is the wrong way to do it. Simply copying these sections into all your documents is a better solution than using frames.
Ideally though, you'd use a server-side templating system of some sort. The simplest one (depending on your setup) may be Server-Side Includes.
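For example, with Server-Side Includes enabled (e.g. Apache's mod_include, and assuming a `header.html` file exists alongside the page), a shared header is pulled in with a single directive:

```html
<!--#include virtual="header.html" -->
```

The server splices the file in before the page is sent, so the browser never sees a frame at all.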
Having said all that, if you really need frames, the <iframe> element is perfectly valid in HTML5 and may help you out.
Don't use frames; they're very old and not needed at all.
Make sure all your style stuff is made in CSS and it's loaded through a CSS file.
Then make each html page with the information needed in it. Use links for navigation to another page.
With the external CSS you can change the look of your site in one file (the CSS file), and it will automatically update for each HTML page.
You are correct in assuming that frames do not conform to the HTML5 draft (to the extent that it makes sense to speak of conformance to a draft). HTML5 is not a standard, though it may one day become W3C Recommendation. It does not use the term “deprecated” but “obsolete” and “non-conforming”, but in any case, frameset and frame elements do not conform.
The iframe element (“inline frame”) conforms, however. Using it instead of “normal” (old-style) frames is clumsy and limited, but possible to some extent.
I think this answers the specific, on-topic questions asked. The rest is mainly opinion-based and hence off-topic at SO.
P.S. If “living up to the standard HTML5” is a requirement, then the teachers would need a crash course on the basics of the HTML5 process. The requirement means that a page that is “standard HTML5” in the morning may become non-conforming before lunch. After all, HTML5 is a draft that may change at any moment without prior notice, and it says itself: “It is inappropriate to cite this document as other than work in progress.”
Upon visiting an older site it is common to find deprecated tags and attributes. Yet as they are deprecated and not obsolete, they are still valid and supported. A 'good' programmer tends to stay away from these if at all possible, but just how bad is it to use them?
Upon looking up information about deprecated elements, I find the most common information supplied is:
Browsers should continue to support deprecated tags and attributes,
but eventually these tags are likely to become obsolete and so future
support cannot be guaranteed.
Obviously if something breaks you should change it. Yet should we actively change instances of deprecated elements in code we had worked on previously? Either a personal site, or a huge site with a huge view rate, just how important is it to stay as up-to-date on accepted elements?
How much warning after a tag or attribute becomes deprecated is given when it is decided it will become obsolete? Any? What about other web formats?
On any site, be it an overhaul, or just noticing a line of deprecated code, should it all be actively removed? Or is it worth the saved time and energy to just wait until it is truly dropped and change it all at once?
Should you just be lazy (like I wish I could be), or is it worth the effort to be that up-to-date?
It's always important to be lazy, that's why we have tags marked as deprecated instead of just removed.
I always tell developers to be lazy, but only lazy enough that it doesn't stop you from doing good things. There's such a thing as too lazy, and such a thing as not lazy enough. DRY is an example of good lazy. Relying on deprecated tags is an example of bad lazy. Using divs for table layout is an example of inefficient overwork.
On any site, be it an overhaul, or just noticing a line of deprecated code, should it all be actively removed? Or is it worth the saved time and energy to just wait until it is truly dropped and change it all at once?
If you're doing maintenance, and it's a bug report, it's always important to fix it, obviously, but in the case of being proactive, the question becomes, what's the most important thing for you to do right now. If you have some 20% time and you want to fix it, then fix it. If you have tasks that need to be done, then you need to focus on those first.
Either a personal site, or a huge site with a huge view rate, just how important is it to stay as up-to-date on accepted elements?
This is really what you're asking about. On your own personal site, that's definitely 20% time, so do that as you want.
On a massive site with lots of throughput, you need to try and keep it up to date so it continues to offer good benefit to the users. If the site suddenly stops working for lots of users, they're going to stop coming. Then a massive site with lots of throughput turns into a dead site taking up space on the internet.
Is there a way to test CSS and HTML? For instance: sometimes some of the notices get affected by some CSS changes. I don't want to be testing all the notices by hand every time I make a change.
Thanks
It's very difficult to automate testing of layout. But it's not too difficult to drastically cut down the time and effort involved, so that you can do it manually, very quickly.
You could try Blink Testing.
I've heard of it used for websites like this.
Write a script that walks through your website, visiting as many pages as you can think of.
At each page, take a screen shot.
Combine all the screen shots into a 'movie' with just a second or two for each screenshot.
Now, each day you can 'play' the movie and watch it for any issues.
You could even extend bcarlso's approach but replace the MD5 check with a visual check. Each page gets displayed for 1 second - first the known good, then the new. You could alternate them a few times so any obvious errors will appear as a flicker.
A website with hundreds of pages can be checked like this in a matter of a few minutes. You may not think this will provide enough time to find issues, but it is remarkably efficient in identifying obvious problems with your website.
Any pages that have major layout problems will pop out at you as they don't match the same pattern as all the other pages.
I am assuming that the issue that you're trying to test would be that the CSS changed in some incompatible way with the layout causing, for example, text to be truncated or otherwise visually "broken". If that's the case, then I would say that there isn't a good way to test the aesthetics of a page at this time. One of the primary benefits of TDD and CI is quick feedback so that you know something is broken before it gets to production. Not knowing much context around your environment and how those changes make it into your app it's hard to suggest solutions, but here is an example of a potential non-traditional option:
Put a commit hook into your repository that lets everyone on the team know via an e-mail when someone changes some CSS, preferably with a diff of the CSS. This would give the team a heads-up to keep an eye out for layout problems.
We started an experiment to use WATIR to walk some of the main screens in the app and take a picture using ImageMagik (essentially a screenshot) and store it in a "Last Known Good" folder. Every day re-run the script on a clean install of the app and data and place the images in a "Current" folder. At the end of each run use an MD5 to compare the images and alert on changes. Have the QA team review a list of flagged screenshots and if the change was acceptable (for example, a field was added as part of a feature) then copy "Current" to "Last Known Good". Unfortunately we didn't get our experiment finished so I don't know if it will work well. I'm concerned about the brittleness of screenshots as "assertions".
Hope that helps!
I believe Selenium can test your frontends for you. Specifically for browser compatibility testing, take a look at Selenium RC.
If you'd simply like to make sure your content ends up in the right place, you can create a simple testing suite that makes GET requests to your website. When you receive the content you can run it through a validation template like XSLT. Well-formed HTML can usually be matched against XSLTs or XSDs. It is not ideal, but if you're only worried about the structure of your website and not the styling, you'll be able to achieve it this way.
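A minimal well-formedness gate along those lines can be sketched in Python. Note the assumption: this only works because strict XHTML is XML; ordinary HTML with unclosed tags needs a forgiving parser (e.g. html5lib) instead.

```python
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    """True if the markup parses as XML (e.g. strict XHTML)."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<div><p>ok</p></div>"))   # → True
print(is_well_formed("<div><p>broken</div>"))   # → False
```

A CI job can fetch each page and fail the build when this returns False, which catches structural breakage without saying anything about styling.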
A change to CSS should not affect the behaviour of a page, only its appearance, so I'm not sure that Selenium would be much help for this.
I'm going to take a guess that you are trying to avoid problems such as elements being misplaced on the page so that they are not readable. If so, you would probably need some kind of OCR-based tool, but I don't know one off-hand to suggest.
However, it may be to better to invest your effort in preventing this kind of problem in the first place. If your layout is easily broken, maybe you need to refactor your CSS to something simpler.
When you learn HTML and so forth nowadays, the mantra is always "Clean code = better code".
So why do sites like Mobile Me and Google and Facebook all use huge amounts of tables and other not-semantically correct code?
Thanks!
Because people still use IE6, unfortunately, and it's so incredibly bad at CSS as to make it almost worthless for CSS selectors of any sophistication. Until IE6 is gone and dead dead dead in the cold ground, you're still going to see a lot of this.
If you could see what SharePoint generates, you would probably go into seizures.
Clean code is better, yes.
But working code is much much better )
Because sometimes that's the path of least resistance. It's not always about being ideologically pure, it's about being pragmatic and getting the job done in this crazy, multi-browser, multi-platform world.
Because it's easier.
While the purist in me will also strive for semantic tags and external CSS for layout, the pragmatist in me needs to get this site up by 6pm (so I can go home to my wife and a nice warm dinner!) and there's just this little problem with [insert browser here*] that could easily be solved with a bit of conditional CSS, or a table or something.
There are other reasons for high-traffic sites like Google and Facebook to use inline CSS and JS: bandwidth. Every external file you reference is one extra round-trip to the server. Of course, that doesn't really explain the style="xxx" attributes as opposed to just inline <style> tags, but it still reduces the size of the page. Also, while caching does tend to reduce the number of trips, a significant number of requests are still made with a "clean" cache, and you want to optimise for that case too.
Not always IE (but it mostly is)
I had an affiliate marketing client the other day who wanted me to make him a web template where he could go in and edit it with Adobe Dreamweaver (some app I don't use because I'm a Linux user). So, being the web-savvy guy I am, I did it in XHTML with cross-platform CSS that I have learned over the years, using DIVs primarily, and only using TABLES for form field alignment simply because of the 80/20 rule. It was lean code in as few lines as possible, loaded super fast, and worked on all browsers from IE6 on up.
But then I handed it off to him, and he was visibly frustrated and asked me to make changes. He didn't like the CSS because he couldn't cut and paste sections to another page and have the styling carry over. Instead, he wanted me to switch everything to inline styles. Next, he couldn't edit the floating DIVs very well, and would undo my cross-platform work I had done, so he wanted it reverted back to tables. The end result was XHTML + CSS for the shell of the page that centers everything into the middle and adds the fancy graphics around the page. Then, I used PHP "include" separation for headers and footers. The final part was the middle of the page, and that was his domain. I had to compose it in TABLEs with inline styles. At that point, he was happy and I had a compromise.
So, keep this in mind that there are some cases where you have to use TABLE formatting and inline styles because that's all the client really knows how to manipulate.