Browser load HTML [duplicate] - html

I have done some web-based projects, but I have never thought much about the load and execution sequence of an ordinary web page. Now I need to know the details. It's hard to find answers on Google or SO, so I created this question.
A sample page is like this:
<html>
<head>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript">
</script>
<link rel="stylesheets" type="text/css" href="abc.css"></link>
<style>h2{font-wight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
So here are my questions:
How does this page load?
What is the sequence of the loading?
When is the JS code executed? (inline and external)
When is the CSS executed (applied)?
When does $(document).ready get executed?
Will abc.jpg be downloaded? Or does it just download kkk.png?
I have the following understanding:
The browser loads the html (DOM) at first.
The browser starts to load the external resources from top to bottom, line by line.
If a <script> is encountered, loading is blocked until the JS file has been downloaded and executed, and only then continues.
Other resources (CSS/images) are loaded in parallel and executed if needed (like CSS).
Or is it like this:
The browser parses the HTML (DOM) and collects the external resources in an array or stack-like structure. After the HTML is loaded, the browser starts to load the external resources from that structure in parallel and executes them, until all resources are loaded. Then the DOM is changed in response to the user's behavior, depending on the JS.
Can anyone give a detailed explanation about what happens once you've got the response of an HTML page? Does this vary between browsers? Any references on this question?
Thanks.
EDIT:
I did an experiment in Firefox with Firebug, and it shows the timing as in the following image:
[Firebug Net panel screenshot]
Edit: It's 2022. If you are interested in detailed coverage on the load and execution of a web page and how the browser works, you should check out https://browser.engineering/ (open sourced at https://github.com/browserengineering/book)
According to your sample,
<html>
<head>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript">
</script>
<link rel="stylesheets" type="text/css" href="abc.css"></link>
<style>h2{font-wight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
the execution flow is roughly as follows:
The HTML document gets downloaded
The parsing of the HTML document starts
HTML Parsing reaches <script src="jquery.js" ...
jquery.js is downloaded, parsed and run
HTML parsing reaches <script src="abc.js" ...
abc.js is downloaded, parsed and run
HTML parsing reaches <link href="abc.css" ...
abc.css is downloaded and parsed
HTML parsing reaches <style>...</style>
Internal CSS rules are parsed and defined
HTML parsing reaches <script>...</script>
Internal Javascript is parsed and run
HTML Parsing reaches <img src="abc.jpg" ...
abc.jpg is downloaded and displayed
HTML Parsing reaches <script src="kkk.js" ...
kkk.js is downloaded, parsed and run
Parsing of HTML document ends
Note that the download may be asynchronous and non-blocking due to behaviours of the browser. For example, in Firefox there is this setting which limits the number of simultaneous requests per domain.
Also depending on whether the component has already been cached or not, the component may not be requested again in a near-future request. If the component has been cached, the component will be loaded from the cache instead of the actual URL.
When the parsing has ended and the document is ready and loaded, the onload event is fired. Thus, when the ready handler runs, $("#img").attr("src", "kkk.png"); is executed. So:
Document is ready, onload is fired.
Javascript execution hits $("#img").attr("src", "kkk.png");
kkk.png is downloaded and loads into #img
The $(document).ready() event actually fires as soon as the DOM has been parsed and is ready, which can be before all page components (such as images) have finished loading; the window's onload event fires only after everything has loaded. Read more about it: http://docs.jquery.com/Tutorials:Introducing_$(document).ready()
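As a minimal illustration of the difference (plain DOM events, no jQuery; a sketch you can paste into any page):

<script>
// Fires once the HTML has been parsed; images may still be loading.
// This is what jQuery's $(document).ready() maps to.
document.addEventListener("DOMContentLoaded", function () {
    console.log("DOM ready");
});
// Fires only after all resources (images, CSS, frames) have loaded.
window.addEventListener("load", function () {
    console.log("everything loaded");
});
</script>

On a page with images, "DOM ready" is normally logged noticeably earlier than "everything loaded".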
Edit - This portion elaborates more on the parallel or not part:
By default, and from my current understanding, the browser usually runs each page with three components: the HTML parser, Javascript/DOM, and CSS.
The HTML parser is responsible for parsing and interpreting the markup language and thus must be able to make calls to the other 2 components.
For example, when the parser comes across a line like this:
<a href="#" onclick="someHandler()" style="color:blue">a hypertext link</a>
The parser will make 3 calls, two to Javascript and one to CSS. Firstly, the parser will create this element and register it in the DOM namespace, together with all the attributes related to this element. Secondly, the parser will call to bind the onclick event to this particular element. Lastly, it will make another call to the CSS thread to apply the CSS style to this particular element.
The execution is top-down and single-threaded. Javascript may look multi-threaded, but the fact is that Javascript is single-threaded. This is why, when loading an external Javascript file, the parsing of the main HTML page is suspended.
However, CSS files can be downloaded simultaneously, because CSS rules are continuously applied - meaning elements are always repainted with the freshest CSS rules defined - thus making CSS non-blocking.
An element is only available in the DOM after it has been parsed. Thus, when working with a specific element, the script is always placed after it, or within the window onload event.
A script like this will cause an error (with jQuery):
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
<div id="mydiv">Hello World</div>
Because when the script is parsed, #mydiv element is still not defined. Instead this would work:
<div id="mydiv">Hello World</div>
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
OR
<script type="text/javascript">/* <![CDATA[ */
$(document).ready(function(){
alert($("#mydiv").html());
});
/* ]]> */</script>
<div id="mydiv">Hello World</div>

1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser attempts to download it. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to download over 100 requests in parallel on IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and place the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM, and then rendering starts over from the top.
4) Only after the DOM is completely rendered and requests for all assets in the page are either resolved or timed out does JavaScript execute from the onload event. IE7 (and I am not sure about IE8) does not time out assets quickly if an HTTP response is not received for the asset request. This means an asset requested by JavaScript inline in the page - that is, JavaScript written into HTML tags and not contained in a function - can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps, the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML, or even from your CSS/JavaScript assets, is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible, reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging 180k of page weight from HTML alone. Many developers subscribe to the fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds, then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.

Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.
Found this on archivist.incutio.com:
http://archivist.incutio.com/viewlist/css-discuss/76444
When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).

When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.

However, the HTTP standard specifies that the browser should not make more than two concurrent requests to the same domain. So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.

The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files and five image files, all of differing sizes:

GET script1 and script2; queue request for script3 and images1-5.
script2 arrives (it's smaller than script1): GET script3, queue images1-5.
script1 arrives; GET image1, queue images2-5.
image1 arrives, GET image2, queue images3-5.
script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
image2 arrives, script3 still not here; GET image3, queue images4-5.
image3 arrives; GET image4, queue image5, script3 still on the way.
image4 arrives, GET image5;
image5 arrives.
script3 arrives.

In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.

Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.

(Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)

If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site; it covers many of these techniques.

AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag, it will request that image as soon as the img tag has been parsed - possibly before it has received the whole HTML document; that is, it could still be downloading the HTML document when that happens.
For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example, it will not attempt to download more than 8 files at once from the same server; additional requests will be queued. There are per-domain limits, per-proxy limits, and other settings, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.
The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM parsed. The load event is then fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. This is made clear in the jQuery documentation.
If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.

Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.

The chosen answer does not seem to apply to modern browsers, at least not Firefox 52. What I observed is that requests for resources like CSS and JavaScript are issued before the HTML parser reaches the element, for example:
<html>
<head>
<!-- prints the date before parsing and blocks HTML parsing -->
<script>
console.log("start: " + (new Date()).toISOString());
for(var i=0; i<1000000000; i++) {};
</script>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript"></script>
<link rel="stylesheets" type="text/css" href="abc.css"></link>
<style>h2{font-wight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
What I found is that the start times of the requests for the CSS and JavaScript resources were not blocked. It looks like Firefox has an HTML scan that identifies key resources (img resources are not included) before it starts parsing the HTML - a speculative preload scanner.
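One way to verify this yourself is the standard Resource Timing API (a minimal sketch; the resource names logged are whatever your page actually loads):

<script>
// Log when each resource request started, relative to navigation start,
// to see whether requests were issued before the parser could have
// reached the corresponding tags.
window.addEventListener("load", function () {
    performance.getEntriesByType("resource").forEach(function (entry) {
        console.log(entry.name + " started at " + entry.startTime.toFixed(1) + " ms");
    });
});
</script>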

Related

Proper order for scripts in head of my document [duplicate]

When embedding JavaScript in an HTML document, where is the proper place to put the <script> tags and included JavaScript? I seem to recall that you are not supposed to place these in the <head> section, but placing at the beginning of the <body> section is bad, too, since the JavaScript will have to be parsed before the page is rendered completely (or something like that). This seems to leave the end of the <body> section as a logical place for <script> tags.
So, where is the right place to put the <script> tags?
(This question references this question, in which it was suggested that JavaScript function calls should be moved from <a> tags to <script> tags. I'm specifically using jQuery, but more general answers are also appropriate.)
Here's what happens when a browser loads a website with a <script> tag on it:
Fetch the HTML page (e.g. index.html)
Begin parsing the HTML
The parser encounters a <script> tag referencing an external script file.
The browser requests the script file. Meanwhile, the parser blocks and stops parsing the other HTML on your page.
After some time the script is downloaded and subsequently executed.
The parser continues parsing the rest of the HTML document.
Step #4 causes a bad user experience. Your website basically stops loading until you've downloaded all scripts. If there's one thing that users hate it's waiting for a website to load.
Why does this even happen?
Any script can insert its own HTML via document.write() or other DOM manipulations. This implies that the parser has to wait until the script has been downloaded and executed before it can safely parse the rest of the document. After all, the script could have inserted its own HTML in the document.
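For example, this is exactly the kind of parser-dependent insertion that forces the blocking behaviour (a minimal illustration):

<p>one</p>
<script>
// Runs at exactly this point in the parse; the paragraph below
// does not exist yet when this line executes.
document.write("<p>two</p>");
</script>
<p>three</p>
<!-- Rendered order: one, two, three -->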
However, most JavaScript developers no longer manipulate the DOM while the document is loading. Instead, they wait until the document has been loaded before modifying it. For example:
<!-- index.html -->
<html>
<head>
<title>My Page</title>
<script src="my-script.js"></script>
</head>
<body>
<div id="user-greeting">Welcome back, user</div>
</body>
</html>
JavaScript:
// my-script.js
document.addEventListener("DOMContentLoaded", function() {
// this function runs when the DOM is ready, i.e. when the document has been parsed
document.getElementById("user-greeting").textContent = "Welcome back, Bart";
});
Because your browser does not know my-script.js isn't going to modify the document until it has been downloaded and executed, the parser stops parsing.
Antiquated recommendation
The old approach to solving this problem was to put <script> tags at the bottom of your <body>, because this ensures the parser isn't blocked until the very end.
This approach has its own problem: the browser cannot start downloading the scripts until the entire document is parsed. For larger websites with large scripts and stylesheets, being able to download the script as soon as possible is very important for performance. If your website doesn't load within 2 seconds, people will go to another website.
In an optimal solution, the browser would start downloading your scripts as soon as possible, while at the same time parsing the rest of your document.
The modern approach
Today, browsers support the async and defer attributes on scripts. These attributes tell the browser it's safe to continue parsing while the scripts are being downloaded.
async
<script src="path/to/script1.js" async></script>
<script src="path/to/script2.js" async></script>
Scripts with the async attribute are executed asynchronously. This means the script is executed as soon as it's downloaded, without blocking the browser in the meantime.
This implies that it's possible that script 2 is downloaded and executed before script 1.
According to http://caniuse.com/#feat=script-async, 97.78% of all browsers support this.
defer
<script src="path/to/script1.js" defer></script>
<script src="path/to/script2.js" defer></script>
Scripts with the defer attribute are executed in order (i.e. first script 1, then script 2). This also does not block the browser.
Unlike async scripts, defer scripts are only executed after the entire document has been loaded.
(To learn more and see some really helpful visual representations of the differences between async, defer and normal scripts check the first two links at the references section of this answer)
Conclusion
The current state-of-the-art is to put scripts in the <head> tag and use the async or defer attributes. This allows your scripts to be downloaded ASAP without blocking your browser.
The good thing is that your website should still load correctly on the 2% of browsers that do not support these attributes while speeding up the other 98%.
References
async vs defer attributes
Efficiently load JavaScript with defer and async
Remove Render-Blocking JavaScript
Async, Defer, Modules: A Visual Cheatsheet
Just before the closing body tag, as stated on Put Scripts at the Bottom:
Put Scripts at the Bottom
The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.
Non-blocking script tags can be placed just about anywhere:
<script src="script.js" async></script>
<script src="script.js" defer></script>
<script src="script.js" async defer></script>
async script will be executed asynchronously as soon as it is available
defer script is executed when the document has finished parsing
async defer script falls back to the defer behavior if async is not supported
Such scripts will be executed asynchronously/after document ready, which means you cannot do this:
<script src="jquery.js" async></script>
<script>jQuery(something);</script>
<!--
* might throw "jQuery is not defined" error
* defer will not work either
-->
Or this:
<script src="document.write(something).js" async></script>
<!--
* might issue "cannot write into document from an asynchronous script" warning
* defer will not work either
-->
Or this:
<script src="jquery.js" async></script>
<script src="jQuery(something).js" async></script>
<!--
* might throw "jQuery is not defined" error (no guarantee which script runs first)
* defer will work in sane browsers
-->
Or this:
<script src="document.getElementById(header).js" async></script>
<div id="header"></div>
<!--
* might not locate #header (script could fire before parser looks at the next line)
* defer will work in sane browsers
-->
Having said that, asynchronous scripts offer these advantages:
Parallel download of resources:
Browser can download stylesheets, images and other scripts in parallel without waiting for a script to download and execute.
Source order independence:
You can place the scripts inside head or body without worrying about blocking (useful if you are using a CMS). Execution order still matters though.
It is possible to circumvent the execution order issues by using external scripts that support callbacks. Many third party JavaScript APIs now support non-blocking execution. Here is an example of loading the Google Maps API asynchronously.
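A common shape for such callback-based loading (a sketch, not any particular API's official loader; loadScript and the URLs are illustrative):

<script>
// Inject a script without blocking the parser and run a callback
// once it has loaded, making the execution order explicit.
function loadScript(url, callback) {
    var script = document.createElement("script");
    script.src = url;
    script.async = true;
    script.onload = callback;
    document.head.appendChild(script);
}

loadScript("jquery.js", function () {
    // jQuery has definitely finished loading at this point.
    jQuery(function ($) {
        $("body").addClass("js-ready");
    });
});
</script>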
The standard advice, promoted by the Yahoo! Exceptional Performance team, is to put the <script> tags at the end of the document's <body> element so they don't block rendering of the page.
But there are some newer approaches that offer better performance, as described in this other answer of mine about the load time of the Google Analytics JavaScript file:
There are some great slides by Steve Souders (client-side performance expert) about:
Different techniques to load external JavaScript files in parallel
their effect on loading time and page rendering
what kind of "in progress" indicators the browser displays (e.g. 'loading' in the status bar, hourglass mouse cursor).
The modern approach is using ES6 'module' type scripts.
<script type="module" src="..."></script>
By default, modules are loaded asynchronously and deferred. i.e. you can place them anywhere and they will load in parallel and execute when the page finishes loading.
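For instance (a minimal sketch; app.js and its init export are hypothetical):

<script type="module">
// Deferred by default: this runs only after the document has been
// parsed, even though the tag can sit in <head>.
import { init } from "./app.js";
init();
</script>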
Further reading:
The differences between a script and a module
The execution of a module being deferred compared to a script (modules are deferred by default)
Browser Support for ES6 Modules
If you are using jQuery then put the JavaScript code wherever you find it best and use $(document).ready() to ensure that things are loaded properly before executing any functions.
On a side note: I like all my script tags in the <head> section as that seems to be the cleanest place.
<script src="myjs.js"></script>
</body>
The script tag should always be placed before the closing body tag or at the bottom of the HTML file.
The page will then load with HTML and CSS first, and the JavaScript will load later.
Check this if required:
http://stevesouders.com/hpws/rule-js-bottom.php
The best place to put the <script> tag is before the closing </body> tag, so that downloading and executing it doesn't block the browser from parsing the HTML in the document.
Loading the JavaScript files externally also has its own advantages: they will be cached by browsers, which can speed up page load times, and it separates the HTML and JavaScript code, which helps to manage the code base better.
But modern browsers also support other, more optimal ways, like async and defer, to load external JavaScript files.
Async and Defer
Normally, HTML page execution starts line by line. When an external JavaScript <script> element is encountered, HTML parsing is stopped until the JavaScript is downloaded and ready for execution. This normal page execution can be changed using the defer and async attributes.
Defer
When the defer attribute is used, the JavaScript is downloaded in parallel with HTML parsing, but it is executed only after the full HTML parsing is done.
<script src="/local-js-path/myScript.js" defer></script>
Async
When the async attribute is used, the JavaScript is downloaded as soon as the script element is encountered and, after the download, it is executed asynchronously (in parallel) with HTML parsing.
<script src="/local-js-path/myScript.js" async></script>
When to use which attribute:
If your script is independent of other scripts and is modular, use async. If you load script1 and script2 with async, both will run in parallel with HTML parsing, as soon as each is downloaded and available.
If your script depends on another script, use defer for both: when script1 and script2 are loaded in that order with defer, script1 is guaranteed to execute first, and script2 executes only after script1 has fully executed. This is a must if script2 depends on script1.
If your script is small enough and is depended on by another script of type async, use your script with no attributes and place it above all the async scripts (see the sketch after this list).
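Putting the three cases together (a sketch under the rules above; the file names are illustrative):

<head>
<!-- Small dependency with no attributes: parsed and executed
     immediately, so the async scripts below can rely on it. -->
<script src="config.js"></script>

<!-- Independent, modular script: runs whenever its download finishes. -->
<script src="analytics.js" async></script>

<!-- Dependent pair: downloaded in parallel with parsing, but executed
     in order (lib.js first, then app.js) after parsing is done. -->
<script src="lib.js" defer></script>
<script src="app.js" defer></script>
</head>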
Reference: External JavaScript JS File – Advantages, Disadvantages, Syntax, Attributes
It turns out it can go almost anywhere.
You can defer the execution with something like jQuery's ready handler, so it doesn't matter where the tag is placed (except for a small performance hit during parsing).
The most conservative (and widely accepted) answer is "at the bottom, just before the closing body tag", because then the entire DOM will have been loaded before anything can start executing.
There are dissenters, for various reasons, starting with the established practice of intentionally beginning execution with a page onload event.
It depends. If you are loading a script that's necessary to style your page or that handles actions on your page (like the click of a button), you had better place it at the top. If your styling is 100% CSS and you have fallback options for all the button actions, then you can place it at the bottom.
Or, best of all (if that's not a concern), you can show a modal loading box, place your JavaScript code at the bottom of your page, and make the box disappear when the last line of your script has loaded. This way you can keep users from triggering actions on your page before the scripts are loaded, and also avoid the improper styling.
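A minimal sketch of that loading-box idea (the element id and styling are illustrative):

<body>
<!-- Full-page overlay shown while scripts are still loading. -->
<div id="loading-overlay" style="position:fixed;top:0;left:0;width:100%;height:100%;background:#fff;">
Loading...
</div>

<!-- ... page content ... -->

<script src="app.js"></script>
<script>
// Last line of the last script: everything above has executed,
// so it is now safe to let the user interact with the page.
document.getElementById("loading-overlay").style.display = "none";
</script>
</body>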
Including scripts at the end is mainly used where the content/styles of the web page are to be shown first.
Including the scripts in the head loads them early, so they can be used before the whole web page has loaded.
If the scripts are included last, any validation will happen only after all the styles and design have loaded, which is not desirable for fast, responsive websites.
You can add JavaScript code in an HTML document by employing the dedicated HTML tag <script> that wraps around JavaScript code.
The <script> tag can be placed in the <head> section of your HTML, in the <body> section, or after the </body> close tag, depending on when you want the JavaScript to load.
Generally, JavaScript code can go inside the document <head> section in order to keep it contained and out of the main content of your HTML document.
However, if your script needs to run at a certain point within a page’s layout — like when using document.write to generate content — you should put it at the point where it should be called, usually within the <body> section.
Depending on the script and its usage, the best possible approach (in terms of page load and rendering time) may be not to use a conventional <script> tag per se, but to trigger the loading of the script dynamically and asynchronously.
There are different techniques, but the most straightforward is to use document.createElement("script") when the window.onload event is triggered. That way the script is loaded only once the page itself has rendered, thus not impacting the time the user has to wait for the page to appear.
This naturally requires that the script itself is not needed for the rendering of the page.
For more information, see the post Coupling async scripts by Steve Souders (creator of YSlow, but now at Google).
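Such a loader might look like this (a sketch of the technique just described; the file name is illustrative):

<script>
// Load a non-essential script only after the page has rendered.
window.onload = function () {
    var script = document.createElement("script");
    script.src = "non-essential.js";
    document.body.appendChild(script);
};
</script>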
A script blocks DOM loading until it has been loaded and executed.
If you place scripts at the end of <body>, all of the DOM has a chance to load and render (the page will "display" faster), and the <script> will have access to all of those DOM elements.
On the other hand, placing it right after the <body> start, or above it, will execute the script immediately, when there still aren't any DOM elements.
You are including jQuery which means you can place it wherever you wish and use .ready().
You can place most of <script> references at the end of <body>.
But if there are active components on your page which are using external scripts, then their dependency (.js files) should come before that (ideally in the head tag).
The best place to write your JavaScript code is at the end of the document, right before the closing </body> tag, so the document loads first and then the JavaScript code executes.
<script> ... your code here ... </script>
</body>
And if you write in jQuery, the following can be in the head of the document and it will execute after the document loads:
<script>
$(document).ready(function(){
// Your code here...
});
</script>
If you still care a lot about support and performance in Internet Explorer before version 10, it's best to always make your script tags the last tags of your HTML body. That way, you're certain that the rest of the DOM has been loaded and you won't block rendering.
If you don't care much about Internet Explorer before version 10 any more, you might want to put your scripts in the head of your document and use defer to ensure they only run after your DOM has been loaded (<script type="text/javascript" src="path/to/script1.js" defer></script>). If you still want your code to work in Internet Explorer before version 10, don't forget to wrap your code in a window.onload event, though!
I think it depends on the webpage's execution.
If the page you want to display cannot be displayed properly without loading JavaScript first, then you should include the JavaScript file first.
But if you can display/render the webpage without initially downloading the JavaScript file, then you should put the JavaScript at the bottom of the page. This emulates a speedy page load, and from a user's point of view it seems like the page is loading faster.
We should always put scripts before the closing body tag, except in some specific scenarios.
For example:
<html>
<body>
<p id="demo"></p>
<script>
document.getElementById("demo").innerHTML = "Hello JavaScript!";
</script>
</body>
</html>
Prefer to put it before the </body> closing tag.
Why? As per the official doc: https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/JavaScript_basics#a_hello_world!_example
Note: The reason the instructions (above) place the element near the bottom of the HTML file is that the browser reads code in the order it appears in the file.
If the JavaScript loads first and it is supposed to affect the HTML that hasn't loaded yet, there could be problems. Placing JavaScript near the bottom of an HTML page is one way to accommodate this dependency. To learn more about alternative approaches, see Script loading strategies.

How can I have an Arduino web server tell a browser client to re-load a local URL?

I need to remotely control a solenoid with an Arduino, from about 2000 feet away. So far, it works: I designed a control circuit that fires based upon a logic-level signal from pin 9.
My problem: the initial Arduino code sent up a web page over ethernet each time the form was submitted, but if the user tried to toggle the state too quickly, the transmission was interrupted and the whole system puked. It was also slow to load.
My attempted solution: I created an HTML document on a local page to do what I need done, and indeed it does: I can control the Solenoid. However once the links which control the commands are submitted, there's no redirect back to the local control page, and after much Google-fu I can't seem to implement it in this way. Is this possible? Is this a good approach?
<HTML>
<HEAD>
<TITLE>Sensor-Cleaning Control</TITLE>
</HEAD>
<BODY>
<H1>Solenoid Remote Actuation</H1>
<hr />
<br />
<A href="http://192.168.0.88/?sol_on">Turn On Solenoid</A>
<A href="http://192.168.0.88/?sol_off">Turn Off Solenoid</A><br />
<button type="button" onclick="location.href='http://192.168.0.88/?sol_on'">On</button>
<button type="button" onclick="location.href='http://192.168.0.88/?sol_off'">Off</button>
<button type="button" onclick="location.href='http://192.168.0.88/?toggle'">Toggle</button>
<br />
<p>(Check pin 9 LED ''L9'' to make sure this code is working)</p>
</BODY>
</HTML>
So if the Arduino sees "sol_on" it turns the solenoid on; "sol_off" off, and you can guess what "toggle" does. I'm pretty comfortable coding, but I know nothing of javascript, CSS, or PHP. I'm not afraid of implementing those, it just needs to be clear for me to do so. Note that there's some redundancy in the code above, I left it so that I could test multiple approaches to the UI.
If I'm understanding you correctly, your best approach would probably be to use Ajax, where your web page uses an asynchronous Javascript call to do the toggling/on/off.
Effectively, you have the web page as shown, but instead of links to the Arduino "pages", clicking each link fires off an asynchronous request to the Arudino page, leaving your current page in the browser while still prodding the URL on the Arduino web server.
If you're not that familiar with Javascript, possibly a sensible approach would be to use jQuery, a Javascript library which insulates you somewhat from differences between browsers, and encapsulates things like Ajax requests quite nicely.
Here are some simple steps:
1) Download the latest production jQuery. I'm using 2.0.3 from here for this example.
2) Put it in the same directory as your web page, so we can include it easily.
3) Convert your web page to use Ajax with jQuery. (I've also converted it to something a little closer to the current web standard, HTML5):
<!DOCTYPE html>
<html>
<head>
<title>Sensor-Cleaning Control</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<!-- Include jQuery so we can use its simple goodness -->
<script src="jquery-2.0.3.min.js"></script>
<script>
/* This function will be called by the onclick handlers of the buttons */
function solenoid(url) {
// Use jQuery's Ajax functionality to prod the given URL without
// reloading this page or visiting another one:
$.ajax(url);
}
</script>
</head>
<body>
<h1>Solenoid Remote Actuation</h1>
<button type="button" onclick="solenoid('http://192.168.0.88/?sol_on');">On</button>
<button type="button" onclick="solenoid('http://192.168.0.88/?sol_off');">Off</button>
<button type="button" onclick="solenoid('http://192.168.0.88/?toggle');">Toggle</button>
<p>(Check pin 9 LED ''L9'' to make sure this code is working)</p>
</body>
</html>
The main things to note are:
1) The inclusion of the jQuery library, so we can use its ajax() call and fire off http requests in the background with ease no matter which browser we're on.
2) I've replaced your existing onclick events with a call to a new function called solenoid, that takes a URL as a parameter.
3) The solenoid function, defined in the <script> at the top, takes the URL that was passed in and uses jQuery's ajax() call to poke the given URL. This happens in the "background", i.e. without any page (re)load.
From here, you could expand this in all sorts of ways. This code could, for example, read a short response from the Arduino and handle it in the background, too, perhaps indicating the current state of the solenoid.
(Given the simplicity of what I'm doing here, I'm sure this could be done in a more "lightweight" way in pure Javascript without jQuery, but that would have involved a chunk more slightly scary code in this example to ensure the Ajax stuff worked in many different browsers -- there's some browser inconsistency in how the underlying object (an XMLHttpRequest) used by Ajax is created. I figured for a Javascript beginner, simpler was probably better...)
Well, I don't know your Arduino-based HTTP server, but it certainly should reply to all requests, either with a 200 HTTP status, which means "OK", or with an error status like 400, which means "Bad Request". While your web application is waiting for a response, you can block (or hide) the page (or some elements) so the user is unable to start a click-frenzy and mess everything up while he should be waiting for the server's (Arduino's) response.
You can use an Ajax call using jQuery, so you will be able to "do something" after calling the URL with your Arduino code, in case of either success or failure.
Please see the example below:
<html>
<head>
<script src="../scripts/jquery-2.0.3.min.js"></script>
<script>
function callArduinoCode(code) {
jQuery.ajax({
type: "GET",
url: "http://192.168.0.88/?" + code,
beforeSend: function() {
// Hide links, show loading message...
$("#controls").css("display","none");
$("#loading").css("display","block");
},
success: function( data ){
// Hide loading message, show links again...
$("#controls").css("display","block");
$("#loading").css("display","none");
},
error: function (xhr, ajaxOptions, thrownError){
alert("Failed, HTTP Status was " + xhr.status);
// Hide loading message, show links again...
$("#controls").css("display","block");
$("#loading").css("display","none");
}
});
return false;
}
</script>
</head>
<body>
<div id="controls">
<!-- your links here -->
<a onClick="callArduinoCode('sol_on'); return false;">Turn Solenoid ON</a>
<a onClick="callArduinoCode('sol_off'); return false;">Turn Solenoid OFF</a>
</div>
<div id="loading" style="display:none;">
Loading, please wait...
</div>
</body>
</html>
You can get JQuery at jquery.com
Good Luck!
No need to add any library/framework to accomplish what you need; you can even achieve it without JavaScript at all. Simply add an invisible IFRAME in your HTML file with its name attribute set. In the following example we'll use "Arduino" as the IFRAME's name, but you can use any valid name you'd like.
<IFRAME name="Arduino" style="display:none"></IFRAME>
Next, add a target attribute on your link elements (the 'A' tags) with the value set to the IFRAME's name, i.e.:
<A href="http://192.168.0.88/?sol_on" target="Arduino">Turn On Solenoid</A>
When you click on the link, the request is sent to your Arduino and the resulting response is directed into the invisible IFRAME, without navigating away from the currently viewed page.
For the button element, prefix location.href in onclick handler with the IFRAME's name:
<BUTTON onclick="Arduino.location.href='//192.168.0.88?sol_on';">On</BUTTON>

Calling a URL in non-blocking manner on a webpage

Story time
I have a purchased/rented typeface whose license asks me to query a unique counter on their domain every time the typeface is shown. Sadly, their suggestion is to call it in the CSS file with @import, which blocks rendering for the duration of the call. It is also weird, since according to the license they wish to track individual page views, yet if the CSS file in question is cached, won't that prevent the @import from being called again until the cache clears?
In any case, I removed the @import call, but then began to ponder what exactly I should replace it with. What tag would give me a non-blocking call that would work universally across browsers, regardless of disabled features? A link with rel=prefetch? That's HTML5, and it didn't work in IE7 when I tested it. It would also feel awkward, since it implies the resource should be cached, yet the response contains a No-Cache directive. A script tag with defer and async attributes at the end of the page? Maybe, but what if someone has disabled scripting? I could add a noscript tag and then an image tag inside it as a fallback. But! Will the image display as broken in some browsers, since the image contents are an empty string? And what if someone has scripting AND images disabled? Oh no! The world must be a pretty bleak place for them, I must admit. Oh, oh! What about embed/object? Now that's just wrong, stop touching me funny.
I ended up going with just a plain image tag for now, but what would be the magical combination that would cover the widest range of edge cases? I could add the script tag for example to support those without image loading on.
My intrigue here is purely scientifical so I'm not really looking for alternative typeface providers or to discuss how unlikely previously described situations are. Also, why they provide me with the actual font file to serve from my own server and then trust me to call the counter honestly is beyond me.
Code
Let's imagine my unique font counter is located at http://font.foo/bar and the font.foo server is acting slow.
Starting point
// fonts.css
@import url('font.foo/bar');
@font-face { ... }
// index.html
<link rel="stylesheet" href="fonts.css">
Separate link tag
// Problem: Blocks rendering
// index.html
<link rel="stylesheet" href="fonts.css">
<link rel="stylesheet" href="font.foo/bar">
rel=prefetch
// Problem: Won't load in IE7, semantically awkward
// index.html
<link rel="prefetch" href="font.foo/bar">
Deferred async script load
// Problem: Won't work when user has disabled scripting
// index.html
<script src="font.foo/bar" async defer>
</body>
Script tag with added image fallback
// Problem: Won't work when user has disabled scripting AND images
// index.html
<script src="font.foo/bar" async defer>
<noscript><img src="font.foo/bar" alt=""></noscript>
</body>
Maybe mix it this way?
Keep the script option, but use a link element instead of an img as the fallback:
<script src="font.foo/bar" async defer></script>
<noscript><link rel="stylesheet" href="font.foo/bar"></noscript>
</body>

"async" attribute for the <script>

The documentation says about the async attribute: "Set this Boolean attribute to indicate that the browser should, if possible, execute the script asynchronously." I thought that even without this tag all external scripts are executed asynchronously. Was I wrong?
If I have declared several external scripts, will they be downloaded simultaneously or one by one? In which order they will be executed?
<script type="text/javascript" src="js/1.js"></script>
<script type="text/javascript" src="js/2.js"></script>
<script type="text/javascript" src="js/3.js"></script>
Yes. Scripts are, by default, blocking. HTML parsing will stop until the script has finished executing (noting that some function calls made by the script might be handled asynchronously and those won't block further rendering).
If this wasn't the case, then:
<script src="foo.js"></script>
<p>Hello
and
document.write("foo!");
might insert foo! into the HTML some time after Hello because it would take time for the script to download before being executed.
A script element needs to be resolved immediately. If it's an inline script everything is fine, but if it's an external resource, it needs to be downloaded first.
While downloading, it blocks the page rendering and possibly other downloads. That's why one should put script blocks at the end of the body tag, to block as few other processes as possible.
Whether your 3 scripts are downloaded in parallel or one after another depends on the browser. Modern browsers make several HTTP requests at the same time and thus have better page rendering times. However, independent of which script finishes loading first, the order of execution is always fixed: the scripts are executed in the order they appear in your HTML markup (in your example: 1.js -> 2.js -> 3.js). So a very small .js file that appears last in the source might be available first, but has to wait for all the source files that appear before it to be downloaded and executed.
That's why they introduced async, which basically tells the browser: "The order of execution doesn't matter; just download it and execute it when it's finished downloading and you have some spare time."
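To make the difference concrete (a sketch; assume 3.js is tiny and finishes downloading first):

<!-- Without async: downloads may overlap, but execution order is
     fixed - always 1.js, then 2.js, then 3.js. -->
<script type="text/javascript" src="js/1.js"></script>
<script type="text/javascript" src="js/2.js"></script>
<script type="text/javascript" src="js/3.js"></script>

<!-- With async: each script runs as soon as its own download
     finishes, so 3.js may well execute before 1.js. -->
<script type="text/javascript" src="js/1.js" async></script>
<script type="text/javascript" src="js/2.js" async></script>
<script type="text/javascript" src="js/3.js" async></script>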

Prevent Browser Timeout When Receiving Large Amounts Of Data

I have an administrative interface used by one of my clients; when a process is initiated through the web interface, the server processes a massive number of records on the backend. The web page then provides updates using just simple HTML (no AJAX or other JS) and looks like this:
Processing All Records:
000000 to 000099: ....................................................................................................
000100 to 000199: ....................................................................................................
//... More Records Processed Here ...
009700 to 009799: ....................................................................................................
009800 to 009899: ..............................
Data processing complete: 9830 total records processed
This processing can take upwards of an hour. In the browsers I use (IE and Chrome), the page continues receiving the updates and displaying them as they are processed. The problem is that, for my client (using multiple browsers and computers), the screen stops displaying the updates after a few minutes, and even if he waits until the processing is supposed to be complete (leaving his browser open overnight), he never receives the "Data processing complete" notification. Basically, his browser simply times out, even though the processing continues on the server and the process does complete.
The HTML output (using classic ASP... I know, ugg) is done with a simple Response.Write(".") after each record is processed. Other than standard HTML <head></head> <body></body> tags, no other formatting is applied, and all the Response.Writes are done within the body (so the browser isn't waiting for any closing tag other than the body tag).
I realize that I could rewrite the code and use some fancy JS and AJAX calls for updates, or move to an asynchronous service with email updates, but I want to find the simplest solution that doesn't require me to change the code much.
So, my question is, what setting in his browser/network would cause his browser to time out while all the browsers I tested continue to receive the data?
If it helps, here is the raw HTML that is output:
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<LINK rel="stylesheet" type="text/css" href="\css\normal.css">
</head>
<body>
Initiating data retreival at: 8/23/2012 11:02:59 AM<br>
<hr>
Processing All Records:
<br>000000 to 000099:....................................................................................................
<br>000100 to 000199:....................................................................................................
<br>000200 to 000299:....................................................................................................
<!-- More Lines Here -->
<br>009800 to 009899:..............................<hr>
Data import complete: 9830 total records processed<br>
</body>
</html>
There is no reliable, well-defined way of doing this because it is completely up to the browser, network connection, router, operating system, ... Instead of hoping you strike it lucky, take control of the circumstances.
The easiest way is to refactor out one script that does the processing for a small batch of items and then keep fetching that script with AJAX (keeping an offset) until it reports that it has completed. This is more robust and well-defined than hoping to find some magic technique that, for all you know, might change in the future.
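A sketch of that approach (the process.asp endpoint, its offset parameter, its JSON response shape, and the #status element are all illustrative assumptions):

<div id="status"></div>
<script>
// Drive the server-side processing in small batches from the client.
// Each request processes one batch and reports how far it got.
function processBatch(offset) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "process.asp?offset=" + offset);
    xhr.onload = function () {
        var result = JSON.parse(xhr.responseText); // e.g. {"done":false,"next":100}
        document.getElementById("status").textContent =
            "Processed up to record " + result.next;
        if (!result.done) {
            processBatch(result.next); // kick off the next batch
        } else {
            document.getElementById("status").textContent =
                "Data processing complete";
        }
    };
    xhr.send();
}
processBatch(0);
</script>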
Browsers depend on RAM and shared memory for caching. Try limiting your output to a fixed width and height, table-based:
<table style='table-layout:fixed'>
This allows the browser to render the table without trying to recompute the column widths on each new row addition.
Also, GTK-based browsers may help for your purpose, for example the Juniper browser, NetFront, etc.