I have an issue with the path definitions for the libraries and models used in an HTML file that uses WebGL. The HTML file can be found here; it is example code from a WebGL2 book.
The HTML file itself sits locally in the following directory on my computer:
C:\Users\bob\Desktop\Book\ch01\ch01_04_showroom.html
The libraries and other sources are located in
C:\Users\bob\Desktop\Book
├───ch01
│   └───ch01_04_showroom.html
├───ch02
└───common
    ├───images
    │   └───cubemap
    ├───js
    ├───lib
    └───models
        ├───audi-r8
        ├───bmw-i8
        ├───ford-mustang
        ├───geometries
        ├───lamborghini-gallardo
        └───nissan-gtr
The parts of the code that I have issues with are the following, from ch01_04_showroom.html:
<html>
<head>
  <title>Real-Time 3D Graphics with WebGL2</title>
  <link rel="shortcut icon" type="image/png" href="/common/images/favicon.png" />

  <!-- libraries -->
  <link rel="stylesheet" href="/common/lib/normalize.css">
  <script type="text/javascript" src="/common/lib/dat.gui.js"></script>
  <script type="text/javascript" src="/common/lib/gl-matrix.js"></script>

  <!-- modules -->
  <script type="text/javascript" src="/common/js/utils.js"></script>
  <script type="text/javascript" src="/common/js/EventEmitter.js"></script>
  <script type="text/javascript" src="/common/js/Camera.js"></script>
  ...
  <script type="text/javascript">
    'use strict';

    // ...
    function configure() {
      carModelData = {
        // This is the number of parts to load for this particular model
        partsCount: 178,
        // The path to the model (which I have issue with on my computer)
        path: '/common/models/nissan-gtr/part'
      };
    }
    ...
I have trouble defining the paths for the href and src attributes, and also the one in the JavaScript function:
path: '/common/models/nissan-gtr/part'
If I use the code exactly as posted here, nothing is displayed in Google Chrome, just an empty page.
So, I have changed paths from
/common
to relative paths:
./../common
but still, I am not able to load the HTML correctly. I see the gridded floor with an incomplete menu, but the car is not displayed, as in the following snapshot.
It's a security restriction: Chrome doesn't let you load local files through file:///... URLs for security reasons.
The purpose of this restriction is to prevent external resources from gaining access to your system, which could allow them to modify or steal files.
Solutions
The best solution is to run a little HTTP server locally; you can follow the steps from this SO answer or this one.
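For instance, here is a minimal static file server in Node.js (just a sketch, assuming Node.js is installed; the port number and MIME table are arbitrary choices). Save it as server.js in C:\Users\bob\Desktop\Book, run node server.js, and open http://localhost:8080/ch01/ch01_04_showroom.html so that the absolute /common/... paths resolve against the Book folder:

const http = require('http');
const fs = require('fs');
const path = require('path');

// serve files relative to the folder the server was started in
const root = process.cwd();

const mimeTypes = {
  '.html': 'text/html',
  '.js': 'text/javascript',
  '.css': 'text/css',
  '.png': 'image/png',
  '.jpg': 'image/jpeg',
  '.json': 'application/json'
};

http.createServer(function (req, res) {
  // map the URL path onto the local file system
  const filePath = path.join(root, decodeURIComponent(req.url.split('?')[0]));
  fs.readFile(filePath, function (err, data) {
    if (err) {
      res.writeHead(404);
      res.end('Not found');
      return;
    }
    res.writeHead(200, { 'Content-Type': mimeTypes[path.extname(filePath)] || 'application/octet-stream' });
    res.end(data);
  });
}).listen(8080);

Alternatively, python -m http.server 8080 run from the same folder does the same job with no code at all.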
Or, since maybe others will bring it up, I'll mention it: you can also launch Google Chrome from the command line with the flag --allow-file-access-from-files, but this isn't recommended, since Chrome blocks this behaviour by default precisely to protect you.
Is there a way to display a map for a given area completely offline using HTML and JavaScript? I am looking for a mobile-friendly (read Cordova-enabled) solution.
There is an elegant solution for this problem in this blog post. I have compiled a full code example from it. Here are the steps:
1. Create map tiles
download Mobile Atlas Creator
create a new atlas with OSMdroid ZIP format
make map and zoom selection, add your selection to the atlas
click "Create atlas"
unzip the atlas file
your tiles have this format: {atlas_name}/{z}/{x}/{y}.png ({z} stands for "zoom")
2. Set up HTML and JavaScript
copy your atlas folder to your HTML root
download leaflet.js and leaflet.css and copy them to html root
create index.html with the code below
adjust starting coordinates and zoom on the line where var mymap is defined
change atlasName to your folder name, set your desired maxZoom
3. You are all set! Enjoy!
run index.html in your browser
<!DOCTYPE html>
<html>
<head>
  <title>Leaflet offline map</title>
  <link rel="stylesheet" charset="utf-8" href="leaflet.css" />
  <script type="text/javascript" charset="utf-8" src="leaflet.js"></script>
  <script>
    function onLoad() {
      var mymap = L.map('mapid').setView([50.08748, 14.42132], 16);
      L.tileLayer('atlasName/{z}/{x}/{y}.png',
        { maxZoom: 16 }).addTo(mymap);
    }
  </script>
</head>
<body onload="onLoad();">
  <div id="mapid" style="height: 500px;"></div>
</body>
</html>
Alternatively, you can do these steps one by one:
Download the mbtiles file of the specific area from https://openmaptiles.org/
Set up the map server with Docker.
Implement the web pages with Leaflet.js and use the map server's IP address in your code, as sketched below.
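For the last step, the Leaflet side might look like this (a sketch only: the IP address, port, and the /styles/basic/{z}/{x}/{y}.png URL pattern are assumptions that depend on which tile server you actually run):

// point Leaflet at the local tile server instead of a public one
var map = L.map('mapid').setView([50.08748, 14.42132], 12);
L.tileLayer('http://192.168.1.10:8080/styles/basic/{z}/{x}/{y}.png', {
  maxZoom: 16 // limit zoom to the levels your tiles actually cover
}).addTo(map);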
Is there a simple way (or what could be the simplest way) to include a html-code fragment, which is stored in a text file, into a page code?
E.g. the text file fragment.txt contains this:
<b><i>External text</i></b>
And the page code should include this fragment "on the fly". (Without php ...?)
The JavaScript approach seems to be the preferred one. But with the examples below you can run into problems with cross-origin requests (localhost to internet and vice versa), or into security problems when including external scripts that are not served via HTTPS.
An inline solution without any external libraries would be:
<!DOCTYPE html>
<html>
<body>
  <div id="textcontent"></div>
  <script>
    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
      // only act once the response has fully arrived
      if (xhttp.readyState === 4 && xhttp.status === 200) {
        document.getElementById('textcontent').innerText = xhttp.responseText;
      }
    };
    xhttp.open("GET", "content.txt", true);
    xhttp.send();
  </script>
</body>
</html>
Here you need a file content.txt in the same folder as the HTML file. The text file is loaded via AJAX and then put into the div with the id textcontent. Error handling is not included in the example above. Details about XMLHttpRequest can be found at http://www.w3schools.com/xml/xml_http.asp.
EDIT:
As VKK mentioned in another answer, you need to put the files on a server to test it, otherwise you get Cross-Origin-Errors like XMLHttpRequest cannot load file:///D:/content.txt. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.
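For completeness, the same idea with the newer fetch API (a minimal sketch; it needs the same server setup, since fetch is subject to the same cross-origin rules):

fetch('content.txt')
  .then(function (response) {
    if (!response.ok) throw new Error('HTTP ' + response.status);
    return response.text();
  })
  .then(function (text) {
    document.getElementById('textcontent').innerText = text;
  })
  .catch(function (err) { console.error(err); });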
You need to use JavaScript to do this (or perhaps an iframe, which I would avoid). I'd recommend using the jQuery framework. It provides a very simple DOM method (load) that allows you to load the contents of another file into an HTML element. It is really intended for AJAX calls, but it works in your use case as well. The fragment.txt would need to be in the same server directory as the HTML page (if it's in a different directory, just add a path).
The load method is wrapped in the $(document).ready event handler since you can only access/edit the contents element after the DOM (a representation of the page) has been loaded.
Most browsers don't support local AJAX calls (the load method uses AJAX) - typically the HTML and txt files would be uploaded to a server and then the HTML file would be accessed on the client. Firefox does support local AJAX though, so if you want to test it locally, use Firefox.
<!DOCTYPE html>
<html>
<head>
  <script src="https://code.jquery.com/jquery-2.2.4.js"></script>
  <script>
    $(document).ready(function() {
      $("#contents").load("fragment.txt");
    });
  </script>
</head>
<body>
  <div id="contents"></div>
</body>
</html>
With JavaScript - I use this approach myself. Example:
<!DOCTYPE html>
<html>
<script src="http://www.w3schools.com/lib/w3data.js"></script>
<body>
  <div w3-include-html="content.html"></div>
  <script>
    w3IncludeHTML();
  </script>
</body>
</html>
I have a Google Maps application that works perfectly on multiple developer machines but fails to run when deployed to an Internet facing Web server.
The developer machines all access the application using
http://localhost
However on the Internet facing web server the application is of course accessed via a valid domain
http://<Some Domain>/<application>
Debugging on the browser side I see that the last call made from the browser is
https://maps.googleapis.com/maps/api/js/AuthenticationService.Authenticate?1shttp://<the proper domain>/corpsuser/User_Page.html&callback=_xdc_._9p58yx&token=125651
(Domain name masked)
The place in the Maps application code where things hang seems to be this JavaScript:
map = new google.maps.Map(document.getElementById('map'), mapOptions);
The 'map' element and the mapOptions object are properly defined (after all, the app works fine on the dev machines).
I am guessing this is a licensing/Key issue.
I logged in to the Google account used for enabling the Google Maps API, generated a browser key, and added/associated the domain with the key, but that didn't work. I noticed a message saying it could take up to 5 minutes for changes to take effect, so I waited for some time - still no luck.
Digging deeper, I saw that some of the calls, for example the snapToRoads API, take the server API key as a parameter.
However, the call where the application hangs is the first call to set up the map, and it does not take an API key.
The Google documentation says I need to use this tag somewhere:
<script async defer src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap" type="text/javascript"></script>
Where should I add it? And do I have to define the initMap function, or should it be used as-is?
Any help would be greatly appreciated.
Here is sample code showing where you need to place the line <script async defer src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap" type="text/javascript"></script>:
<html>
<head>
  <title>Simple Map</title>
  <meta name="viewport" content="initial-scale=1.0">
  <meta charset="utf-8">
  <style>
    html, body {
      height: 100%;
      margin: 0;
      padding: 0;
    }
    #map {
      height: 100%;
    }
  </style>
</head>
<body>
  <div id="map"></div>
  <script>
    var map;
    function initMap() {
      map = new google.maps.Map(document.getElementById('map'), {
        center: {lat: -34.397, lng: 150.644},
        zoom: 8
      });
    }
  </script>
  <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"
          async defer></script>
</body>
</html>
You will notice that every example in the documentation uses that line, because you need it to load the Google Maps JavaScript API.
As for the callback parameter, the documentation explains that the async attribute lets the browser render the rest of the website while the Maps JavaScript API loads; when the API is ready, it calls the function specified by the callback parameter.
I had this problem and it turned out to be related to caching. Some of the code involved in the loading and display of maps was in an external .js file that was being cached by Cloudflare. When I deployed to production, it was still using the old JS, and I needed to manually purge the cache within Cloudflare.
Using the same setup (developer host vs. public host) and facing the same symptoms (the map shows on the developer host while map loading stalls on the public host, with the last line on the browser console being "...maps.googleapis.com/maps/api/js/AuthenticationService.Authenticate?1shttp...&token=64530 [HTTP/1.1 200 OK 25ms]"), the reason for the failure in my case was that, only on the public host, the map container div carried legacy inline style formatting that disabled its display, overriding the intended formatting so that the map stayed invisible.
The probability that the reported problem has exactly the same cause might be small, but there are two findings:
The loading of Google Maps API code and data might stop at unexpected points, as the reason for my failure was in no way related to authentication.
If a developer version is running, deriving the public version from an exact copy of the developer version seems to be a good, or at least handy, practice.
By the way, you don't have to use the cited <script> tag to load Google Maps; it can also be done by dynamically including the script (independent of the async and defer settings), as sketched below.
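A minimal sketch of such dynamic inclusion (YOUR_API_KEY stays a placeholder, and the loadMapsApi function name is illustrative, not part of the Maps API):

// inject the Maps API script tag at runtime and let it invoke a named callback
function loadMapsApi(callbackName) {
  var script = document.createElement('script');
  script.src = 'https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=' + callbackName;
  script.async = true;
  document.head.appendChild(script);
}

// the callback must be a global before the script loads
window.initMap = function () {
  var map = new google.maps.Map(document.getElementById('map'), {
    center: { lat: -34.397, lng: 150.644 },
    zoom: 8
  });
};

loadMapsApi('initMap');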
I had the same experience with the Chrome browser on Android. The reason was Chrome's data saver option, which was enabled; the problem was solved by disabling this option. The second option is to go with SSL, since Chrome does not run its data-saving compression on pages served over HTTPS.
I have done some web-based projects, but I haven't thought much about the load and execution sequence of an ordinary web page. Now I need to know the details. It's hard to find answers on Google or SO, so I created this question.
A sample page is like this:
<html>
<head>
  <script src="jquery.js" type="text/javascript"></script>
  <script src="abc.js" type="text/javascript"></script>
  <link rel="stylesheet" type="text/css" href="abc.css" />
  <style>h2 { font-weight: bold; }</style>
  <script>
    $(document).ready(function() {
      $("#img").attr("src", "kkk.png");
    });
  </script>
</head>
<body>
  <img id="img" src="abc.jpg" style="width:400px;height:300px;" />
  <script src="kkk.js" type="text/javascript"></script>
</body>
</html>
So here are my questions:
How does this page load?
What is the sequence of the loading?
When is the JS code executed? (inline and external)
When is the CSS executed (applied)?
When does $(document).ready get executed?
Will abc.jpg be downloaded? Or does it just download kkk.png?
I have the following understanding:
The browser loads the html (DOM) at first.
The browser starts to load the external resources from top to bottom, line by line.
If a <script> is met, the loading will be blocked and wait until the JS file is loaded and executed and then continue.
Other resources (CSS/images) are loaded in parallel and executed if needed (like CSS).
Or is it like this:
The browser parses the html (DOM) and gets the external resources in an array or stack-like structure. After the html is loaded, the browser starts to load the external resources in the structure in parallel and execute, until all resources are loaded. Then the DOM will be changed corresponding to the user's behaviors depending on the JS.
Can anyone give a detailed explanation of what happens once you've got the response for an HTML page? Does this vary between browsers? Any references on this question?
Thanks.
EDIT:
I did an experiment in Firefox with Firebug, and it shows as in the following image:
Edit: It's 2022. If you are interested in detailed coverage on the load and execution of a web page and how the browser works, you should check out https://browser.engineering/ (open sourced at https://github.com/browserengineering/book)
According to your sample,
<html>
<head>
  <script src="jquery.js" type="text/javascript"></script>
  <script src="abc.js" type="text/javascript"></script>
  <link rel="stylesheet" type="text/css" href="abc.css" />
  <style>h2 { font-weight: bold; }</style>
  <script>
    $(document).ready(function() {
      $("#img").attr("src", "kkk.png");
    });
  </script>
</head>
<body>
  <img id="img" src="abc.jpg" style="width:400px;height:300px;" />
  <script src="kkk.js" type="text/javascript"></script>
</body>
</html>
the execution flow is roughly as follows:
The HTML document gets downloaded
The parsing of the HTML document starts
HTML Parsing reaches <script src="jquery.js" ...
jquery.js is downloaded and parsed
HTML parsing reaches <script src="abc.js" ...
abc.js is downloaded, parsed and run
HTML parsing reaches <link href="abc.css" ...
abc.css is downloaded and parsed
HTML parsing reaches <style>...</style>
Internal CSS rules are parsed and defined
HTML parsing reaches <script>...</script>
Internal Javascript is parsed and run
HTML Parsing reaches <img src="abc.jpg" ...
abc.jpg is downloaded and displayed
HTML Parsing reaches <script src="kkk.js" ...
kkk.js is downloaded, parsed and run
Parsing of HTML document ends
Note that the download may be asynchronous and non-blocking due to behaviours of the browser. For example, in Firefox there is this setting which limits the number of simultaneous requests per domain.
Also depending on whether the component has already been cached or not, the component may not be requested again in a near-future request. If the component has been cached, the component will be loaded from the cache instead of the actual URL.
When parsing has ended and the document is ready, the $(document).ready() handler is run (and once all resources have loaded, the window's onload event fires as well). Thus the $("#img").attr("src", "kkk.png"); is run. So:
Document is ready; the ready handler fires.
Javascript execution hits $("#img").attr("src", "kkk.png");
kkk.png is downloaded and loaded into #img
Note that $(document).ready() fires as soon as the DOM is fully parsed, which can be before all page components (images, stylesheets) have finished loading; the window onload event is the one that waits for everything. Read more about it: http://docs.jquery.com/Tutorials:Introducing_$(document).ready()
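A tiny sketch of that difference (the console messages are just placeholders):

$(document).ready(function() {
  console.log('DOM parsed - safe to access elements now');
});

$(window).on('load', function() {
  console.log('all resources (images, CSS, frames) finished loading');
});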
Edit - This portion elaborates more on the parallel or not part:
By default, and from my current understanding, the browser usually runs each page with three components: the HTML parser, JavaScript/DOM, and CSS.
The HTML parser is responsible for parsing and interpreting the markup language, and thus must be able to make calls to the other two components.
For example, when the parser comes across a line like this (the attribute values here are representative):
<a href="#" onclick="doSomething();" class="highlight">a hypertext link</a>
The parser will make three calls, two to JavaScript and one to CSS. First, the parser creates this element and registers it in the DOM namespace, together with all the attributes of the element. Second, the parser calls JavaScript to bind the onclick event to this particular element. Lastly, it makes another call to the CSS thread to apply the CSS style to this particular element.
The execution is top-down and single-threaded. JavaScript may look multi-threaded, but the fact is that JavaScript is single-threaded. This is why, when loading an external JavaScript file, the parsing of the main HTML page is suspended.
However, CSS files can be downloaded simultaneously, because CSS rules are always being applied - that is to say, elements are always repainted with the freshest CSS rules defined - thus making CSS non-blocking.
An element is only available in the DOM after it has been parsed. Thus, when working with a specific element, the script is always placed after it, or within the window onload event.
A script like this will cause an error (using jQuery):
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
<div id="mydiv">Hello World</div>
Because when the script is parsed, #mydiv element is still not defined. Instead this would work:
<div id="mydiv">Hello World</div>
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
OR
<script type="text/javascript">/* <![CDATA[ */
$(window).ready(function(){
alert($("#mydiv").html());
});
/* ]]> */</script>
<div id="mydiv">Hello World</div>
1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to download over 100 requests in parallel on IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and put the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed, the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result, it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM, and then rendering starts over from the top.
4) Only after the DOM is completely rendered and requests for all assets in the page are either resolved or timed out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received for the asset request. This means an asset requested by JavaScript inline in the page - that is, JavaScript written into HTML tags and not contained in a function - can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps, the one that is most CPU-intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML, or even from your CSS/JavaScript assets, is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible, reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging 180k of page weight from HTML alone. Many developers subscribe to the fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds, then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.
Found this on archivist.incutio:
http://archivist.incutio.com/viewlist/css-discuss/76444
When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).

When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.

However, the HTTP standard specifies that the browser should not make more than two concurrent requests to the same domain. So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.

The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files and five image files, all of differing sizes:

GET script1 and script2; queue request for script3 and images1-5.
script2 arrives (it's smaller than script1): GET script3, queue images1-5.
script1 arrives; GET image1, queue images2-5.
image1 arrives, GET image2, queue images3-5.
script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
image2 arrives, script3 still not here; GET image3, queue images4-5.
image3 arrives; GET image4, queue image5, script3 still on the way.
image4 arrives, GET image5;
image5 arrives.
script3 arrives.

In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.

Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.

(Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)
If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site. It has a lot of best practices for speeding up your web site.
AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag, it will request that image as soon as the img tag has been parsed - and that can be even before it has received the totality of the HTML document; that is, it could still be downloading the HTML document when that happens.
For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example, it will not attempt to download more than 8 files at once from the same server; additional requests will be queued. I think there are per-domain limits, per-proxy limits, and other settings, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.
The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM parsed. Then the load event is fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. This is made clear in the jQuery documentation.
If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.
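For example, here is a small sketch of chaining script loads so each file runs only after the previous one has finished (the file names are just the ones from the question; loadScript is an illustrative helper, not a library function):

// load one script, then invoke a callback once it has executed
function loadScript(src, onDone) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onDone;
  document.head.appendChild(s);
}

loadScript('jquery.js', function() {
  loadScript('abc.js', function() {
    // both files are now loaded and executed, in order
  });
});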
Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.
The chosen answer looks like it does not apply to modern browsers, at least not to Firefox 52. What I observed is that the requests for resources like CSS and JavaScript are issued before the HTML parser reaches the element, for example:
<html>
<head>
  <!-- prints the date before parsing, and blocks HTML parsing -->
  <script>
    console.log("start: " + (new Date()).toISOString());
    for (var i = 0; i < 1000000000; i++) {};
  </script>
  <script src="jquery.js" type="text/javascript"></script>
  <script src="abc.js" type="text/javascript"></script>
  <link rel="stylesheet" type="text/css" href="abc.css" />
  <style>h2 { font-weight: bold; }</style>
  <script>
    $(document).ready(function(){
      $("#img").attr("src", "kkk.png");
    });
  </script>
</head>
<body>
  <img id="img" src="abc.jpg" style="width:400px;height:300px;" />
  <script src="kkk.js" type="text/javascript"></script>
</body>
</html>
What I found is that the start times of the requests to load the CSS and JavaScript resources were not blocked by the busy loop. It looks like Firefox has an HTML scanner that identifies key resources (img resources are not included) before it starts to parse the HTML.
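If you want that download behaviour to be explicit rather than relying on the browser's preload scanner, the standard defer and async attributes control it directly (a brief sketch; analytics.js is a hypothetical independent script):

<!-- defer: download in parallel, execute in document order after parsing finishes -->
<script src="jquery.js" defer></script>
<script src="abc.js" defer></script>
<!-- async: download in parallel, execute as soon as each file arrives (order not guaranteed) -->
<script src="analytics.js" async></script>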