JavaScript of an external iWidget in IBM Connections 4 is retrieved, but the events (onView) are not fired

I have an iWidget that is deployed outside the Connections environment.
This iWidget works in WebSphere Portal 8 and in the iWidget Wrapper.
The iWidget can be added to a community and the initial text is loaded.
The onView() and other events are never invoked, so the iWidget keeps displaying the initial message and the 'div' is never replaced. I have changed the src of the JavaScript in different ways; the ./javascript form is the latest.
Firebug shows a successful retrieval of the JS (widget.xml) through the communities/proxy context root.
This is the iWidget XML:
<iw:iwidget id="365DocsWidget" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:iw="http://www.ibm.com/xmlns/prod/iWidget"
            supportedModes="view edit" mode="view" lang="en"
            iScope="365DocsWidgetScope" sandbox="false" allowInstanceContent="true">
  <iw:itemSet id="pref">
    <iw:item id="documentlist" value="https://fire3ice.sharepoint.com/sites/demo4if/_api/Web/Lists(guid'cca56100-1f15-461b-92f3-d1da80ba1ca8')"/>
  </iw:itemSet>
  <iw:resource src="./javascript/365DocsWidget.js" />
  <iw:content mode="view"><![CDATA[<div id="ROOT_DIV">Hello World, last time this widget was updated: 2013-01-04 16:07:17</div>]]></iw:content>
  <iw:content mode="edit"><![CDATA[<div id="EDITMODE_DIV">Hello Edit World</div><div><input type="button" name="selectDocumentList" value="selectDocumentList" onclick="iContext.iScope().changeDocumentList()" /> </div> ]]></iw:content>
</iw:iwidget>
The widget.xml is publicly accessible here:
https://eog-fire-ice.appspot.com/365DocsWidget.jsp

This might be caused by the Javascript resource for the iWidget not being recognised as Javascript and therefore not being loaded.
Can you set a Content-Type of application/javascript on the response for the JS file?

I've also seen this when I have a typo in the JavaScript file. Please go through the JavaScript file and make sure there are no missing commas or semi-colons.

There is a mismatch between your XML and your JS.
The iScope in your XML is "365DocsWidgetScope".
The object declared in your JS is "J365DocsWidgetScope" (in https://eog-fire-ice.appspot.com/javascript/365DocsWidget.js).
Renaming "J365DocsWidgetScope" to "365DocsWidgetScope" should solve the problem.

Retrofit: MalformedJsonException despite right return [duplicate]

I use the free webhost 000webhost. The service is okay but it inserts some javascript counter into every file and request.
The script looks like this.
<!-- www.000webhost.com Analytics Code -->
<script type="text/javascript" src="http://analytics.hosting24.com/count.php"></script>
<noscript><img src="http://analytics.hosting24.com/count.php" alt="web hosting" /></noscript>
<!-- End Of Analytics Code -->
If I do a jQuery post, it breaks my code and I get no response.
<?xml version="1.0"?>
<response>
<status>1</status>
<time>1266267386</time>
<message>
<author>test</author>
<text>hallo</text>
</message>
<message>
<author>test</author>
<text>hallo</text>
</message>
<message>
<author>test</author>
<text>hallo</text>
</message>
<message>
<author>test</author>
<text>hallo</text>
</message>
<message>
<author>admin</author>
<text>hallo</text>
</message>
</response>
<!-- www.000webhost.com Analytics Code -->
<script type="text/javascript" src="http://analytics.hosting24.com/count.php"></script>
<noscript><img src="http://analytics.hosting24.com/count.php" alt="web hosting" /></noscript>
<!-- End Of Analytics Code -->
How can I fix that? How can I remove the hosting javascript code?
They have a link in their cPanel where you can disable the analytics code.
http://members.000webhost.com/analytics.php
EDIT
Beware - by doing this you violate their policy and they will eventually drop you from their service and you will lose all your data.
In case anyone runs into this problem in the future: to stop the analytics code from being included, all you have to do is use the exit() command at the end of your script. The code is in a PHP auto-append file; it doesn't get executed if you exit explicitly instead of letting the script reach the end of the file. The setup at 000webhost.com parses HTML through PHP as well, so if you want your regular HTML files to pass validation, add the following at the very end:
<?php exit; ?>
Adding to Catfish's answer,
They have a link in their cPanel where you can disable the analytics code.
http://members.000webhost.com/analytics.php
EDIT
Beware - by doing this you violate their policy and they will eventually drop you from their service and you will lose all your data.
As asked by asalamon74 in the comments,
When I disable the analytics code with this method my domain gets cancelled due to inactivity. Is there a way to avoid it?
Yes, there is a simple way to avoid it. Go ahead and enable your analytics code. Visit http://members.000webhost.com/analytics.php and enable it again.
Then, create a .htaccess in your public_html folder and add the following code to it:
<FilesMatch "\.(php)$">
php_value auto_append_file none
</FilesMatch>
This will prevent all .php files from being appended with the analytics code. Just replace php with any file extension for which you want the analytics code removed.
That's all. This way your domain won't get cancelled. Hope it helps.
According to the FAQ, you can go to http://members.000webhost.com/analytics.php to disable the analytics:
How to disable analytic code from my site? (Note: in some cases you might need to add "php_value auto_append_file none" to your .htaccess)
FAQ - Frequent Ask Questions Guide!:
http://www.000webhost.com/forum/faq/7894-faq-frequent-ask-questions-guide.html
Alternatively, you could edit your .htaccess file by adding this line:
php_value auto_append_file none
Try changing the Content-Type in PHP; that should help (I assume, for example, that it doesn't modify images).
Alternatively, you could remove the injected code in JavaScript using indexOf and substring.
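As a rough illustration of that indexOf/substring approach (the helper name and the marker string are assumptions based on the snippet shown above):

// Hypothetical helper: strip the injected analytics block from a raw
// XHR/jQuery response before parsing it as XML or JSON.
function stripAnalytics(raw) {
    var marker = '<!-- www.000webhost.com Analytics Code -->';
    var cutoff = raw.indexOf(marker);
    // keep only the part of the response that precedes the injected snippet
    return cutoff === -1 ? raw : raw.substring(0, cutoff);
}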
Check the terms of service agreement for your host. They may require their analytics code to run unadulterated as part of your TOS.
If they require this code to run, then there are 2 possible solutions:
Get a new host (see http://www.webhostingtalk.com for hosting reviews and deals)
Figure out what in your code is being affected and find a work around there.
If they do not require the analytics code block as a condition of service then you may be able to block their snippet from running by altering the tag's event handler via jquery that they use to fire their code.
If you are using the $.ajax(); function, there is a dataFilter parameter that lets you modify the raw contents of the request before processing it. In here, you could either substring or replace out the offending text, or you could add a <!-- to the end of your XML file, then stick a --> at the end of the response in the dataFilter code.
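A hedged sketch of that dataFilter idea, assuming an XML endpoint like the one in the question (the URL is a placeholder):

// Fetch as plain text, cut off the appended analytics markup in dataFilter,
// then parse the cleaned string as XML in the success handler.
$.ajax({
    url: 'chat.php',
    dataType: 'text',
    dataFilter: function (data, type) {
        var cutoff = data.indexOf('<!-- www.000webhost.com Analytics Code -->');
        return cutoff === -1 ? data : data.substring(0, cutoff);
    },
    success: function (clean) {
        var xml = $.parseXML(clean);
        // work with the clean XML document here
    }
});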
The analytics code also creates 3 HTML errors when checked with the W3C validator, so this also lowers your website's ranking with Google, as Google says its ranking also requires that your pages are well formed. Try using the validator on this page and you will see the 3 errors.
I had EXACTLY the same problem with these guys. I was so confounded as Chrome just said there was a problem, but thankfully I tried Firefox and it actually blurted out the offending code. I talked to customer support about it, and they said they don't add content to customer pages. Seriously?! So the solution is, as suggested by others, move on and get someone better.

Resource interpreted as stylesheet but transferred with MIME type text/html (seems not related to the web server)

I have this problem. Chrome continues to return this error
Resource interpreted as stylesheet but transferred with MIME type text/html
The only files affected by this error are the main stylesheet, chosen, and jquery-gentleselect (other CSS files imported in the index the same way work fine, without errors). I've already checked my MIME types, and text/css is already mapped to .css.
Honestly I'd like to start by understanding the problem (a thing that seems I cannot do alone).
i'd like to start by understanding the problem
Browsers make HTTP requests to servers. The server then makes an HTTP response.
Both requests and responses consist of a bunch of headers and a (sometimes optional) body with some content in it.
If there is a body, then one of the headers is the Content-Type which describes what the body is (is it an HTML document? An image? The contents of a form submission? etc).
When you ask for your stylesheet, your server is telling the browser that it is an HTML document (Content-Type: text/html) instead of a stylesheet (Content-Type: text/css).
I've already checked my mime.types and text/css is already mapped to .css.
Then something else about your server is making that stylesheet come with the wrong content type.
Use the Net tab of your browser's developer tools to examine the request and the response.
Using Angular?
This is a very important caveat to remember.
The base tag needs to not only be in the head but in the right location.
I had my base tag in the wrong place in the head; it should come before any tags that trigger URL requests. Placing it as the second tag, right underneath the title, solved it for me.
<base href="/">
I wrote a little post on it here
I also had problem with this error, and came upon a solution. This does not explain why the error occurred, but it seems to fix it in some cases.
Include a forward slash / before the path to the css file, like so:
<link rel="stylesheet" href="/css/bootstrap.min.css">
My issue was simpler than all the answers in this post.
I had to setup IIS to include static content.
Setting the Anonymous Authentication Credentials to Application Pool Identity did the trick for me.
Try this: <link rel="stylesheet" type="text/css" href="../##/yourcss.css">
where ## is the folder that contains your .css file.
Don't forget the .. (double dots).
I was also facing the same problem, and after doing some R&D I found that the problem was with the file name. The name of the actual file was "lightgallery.css", but while linking I had typed "lightGallery.css".
More Info:
It worked well on my localhost (OS: Windows 8.1 & Server: Apache).
But when I uploaded my application to a remote server (different OS and web server than my localhost), it didn't work, giving me the same error as yours.
So, the issue was the case sensitivity (with respect to file names) of the server.
In case you serve static CSS with nginx, you should add

location ~ \.css {
    add_header Content-Type text/css;
}
location ~ \.js {
    add_header Content-Type application/x-javascript;
}

or

location ~ \.css {
    default_type text/css;
}
location ~ \.js {
    default_type application/x-javascript;
}

to your nginx conf.
Based on the other answers it seems like this message has a lot of causes, I thought I'd just share my individual solution in case anyone has my exact problem in the future.
Our site loads the CSS files from an AWS Cloudfront distribution, which uses an S3 bucket as the origin. This particular S3 bucket was kept synced to a Linux server running Jenkins. The sync command via s3cmd sets the Content-Type for the S3 object automatically based on what the OS says (presumably based on the file extension). For some reason, in our server, all the types were being set correctly except .css files, which it gave the type text/plain. In S3, when you check the metadata in the properties of a file, you can set the type to whatever you want. Setting it to text/css allowed our site to correctly interpret the files as CSS and load correctly.
@Rob Sedgwick's answer gave me a pointer; however, in my case my app was a Spring Boot application, so I just added exclusions in my security config for the paths to the files concerned...
NOTE - This solution is SpringBoot-based... What you may need to do might differ based on what programming language you are using and/or what framework you are utilizing
However, the point to note is: essentially, the problem can be caused when every request, including those for static content, is being authenticated.
So let's say some paths to my static content which were causing the errors are as follows;
A path called "plugins"
http://localhost:8080/plugins/styles/css/file-1.css
http://localhost:8080/plugins/styles/css/file-2.css
http://localhost:8080/plugins/js/script-file.js
And a path called "pages"
http://localhost:8080/pages/styles/css/style-1.css
http://localhost:8080/pages/styles/css/style-2.css
http://localhost:8080/pages/js/scripts.js
Then I just add the exclusions as follows in my Spring Boot Security Config;
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
@Order(SecurityProperties.ACCESS_OVERRIDE_ORDER)
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers(<comma separated list of other permitted paths>, "/plugins/**", "/pages/**").permitAll()
            // other antMatchers can follow here
    }
}
Excluding these paths "/plugins/**" and "/pages/**" from authentication made the errors go away.
Cheers!
Using Angular
In my case using ng-href instead of href solved it for me.
Note:
I am working with Laravel as the back-end.
If you are on JSP, this problem can come from your servlet mapping.
If your mapping takes URLs by default like this:
@WebServlet("/")
then the container intercepts your CSS URL and goes to the servlet instead of the CSS file.
I had the same issue; I changed my mapping and now everything works.
I was facing the same thing, with more or less the same .htaccess file for making pretty URLs. After some hours of looking around and experimenting, I found out that the error was caused by relatively linked files.
The browser would start fetching the same source HTML file for all the CSS, JS and image files whenever I browsed a few levels deep into the site.
To counter this you can either use the <base> tag in your HTML source,
<base href="http://localhost/assets/">
and link to files like,
<link rel="stylesheet" type="text/css" href="css/style.css" />
<script src="js/script.js"></script>
or use absolute links for all your files.
<link rel="stylesheet" type="text/css" href="http://localhost/assets/css/style.css" />
<script src="http://localhost/assets/js/script.js"></script>
<img src="http://localhost/assets/images/logo.png" />
I had a similar problem in MVC4 using forms authentication. The problem was this line in the web.config:
<modules runAllManagedModulesForAllRequests="true">
This means that every request, including those for static content, was being authenticated.
Change this line to:
<modules runAllManagedModulesForAllRequests="false">
I also faced this problem recently on Chrome. I just gave an absolute path to my CSS file and the problem was solved.
<link rel="stylesheet" href="<?=SS_URL?>arica/style.css" type="text/css" />
For anyone that might be having this issue.
I was building a custom MVC in PHP when I encountered this issue.
I was able to resolve this by setting my assets (css/js/images) files to an absolute path.
Instead of using a relative URL like href="css/style.css", which uses the entire current URL to resolve it. As an example, if you are at http://example.com/user/5, it will try to load http://example.com/user/5/css/style.css.
To fix it, you can add a / at the start of your asset's URL (i.e. href="/css/style.css"). This tells the browser to load it from the root of your URL. In this example, it will try to load http://example.com/css/style.css.
Hope this comment will help you.
It is because you must have set the Content-Type to text/html instead of text/css in your server page (PHP, Node.js, etc.).
I want to expand on Todd R's point in the OP. In asp.net pages, the web.config file defines permissions needed to access each file or folder in the application. In our case, the folder of CSS files did not allow access for unauthorized users, causing it to fail on the login page before the user was authorized. Changing the required permissions in web.config allowed unauthorized users to access the CSS files and solved this problem.
I had the exact same problem, and after a few minutes of fooling around I realized that I had missed adding the file extension in my header, so I changed the following line:
<link uic-remove rel="stylesheet" href="css/bahblahblah">
to
<link uic-remove rel="stylesheet" href="css/bahblahblah.css">
Using React
I came across this error in my react profile app. My app behaved kind of like it was trying to reference a url that doesn't exist. I believe this has something to do with how webpack behaves.
If you are linking files in your public folder you must remember to use %PUBLIC_URL% before the resource like this:
<link type="text/css" rel="stylesheet" href="%PUBLIC_URL%/bootstrap.min.css" />
In case anyone comes to this post and has a similar issue. I just experienced a similar problem, but the solution was quite simple.
A developer had mistakenly dropped a copy of the web.config into the CSS directory. Once deleted, all errors were resolved and the page properly displayed.
I came across the same issue whilst resuming work on a old MEAN stack project. I was using nodemon as my local development server and got the same error Resource interpreted as stylesheet but transferred with MIME type text/html. I changed from nodemon to http-server which can be found here. It immediately worked for me.
This occurred when I removed the protocol from the css link for a css stylesheet served by a google CDN.
This gives no error:
<link rel="stylesheet" href="//fonts.googleapis.com/css?family=Architects+Daughter">
But this gives the error Resource interpreted as Stylesheet but transferred with MIME type text/html :
<link rel="stylesheet" href="fonts.googleapis.com/css?family=Architects+Daughter">
I was facing a similar issue and exploring solutions on this fantastic Stack Overflow page.
user54861's response (mismatched names due to case sensitivity) made me curious enough to inspect my code again, and I realized that I hadn't uploaded two JS files that I was loading in the head tag. :-)
When I uploaded them, the issue went away! The code runs and the page renders without any other error.
So, the moral of the story is: don't forget to make sure that all of your JS files are uploaded where the page is looking for them.
I came across the same issue with a .NET application, an open-source CMS called MojoPortal. In one of my themes and skins for a particular site, the site would grind and slow down like it was choking when browsing or testing.
My issue was not with the "type" attribute for the CSS; it was "that other thing". My exact change was in the Web.config: I changed the values for MinifyCSS, CacheCssOnserver, and CacheCSSinBrowser to false.
Once that was set the web site was speedy once again in production.
Had the same error because I forgot to send the correct header first:
header("Content-type: text/css; charset=UTF-8");
print 'body { text-align: justify; font-size: 2em; }';
I encountered this problem when loading CSS for a React layout module that I installed with npm. You have to import two .css files to get this module running, so I initially imported them like this:
@import "../../../../node_modules/react-grid-layout/css/styles.css";
but found out that the file extension has to be dropped, so this worked:
@import "../../../../node_modules/react-grid-layout/css/styles";
If you are on Node.js and using Express, the code below works:
res.set('Content-Type', 'text/css');
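For a little more context, here is a minimal sketch of where that line fits in an Express app (the route and file contents are placeholders; in most setups express.static sets the correct type for you automatically):

const express = require('express');
const app = express();

// Hand-rolled route: set the Content-Type explicitly before sending CSS.
app.get('/css/style.css', function (req, res) {
    res.set('Content-Type', 'text/css');
    res.send('body { margin: 0; }');
});

// Usually simpler: let express.static serve the folder with correct types.
app.use('/css', express.static('public/css'));

app.listen(3000);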
I started to get the issue today, only on Chrome and not Safari, for the same project/URL in my goormide container (Node.js).
After trying several suggestions above that didn't appear to work, and backtracking on code changes I had made between yesterday and today, which also made no difference, I ended up in the Chrome settings, clicking:
1. Settings;
2. scroll down to the bottom, select "Advanced";
3. scroll down to the bottom, select "Restore settings to their original defaults".
That appears to have fixed the problem as I no longer get the warning/error in the console and the page displays as it should. Reading the posts above it appears the issue can occur from any number of sources so the settings reset is a potential generic fix.
Cheers
If you are serving the app in production, make sure you are serving the static files with the service worker. I had this error when I was serving only the static subfolder of a React build on Django (without the assets that contain the styles).

Browser load HTML [duplicate]

I have done some web-based projects, but I haven't thought much about the load and execution sequence of an ordinary web page. Now I need to know the details. It's hard to find answers on Google or SO, so I created this question.
A sample page is like this:
<html>
  <head>
    <script src="jquery.js" type="text/javascript"></script>
    <script src="abc.js" type="text/javascript"></script>
    <link rel="stylesheet" type="text/css" href="abc.css" />
    <style>h2 { font-weight: bold; }</style>
    <script>
      $(document).ready(function () {
        $("#img").attr("src", "kkk.png");
      });
    </script>
  </head>
  <body>
    <img id="img" src="abc.jpg" style="width:400px;height:300px;" />
    <script src="kkk.js" type="text/javascript"></script>
  </body>
</html>
So here are my questions:
How does this page load?
What is the sequence of the loading?
When is the JS code executed? (inline and external)
When is the CSS executed (applied)?
When does $(document).ready get executed?
Will abc.jpg be downloaded? Or does it just download kkk.png?
I have the following understanding:
The browser loads the html (DOM) at first.
The browser starts to load the external resources from top to bottom, line by line.
If a <script> is met, the loading will be blocked and wait until the JS file is loaded and executed and then continue.
Other resources (CSS/images) are loaded in parallel and executed if needed (like CSS).
Or is it like this:
The browser parses the html (DOM) and gets the external resources in an array or stack-like structure. After the html is loaded, the browser starts to load the external resources in the structure in parallel and execute, until all resources are loaded. Then the DOM will be changed corresponding to the user's behaviors depending on the JS.
Can anyone give a detailed explanation about what happens when you've got the response of a html page? Does this vary in different browsers? Any reference about this question?
Thanks.
EDIT:
I did an experiment in Firefox with Firebug, and it showed the following (screenshot not included here).
Edit: It's 2022. If you are interested in detailed coverage on the load and execution of a web page and how the browser works, you should check out https://browser.engineering/ (open sourced at https://github.com/browserengineering/book)
According to your sample,
<html>
  <head>
    <script src="jquery.js" type="text/javascript"></script>
    <script src="abc.js" type="text/javascript"></script>
    <link rel="stylesheet" type="text/css" href="abc.css" />
    <style>h2 { font-weight: bold; }</style>
    <script>
      $(document).ready(function () {
        $("#img").attr("src", "kkk.png");
      });
    </script>
  </head>
  <body>
    <img id="img" src="abc.jpg" style="width:400px;height:300px;" />
    <script src="kkk.js" type="text/javascript"></script>
  </body>
</html>
roughly the execution flow is about as follows:
The HTML document gets downloaded
The parsing of the HTML document starts
HTML Parsing reaches <script src="jquery.js" ...
jquery.js is downloaded and parsed
HTML parsing reaches <script src="abc.js" ...
abc.js is downloaded, parsed and run
HTML parsing reaches <link href="abc.css" ...
abc.css is downloaded and parsed
HTML parsing reaches <style>...</style>
Internal CSS rules are parsed and defined
HTML parsing reaches <script>...</script>
Internal Javascript is parsed and run
HTML Parsing reaches <img src="abc.jpg" ...
abc.jpg is downloaded and displayed
HTML Parsing reaches <script src="kkk.js" ...
kkk.js is downloaded, parsed and run
Parsing of HTML document ends
Note that the download may be asynchronous and non-blocking due to behaviours of the browser. For example, in Firefox there is this setting which limits the number of simultaneous requests per domain.
Also depending on whether the component has already been cached or not, the component may not be requested again in a near-future request. If the component has been cached, the component will be loaded from the cache instead of the actual URL.
When the parsing has ended and the document is ready and loaded, the onload event is fired. Thus, when onload is fired, the $("#img").attr("src", "kkk.png"); is run. So:
Document is ready, onload is fired.
Javascript execution hits $("#img").attr("src", "kkk.png");
kkk.png is downloaded and loads into #img
The $(document).ready() event is actually the event fired when all page components are loaded and ready. Read more about it: http://docs.jquery.com/Tutorials:Introducing_$(document).ready()
Edit - This portion elaborates more on the parallel-or-not part:
By default, and from my current understanding, the browser usually runs each page using 3 components: the HTML parser, JavaScript/DOM, and CSS.
The HTML parser is responsible for parsing and interpreting the markup language and thus must be able to make calls to the other 2 components.
For example, when the parser comes across a line defining a hypertext link (an <a> element with an onclick handler and some CSS styling applied):
The parser will make 3 calls, two to Javascript and one to CSS. Firstly, the parser will create this element and register it in the DOM namespace, together with all the attributes related to this element. Secondly, the parser will call to bind the onclick event to this particular element. Lastly, it will make another call to the CSS thread to apply the CSS style to this particular element.
The execution is top-down and single-threaded. JavaScript may look multi-threaded, but the fact is that JavaScript is single-threaded. This is why, when loading an external JavaScript file, the parsing of the main HTML page is suspended.
However, CSS files can be downloaded simultaneously because CSS rules are always being applied - that is to say, elements are always repainted with the freshest CSS rules defined - thus making CSS non-blocking.
An element will only be available in the DOM after it has been parsed. Thus when working with a specific element, the script is always placed after, or within the window onload event.
Script like this will cause error (on jQuery):
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
<div id="mydiv">Hello World</div>
Because when the script is parsed, #mydiv element is still not defined. Instead this would work:
<div id="mydiv">Hello World</div>
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
OR
<script type="text/javascript">/* <![CDATA[ */
$(window).ready(function(){
alert($("#mydiv").html());
});
/* ]]> */</script>
<div id="mydiv">Hello World</div>
1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to download over 100 requests in parallel on IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and put the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM and then rendering starts over from the top.
4) Only after the DOM is completely rendered and requests for all assets in the page are either resolved or time out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received from the asset request. This means an asset requested by JavaScript inline to the page, that is JavaScript written into HTML tags that is not contained in a function, can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, then write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML or even from your CSS/JavaScript assets is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible then reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging page weight at 180k from HTML alone. Many developers subscribe to some fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds and then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.
Found this on archivist.incutio:
http://archivist.incutio.com/viewlist/css-discuss/76444
When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).

When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.

However the HTTP standard specifies that the browser should not make more than two concurrent requests to the same domain. So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.

The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files, and five image files, all of differing sizes:

GET script1 and script2; queue request for script3 and images1-5.
script2 arrives (it's smaller than script1): GET script3, queue images1-5.
script1 arrives; GET image1, queue images2-5.
image1 arrives, GET image2, queue images3-5.
script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
image2 arrives, script3 still not here; GET image3, queue images4-5.
image3 arrives; GET image4, queue image5, script3 still on the way.
image4 arrives, GET image5;
image5 arrives.
script3 arrives.

In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.

Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.

(Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)
If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site. It has a lot of best practices for speeding up your web site.
AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag it will request that image as soon as the img tag has been parsed. And that can be even before it has received the totality of the HTML document... that is it could still be downloading the HTML document when that happens.
For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example it will not attempt to download more than 8 files at once from the same server... the additional requests will be queued. I think there are per-domain limits, per-proxy limits, and other stuff, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.
The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM has been parsed. Then the load event is fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. This is made clear in the jQuery documentation.
If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.
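As a rough sketch of that JavaScript-controlled approach (reusing the file names from the question's sample), you can chain dynamically created script elements so each file only loads after the previous one has run:

// Load scripts sequentially so the order is guaranteed regardless of how
// the browser schedules its downloads.
function loadScript(src, onDone) {
    var s = document.createElement('script');
    s.src = src;
    s.onload = onDone;   // fires after the file has been fetched and executed
    document.head.appendChild(s);
}

loadScript('jquery.js', function () {
    loadScript('abc.js', function () {
        loadScript('kkk.js', function () {
            // all three files are now available, in this exact order
        });
    });
});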
Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.
The chosen answer does not seem to apply to modern browsers, at least not to Firefox 52. What I observed is that the requests to load resources like CSS and JavaScript are issued before the HTML parser reaches the element, for example:
<html>
  <head>
    <!-- prints the date before parsing and blocks HTML parsing -->
    <script>
      console.log("start: " + (new Date()).toISOString());
      for (var i = 0; i < 1000000000; i++) {};
    </script>
    <script src="jquery.js" type="text/javascript"></script>
    <script src="abc.js" type="text/javascript"></script>
    <link rel="stylesheet" type="text/css" href="abc.css" />
    <style>h2 { font-weight: bold; }</style>
    <script>
      $(document).ready(function () {
        $("#img").attr("src", "kkk.png");
      });
    </script>
  </head>
  <body>
    <img id="img" src="abc.jpg" style="width:400px;height:300px;" />
    <script src="kkk.js" type="text/javascript"></script>
  </body>
</html>
What I found was that the start times of the requests to load the CSS and JavaScript resources were not blocked. It looks like Firefox does an HTML scan and identifies key resources (img resources are not included) before starting to parse the HTML.

HTML from Silverlight (not out of browser)

I am trying to open an HTML file from a local URI, which I use as an XML editor, to edit XML data that comes from a Silverlight application, then close the browser window and return the edited XML data to the Silverlight application.
I tried to use HtmlPage.Window.Navigate but I don't quite like it.
I have tried using a method from: http://weblogs.asp.net/dwahlin/archive/2010/05/10/integrating-html-into-silverlight-applications.aspx
but instantly got an exception: "failed to invoke ShowJobPlanIFrame".
Is there any way to handle this task?
"Out of browser" mode doesn't fit.
Thanks.
===========================================================================
Update:
It worked out using IFrame overlay.
Button click invokes the following code in C#:
var scriptObject = (ScriptObject)HtmlPage.Window.GetProperty("ShowJobPlanIFrame");
scriptObject.InvokeSelf(url);
Where "ShowJobPlanIFrame" is as defined at:
http://weblogs.asp.net/dwahlin/archive/2010/05/10/integrating-html-into-silverlight-applications.aspx
This allowed me to pass data into XML editor and then get it back.
The JavaScript invocation error I mentioned above turned out to be my own fault; the bug was in the JavaScript code itself.
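For reference, a ShowJobPlanIFrame-style helper on the hosting page might look roughly like the sketch below; this is an illustrative assumption, not the code from the linked article:

// Show an overlay iframe and point it at the URL passed in from
// Silverlight via scriptObject.InvokeSelf(url).
function ShowJobPlanIFrame(url) {
    var frame = document.getElementById('jobPlanFrame');
    if (!frame) {
        frame = document.createElement('iframe');
        frame.id = 'jobPlanFrame';
        frame.style.cssText =
            'position:absolute;top:0;left:0;width:100%;height:100%;z-index:1000;';
        document.body.appendChild(frame);
    }
    frame.src = url;
    frame.style.display = 'block';
}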
A very similar scenario: https://stackoverflow.com/a/7919065/384316
Try using an iframe overlay, then you can load any HTML-like content.
There is an excellent explanation of how to do this here:
http://www.wintellect.com/cs/blogs/jlikness/archive/2010/09/19/hosting-html-in-silverlight-not-out-of-browser.aspx
Did you try Silverlight's NavigationFramework? Its capabilities may support your needs in a simpler way than using multiple browser pages.

Alternative to iFrames with HTML5

I would like to know if there is an alternative to iFrames with HTML5.
By that I mean being able to inject cross-domain HTML into a webpage without using an iFrame.
Basically there are 4 ways to embed HTML into a web page:
<iframe> An iframe's content lives entirely in a separate context than your page. While that's mostly a great feature and it's the most compatible among browser versions, it creates additional challenges (shrink wrapping the size of the frame to its content is tough, insanely frustrating to script into/out of, nearly impossible to style).
AJAX. As the solutions shown here prove, you can use the XMLHttpRequest object to retrieve data and inject it to your page. It is not ideal because it depends on scripting techniques, thus making the execution slower and more complex, among other drawbacks.
Hacks. Few mentioned in this question and not very reliable.
HTML5 Web Components. HTML Imports, part of the Web Components, allow you to bundle HTML documents inside other HTML documents. That includes HTML, CSS, JavaScript or anything else an .html file can contain. This makes it a great solution with many interesting use cases: split an app into bundled components that you can distribute as building blocks, better manage dependencies to avoid redundancy, code organization, etc. Here is a trivial example:
<!-- Resources on other origins must be CORS-enabled. -->
<link rel="import" href="http://example.com/elements.html">
Native compatibility is still an issue, but you can use a polyfill to make it work in evergreen browsers today.
You can learn more here and here.
You can use object and embed, like so:
<object data="http://www.web-source.net" width="600" height="400">
<embed src="http://www.web-source.net" width="600" height="400"> </embed>
Error: Embedded data could not be displayed.
</object>
Which isn't new, but still works. I'm not sure if it has the same functionality though.
object is an easy alternative in HTML5:
<object data="https://github.com/AbrarJahin/Asp.NetCore_3.1-PostGRE_Role-Claim_Management/"
width="400"
height="300"
type="text/html">
Alternative Content
</object>
You can also try embed:
<embed src="https://github.com/AbrarJahin/Asp.NetCore_3.1-PostGRE_Role-Claim_Management/"
width=200
height=200
onerror="alert('URL invalid !!');" />
Re: as Stack Overflow has currently turned off support for showing external URL contents, "Run code snippet" does not show anything here. But on your own site, it will work perfectly.
No, there isn't an equivalent. The <iframe> element is still valid in HTML5. Depending on what exact interaction you need, there might be different APIs. For example, there's the postMessage method, which allows you to achieve cross-domain JavaScript interaction. But if you want to display cross-domain HTML contents (styled with CSS and made interactive with JavaScript), an iframe remains a good way to do it.
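For completeness, a minimal postMessage sketch (the element id and the origins below are placeholders; always verify event.origin before trusting a message):

// Parent page: send a message into an embedded iframe.
var frame = document.getElementById('partnerFrame');
frame.contentWindow.postMessage({ action: 'refresh' }, 'https://other-domain.example');

// Inside the iframe: listen for messages from the parent.
window.addEventListener('message', function (event) {
    if (event.origin !== 'https://parent-domain.example') return;
    console.log('received', event.data);
});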
If you want to do this and control the server from which the base page or content is being served, you can use Cross Origin Resource Sharing (http://www.w3.org/TR/access-control/) to allow client-side JavaScript to load data into a <div> via XMLHttpRequest():
// I safely ignore IE 6 and 5 (!) users
// because I do not wish to proliferate
// broken software that will hurt other
// users of the internet, which is what
// you're doing when you write anything
// for old version of IE (5/6)
xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
if(xhr.readyState == 4 && xhr.status == 200) {
document.getElementById('displayDiv').innerHTML = xhr.responseText;
}
};
xhr.open('GET', 'http://api.google.com/thing?request=data', true);
xhr.send();
Now for the lynchpin of this whole operation, you need to write code for your server that will give clients the Access-Control-Allow-Origin header, specifying which domains you want the client-side code to be able to access via XMLHttpRequest(). The following is an example of PHP code you can include at the top of your page in order to send these headers to clients:
<?php
header('Access-Control-Allow-Origin: http://api.google.com');
header('Access-Control-Allow-Origin: http://some.example.com');
?>
This also does seem to work, although W3C specifies it is not intended "for an external (typically non-HTML) application or interactive content"
<embed src="http://www.somesite.com" width=200 height=200 />
More info:
http://www.w3.org/wiki/HTML/Elements/embed
http://www.w3schools.com/tags/tag_embed.asp
An iframe is still the best way to download cross-domain visual content. With AJAX you can certainly download the HTML from a web page and stick it in a div (as others have mentioned) however the bigger problem is security. With iframes you'll be able to load the cross domain content but won't be able to manipulate it since the content doesn't actually belong to you. On the other hand with AJAX you can certainly manipulate any content you are able to download but the other domain's server needs to be setup in such a way that will allow you to download it to begin with. A lot of times you won't have access to the other domain's configuration and even if you do, unless you do that kind of configuration all the time, it can be a headache. In which case the iframe can be the MUCH easier alternative.
As others have mentioned you can also use the embed tag and the object tag but that's not necessarily more advanced or newer than the iframe.
HTML5 has gone more in the direction of adopting web APIs to get information from cross domains. Usually web APIs just return data though and not HTML.
I created a Node module to solve this problem: node-iframe-replacement. You provide the source URL of the parent site and a CSS selector to inject your content into, and it merges the two together.
Changes to the parent site are picked up every 5 minutes.
var iframeReplacement = require('node-iframe-replacement');
// add iframe replacement to express as middleware (adds res.merge method)
app.use(iframeReplacement);
// create a regular express route
app.get('/', function(req, res){
// respond to this request with our fake-news content embedded within the BBC News home page
res.merge('fake-news', {
// external url to fetch
sourceUrl: 'http://www.bbc.co.uk/news',
// css selector to inject our content into
sourcePlaceholder: 'div[data-entityid="container-top-stories#1"]',
// pass a function here to intercept the source html prior to merging
transform: null
});
});
The source contains a working example of injecting content into the BBC News home page.
You can use an XMLHttpRequest to load a page into a div (or any other element of your page, really). An example function would be:
function loadPage() {
    if (window.XMLHttpRequest) {
        // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else {
        // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
            document.getElementById("ID OF ELEMENT YOU WANT TO LOAD PAGE IN").innerHTML = xmlhttp.responseText;
        }
    };
    xmlhttp.open("POST", "WEBPAGE YOU WANT TO LOAD", true);
    xmlhttp.send();
}
If your server is capable, you could also use PHP to do this, but since you're asking for an HTML5 method, this should be all you need.
The key issue in loading another site's page inside your own page is security. There is a cross-site security policy; you would have no chance of loading it directly in your iframe if the other site has a strict same-origin policy set. Hence, any alternative to iframes must be able to work around this security restriction, and even if a new component is introduced by the W3C in the future, it will have similar security restrictions.
For now, the best way around this is to simulate normal page access in your logic, which means AJAX plus server-side access may be a good option. You can set up a proxy on your server side, fetch the page content, and load it into your page. This works, but not perfectly: if there are links or images in the content, they may not be accessible.
Normally if you try to load a page in your own iframe, you would need to check whether the page can be loaded in iframe or not. This post gives some guidelines on how to do the check.
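As a rough illustration of that server-side proxy idea (a sketch only: the route name is made up, and it assumes Node 18+ with Express so that a global fetch is available):

const express = require('express');
const app = express();

// Same-origin proxy: fetch a remote page server-side and relay its HTML,
// so the client never hits the cross-origin restriction itself.
app.get('/proxy', async function (req, res) {
    const target = req.query.url;            // e.g. /proxy?url=https://example.com
    const response = await fetch(target);
    const html = await response.text();
    res.set('Content-Type', 'text/html');
    res.send(html);                           // relative links/images may still break
});

app.listen(3000);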
You should have a look into JSON-P - that was a perfect solution for me when I had that problem:
https://en.wikipedia.org/wiki/JSONP
You basically define a javascript file that loads all your data and another javascript file that processes and displays it. That gets rid of the ugly scrollbar of iframes.
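A minimal JSONP sketch under those assumptions (the endpoint URL and callback name are invented for illustration; the remote server has to cooperate by wrapping its data in the callback):

// Page side: define the callback, then load the cross-domain script.
function handleData(data) {
    document.getElementById('target').innerHTML = data.html;
}

var s = document.createElement('script');
s.src = 'https://other-domain.example/widget?callback=handleData';
document.body.appendChild(s);

// The server responds with something like:
//   handleData({ "html": "<p>Hello from another domain</p>" });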