Display HTML within an ExtJS panel in the IE browser

I have an ExtJS application that needs to display an HTML document within a panel. The application works fine in Firefox and Chrome. However, when I attempt to use this functionality in Internet Explorer 8, a pop-up appears:
"Do you want to save this file, or find a program online to open it?
Name: getBinary Type: Unknown File Type, 688KB From:
SERVER_NAME_HERE"
I have not included the code because it's my company's property, but I could probably create a mock-up if it's really needed. I first wanted to see whether there is any common knowledge about this type of issue that I haven't been able to find online.

OK, I figured it out, and I have to apologize to the good people here at Stack Overflow. IE8 was not having trouble displaying an .html page. The page was actually an .xhtml file, which I learned by digging deeper into the code; one of our developers had simply named it .html. :/
There are two different solutions for getting Internet Explorer 8 to display an .xhtml file:
You can use the hack/fix mentioned here: w3c xhtml fix.
You can change the .xhtml file's extension to .html (this is riskier), although in my particular case no information was lost that way.
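For anyone who hits the original symptom, here is a minimal sketch of loading an external HTML document into an ExtJS panel. It assumes the ExtJS 4 component loader API; the panel config and URL are illustrative, not taken from the actual application. The point from the answer above still applies: the document must reach IE8 as text/html, since IE8 prompts to download anything served as application/xhtml+xml.

// Minimal sketch (assumed ExtJS 4.x): a panel that pulls an external
// HTML document into its body. URL and sizes are illustrative.
Ext.onReady(function () {
    Ext.create('Ext.panel.Panel', {
        title: 'Document viewer',
        width: 600,
        height: 400,
        renderTo: Ext.getBody(),
        loader: {
            url: '/docs/report.html', // must be served as text/html for IE8
            autoLoad: true
        }
    });
});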

Related

WordPress drops file:// prefix after update

TL;DR: why does WordPress remove file:// from file links?
Our intranet page has a section containing icons with links behind them. All of a sudden (our guess is after an update), one of the links stopped working. The link is as follows (1):
<img src="/img/meetings.jpg" style="width:75px; height:75px;"/>
The expected behaviour (in Internet Explorer (2)) is that the file explorer opens and points to the share \\vmdata\meetings, which has always worked up until now.
When I hover over the icon image, however, I see the following URL:
http://vmdata/meetings
and when I check the HTML by viewing the source of the page, I see that the file:// prefix is indeed gone:
<a href="//vmdata/meetings" target="_blank" rel="noopener noreferrer">
To work around this issue, I had a look at a page on which the original creator had added the same type of links. My idea was to create a similar page, copy its HTML code, and point the icon to that new page. I added the page and the HTML link, but after viewing the page the result is exactly the same: the file:// prefix is gone.
My guess is that something within WordPress is rewriting/removing the file:// link. My question is twofold: how do I stop this rewriting/removing behaviour, and/or how can I add a link to a file share as before?
PS: the creator of the website is no longer available, and the website is running unmanaged. Only content creators are left. We have no WordPress knowledge in house, so we're basically just trying to keep the site up and running (while waiting for a new site).
(1) I realise that pointing to a server share from an intranet site is a very ugly way to publish files. However, as stated before, we're in an "if it ain't broke, don't fix it" situation with this website, so we just want to get back to a working state. Creating a page that links to (hosted) documents would be much better, but for various reasons it is not feasible.
(2) Please don't bother pointing out not to use Internet Explorer (anymore); we all know that, but we are stuck with it because it is a requirement for one of the major tools we all use every day. As long as that tool doesn't support other browsers, we're stuck with IE (unfortunately).
I found another question regarding this issue: can't save network share path as a link in wordpress 3.1
Apparently the correct way to add an allowed protocol into WordPress is to modify the functions.php file and add the following code:
// Tell kses that file:// is an allowed link protocol, so WordPress
// stops stripping it from saved content.
function allowed_link_protocols_filter($protocols)
{
    $protocols[] = 'file';
    return $protocols;
}
add_filter('kses_allowed_protocols', 'allowed_link_protocols_filter');
More information can be found in the following article: https://developer.wordpress.org/reference/hooks/kses_allowed_protocols/
Adding the code above solved the problem for me, so I hope this helps others to solve similar issues in the future.

Chrome inspector doesn't show CSS line numbers anymore on many of my localhost sites

On every other website I visit, the inspector works as expected.
But on many of the sites I'm editing on my local Apache server (using XAMPP), the inspector somehow doesn't show the "filename.css:lineNumber" information.
Also, any change I make in the inspector on these sites does nothing to the code shown in the Sources tab.
I've tried refreshing, hard refreshing to bypass the cache, closing and reopening the tab, and closing and reopening Chrome. The same problem occurs.
On other sites the inspector works well, but not on many of the localhost sites.
Has anyone experienced this before? Is there a way to fix this?
If you are using a client-side CSS generator library such as Lea Verou's excellent -prefix-free or client-side Less, you will not see source information, because it has all been processed and re-injected as style elements.
Client-side Less has an option, dumpLineNumbers, to include the original source line info as a comment in the generated CSS. (I'm not sure whether this will display in Chrome's inspector, but I think it might.)
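For reference, here is a rough sketch of how that option is typically set for client-side less.js. Treat the exact values as assumptions and check them against the less.js version you're running:

// Assumed less.js development settings; this object must be declared
// BEFORE the <script> tag that loads less.js so the compiler sees it.
var less = {
    env: "development",
    dumpLineNumbers: "comments" // write the original .less file/line as CSS comments
};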
The only "fix" I know of for -prefix-free is to temporarily remove it, obtain the source info for reference, then put it back in.
I have had exactly the same problem (not using a CSS generator), and it appears to be a known bug in the current version of Chrome. The solution is to use the more up-to-date beta version known as Chrome Canary. Here's the link :)
https://www.google.co.uk/intl/en/chrome/browser/canary.html
Check what the line-ending format of your CSS file is. I had the same trouble with the UNIX and Macintosh formats. For example, open your CSS file in Notepad++; in the bottom right corner of the window you'll see the current format. If you see UNIX or Macintosh there, right-click it and change it to DOS/Windows. Then save your file and refresh your page in Chrome. It definitely helped in my case.

Convert webarchive to html

I managed to capture the behaviour of a complex website in a webarchive. I would now like to turn that webarchive into a set of nested directories of HTML. Yet when I tried, both with Waf and with a commercial application bought from the Apple store, all I got was the nested directories with the HTML page at the bottom and no images, no CSS, and no working links.
If you are interested the webarchive document is at:
http://www.miafoto.it/it/GiroMilano.webarchive
while the poor result of the extraction is at:
http://www.miafoto.it/it/Giromilano/Pagine/default.aspx
and the empty directories above.
In addition to the different look, the webarchive shows the same behaviour as the official website (when a listbox value is selected and the button is pushed), while the extracted version produces a page with no content, because it loads itself rather than the official page.
As you may see, the webarchive is over 1 MB while the extraction is just a little over 1 KB.
What is wrong with it, and how can I perform such an apparently trivial task with usable results?
Thanks,
textutil -convert html example.webarchive
Be careful: the HTML and its accompanying files are created in the same folder as the webarchive!
Also, I had to open the .html file with a text editor and fix the "file:///image.tiff" links (replacing "file:///" with "") so that they point to relative paths.
Also, not all browsers display .tiff images.
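If editing those links by hand gets tedious, a hypothetical Node.js helper for the same search-and-replace might look like this (the file name is illustrative):

// Hypothetical Node.js helper: strip the "file:///" prefix that the
// textutil conversion leaves on resource links, making them relative.
const fs = require('fs');

const file = 'example.html'; // output of: textutil -convert html example.webarchive
const html = fs.readFileSync(file, 'utf8');
fs.writeFileSync(file, html.replace(/file:\/\/\//g, ''));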
I found that WebArchiveExtractor.app works on my Mac (macOS Mojave):
https://robrohan.github.io/WebArchiveExtractor/
I got around the issue by finding all the parameters being submitted by the page and submitting them from my own script, ignoring the webarchive.
To save HTML pages on a Mac, I use Chrome. Download and install it and save your page as HTML. Safari saves web pages in webarchive format, which for me is very hard to deal with.

Just want to highlight some text when using a browser to view local HTML

A lot of downloadable tutorials come as .chm, .pdf, .html, and so on. I downloaded a Java SE tutorial in HTML format. When I use Chrome to view it, everything is fine. But I wonder: how could I directly highlight some useful information (e.g. text) while viewing it in Chrome? The HTML files are local, and I know I could edit them with some software, for instance by adding HTML tags such as <font color="...">.
But I just want to highlight text directly in the browser, like editing it in Word. Is there any suggestion? Does Chrome support that kind of plugin? If you still don't understand what I mean, please look at "Clip to Evernote", a Chrome plugin that can clip pages and upload them to the Evernote server. When I use the Evernote client to read them, I can directly highlight the words that are useful to me.
It's much more a Super User question, but... there are a lot of plugins for highlighting web pages out there. You could try Yawas or Simple Highlighter.
Edit: OK, I think I understand your problem better now... Yawas, Simple Highlighter, and most other highlighters don't highlight local pages.
I'm not sure such a highlighter is available for Chrome, then. What I would suggest is to try opening your documentation with Amaya instead of Chrome. It is both a browser and an editor from the W3C, and since it has both functionalities, you will probably be able to do what you want with your local pages.
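If none of the plugins handle local file:// pages, a rough do-it-yourself alternative is a small snippet that wraps the current selection in a coloured span. This is only a sketch of the idea: it assumes the selection stays inside a single text node (Range.surroundContents throws otherwise), and the highlight is lost unless you save the modified HTML afterwards.

// Sketch: wrap the current text selection in a yellow <span>.
// Run it from the browser console or fold it into a bookmarklet.
(function () {
    var sel = window.getSelection();
    if (sel.rangeCount === 0) return;          // nothing selected
    var span = document.createElement('span');
    span.style.backgroundColor = 'yellow';
    sel.getRangeAt(0).surroundContents(span);  // throws if the selection crosses elements
})();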
You can save it to your computer by clicking "Open a new tab containing a list of highlights and notes on just this page". Then you can save only the HTML contents to your computer under any name you like. Don't try to use ALT to save the list of notes, because you will never see the contents you want to save.

Google Chrome doesn't print my JavaScript- and AJAX-generated content

I am the developer of a web application.
Chrome displays my JavaScript- and AJAX-generated pages correctly, but when I try to print them (via its native print function) I get a blank page.
Printing works just fine on other browsers.
I have tried printing server-side-generated pages with Chrome and they print fine.
What could be wrong with the pages of my web application? I think the issue is that those pages are dynamically generated by JavaScript and AJAX.
I say that because I have just found out that I can't even save those pages correctly with Chrome (none of the dynamic HTML is saved).
I am on Google Chrome 13.0.782.112.
How can I debug and fix this issue?
Is there any workaround?
Is anybody managing to get dynamic-generated content printed with Google Chrome?
This problem is driving me crazy!
P.S.: some of my users are reporting the same problem on Safari :-(
UPDATE: upgraded to Chrome 14.0.835.202 but the issue is still there...
I've had exactly the same problem, though not in Chrome (although I didn't actually test with Chrome). In certain browsers (I cannot remember which ones offhand, but it was either IE or FF), any content that is added to the DOM by JavaScript is not printed. What actually gets printed is the original document that was served to the browser.
I successfully solved this using the following JavaScript function:
function docw()
{
    // Re-serialise the live DOM and rewrite the document with it, so the
    // browser works from the JavaScript-built markup instead of the
    // originally served page.
    var doct = "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" " +
               "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">";
    document.write(doct + document.getElementsByTagName('html')[0].innerHTML);
}
This is called once the JavaScript page manipulation has finished. It reads the entire DOM and then re-writes the whole document back into itself using document.write. This enabled printing for my particular project in both IE and Firefox, although I'm pretty sure one of them already worked in the first place and the other didn't (I can't remember which, and it's not a project I can pull out to test at the moment). Whether this will solve the problem in Chrome I don't know, but it's worth a try.
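The answer doesn't show where docw() gets invoked; purely as an illustration (and assuming the page already uses jQuery for its AJAX calls, which the answer does not state), it could be hooked to run once all AJAX activity has stopped:

// Hypothetical wiring: rewrite the document only after every jQuery AJAX
// request has completed, so the rewritten markup matches the rendered page.
$(document).ajaxStop(function () {
    docw();
});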
Edit: Terribly sorry, but I'm a complete pleb. I just re-read my old comments, and this solution had nothing to do with printing; it was actually to fix a problem where only the originally served document would be saved when saving to a file. That said, I still wonder if it's worth a shot.
This helped me with a related problem: how to view/save the dynamically generated HTML itself. I came up with the following bookmarklet.
javascript:(function(){document.write('<pre>'+(document.getElementsByTagName('html')[0].innerHTML.replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;'))+'</pre>')})()
I run this, select all and copy, and then (on Linux) use 'xclip -out' to direct the large amount of clipboard data to a file.
Trevor's answer totally worked for me. With jQuery I simply did something like
$("html").html($("html").html());
and it worked perfectly.