I have a strange issue where accented characters (Central European) are not shown correctly. The website was created in Adobe Muse, and the fonts are Museo Slab and Open Sans (from Typekit), which should contain accented characters; they appear correctly on Typekit's website and on typetester.org as well.
I have checked the code but haven't found anything that might be causing the issue.
The website (work in progress) is at http://festivalzvsk.businesscatalyst.com/
thanks,
Michael
It was the font subsetting option within Muse: the 'Default subset' option in File > Site Properties was not using the full character set (default subsetting is the default option), so when I switched it to 'All' it worked. Many thanks for pointing me in the right direction.
Related
In Google Chrome, if I create a prompt with an emoji character, it looks fine there. But if I put it in an input field or in the page itself, all the characters render wrong, as a square with a cross.
Do you know if this is something I can fix easily?
Is it the charset used in the page (utf-8)?
Is it the font I'm using?
Thank you.
Prompt:
Textarea:
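One quick check (a sketch using standard Unix tools; the emoji here is just an example) is to inspect the bytes themselves: if the page is genuinely UTF-8, an emoji such as 😀 (U+1F600) should be stored as the four-byte sequence f0 9f 98 80. If the bytes are right but you still see a square with a cross, the problem is the font, not the charset.

```shell
# Dump the UTF-8 bytes of an emoji; 😀 (U+1F600) encodes as f0 9f 98 80.
printf '😀' | xxd -p
```

If the bytes check out, try adding a font with emoji coverage to the CSS font stack of the input field (e.g. "Segoe UI Emoji" on Windows or "Apple Color Emoji" on macOS) as a fallback.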
I have a Chinese help (.chm) file. Before extracting the CHM, the table of contents displays properly in Chinese, but when I use keytool or Microsoft HTML Help Workshop to extract it, the .hhc and .hhk files show symbols instead of Chinese characters. I installed the Chinese language pack as well, but that didn't resolve it. Please help me resolve this.
HTML Help v1.0 was released in 1997. It is old and not Unicode-enabled, so the project files (.hhp, .hhc, .hhk) and HTML topic files (.htm, .html) all need to be saved as ANSI. If the HTML is encoded as Unicode (UTF-8, or UTF-16 aka UNICODE), non-English characters won't be handled correctly in the HH navigation (TOC, Index, Search). The embedded browser (the content area on the right of the help viewer) will, however, display the topic text fine, since it is a Unicode-enabled control.
To correctly compile and display, say, Japanese help, you will need to find a Japanese Windows PC, or change the PC's region settings (the language for non-Unicode programs) to Japanese.
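Concretely, files that were saved as UTF-8 can be re-encoded to the legacy ANSI code page before compiling. A minimal sketch with iconv, assuming Simplified Chinese (GB2312, Windows code page 936) and a hypothetical file name `toc.hhc`:

```shell
# Re-encode a UTF-8 .hhc to the GB2312 ANSI code page expected by HTML Help.
# "toc.hhc" is a placeholder; repeat for the .hhk, .hhp, and .htm topic files.
iconv -f UTF-8 -t GB2312 toc.hhc > toc-ansi.hhc
```

Note that iconv will fail on any character that does not exist in the target code page, which is itself a useful diagnostic for finding the offending text.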
I managed to capture the behavior of a complex website in a webarchive. I would now like to turn that webarchive into a set of nested directories of HTML files. Yet when I did it, both with Waf and with a commercial application bought on the Apple Store, all I got was the nested directories with the HTML page at the bottom, and no images, no CSS, and no working links.
If you are interested the webarchive document is at:
http://www.miafoto.it/it/GiroMilano.webarchive
while the weak product of the extraction is at:
http://www.miafoto.it/it/Giromilano/Pagine/default.aspx
and the empty directories above.
In addition to the different look, the webarchive displays the same behavior as the official website when a listbox value is selected and the button is then pushed, while the extracted version produces a page with no contents, loading itself rather than the official page.
As you may see, the webarchive is over 1 MB while the extraction is just a little over 1 KB.
What is wrong, and how can I perform such an apparently trivial task with usable results?
Thanks,
textutil -convert html example.webarchive
Be careful: the converted HTML and its files are created in the same folder as the webarchive!
Also, I had to open the .html file in a text editor and fix the "file:///image.tiff" links (replacing "file:///" with "") so they point to relative paths.
Also, not all browsers display .tiff images.
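The two steps above can be combined into a short shell sketch (the file names are placeholders; the in-place sed syntax shown is the macOS/BSD variant):

```shell
# Convert the webarchive to HTML, then rewrite absolute file:/// links
# so they become relative paths next to the extracted resources.
textutil -convert html example.webarchive
sed -i '' 's|file:///||g' example.html   # GNU sed on Linux: sed -i 's|file:///||g'
```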
I find that this WebArchiveExtractor.app works on my Mac (macOS Mojave):
https://robrohan.github.io/WebArchiveExtractor/
I worked around the issue by finding all the parameters being submitted by the page and submitting them in my script as well, ignoring the webarchive.
To save HTML pages on a Mac, I use Chrome: download and install it, then save your page as HTML. Safari saves web pages in webarchive format, which for me is very hard to deal with.
On a couple of sites I've made, random ASCII characters have been appearing in the middle of the document. It has always been on test sites and was never a problem until it appeared on my most recent project for a client. The characters aren't displayed in the development environment I use (Aptana 3), but they appear both on screen and in the source code when the page is viewed in a browser.

I've looked around, and it seems others have had this issue, but I haven't been able to find any real solution. I tried changing the text encoding, but nothing changed. Has anyone been able to solve this?
Did you try saving your file as UTF-8?
Did you verify that your file is actually saved as UTF-8?
A Linux command to check the file type is:
file -i filename.html
Did you verify the content-header being sent from the webserver?
Is any of the text coming from a database?
You could also try adding the following meta tag:
<meta charset="utf-8">
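If `file -i` reports a legacy encoding, you can re-encode the file to UTF-8 before relying on that meta tag. A sketch assuming ISO-8859-1 and a hypothetical file name `page.html`:

```shell
# Check the detected charset, then convert the file to UTF-8.
file -i page.html                          # e.g. "text/html; charset=iso-8859-1"
iconv -f ISO-8859-1 -t UTF-8 page.html > page-utf8.html
```

Also make sure the web server's Content-Type header advertises the same charset as the file and the meta tag, since the header takes precedence in browsers.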
With your example site: http://www.oryxwebstudio.com/saloncruz/#contact
Maybe I'm blind, but I don't see any invalid characters in IE, Firefox, or Chrome. Here is a screenshot from Firefox:
Just encountered an interesting problem. I have a CHM file. If I display it using Process.Start it displays correctly.
If, however, I launch it using the HH API, it displays without any icons in the toolbar and treeview; the main content, including graphics, displays correctly. Here's what it looks like, with a few article titles scribbled out: http://img527.imageshack.us/img527/3430/problemhelpfilehl2.png
The same file works fine on a colleague's machine with the same setup.
Any thoughts as to what's going on?
It seems the problem was that I was giving the HH API a relative path to the help file. Now that I'm using an absolute path, the problem has gone away.