Can the Summernote editor produce XHTML code?

I'd like to use the Summernote editor as an alternative to the TinyMCE editor we currently use, but it is a requirement that it produce XHTML code.
By default, it writes HTML5 code, for instance:
<p>Foo<br>bar<br></p><hr><p>Baz<br></p>
What I would like to get is this:
<p>Foo<br />bar<br /></p><hr /><p>Baz<br /></p>
I can't find any reference to it, but can Summernote somehow output XHTML code, via a plugin or by other means?
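There does not appear to be a documented Summernote setting for this, so here is a hedged sketch of one possible workaround: post-process the HTML that Summernote returns through its summernote('code') API with the browser's own XML serializer, which self-closes void elements such as <br> and <hr>. The function name htmlToXhtml and the #editor selector below are illustrative assumptions, not part of Summernote.

// Sketch only: re-serialize Summernote's HTML5 markup as XML so void
// elements come out self-closed (<br />, <hr />). Browser environment assumed.
function htmlToXhtml(html: string): string {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const serializer = new XMLSerializer();
  // parseFromString wraps the fragment in a full document, so serialize
  // only the children of <body> and concatenate the results.
  return Array.from(doc.body.childNodes)
    .map((node) => serializer.serializeToString(node))
    .join("");
}

// Usage: summernote('code') returns the editor contents as an HTML string.
// const xhtml = htmlToXhtml($("#editor").summernote("code"));

Be aware that XMLSerializer adds an xmlns="http://www.w3.org/1999/xhtml" attribute to the top-level elements it serializes, which you may want to strip before storing the result.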

Related

How to change the <!DOCTYPE> declaration from HTML 4.01 to HTML 5 in IntelliJ

When generating JavaDoc documentation in IntelliJ IDEA 2018.3.3 (Community Edition), I get this message:
Constructing Javadoc information...
javadoc: warning - You have not specified the version of HTML to use.
The default is currently HTML 4.01, but this will change to HTML5
in a future release. To suppress this warning, please specify the
version of HTML used in your documentation comments and to be
generated by this doclet, using the -html4 or -html5 options.
At the moment, the first line of every generated HTML file is:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
Therefore I changed the default HTML language level from HTML 4 to HTML 5 at
Project Settings - Languages & Frameworks - Schemas and DTDs - Default XML Schemas
In addition I looked at the project settings at
Editor - File and Code Templates - Default Scheme
There are templates for HTML and HTML4, but not for HTML5.
I am wondering how to switch the generated documentation to the required HTML version.
Thanks for your help!
As the message you've quoted says, you can specify the version of the HTML generated by JavaDoc by specifying the -html5 option. In IntelliJ IDEA, this option can be specified under "Other command line arguments:" in the Tools | Generate JavaDoc... dialog.
None of the other options you've tried to change have any impact on JavaDoc generation.
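For reference, the same flag works when running the javadoc tool directly from the command line; in the minimal example below, the output directory, source path and package name are placeholders:

javadoc -html5 -d build/docs -sourcepath src com.example.mypackage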

Does Primefaces render invalid HTML?

To build a very simple web app for my company, I'm evaluating some web frameworks, including PrimeFaces.
One strict requirement is accessibility, along with the fact that the HTML must be valid (checked against the W3C Validator).
I've played a bit with the examples and I've noticed that the HTML rendered is not valid. The invalid block is the following:
<input name="javax.faces.ViewState" id="javax.faces.ViewState" value="2042368857675116551:8104174386673838460" autocomplete="off" type="hidden">
and the reason is:
line 74 column 159 - Error: Attribute autocomplete not allowed on element input at this point.
So, can I do something in PrimeFaces to make it render valid HTML code? I haven't gone deep into PrimeFaces, but I guess I have little control over how the controls are rendered. Does anyone have experience with this problem (the validity of the HTML rendered by PF) and want to share it?
Thanks
The view state is not something that PrimeFaces adds to your rendered HTML; it comes from the JSF implementation. If you use Mojarra, there are some parameters you can set to tune things (I haven't tested this myself, just did some simple googling for you (hint, hint)).
See How to let JSF render conform XHTML 1.0 strict?
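As a hedged illustration only (quoted from memory, so verify it against the linked question): the Mojarra context parameter usually cited for this is com.sun.faces.autoCompleteOffOnViewState, which, when set to false, stops Mojarra from rendering autocomplete="off" on the view state field. In web.xml that would look like:

<context-param>
    <param-name>com.sun.faces.autoCompleteOffOnViewState</param-name>
    <param-value>false</param-value>
</context-param>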

How to choose the output format of the BlueGriffon editor?

Wikipedia says that BlueGriffon can create and edit pages in accordance with HTML 4, XHTML 1.0, HTML 5 and XHTML 5. But I can't find where to choose the output format; it always creates an XHTML file when saving. Could someone give me a hint?
Thanks,
Stephan
As it turns out, you can't. You can pick the format when creating a new document, but you can't change it afterwards.

Find unclosed HTML tags

I've been editing a lot of HTML pages with a basic text editor, Notepad.
When I went to validate them, the validation service said there's a div tag that is not closed. I tend to find that automatic error reports such as these aren't very reliable, i.e. they give you a line number and the error, but often the error is actually in another part of the file entirely.
I'm just wondering if there is a way to find the closing tag for an opened HTML tag. For example, you click on a tag, then press a shortcut, and the program jumps to the closing tag. I know this functionality is in HomeSite, but I don't have HomeSite, and it's a bit of a bulky program anyway.
To sum up, I would like to know how to find HTML tags that don't have closing tags.
If you save your HTML as page.xhtml (instead of page.html), the browser (Firefox, Chrome or Opera) should find the unclosed tags for you without the need for a validator. Just remember to rename them back to .html before serving them online - IE doesn't support .xhtml files yet.
Edit (3 years later): This post is still getting comments/upvotes, so a slight amendment: IE9 and IE10 do now support .xhtml files.
Use Firefox's View Source - incorrect code will be shown in a different color.
Notepad++ - I've never had any problems with it, and I've also never had an unclosed HTML tag with it.
You can just click on any element and see if it has a closing tag. You can also do this: click "TextFX" (to the left of Plugins in the menu bar) -> click "Text FX HTML Tidy" -> click, let's say, "TiDy clean Document - wrap". That should fix your HTML document, i.e. close all unclosed elements.
http://validator.w3.org/
Does more than just unclosed tags. Should be used by all front-end developers, IMO.
I am using two online tools, which work very well:
jona.ca and tormus.com
CSE HTML Validator Lite is a free lightweight editor (for Windows) that will check your HTML (just press F6) and find missing end tags and other problems. You can also press Ctrl+M on a start tag or end tag and it will take you to the matching start or end tag.
A simple online service that will also do this (and more) is OnlineWebCheck.com. There are other online services but in my opinion the one I just mentioned is the simplest one to use and understand.
Full disclosure: I am the developer of CSE HTML Validator Lite and http://www.OnlineWebCheck.com/ which is based on CSE HTML Validator.
If your code is very messy, neither prettified nor indented, v.Nu (as seen at https://validator.w3.org/nu/) will often get confused (for instance, if there's an extra closing tag, it may not manage to pick the one which is really wrong).
One solution is code folding: by collapsing all the code which is marked as a child of a certain node, you can often easily spot an incorrect hierarchy.
An example of an editor which supports code folding is Kate: see the arrows on the left in their screenshot.
free lightweight html editors ... online html validation services that can highlight unclosed tags?
Use linter-vnu.
linter-vnu is a package for the Atom editor that uses the Nu Html Checker (v.Nu) to validate HTML or XHTML documents.
Disclosure: I am the developer of linter-vnu.
linter-vnu uses another Atom package, linter, to integrate v.Nu and Atom.
For example, if you open the following test.html file in Atom:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<meta charset="utf-8"/>
<title>Test HTML document</title>
</head>
<body>
<div>
<p>Lorem ipsum dolor sit amet...</p>
</body>
</html>
(with a deliberately missing closing </div> tag)
then Atom (or rather, linter-vnu, thanks to linter and v.Nu) displays the following error messages:
Unclosed element “div”. at line 8 col 1 in test.html
End tag for “body” seen, but there were unclosed elements. at line 10 col 1 in test.html
and marks those lines in the editor with red dots.
If you click the "at..." (hyperlinked text) in the error message, the editor insertion point moves to the corresponding line, and a popup appears under the line, with the error text ('Unclosed element "div".').
If you save your HTML document with the file extension .xhtml, and open it in Atom, then v.Nu validates your document as XHTML (XML) rather than HTML, with slightly different messages. In this case, just one error message:
required character (found “b”) (expected “d”) at line 10 col 3
where line 10 contains the closing </body> tag. v.Nu was expecting a </div> tag instead; it was happy with </ - it was expecting a closing tag - but it was expecting the element name to begin with "d" for "div", not "b" for "body".
I make the following claims, as of November 2016:
v.Nu is the best option for validating (X)HTML(5).
linter-vnu is the best option for interactively harnessing v.Nu in an editor. linter-vnu itself is trivial; it's just a few lines of "glue" code. What makes it the best option is the Atom editor and the Atom linter package.
I welcome counterclaims and questions about these claims. I'd be happy to be proven wrong and be shown something better. Especially if, like v.Nu and linter-vnu, it's free.

Why does the web page I fetch with Perl look odd?

I have a Perl script to open the page http://svejo.net/popular/all/new/ and filter the names of the posts, but apart from the headers, everything seems encrypted. Nothing can be read.
When I open the same page in a browser, everything looks fine, including the source code. How is it possible to encrypt a page for a script and not for a browser? My Perl script sends the same headers as my browser (Google Chrome).
The page looks fine to me, although I don't read Bulgarian.
#!perl
use LWP::Simple;
getprint( 'http://svejo.net/popular/all/new/' );
This script returns the plain page without anything that looks odd or encrypted:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="bg" lang="bg">
<head>
<title>Svejo — Популярните новини </title>
What were you trying, and which versions of perl and the modules are you using? What is the output that you are seeing?
You clarify that you are using ActivePerl on Windows (please update your question with additional details). Remember, not only do you need to do the right Unicode things in your programs, but your terminal has to be set up to display Unicode properly.
What happens when you explicitly binmode your output?
binmode STDOUT, ':utf8';
Try saving the output to a file and looking at it in an editor that understands UTF-8.
Okay, that didn't work. Let's get even more general and set all handles to use UTF-8 by default:
use open IO => ':utf8';
The page is encoded with UTF-8.
Perhaps your Perl script is using a different encoding?
I found this page that describes Processing UTF-8 Files with Perl.