Using a server-side XML parser to generate HTML content

Is it possible, using a server-side XML parser, to create an HTML blob that is then included in an existing HTML page?

Is the existing HTML page already being served by the same server? If so, then yes - arguably that's a rough description of what almost all web frameworks do, to a greater or lesser extent: insert some dynamic content within a static template.
Which bit are you concerned about?

Most languages should have XML libraries or parsers available to facilitate this.
For example, PHP has an XML parser.
Write a function using this to take a given XML feed, pretty it up, and spit it out. Then call this function in your HTML page.
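For illustration, here is a minimal sketch of that idea in PHP using SimpleXML. The feed shape (an <items> root containing <item> elements with <title> and <link> children) and the file name news.xml are assumptions, not anything the question specified:

    <?php
    // Minimal sketch: turn a simple XML feed into an HTML fragment.
    // Assumes the feed looks like <items><item><title>..</title><link>..</link></item></items>.
    function xml_feed_to_html($xml)
    {
        $doc = simplexml_load_string($xml);
        if ($doc === false) {
            return '<p>Feed could not be parsed.</p>';
        }

        $html = "<ul>\n";
        foreach ($doc->item as $item) {
            $title = htmlspecialchars((string) $item->title, ENT_QUOTES);
            $link  = htmlspecialchars((string) $item->link, ENT_QUOTES);
            $html .= "  <li><a href=\"$link\">$title</a></li>\n";
        }
        return $html . "</ul>\n";
    }

    // Drop the generated blob into the otherwise static page:
    $feed = file_get_contents('news.xml'); // hypothetical local feed file
    ?>
    <html>
      <body>
        <h1>Latest news</h1>
        <?= xml_feed_to_html($feed) ?>
      </body>
    </html>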

Related

I have created an HTML-generating JS function based on JSON. Is this more effective than templating engines or writing the HTML manually in code?

Could you please check whether this function is more effective, from a coding point of view, than templating engines or manually creating the HTML elements through code?
https://github.com/krishnakumar-m/simple-html-templating-json
I have checked GitHub for uses of JSON to generate HTML, but was not able to find any. I found this approach a bit handier when dynamically generating HTML elements, because I was able to visualize the HTML as JSON.
Why are we not using JSON for structuring HTML this way? Is there a catch here? Or are there libraries that do the same thing?

Convert a HTML site into a Wiki

I have a basic HTML site (no JavaScript, PHP, or CSS) that I would like to turn into a wiki. The site has over 1000 pages. I would like the converter to take the contents of each page and place the content into its own newly created wiki page. I also need all the links to be converted as well. I would prefer to use MediaWiki, but any wiki software would do.
Does anyone know of a way to do this?
IIRC Reimar Bauer has written some tool to recover MoinMoin 1.9.x contents from html content on archive.org. Maybe you can slightly modify that for your purpose.
But be aware that that tool was made to convert HTML that was generated by moin, not necessarily arbitrary HTML. Still, if your HTML is simple, it might be worth trying.
Another idea (maybe needing a bit more research and coding) is to use the post URL that is used by the GUI editor of moin. It expects XHTML there and will try to convert it to moin wiki markup. You can also do a first try interactively by just copy-and-pasting your HTML into the GUI editor of moin. But be aware that browsers do a sanitizing step (converting HTML to valid XHTML) that is not present if you just post (not well-formed) HTML to the post URL.
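If the HTML really is simple, a small pre-processing script may get you most of the way before any wiki-specific import step. The sketch below (PHP, using DOMDocument) walks a hypothetical site/ directory, rewrites internal .html links to wiki page names, and writes one text file per page; html_body_to_wiki_markup() is a hypothetical converter you would still need to supply, or replace with whatever your wiki's import tooling expects:

    <?php
    // Rough pre-processing sketch: pull out each page's body, rewrite internal
    // .html links to wiki page names, and write one plain-text file per page
    // for later bulk import. Directory names are placeholders.
    @mkdir('wiki-import');
    foreach (glob('site/*.html') as $path) {
        $doc = new DOMDocument();
        @$doc->loadHTMLFile($path); // suppress warnings from sloppy old HTML

        // Rewrite internal links: "about.html" -> "About" (wiki page name).
        foreach ($doc->getElementsByTagName('a') as $a) {
            $href = $a->getAttribute('href');
            if (preg_match('/^([\w-]+)\.html$/', $href, $m)) {
                $a->setAttribute('href', ucfirst($m[1]));
            }
        }

        $title = ucfirst(basename($path, '.html'));
        $body  = $doc->getElementsByTagName('body')->item(0);
        $text  = html_body_to_wiki_markup($doc->saveHTML($body)); // hypothetical converter

        file_put_contents("wiki-import/$title.txt", $text);
    }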

Store data from HTML to XML file

Hello, I'm trying to learn about XML. XML is a medium for storing data, while HTML is a medium for displaying data, so how can I store data from HTML to XML?
I'd like to build a quiz maker that renders as HTML and stores its data in XML. Any tutorials/references for this?
Thanks
XML is just a fancy way to store data for your application. It's a standard which means that you can easily export data from one application into another. If you are interested in this, take a look at this page: http://www.w3schools.com/xml/xml_parser.asp
You will need to use html and javascript to build a quiz. If you want you can make your quiz load questions and answers from XML.
HTML is a specialized markup language that describes how a web page renders. Its XHTML variant is also valid XML, but HTML and XML are nonetheless very different things.
The question is very open-ended, so it's hard to answer. One way is to post data from your HTML-based website to your server and store it as XML.
However, it all depends on how you intend to use it.
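As a rough illustration of that "post it to the server and store it as XML" approach, here is a PHP sketch that appends submitted quiz entries to an XML file with DOMDocument. The quiz.xml file name and the question/answer form fields are assumptions:

    <?php
    // Sketch: append each posted quiz entry to an XML data file.
    $file = 'quiz.xml'; // hypothetical data file

    // Load the existing file, or start a new document with a <quiz> root.
    $doc = new DOMDocument('1.0', 'UTF-8');
    $doc->preserveWhiteSpace = false;
    $doc->formatOutput = true;
    if (file_exists($file)) {
        $doc->load($file);
        $root = $doc->documentElement;
    } else {
        $root = $doc->appendChild($doc->createElement('quiz'));
    }

    // Append one <item> per submission (fields escaped before storing).
    $item = $root->appendChild($doc->createElement('item'));
    $item->appendChild($doc->createElement('question', htmlspecialchars($_POST['question'] ?? '')));
    $item->appendChild($doc->createElement('answer', htmlspecialchars($_POST['answer'] ?? '')));

    $doc->save($file);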
I assume you mean "How can I load data stored in XML into an HTML website?". The simplest answer I can think of right now would be using jQuery/JavaScript.
http://think2loud.com/224-reading-xml-with-jquery/
https://stackoverflow.com/questions/10811511/jquery-how-to-get-xml-data
https://stackoverflow.com/questions/16113188/convert-xml-to-html-using-jquery-javascript

How do I create some HTML help pages, with the same content at the top and bottom, without PHP or ASP, etc.?

I want to create some html help pages, separate html pages.
However, I want to have the same content on the top and bottom of the pages.
In the past I've used PHP or ASP, with header and footer files.
I've then had to do view source and save these pages to get what I want.
I just wondered if there is an easier way to do this?
EDIT:
The pages are for use with software that uses a web object, not a normal browser, so there won't be a web server.
If your web server supports it, you could do server side includes
You could use frames, but it's not necessarily advisable (for one, it breaks navigation).
You could use XML files with an XSLT stylesheet to turn them into HTML documents that share similar elements.
You could use PHP or another server-side language to generate the pages, and then use a recursive download tool (such as wget) to turn them into HTML.
EDIT: you're basically asking whether the "standard-ish" subset of HTML supported by your component of choice provides a way of including data from a common file, just so you won't have to include the data in every HTML document.
The answer hovers somewhere between "no way" and "maybe your component has a few tricks to do that".
The sane thing to do here would be to have a tool generate the HTML documents from a common template. It could be XML + XSLT, PHP/ASP/whatever, or a fully-fledged CMS (which actually helps by letting non-technical users write the document contents).
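Since the finished pages will be viewed without a web server, the "tool generates the documents" option can be as small as a build script you run once on your own machine. A minimal PHP sketch, assuming header.html, footer.html, and a pages/ directory of body fragments (all placeholder names):

    <?php
    // Build-time sketch: wrap each page body in shared header/footer files
    // and write plain static HTML, so the shipped pages need no server at all.
    $header = file_get_contents('header.html');
    $footer = file_get_contents('footer.html');

    @mkdir('dist');
    foreach (glob('pages/*.html') as $page) {
        $body = file_get_contents($page);
        file_put_contents('dist/' . basename($page), $header . $body . $footer);
    }

You then ship only the generated files in dist/.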
It's awful, but you could include a JS file that uses a bunch of document.write("...") to include common elements. Not SEO friendly.

Converting HTML to RDF

I'm looking for a general-purpose API/web service/tool/etc. that allows converting a given HTML page into an RDF graph that is as specific as possible (most probably using a backbone ontology and/or mapper).
Have you tried GRDDL?
GRDDL is a technique for obtaining RDF data from XML documents and in particular XHTML pages.
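As a rough sketch of the GRDDL idea, PHP's XSLTProcessor (from the xsl extension) can apply a transformation stylesheet to a well-formed XHTML page and emit RDF/XML. The file names below are placeholders; in actual GRDDL the page itself advertises which transformation to apply:

    <?php
    // Sketch: apply an XSLT transformation to an XHTML page to extract RDF/XML.
    $page = new DOMDocument();
    $page->load('page.xhtml');          // must be well-formed XHTML (placeholder file)

    $xsl = new DOMDocument();
    $xsl->load('transform.xsl');        // a GRDDL transformation stylesheet (placeholder file)

    $proc = new XSLTProcessor();
    $proc->importStylesheet($xsl);

    echo $proc->transformToXML($page);  // RDF/XML out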
I used XQuery to extract the data out of the given set of web pages. I had to write custom queries for the web pages. I think this is the most straightforward approach for a specific set of HTML files. However, it is obviously not good for the general case: for a different set of web pages, other custom queries would need to be written.
I used jsoup to scrape data from HTML. It uses a jQuery style of querying the HTML DOM, which I was already familiar with, so it was a really simple tool for me to use. I also found it quite robust, but I only needed it to scrape 3 data sources, so I don't have rich experience with this tool yet. jsoup