Use plasTeX on short strings - html

I'm trying to find a python package which will convert short strings like
A closed-form solution of
$\textbf{R}\textbf{R}_1=\textbf{R}_2\textbf{R}$
w.r.t $\textbf{R}$
to a reasonable HTML representation, like
A closed-form solution of
<i><b>R</b><b>R</b><sub>1</sub> = <b>R</b><sub>2</sub><b>R</b></i>
w.r.t <i><b>R</b></i>
No LaTeX packages or document layout commands (\section etc.) will be involved; just the subset of TeX allowed in stackexchange postings.
While MathJax does handle this beautifully, unfortunately JavaScript options are off the table, as this is for an email digest--it has to be static HTML output. Inline CSS is fine. I know there's a Node.js version of MathJax that can approximate its output in static form (with a bunch of caveats about how the result won't be browser-responsive and other things I don't care about), but I want Python.
The best option I've found seems to be plasTeX, but all the documentation there seems to be about converting whole .tex files to .html, or, for some reason, .xml files, which is much more than I want to do.
I suppose, if need be, I could generate temporary .html files and then use BeautifulSoup to parse out only the part I'm interested in, but that seems a bit silly. Since I'm only talking about doing this maybe 50 times per script invocation, though, it would certainly be doable.
Is there a simple way to use plasTeX or any other Python package to get HTML equivalents of short LaTeX snippets?

You may find https://github.com/alvinwan/TexSoup useful. Using this library, you could replace the boldfaced parts in two lines. Although a sufficient number of regexes could do the job, TexSoup gives you a bit more flexibility:
from TexSoup import TexSoup
soup = TexSoup(r"$\textbf{R}\textbf{R}_1=\textbf{R}_2\textbf{R}$")
# swap each \textbf{...} node for a literal <b>...</b> string
for b in soup.find_all('textbf'):
    b.replace("<b>{args[0]}</b>".format(args=b.args))
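If you would rather avoid a dependency, a sufficient number of regexes can indeed do the job for snippets this simple. A minimal sketch (handling only \textbf, simple subscripts, and inline $...$ math, which is all the example above needs):

import re

def tex_to_html(snippet):
    # Toy converter for illustration only: it handles \textbf{...},
    # _{...} and _x subscripts, and inline $...$ math; nothing else.
    def convert(math):
        math = re.sub(r'\\textbf\{([^}]*)\}', r'<b>\1</b>', math)
        math = re.sub(r'_\{([^}]*)\}', r'<sub>\1</sub>', math)
        math = re.sub(r'_(\w)', r'<sub>\1</sub>', math)
        return '<i>' + math + '</i>'
    return re.sub(r'\$([^$]*)\$', lambda m: convert(m.group(1)), snippet)

print(tex_to_html(r"A closed-form solution of $\textbf{R}\textbf{R}_1=\textbf{R}_2\textbf{R}$ w.r.t $\textbf{R}$"))

Anything beyond that small whitelist of commands is where a real parser such as TexSoup (or plasTeX) starts to pay off.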

Related

Rails, HTML to JSON?

Given a static HTML page, is there an automated way to generate json?
For a large website that contains a lot of static HTML, I want to generate JSON for RSS feeds and search functionality, and I am looking for a way to convert HTML to JSON.
I could obviously write JSON templates for every page and every language, but that would be unmaintainable. That would double an 800-page website to 1600 pages, and that is not an option.
One approach I thought of would be to write a bot that loops through the routes to index the pages and saves the data to a database, which would give me all the choices I could wish for in search backends, such as Solr, Elasticsearch, Thinking Sphinx, etc.
I could use Capybara to help with this by visiting each path and extracting the text to save to a database in a rake task run as a background job, but I'm not sure how that would work in a production environment. It also seems that such a common requirement might already have been solved, but for the life of me I can't find anything.
I would be far happier (I think) if I could find a way to convert HTML text content to JSON.
Any ideas? Has this already been done? Are there any gems that might help? Or is there built-in functionality that I have not thought of, maybe a way to get HTML into a hash that could then be converted into JSON? Whatever the approach, it needs to be automated. I'm just stuck on the best approach.
Basically, HTML looks a lot like XML, but with fixed tag meanings, so you could use an XML-to-JSON conversion if it all ends up as a tree of HTML tags nested inside each other.
So your question essentially becomes an XML-to-JSON question, except that you may run into problems with single tags that have no closing tag (<br>, <img> and so on). You might need to find all of these and add a closing tag after each one before trying to read the markup into a hash as XML. By the way, for parsing text data in general you should look at regular expressions.
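To make the idea concrete, here is a minimal sketch of that pipeline (in Python purely for illustration; with Nokogiri in Ruby the steps are the same): parse the static page, keep the title and the visible text, and serialize the result to JSON.

import json
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Collects the <title> and the visible body text of a static page.
    def __init__(self):
        super().__init__()
        self.title = ""
        self.text = []
        self._in_title = False
        self._skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif not self._skip and data.strip():
            self.text.append(data.strip())

page = "<html><head><title>About us</title></head><body><p>Hello <b>world</b></p></body></html>"
parser = TextExtractor()
parser.feed(page)
print(json.dumps({"title": parser.title, "body": " ".join(parser.text)}))

A crawler (or the Capybara-driven rake task described above) could feed each page through something like this and write the resulting JSON to the search index or database.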
I chose to go with a Nokogiri solution in the end and wrote a parser to meet my needs.

Activating HTML with Haskell

I have a large pile of lecture notes in raw HTML format. I would like to add interactive content to these notes, in particular incorporating online exercises. I have some experience implementing online exercises as cgi-bin executables compiled from Haskell code running on the server, interacting with a student record file and sending suitable HTML back to the browser, using Text.Xhtml to generate the content. Now I plan to integrate the notes and the exercises.
The trouble is that I don't want to spend ages manually transforming my raw HTML into Haskell code to generate exactly the raw HTML I started with. Instead, I'd like to put my Haskell code and my HTML in the same source file, with placeholders in the latter for content generated by the former. A suitable tool should then transform this file into Haskell source code for (e.g.) a cgi-bin executable which generates the corresponding page.
Before I go hacking up such a piece of kit, I thought I'd ask if there's better technology out there already. The fixed points are the large legacy lump of HTML, the need to implement the assessment of the exercises in Haskell, and the need to interact with student records on the server. The handicap is that I need to use the departmental web server and I can't reconfigure it (ok, maybe I could ask nicely): that's one of the reasons I currently use cgi-bin executables, which are just fine on our server already, but I'm open to other possibilities.
My current plan is to write a (I mean adapt an existing) preprocessor to support a special syntax for defining functions of type
Html -> ... -> Html -> Html
that looks a lot like raw HTML with splice points. Then what I do with my existing raw HTML is indent it a bit and mark the holes.
But would that be a waste of time? Please, please tell me that this question is a duplicate!
There are Haskell frameworks like Yesod and Happstack which use templating engines like the one you describe.
Have you looked at the Haskell wiki at http://www.haskell.org/haskellwiki/HSP or
http://www.haskell.org/haskellwiki/Web/Libraries/Templating ?
They may do what you need.
You might find something to do the job here: Templating packages for Haskell.
And you should probably look into Snap, Yesod or Happstack for serving the content.
I have a large pile of lecture notes in raw HTML format. I would like to add interactive content to these notes, in particular incorporating online exercises.
There is already a system (called "ActiveHs"), written in Haskell, that allows you to put lecture notes and interactive exercises in one file.
See:
http://pnyf.inf.elte.hu/fp/UsersGuide_en.xml
http://pnyf.inf.elte.hu/fp/Constructive_en.xml
I can really say that it is very well written code and completely open source!

Is there anything wrong with the YAML format that keeps it from joining the web standards?

Well, I think YAML is really fantastic...
It's beautiful, easy to read, clever syntax...compared to any other data serialization format.
As a superset of JSON we could say it's more elaborate, a further step in the evolution of these formats.
But I see some different opinions out there, such:
YAML is dead,
don't use yaml and so on...
I simply can't understand what this is based on, because it seems so nice :)
If we take a few successful examples from the web, such as Ruby on Rails, we know they use YAML for simple configuration, but one thing that makes me curious is why YAML is not among the most-used formats on the web, like XML and JSON.
If you take Twitter, for example... why not offer the data in YAML format from the API as well?
Is there something wrong by doing it?
We can see the evolution of NoSQL databases like CouchDB and Mongo, all JSON-based, and even one great project called jsondb which looks very lightweight and can definitely do the job.
But when writing data structures in JSON I really can't understand why YAML is not being used instead.
So one of my concerns would be: is there something wrong with YAML?
People can say it's complex, but if you intend to use only the same features you would get in JSON, it's definitely not. You will get a more beautiful file for sure, though, and with no hassle. It would indeed be more complex if you decided to use more features, but that's how things are; at least you have the possibility to use them if you want to.
The possibility to choose whether or not to use double quotes for strings is fantastic; it makes everything cleaner and easier to read... well, you see my point :)
So my question would be: why is YAML not widely used in place of JSON?
Why does it not seem likely that it will be used for data-structure transfers within the online community?
All I can see is people using it for simple configuration files and nothing else...
Please bear with me, since I might be completely wrong; very big projects might be happening and my ignorance of the subject just didn't allow me to be a part of them :)
If there is any big project based on YAML out there, I would be very happy to know about it.
Thanks in advance
It's not that there's something wrong with YAML — it's just that it doesn't offer any compelling benefits in many cases. YAML is basically a superset of JSON. For most purposes, JSON is quite sufficient — people wouldn't be using advanced YAML features even if they had a full YAML parser — and its close ties to JavaScript make it fit in well with the technologies that Web developers are using anyway.
TLDR: People are already using as much YAML as they need. In most cases, that's JSON.
YAML uses more data than non-prettified JSON. It's great for files that humans might want to edit themselves but when all you're doing is passing data around, you're wasting bandwidth if you're using YAML.
If you need an explanation: each space in UTF-16 is two bytes. YAML uses spaces for indentation, and newline characters for nesting.
Take this example:
foo:
    bar:
        - foo
        - bar
This requires 41 characters (including newline characters). The equivalent JSON would be only 29 characters:
{"foo":{"bar":["foo","bar"]}}
Then just imagine what happens if you URL-encode the YAML. It becomes 95 characters:
foo%3A%0A%20%20%20%20bar%3A%0A%20%20%20%20%20%20%20%20-%20foo%0A%20%20%20%20%20%20%20%20-%20bar
Meanwhile the JSON becomes just 63 characters:
%7B%22foo%22%3A%7B%22bar%22%3A%5B%22foo%22%2C%22bar%22%5D%7D%7D
In the example above, the gap between YAML and JSON more than doubles once the strings are URL-encoded. And I'm sure you can imagine that the longer your YAML file is, the more this difference will grow.
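If you want to check the arithmetic yourself, a quick sketch using nothing but the Python standard library:

from urllib.parse import quote

yaml_text = "foo:\n    bar:\n        - foo\n        - bar"
json_text = '{"foo":{"bar":["foo","bar"]}}'

for label, text in (("YAML", yaml_text), ("JSON", json_text)):
    # percent-encode spaces, newlines, and punctuation, as in the examples above
    encoded = quote(text, safe="")
    print(label, len(text), "characters,", len(encoded), "URL-encoded")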
Oh, and one other reason not to use YAML: stackoverflow.com does not support YAML syntax highlighting... ! (Of course, I would argue that YAML is so beautiful that it doesn't need syntax highlighting. That's kind of the point of YAML, I think.)
In Ruby, many people argue that configuration should be Ruby rather than YAML. This saves the parsing stage, means you don't have to learn a new syntax, and means you don't end up with ERB tags everywhere when you are dynamically generating YAML content (Rails fixtures).
Personally I have to agree, and can't see what YAML would offer to network transfers that would make it a worthwhile consideration over JSON.
YAML has a number of problems; there is a good article,
YAML: probably not so great after all, on that.
Short summary (in addition to problems already listed in other answers):
Unreadable except for simple and short things
Insecure by default
Has portability issues
Very complex, with a number of surprising behaviors
I considered using YAML a few times and never did. The reason always had to do with the significant whitespace used for indentation. While I personally love it, even to me it sounded like asking for trouble, because:
Someone will surely make a mistake, not expecting that changing whitespace will break the file. Sometimes someone who has no idea about the language/format has to go into the file to change one number or string.
You can't guarantee that everybody everywhere will have their comparison/merging/source-control software configured properly to catch whitespace or empty-line differences.

Which technology should I use to transform my LaTeX documents into HTML documents?

I want to write a little program that transforms my TeX files into HTML. I want to parse the documents and turn the macros (the built-in ones and of course my own) into HTML pieces. Here are my requirements:
predefined rules (e.g. \begin{itemize} \item text \end{itemize} => <br> <p>text</p> <br/>)
defining own CSS style
ability to convert formulas (extract the formulas, load them into an image creator and then save the jpg/png)
easy to maintain and concise
I know there are several technologies out there, but I don't exactly know which is the best for me. Here are the technologies which flow into my mind
Ruby (I/O is easy, formula loading via Webrat),
XML/XSLT (I don't think I need it; just overhead),
perl (there are many libs out there but I'm not quite familiar with it)
bash (I worked with sed and was surprised how easy it was to work with regular expressions)
latex2html ... (these converters won't work for me and they don't give me freedom in parsing)
Any suggestions, hints and comments are welcome.
Thanks for your time, folks.
Have a look at pandoc here. It can also be installed on Linux or OS X, though it won't handle your custom macros. The only thing I've seen that can do a decent job with custom macros is tex4ht, but to really work well it needs you to be producing .DVI files. If you have a ton of custom macros, writing your own converter is going to take an ass-load of time. Even if you only have a few custom macros, it's still going to be a pain. Good luck!
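If pandoc covers enough of your macros, driving it from a small script is straightforward. A minimal sketch (Python here purely for illustration, and assuming the pandoc binary is on your PATH):

import subprocess

latex_source = r"\begin{itemize} \item text \end{itemize}"

# Ask pandoc to read LaTeX on stdin and emit an HTML fragment on stdout.
result = subprocess.run(
    ["pandoc", "--from=latex", "--to=html"],
    input=latex_source, capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. <ul> <li>text</li> </ul>

Custom macros would still have to be expanded (or rewritten) before handing the source to pandoc, which is where the real work lies.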
Six: TeX
Seven: Haskell
(I gave up trying to persuade SO to start numbering my list from 6).

How can I extract HTML content efficiently with Perl?

I am writing a crawler in Perl, which has to extract contents of web pages that reside on the same server. I am currently using the HTML::Extract module to do the job, but I found the module a bit slow, so I looked into its source code and found out it does not use any connection cache for LWP::UserAgent.
My last resort is to grab HTML::Extract's source code and modify it to use a cache, but I really want to avoid that if I can. Does anyone know any other module that can perform the same job better? I basically just need to grab all the text in the <body> element with the HTML tags removed.
I use pQuery for my web scraping. But I've also heard good things about Web::Scraper.
Both of these along with other modules have appeared in answers on SO for similar questions to yours:
how can i screen scrape with perl
how can i extract xml of a website and save in a file using perls lwp
how do i extract an html title with perl
can you provide an example of parsing html with your favorite parser
how do I extract content from html file using perl
HTML::Extract's features look very basic and uninteresting. If the modules that draegfun mentioned don't interest you, you could do everything that HTML::Extract does using LWP::UserAgent and HTML::TreeBuilder yourself, without requiring very much code at all, and then you would be free to work in caching on your own terms.
I've been using Web::Scraper for my scraping needs. It's very nice indeed for extracting data, and because you can call ->scrape($html, $originating_uri) then it's very easy to cache the result you need as well.
Do you need to do this in real-time? How does the inefficiency affect you? Are you doing the task serially so that you have to extract one page before you move onto the next one? Why do you want to avoid a cache?
Can your crawler download the pages and pass them off to something else? Perhaps your crawler can even run in parallel, or in some distributed manner.