PDFium to create page numbers of a book - puppeteer

I am converting HTML to PDFs with some page numbering requirements that are standard in book printing.
Page numbers should be hidden on some pages
Front matter, which means everything up to the actual text, should have Roman page numbers (i, ii, ...)
Page numbers should restart from 1 at the beginning of actual text
Table of Contents should show page numbers
Hiding page numbers is very important. The other three are big nice-to-haves.
I am using html-pdf-node, which uses Puppeteer -> Chromium -> PDFium.
Since html-pdf-node only supports very simple page numbering, I am looking at the possibility of doing it at one of the deeper levels.
I have searched the documentation of all four levels and checked whether other tools can remove or edit page numbers. With Chromium/PDFium the answer might still be in the docs even if I did not see it. I have also considered forking our own version of PDFium, if that is feasible.
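For what it's worth, Puppeteer itself (one level below html-pdf-node) exposes per-render footer templates, which already covers hiding numbers and restarting them. A minimal TypeScript sketch, assuming you call Puppeteer directly and merge the independently numbered sections with the pdf-lib package; both the section split and the pdf-lib merge are my assumptions, not something html-pdf-node provides:

// Sketch: render front matter and body as separate PDFs with different
// footer templates, then merge them. Chromium's footer only offers
// Arabic numbers via the pageNumber class, so Roman numerals for the
// front matter would need post-processing (e.g. stamping with pdf-lib).
import puppeteer from "puppeteer";
import { PDFDocument } from "pdf-lib";

async function renderSection(html: string, footerTemplate: string): Promise<Uint8Array> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setContent(html, { waitUntil: "networkidle0" });
  const pdf = await page.pdf({
    format: "a4",
    displayHeaderFooter: true,
    headerTemplate: "<span></span>", // empty header
    footerTemplate,                  // per-section footer
  });
  await browser.close();
  return pdf;
}

async function buildBook(frontMatterHtml: string, bodyHtml: string): Promise<Uint8Array> {
  // An empty footer hides page numbers on the front matter entirely.
  const front = await renderSection(frontMatterHtml, "<span></span>");
  // The body is rendered separately, so its numbering restarts at 1.
  const body = await renderSection(
    bodyHtml,
    '<div style="font-size:10px;width:100%;text-align:center;"><span class="pageNumber"></span></div>'
  );
  const out = await PDFDocument.create();
  for (const bytes of [front, body]) {
    const src = await PDFDocument.load(bytes);
    const pages = await out.copyPages(src, src.getPageIndices());
    pages.forEach((p) => out.addPage(p));
  }
  return out.save();
}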

Related

Maintaining font style/formatting into a form that doesn't support html/markdown

I have looked into the previous postings to do with this area but haven't found any relevant answers as perhaps I am asking the wrong question.
On the popular design site Dribbble, there seem to be interesting formatting changes in profile names that break from the conventions of the site's styling.
A lot of people have been adding special characters (ΔδΓ etc.), which can be achieved by pasting them into the profile form and saving changes, yet some users have somehow managed to enter formatted versions of their name, despite the profile form not supporting HTML or Markdown. You can see an example in the images below.
An example of copying the name into Google with the formatting maintained
When opened in the inspector, it also shows the formatted type
How could this be done in a simple text input form that doesn't support HTML/Markdown?
These are almost certainly Unicode characters, just like these characters that you reference in your question: ΔδΓ.
For example, Unicode's mathematical alphanumeric symbols section includes symbols that look like the ones in your screenshot. Since these are separate Unicode characters there is no need for additional formatting.
Users will need to have a font that supports those characters installed locally to view them.
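To make the mechanism concrete, here is a minimal TypeScript sketch of the substitution trick, mapping ASCII letters onto Unicode's Mathematical Bold range (the code point offsets are from the Unicode charts; note that a handful of look-alike letters in other styles live outside these contiguous runs):

// Map ASCII letters to Mathematical Bold (U+1D400-U+1D433). No markup
// is involved: the result is ordinary text made of different code
// points, which is why it survives plain-text form fields.
function toMathBold(input: string): string {
  return Array.from(input)
    .map((ch) => {
      const code = ch.codePointAt(0)!;
      if (code >= 0x41 && code <= 0x5a) {
        return String.fromCodePoint(0x1d400 + (code - 0x41)); // A-Z
      }
      if (code >= 0x61 && code <= 0x7a) {
        return String.fromCodePoint(0x1d41a + (code - 0x61)); // a-z
      }
      return ch; // leave everything else untouched
    })
    .join("");
}

console.log(toMathBold("Dribbble")); // prints the bold-letter rendition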

Changing SettingsFlyout Header style to allow long Title?

I'm using the Win 8.1 SettingsFlyout in my app. Design has asked for several Entry Points (nomenclature from Guidelines for app settings) with fairly long titles. These settings names work fine in the Settings Charm flyout (the one that lists all the entry points), but when the user drills into a specific flyout, the title (as displayed in the header) gets truncated. This is especially painful in foreign languages, where any attempt to control the length of the translation becomes a bureaucratic exercise. The ~20 characters available are insufficient for what Design wants here.
Is there a way to change the style of the Header (or at least the Title) in my SettingsFlyouts so that the Title can be displayed on multiple lines?

How to embed table within text and produce pdf output using Perl

I have a requirement to produce letters to send to customers which will contain a report within the letter text. The idea is that the user can create letter paragraphs which can be saved in a database for later use, can be sequenced and can appear either before or after a report. The report will be in table form.
I've looked at using PDF::Table and PDF::API2 (both of which are good at what they do); however, both place items on the page at fixed positions and do not create a free-flowing document.
Unless I've missed something, there is no way to add a table immediately after a paragraph of text, or vice versa, as page positions are required.
I have thought about using HTML::Template to create the basic letter, then HTML::HTMLDoc to convert to PDF, but would need the ability to insert a page break on change of customer.
What is my best option to achieve the above result please?
Many Thanks
There are only two ways that I've had any success with.
The first is the Apache FOP (XSL-FO) project. This is a huge, sprawling Java library and specification for turning XML documents into nicely formatted PDFs. I was never good enough with XML stylesheets and transformations to get to grips with this.
The second is to generate OpenOffice/LibreOffice documents and then use a copy of LibreOffice in headless mode to convert them to PDFs. This is what I generally end up doing. You may want a minimal X11 installation for fonts etc., with Xvfb as a fake display.
For editing the documents I've had success with the OpenOffice-OODoc distribution. HTH.
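For reference, the headless conversion step itself is a one-liner; a minimal sketch of driving it from code follows (the file names are placeholders, and Node's child_process is just an illustrative way to shell out; in Perl you would use system or IPC::Run instead; the soffice flags are LibreOffice's standard headless-conversion options):

// Sketch: convert a generated document to PDF with headless LibreOffice.
import { execFile } from "node:child_process";

execFile(
  "soffice",
  ["--headless", "--convert-to", "pdf", "--outdir", "out", "letter.odt"],
  (err, stdout) => {
    if (err) throw err;
    console.log(stdout); // reports the produced out/letter.pdf
  }
);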

Html to Word long document

I can create an extensive Word document using HTML, including a cover page, header & footer, page numbers, etc.
But my problem is; when my document is too long (like 100 pages or more) and I open the doc with Word 2003:
the document can be loaded and I can see the cover page.
but when I try to scroll down a little bit to examine the report, Word starts a long-lasting process (I don't know what it is) and does not respond.
if the doc is about 60 pages, the process lasts about 5 minutes, and then I can navigate through the document.
I have tried the following:
Disabled Spelling and Grammar check
Disabled auto-save
Is there anyone with a similar experience? I am creating the document with HTML and a few VML tags embedded in the document. What can be the cause of this unresponsive behavior?
Word is not built for handling large documents. There are several places where its behavior is not O(n log n) (with n the length of the document). You need to at least disable page numbering.
If you really want to find out: create some test cases and find out:
start with plain text only, nothing fancy at all; generate 100 pages and see if the problem persists.
add features step by step until the problem surfaces (bisecting the feature set is fastest).
it is likely that more than one feature contributes to these performance problems, so you have to be careful when bisecting.
And when you know, tell us.

Programmatically detecting "most important content" on a page

What work, if any, has been done to automatically determine the most important data within an HTML document? As an example, think of your standard news/blog/magazine-style website, containing navigation (possibly with submenus), ads, comments, and the prize: our article/blog/news body.
How would you determine what information on a news/blog/magazine is the primary data in an automated fashion?
Note: ideally, the method would work with well-formed markup and with terrible markup alike, whether somebody uses paragraph tags to make paragraphs or just a series of breaks.
Readability does a decent job of exactly this.
It's open source and posted on Google Code.
UPDATE: I see (via HN) that someone has used Readability to mangle RSS feeds into a more useful format, automagically.
I would probably try something like this:
open the URL
read in all links to the same website from that page
follow all links and build a DOM tree for each URL (HTML file); this should help you find redundant content (shared templates and such)
compare the DOM trees of all documents on the same site (tree walking)
strip all redundant nodes (i.e. repeated navigational markup, ads and such)
try to identify similar nodes and strip them where possible
find the largest unique text blocks that are not found in the other DOMs on that website (i.e. unique content) and add them as candidates for further processing
This approach seems promising because it would be fairly simple to implement, yet it still has good potential to be adaptive, even to complex Web 2.0 pages that make excessive use of templates, because it would identify similar HTML nodes shared between all pages on the same website.
This could probably be further improved simply by using a scoring system to keep track of DOM nodes that were previously identified to contain unique content, so that these nodes are prioritized for other pages.
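A rough TypeScript sketch of the cross-page redundancy filter described above, assuming jsdom as the parser (an illustrative choice); it treats a text block as redundant when the same normalized text appears on more than one page of the site:

// Blocks recurring on several pages (navigation, templates, ads) are
// dropped; what remains per page is candidate unique content.
import { JSDOM } from "jsdom";

function textBlocks(html: string): string[] {
  const doc = new JSDOM(html).window.document;
  return Array.from(doc.querySelectorAll("p, div, td, li"))
    .map((el) => (el.textContent ?? "").replace(/\s+/g, " ").trim())
    .filter((t) => t.length > 0);
}

function uniqueContent(pages: Map<string, string>): Map<string, string[]> {
  const seenOn = new Map<string, number>(); // block text -> page count
  const perPage = new Map<string, string[]>();
  for (const [url, html] of pages) {
    const blocks = [...new Set(textBlocks(html))];
    perPage.set(url, blocks);
    for (const b of blocks) seenOn.set(b, (seenOn.get(b) ?? 0) + 1);
  }
  const result = new Map<string, string[]>();
  for (const [url, blocks] of perPage) {
    result.set(
      url,
      blocks
        .filter((b) => seenOn.get(b) === 1)  // unique to this page
        .sort((a, b) => b.length - a.length) // largest blocks first
    );
  }
  return result;
}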
Sometimes there's a CSS media type defined as 'print'. Its intended use is for 'click here to print this page' links. People usually use it to strip a lot of the fluff and leave only the meat of the information.
http://www.w3.org/TR/CSS2/media.html
I would try to read this style, and then scrape whatever is left visible.
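One way to act on this idea today is to let a headless browser apply the print stylesheet and then scrape what is still rendered; a minimal sketch with Puppeteer (my tool choice, not the answer's):

// Apply the page's @media print rules, then read what remains visible.
import puppeteer from "puppeteer";

async function printViewText(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  await page.emulateMediaType("print"); // switch to the print stylesheet
  // innerText respects visibility, so print-hidden fluff disappears.
  const text = await page.evaluate(() => document.body.innerText);
  await browser.close();
  return text;
}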
You can use support vector machines to do text classification. One idea is to break pages into different sections (say, treat each structural element such as a div as a document), gather some properties of each, and convert them to a vector. (As other people suggested, this could be the number of words, the number of links, the number of images; the more the better.)
First, start with a large set of documents (100-1000) for which you have already marked which part is the main part. Then use this set to train your SVM.
For each new document, you then just need to convert it to a vector and pass it to the SVM.
This vector model is actually quite useful in text classification, and you do not necessarily need to use an SVM. You can use a simpler Bayesian model as well.
If you are interested, you can find more details in Introduction to Information Retrieval (freely available online).
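To make the vectorization step concrete, here is a minimal sketch of turning each block element into a feature vector; the features are the ones suggested above, and the classifier itself (SVM or Bayesian) would consume these vectors via whatever library you prefer:

// Convert a candidate element into a numeric feature vector.
function featureVector(el: Element): number[] {
  const text = el.textContent ?? "";
  const linkText = Array.from(el.querySelectorAll("a"))
    .map((a) => a.textContent ?? "")
    .join("");
  return [
    text.split(/\s+/).filter(Boolean).length,        // word count
    el.querySelectorAll("a").length,                 // link count
    el.querySelectorAll("img").length,               // image count
    text.length ? linkText.length / text.length : 0, // link density
  ];
}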
I think the most straightforward way would be to look for the largest block of text without markup. Then, once it's found, figure out the bounds of it and extract it. You'd probably want to exclude certain tags from "not markup" like links and images, depending on what you're targeting. If this will have an interface, maybe include a checkbox list of tags to exclude from the search.
You might also look for the lowest level in the DOM tree and figure out which of those elements is the largest, but that wouldn't work well on poorly written pages, as the DOM tree is often broken on such pages. If you end up using this, I'd come up with some way to see if the browser has entered quirks mode before trying it.
You might also try using several of these checks, then coming up with a metric for deciding which is best. For example, still try my second option above, but give its result a lower "rating" if the browser would normally enter quirks mode. Going with this would obviously impact performance.
I think a very effective algorithm for this might be, "Which DIV has the most text in it that contains few links?"
Seldom do ads have more than two or three sentences of text. Look at the right side of this page, for example.
The content area is almost always the area with the greatest width on the page.
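A minimal sketch of that heuristic: score each DIV by its text length discounted by link density and take the best-scoring one. (A real implementation would also penalize outer wrappers that merely contain the winner.)

// "Most text, few links": long text with a low proportion of link text
// scores highest; menus and ad blocks score low.
function mainContent(doc: Document): Element | null {
  let best: Element | null = null;
  let bestScore = 0;
  for (const div of Array.from(doc.querySelectorAll("div"))) {
    const text = (div.textContent ?? "").trim();
    const linkText = Array.from(div.querySelectorAll("a"))
      .map((a) => a.textContent ?? "")
      .join("");
    const linkDensity = text.length ? linkText.length / text.length : 1;
    const score = text.length * (1 - linkDensity);
    if (score > bestScore) {
      bestScore = score;
      best = div;
    }
  }
  return best;
}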
I would probably start with the title and anything else in the head tag, then filter down through the heading tags in order (i.e. h1, h2, h3, etc.); beyond that, I guess I would go in order, from top to bottom. Depending on how it's styled, it may be a safe bet to assume the page title has an ID or a unique class.
I would look for sentences with punctuation. Menus, headers, footers etc. usually contain separate words, but not sentences containing commas and ending in a period or equivalent punctuation.
You could look for the first and last elements containing sentences with punctuation and take everything in between. Headers are a special case, since they usually don't have punctuation either, but you can typically recognize them as Hn elements immediately before sentences.
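A sketch of the punctuation test, with a deliberately simple sentence pattern (the thresholds are arbitrary assumptions):

// A block is "prose" if it has multiple sentence-like runs, or at least
// one comma-bearing clause ending in terminal punctuation.
function looksLikeProse(text: string): boolean {
  const sentences = text.match(/[^.!?]{10,}[.!?]/g) ?? [];
  return sentences.length >= 2 || /,\s+\w[^.!?]*[.!?]/.test(text);
}

looksLikeProse("Home | About | Contact");              // false: menu words
looksLikeProse("It rained all day, so we stayed in."); // true: a sentence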
While this is obviously not the answer, I would assume that the important content is located near the center of the styled page and usually consists of several blocks interrupted by headlines and such. The structure itself may be a give-away in the markup, too.
A diff between articles / posts / threads would be a good filter to find out what content distinguishes a particular page (obviously this would have to be augmented to filter out random crap like ads, 'quote of the day' boxes or banners). The structure of the content may be very similar across multiple pages, so don't rely on structural differences too much.
Instapaper does a good job with this. You might want to check Marco Arment's blog for hints about how he did it.
Today most news/blog websites are built on a blogging platform, so I would create a set of rules by which to search for content.
For example, two of the most popular blogging platforms are WordPress and Google Blogspot.
Wordpress posts are marked by:
<div class="entry">
...
</div>
Blogspot posts are marked by:
<div class="post-body">
...
</div>
If the search by CSS classes fails, you could fall back to the other solutions, such as identifying the biggest chunk of text.
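A sketch of the rule list, using the two selectors quoted above plus a generic fallback hook:

// Try known platform content selectors in order; extend per platform.
const PLATFORM_SELECTORS = [
  "div.entry",     // WordPress
  "div.post-body", // Blogspot
];

function extractByPlatformRules(doc: Document): string | null {
  for (const sel of PLATFORM_SELECTORS) {
    const el = doc.querySelector(sel);
    if (el) return (el.textContent ?? "").trim();
  }
  return null; // caller falls back to e.g. the biggest-text-block approach
}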
As Readability is not available anymore:
If you're only interested in the outcome, you can use Readability's successor Mercury, a web service.
If you're interested in code showing how this can be done and prefer JavaScript, there is Mozilla's Readability.js, which is used for Firefox's Reader View.
If you prefer Java, you can take a look at Crux, which also does a pretty good job.
Or if Kotlin is more your language, you can take a look at Readability4J, a port of the aforementioned Readability.js.
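For the JavaScript option, usage is small; a minimal sketch of server-side use of Mozilla's Readability, with jsdom supplying the DOM (the usual pairing for non-browser use):

// Readability returns the extracted article, or null if none was found.
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";

function extractArticle(html: string, url: string) {
  const dom = new JSDOM(html, { url }); // url resolves relative links
  return new Readability(dom.window.document).parse();
  // -> { title, content, textContent, excerpt, ... } or null
}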