How Custom Translator handles the inlines in TMX - microsoft-translator

Should I care about the markup content elements (<bpt>, <ept>, <it>, and <ph>) when uploading a TMX for training?
Should I clean them up (e.g. by removing <bpt>/<ept> pairs, or replacing <ph> with actual values)?

Related

How to extract PDF text from BT and ET sections line by line, and how to decode the TJ block to Unicode, in PDFBox

As the title says, I have a special requirement to extract text line by line, or block by block, between BT and ET.
Below is the PDF content. I tried the PDFTextStripper class, but it is not what I want, so does anyone have a solution to this problem?
I want to parse this: [ (=\324Z\016) ] TJ ...
This is my PDF: https://dl.dropboxusercontent.com/u/63353043/docu.pdf
Below is my code:
List<Object> tokens = pages.get(0).getContents().getStream().getStreamTokens();
tokens.forEach(s -> {
    if (s instanceof COSString) {
        System.out.print(s.toString());
    }
});
But I get this:
COSString{&Ð}COSString{O6}COSString{&³}COSString{p»}COSString{6±}COSString{˛¨}COSString{+^}COSString{+·}COSString{˚©}COSString{9}COSString{O©}COSString{en}COSString{˛¨}COSString{Fœ}COSString{0ł}COSString{Q¯}COSString{#”}COSString{˛¨}COSString{+^}COSString{(Ï}COSString{˚©}COSString{9}COSString{O©}COSString{en}COSString{Zo}COSString{#°}COSString{˜}COSString{p»}COSString{#Š}COSString{5×}COSString{,
}COSString{:É}COSString{(Ù}COSString{4ÿ}COSString{ä}COSString{_Á}COSString{˛¨}COSString{:É}COSString{p»}COSString{O©}COSString{en}COSString{#p}COSString{/F}COSString{O©}COSString{en}COSString{F,}COSString{_N}COSString{!}COSString{9»}COSString{]˘}COSString{!¢}COSString{˜.}COSString{p»}COSString{#°}COSString{˜}COSString{#p}COSString{<:}COSString{Zo}COSString{1¸}COSString{ä}COSString{˚~}COSString{F³}COSString{!Ø}COSString{]Š}COSString{2}COSString{6±}COSString{˛¨}COSString{gî}COSString{+·}COSString{9á}COSString{XS}COSString{hP}COSString{h[}COSString{˜º}COSString{˚.}COSString{p»}COSString{5d}COSString{5×}COSString{]˘}COSString{_ö}COSString{#c}COSString{2˚}COSString{]˜}COSString{+·}COSString{9á}COSString{p»}COSString{#b}COSString{˚.}COSString{eÚ}COSString{;
}COSString{!5}COSString{:É}COSString{XS}COSString{hP}COSString{h[}COSString{˜º}COSString{p»}COSString{B!}COSString{&Ø}COSString{,}COSString{/F}COSString{^r}COSString{˛²}COSString{2&}COSString{˜.}COSString{N}COSString{*ø}COSString{˜.}COSString{&¢}COSString{+B}COSString{+·}COSString{9á}COSString{ä}COSString{ZX}COSString{˚}COSString{˛µ}COSString{6«}COSString{0ł}COSString{!Ø}COSString{p»}COSString{O©}COSString{en}COSString{F,}COSString{Q±}COSString{"}}COSString{XS}COSString{&Ð}COSString{_N}COSString{!}COSString{9»}COSString{˛²}COSString{F,}COSString{(Ï}COSString{O©}COSString{en}COSString{F³}COSString{/?}COSString{˛¨}COSString{=­}COSString{˚4}COSString{9}COSString{p»}COSString{;}COSString{5×}COSString{_Á}COSString{˜³}COSString{#É}COSString{0·}COSString{F,}COSString{Q±}COSString{"}}COSString{p»}COSString{1û}COSString{"}}COSString{˚.}COSString{O©}COSString{en}COSString{p»}COSString{#°}COSString{˜}COSString{Lê}COSString{5d}COSString{Zo}COSString{1¸}COSString{"G}COSString{˚.}COSString{ä}COSString{&Ð}COSString{Kå}COSString{Yª}COSString{#°}COSString{#´}COSString{5ê}COSString{p»}COSString{(Ï}COSString{O©}COSString{en}COSString{ZR}COSString{pÉ}COSString{p„}COSString{˜}COSString{G“}COSString{_û}COSString{%v}COSString{pÎ}COSString{1û}COSString{"}}COSString{1¹}COSString{F,}COSString{˛µ}COSString{5×}COSString{˜}COSString{2˚}COSString{]˜}COSString{+·}COSString{9á}COSString{p»}COSString{O´}COSString{5×}COSString{#b}COSString{˚.}COSString{+·}COSString{9á}COSString{p»}COSString{˜}COSString{]y}COSString{/0}COSString{}COSString{2§}COSString{˚.}COSString{˛¨}COSString{8E}COSString{N}COSString{*ø}COSString{22}COSString{++}COSString{&¢}COSString{+B}COSString{)%}COSString{ä}COSString{&Ð}COSString{!Í}COSString{˚b}COSString{f¨}COSString{Y)}COSString{.}COSString{"Q}COSString{5ê}COSString{p»}COSString{)*}COSString{7D}COSString{˛¨}COSString{˜³}COSString{˚b}COSString{P¥}COSString{&Ð}COSString{!Í}COSString{˚b}COSString{˛µ}COSString{G“}COSString{0m}COSString{F,}COSString{0m}COSString{<i}COSString{˛³}COSString{p»}COSString{;fi}COSString{˛µ}COSString{BÞ}COSString{\}COSString{F,}COSString{BO}COSString{Bˆ}COSString{Q™}COSString{-Ž}COSString{F,}COSString{!Ñ}COSString{Fr}COSString{p»}COSString{$™}COSString{/½}COSString{BO}COSString{Bˆ}COSString{F,}COSString{#™}COSString{5×}COSString{˛¨}COSString{nƒ}COSString{nƒ}COSString{p»}COSString{˚}COSString{5×}COSString{f‰}COSString{P¥}COSString{#Š}COSString{\\}COSString{F,}COSString{p°}COSString{1¹}COSString{<:}COSString{6±}COSString{C®}COSString{DÙ}COSString{˛µ}COSString{Q¯}COSString{_Á}COSString{9Ë}COSString{F,}COSString{˚b}COSString{#°}COSString{˜}COSString{p»}COSString{_Á}COSString{9Ë}COSString{F,}COSString{˚b}COSString{˚}COSString{<:}COSString{6±}COSString{C®}COSString{DÙ}COSString{˛µ}COSString{Cˆ}COSString{/?}COSString{1¸}COSString{"G}COSString{p°}COSString{p“}COSString{/4}COSString{˜.}COSString{p»}COSString{+·}COSString{O©}COSString{en}COSString{F,}COSString{˚3}COSString{9}COSString{7D}COSString{#Þ}COSString{DÙ}COSString{;}COSString{T}COSString{T`}COSString{5“}COSString{˛²}COSString{p»}COSString{]2}COSString{ }COSString{]2}COSString{(Ï}COSString{p°}COSString{:}COSString{6«}COSString{Må}COSString{&Ð}COSString{˛µ}COSString{M;}COSString{0·}COSString{e;}COSString{O«}COSString{aw}COSString{˚b}COSString{F,}COSString{FÇ}COSString{ZH}
(If this wasn't so long, it would have been more appropriate as a comment to the question instead of as an answer. But comments are too limited.)
The OP in his question shows that he essentially wants to parse the content stream of e.g. a page and extract the strings drawn in a legible form. He attempts this by simply taking the tokens in the content stream and looking at the COSString instances in there:
List<Object> tokens = pages.get(0).getContents().getStream().getStreamTokens();
tokens.forEach(s -> {
    if (s instanceof COSString) {
        System.out.print(s.toString());
    }
});
Unfortunately the output looks like a mess.
Why do the string values in the content stream look so messy?
The reason for this is that those COSString instances represent the PDF string objects as they are, and that there is no single encoding of PDF string objects in content streams, not even a limitation to a few standardized ones.
The encoding of a string completely depends on the definition of the font currently active when the string drawing instruction in question is executed.
Fonts in PDFs can be defined to use either some standard encoding or a custom one, and it is very common, in particular in the case of embedded font subsets, to use custom encodings constructed similar to this:
code 1 maps to the glyph of this font which is first used on the page,
code 2 maps to the second glyph of this font used on the page which is not identical to the first,
code 3 maps to the third glyph of this font used on the page which is not identical to either of the first two,
etc...
Obviously there is no good way to conjecture the meaning of the string bytes for such encodings.
Thus, when parsing the content stream, you have to keep track of the current font and look up the meaning of each byte (or multi-byte sequence!) of a COSString in the definition of that current font in the resource dictionary of the current page.
How to map those messy bytes using the current font definition?
The encoding of a PDF font might have to be determined in different ways.
There may be a ToUnicode map in the font definition which shows you which bytes to map to which character.
Otherwise the encoding may be a standard encoding like MacRomanEncoding or WinAnsiEncoding in which case one has to bring along the mapping table oneself (they are printed in the PDF specification).
Otherwise the encoding might be based on such a standard encoding but deviations are given by a mapping from codes to names of glyphs. If certain standard names are used, the character can be derived from that name. These names are listed in another document.
Otherwise some CIDSystemInfo entry may point to yet another standard Registry and Ordering from which to derive a mapping table specified in other documents.
Otherwise the font program itself may include usable mappings to Unicode.
Otherwise ???
Any pitfalls to evade?
The current font is a PDF graphics state attribute. Thus, one does not only have to remember the most recently set font but also consider the effects of operations changing the whole graphics state, in particular the save-graphics-state and restore-graphics-state operations, which push the current graphics state onto a stack or pop it from there.
How can PDF libraries help you?
PDF libraries which support you in text extraction can do so by doing all the heavy lifting for you:
parsing the content stream,
keeping track of the graphics state,
determining the encoding of the current font,
translating any drawn PDF strings using that encoding,
and only forwarding the resulting characters and some extra data (current position on the page, text drawing orientation, font and font size, colors, and other effects) to you.
In case of PDFBox 2.0.x, this is what the PDFTextStreamEngine class does for you: you only have to override its processTextPosition method, to which PDFBox forwards that enriched character information in the TextPosition parameter. As you also want to know the starts and ends of the BT ... ET text object envelopes, you also have to override beginText and endText.
The class PDFTextStripper is based on that class and collects and sorts those character information bits to build a string containing the page text which it eventually returns.
In PDFBox 1.8.x there was a very similar PDFTextStripper class, but the base functionality was not as cleanly separated into a base class; everything was somewhat more intermingled, and it was harder to implement one's own extraction logic.
(In other PDF libraries there are similar constructs, sometimes event-based like in PDFBox, sometimes as a collected sequence of TextPosition-like objects.)
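A rough sketch of the PDFBox 2.0.x approach described above (overriding processTextPosition, beginText, and endText on PDFTextStreamEngine) might look like the following. It is an illustration, not tested code; check the exact method signatures and constructor visibility against your PDFBox version. It prints the decoded text of each BT ... ET text object on its own line:

import java.io.File;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.text.PDFTextStreamEngine;
import org.apache.pdfbox.text.TextPosition;

// Collects the decoded characters of each BT ... ET text object and prints
// them when the text object ends.
public class TextObjectExtractor extends PDFTextStreamEngine
{
    private final StringBuilder currentTextObject = new StringBuilder();

    public TextObjectExtractor() throws IOException
    {
        super();
    }

    @Override
    public void beginText() throws IOException
    {
        currentTextObject.setLength(0);          // a new BT block starts
        super.beginText();
    }

    @Override
    public void endText() throws IOException
    {
        System.out.println(currentTextObject);   // one decoded block per ET
        super.endText();
    }

    @Override
    protected void processTextPosition(TextPosition text)
    {
        // text is already decoded via the current font's encoding / ToUnicode map
        currentTextObject.append(text.getUnicode());
    }

    public static void main(String[] args) throws IOException
    {
        try (PDDocument document = PDDocument.load(new File("docu.pdf")))
        {
            TextObjectExtractor extractor = new TextObjectExtractor();
            for (PDPage page : document.getPages())
            {
                extractor.processPage(page);
            }
        }
    }
}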

HTML IDML viewer

I am trying to implement a C# IDML-to-HTML converter. I've managed to produce a single flat HTML file similar to the one produced by the InDesign export.
What I would like to do is produce HTML that is as similar as possible to the InDesign view, like an HTML IDML viewer. To do this, I need to find the text that can fit into a text frame. I can extract the story text content, but I can't really find a way to split this content into frames/pages.
Is there any way I can achieve that?
Just extracting the text from a story isn't enough. The way the text is laid out is controlled by TextFrames in the Spread documents. Each TextFrame has a ParentStory attribute, showing which story it loads text from, and each frame has dimensions which determine the layout. For unthreaded text frames (i.e. one story to one frame), that's all you need.
For threaded frames, you need to use the PreviousTextFrame and NextTextFrame attributes to create the chain. There is nothing in the IDML to tell you how much text fits in each frame in a threaded chain, you need to do the calculation yourself based on the calculated text dimensions (or using brute force trial and error).
You can find the spreads in the main designmap.xml:
<idPkg:Spread src="Spreads/Spread_udd.xml" />
And the spread will contain one or more TextFrame nodes:
<Spread Self="udd" ...>
<TextFrame Self="uf7" ParentStory="ue5" PreviousTextFrame="n" NextTextFrame="n" ContentType="TextType">...</TextFrame>
...
</Spread>
Which will in turn link to a specific story:
<Story Self="ue5" AppliedTOCStyle="n" TrackChanges="false" StoryTitle="$ID/" AppliedNamedGrid="n">...</Story>
(In this example the frames are not threaded, hence the 'n' values.)
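For illustration, here is a minimal sketch (in Java, although the question is about C#; the equivalent XPath calls in System.Xml look much the same) that reads the spread shown above from an unpacked IDML package and lists each TextFrame together with the story it loads and its threading links. The file path and class name are placeholders:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ListSpreadFrames {
    public static void main(String[] args) throws Exception {
        // Path into an unpacked IDML package (IDML files are ZIP archives);
        // the file name comes from the designmap.xml entry shown above.
        Document spread = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new java.io.File("Spreads/Spread_udd.xml"));

        NodeList frames = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//TextFrame", spread, XPathConstants.NODESET);

        for (int i = 0; i < frames.getLength(); i++) {
            Element frame = (Element) frames.item(i);
            System.out.printf("frame=%s story=%s prev=%s next=%s%n",
                    frame.getAttribute("Self"),
                    frame.getAttribute("ParentStory"),       // which story supplies the text
                    frame.getAttribute("PreviousTextFrame"), // "n" = no previous frame
                    frame.getAttribute("NextTextFrame"));    // "n" = no next frame
        }
    }
}

Frames whose PreviousTextFrame is "n" start a chain; following the NextTextFrame references reconstructs the threading, and the frame geometry gives the dimensions against which you have to fit the text yourself.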
All this is in the IDML documentation, which you can find with the other InDesign developer docs here: http://www.adobe.com/devnet/indesign/documentation.html
Microsoft and Adobe have proposed a new CSS module named Regions, which allows you to flow text into multiple containers. Keep in mind that you will never be able to create an HTML page that looks exactly like an InDesign document.
http://www.w3.org/TR/css3-regions/
For now, only IE10 and WebKit nightly builds support it: http://caniuse.com/#feat=css-regions

Mapping plain text back into HTML document

Situation: I have a group of strings that represent Named Entities that were extracted from something that used to be an HTML doc. I also have the original HTML doc, the stripped-of-all-markup plain text that was fed to the NER engine, and the offsets/lengths of the strings in the stripped file.
I need to annotate the original HTML doc with highlighted instances of the NEs. To do that I need to do the following:
Find the start / end points of the NE strings in the HTML doc. Something that resulted in a DOM Range Object would probably be ideal.
Given that Range object, apply a styling (probably using something like <span class="ne-person" data-ne="123">...</span>) to the range. This is tricky because there is no guarantee that the range won't include multiple DOM elements (<a>, <strong>, etc.) and the span needs to start/stop correctly within each containing element so I don't end up with totally bogus HTML.
Any solutions (full or partial) are welcome. The back-end is mostly Python/Django, and the front-end is using jQuery. We would rather do this on the back-end, but I'm open to anything.
(I was a bit iffy on how to tag this question, so feel free to re-tag it.)
Use a range utility method plus an annotation library such as one of the following:
artisan.js
annotator.js
vie.js
The free software Rangy JavaScript library is your friend. Regarding your two tasks:
Find the start / end points of the […] strings in the HTML doc. You can use Range#findText() from the TextRange extension. It indeed results in a DOM Level 2 Range compatible object [source].
Given that Range object, apply a styling […] to the range. This can be handled with the Rangy Highlighter module. If necessary, it will use multiple DOM elements for the highlighting to keep up a DOM tree structure.
Discussion: Rangy is a cross-browser implementation of the DOM Level 2 range utility methods proposed by @Paul Sweatte. Using an annotation library would be a further extension on range library functionality; for example, Rangy will be the basis of Annotator 2.0 [source]. It's just not required in your case, since you only want to render highlights, not allow users to add them.

Converting a VB6-like specification of a form into a Web page displaying the form

As a part of my current pet project, I have to write a program that takes as input the specification of a form and generates as output the necessary HTML and CSS code to display the form as a Web page. The exact input format is irrelevant to this question, but the information it contains is similar to the information in a Visual Basic 6 form file, minus the Visual Basic code. In particular, the sizes and positions of the form's controls are represented as XY pairs in an absolute coordinate system.
The main problem I am facing is that I do not know how to map the information contained in the input format (control sizes and positions, tab order, etc.) to the information contained in the HTML and CSS files (when to use placeholder divs, when to use CSS floats, etc.).
I emphasize the word information because I do of course know how to generate HTML- and CSS-formatted files given the information they should contain.
In particular, the sizes and positions of the form's controls are represented as XY pairs in an absolute coordinate system.
In that case, the easiest method is to use absolute positioning.
Something like this: http://jsfiddle.net/R8K6u/
You should be able to directly map width/height/left/top from the VB6 equivalent.
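For example, a minimal sketch of that direct mapping (illustrative only; the renderer, the twips-to-pixels factor, and the output markup are assumptions, since the actual input format isn't specified). VB6 form files measure in twips, so a conversion to pixels is included, and the wrapping container gets position:relative so the controls' absolute coordinates are interpreted relative to the form:

class AbsoluteFormRenderer {

    // VB6 forms measure in twips: 1440 twips per inch, i.e. 15 twips per
    // pixel at the usual 96 dpi (assumption; adjust for your input format).
    static int toPx(int twips) {
        return Math.round(twips / 15f);
    }

    // Renders one text-box control as an absolutely positioned <input>,
    // mapping left/top/width/height directly and the tab order onto tabindex.
    static String renderTextBox(String name, int leftTwips, int topTwips,
                                int widthTwips, int heightTwips, int tabIndex) {
        return String.format(
            "<input id=\"%s\" type=\"text\" tabindex=\"%d\" "
                + "style=\"position:absolute; left:%dpx; top:%dpx; "
                + "width:%dpx; height:%dpx;\">",
            name, tabIndex,
            toPx(leftTwips), toPx(topTwips), toPx(widthTwips), toPx(heightTwips));
    }

    // The container is position:relative so the children's absolute
    // coordinates are relative to the form, not to the page.
    static String renderForm(String... renderedControls) {
        return "<form style=\"position:relative;\">"
            + String.join("", renderedControls)
            + "</form>";
    }
}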

Data URIs in GWT

Is it possible to create data URI's in GWT?
I want to inject a byte array image as an actual image using a data URI.
You should check out ClientBundle in GWT's trunk. It will create data URLs automatically for browsers that support them, and fallbacks for that other browser: http://code.google.com/p/google-web-toolkit/wiki/ClientBundle
The feature won't ship until GWT 2.0, but it's in heavy use now.
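For reference, a minimal sketch of what such a ClientBundle might look like (the interface name, resource file name, and usage are illustrative, not from the question):

import com.google.gwt.core.client.GWT;
import com.google.gwt.resources.client.ClientBundle;
import com.google.gwt.resources.client.ImageResource;

public interface MyResources extends ClientBundle {
    MyResources INSTANCE = GWT.create(MyResources.class);

    // Path is relative to this interface's package; the file name is illustrative.
    @Source("red-dot.png")
    ImageResource redDot();
}

// Usage (newer GWT): Image image = new Image(MyResources.INSTANCE.redDot());
// Older GWT: AbstractImagePrototype.create(MyResources.INSTANCE.redDot()).createImage();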
Yes. It is completely possible to do this. I'd done it for an application until I realized IE6 doesn't handle binary data streams this way. You can do it in several ways. For the purposes of my example, I'm already assuming that you've converted the byte array to a string somewhere, and that it is properly encoded and of the proper type for your data URI. I'm also assuming you know the basic format (or can find it) of your chosen data scheme.
I've taken these examples from the Wikipedia article on data URI scheme.
The first is to just use raw HTML to make the image reference as you normally would and have it inserted into the page.
HTML html = new HTML("<img src=\"data:image/png;base64,"
    + "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAABGdBTUEAALGP"
    + "C/xhBQAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9YGARc5KB0XV+IA"
    + "AAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAF1J"
    + "REFUGNO9zL0NglAAxPEfdLTs4BZM4DIO4C7OwQg2JoQ9LE1exdlYvBBeZ7jq"
    + "ch9//q1uH4TLzw4d6+ErXMMcXuHWxId3KOETnnXXV6MJpcq2MLaI97CER3N0"
    + "vr4MkhoXe0rZigAAAABJRU5ErkJggg==\" alt=\"Red dot\">");
You can also just use an image. (Which should produce roughly the same output HTML/JS.)
Image image = new Image("data:image/png;base64,"
    + "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAABGdBTUEAALGP"
    + "C/xhBQAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9YGARc5KB0XV+IA"
    + "AAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAF1J"
    + "REFUGNO9zL0NglAAxPEfdLTs4BZM4DIO4C7OwQg2JoQ9LE1exdlYvBBeZ7jq"
    + "ch9//q1uH4TLzw4d6+ErXMMcXuHWxId3KOETnnXXV6MJpcq2MLaI97CER3N0"
    + "vr4MkhoXe0rZigAAAABJRU5ErkJggg==");
This allows you to use the full power of the Image abstraction on top of your loaded image.
I'm still thinking that you may want to expand on this solution and use GWT's deferred binding mechanism to deal with browsers that do not support data URIs (IE6, IE7).