ghostscript ps2ps crops document, adding some margins/borders

I feel like a noob on this one... But here it comes:
Let's take a vector program with a PS exporter (with no font subsetting: important for changing text dynamically in the future), more specifically Inkscape version 0.46.
Document size is A4; let's draw some lines very close to the border plus some simple text, then export the PS as noborder.ps:
Everything is really fine! What is in the first lines of the PostScript file?
%!PS-Adobe-3.0
%%Creator: 0.46
%%Pages: 1
%%Orientation: Portrait
%%BoundingBox: 0 0 596 842
%%HiResBoundingBox: 0 0 596 842
%%DocumentMedia: plain 596 842 0 () ()
%%EndComments
%%Page: 1 1
Now we need to generate a PS file from this PS file. (Why? Some new fonts cannot be uploaded to the printer, and the text is changed dynamically. ps2ps is a good choice to embed fonts and other elements prior to printing.) Let's use ps2ps from Ghostscript 8.70.
user#server:/$ ps2ps noborder.ps whyborder.ps
Very good! No errors while running... But... What? BORDERS? MARGINS? CROPPING?
Let's look at the whyborder.ps header:
%!PS-Adobe-3.0
%%Pages: (atend)
%%BoundingBox: 5 6 587 792
%%HiResBoundingBox: 5.000000 6.791406 586.732813 792.000000
%.........................................
%%Creator: GPL Ghostscript 870 (pswrite)
%%CreationDate: 2015/09/09 16:09:24
%%DocumentData: Clean7Bit
%%LanguageLevel: 2
%%EndComments
%%BeginProlog
% This copyright applies to everything between here and the %%EndProlog:
Why has the BoundingBox changed??? Why add borders, margins, cropping?
I have tested options like "-dEPSCrop", papersize... But the cropping remains... Why???

Firstly, stop using an ancient version of Ghostscript! 8.70 is now 6 years old; the current version is 9.16 (shortly to be superseded by 9.18).
Secondly, when experimenting like this do not use a script, use the command line directly. The device being used in the archaic version of Ghostscript was pswrite, which was a very poor implementation and only supported level 1 output. The current code uses the ps2write device, which is a much more powerful and flexible solution.
Note that in all cases, running the input through Ghostscript does not 'embed fonts' or 'edit' or 'compress' the original. What happens is that your input is interpreted to produce graphics primitives, which are fed to the device API; the device in question then processes the primitives. For a rendering device this means calling the graphics library to render the primitive to the canvas. For a high-level device it means re-emitting the primitive, for example as a PDF operation.
ANY such processing brings inherent risks of approximation; the pswrite device was even worse in that much of the content was rendered to images. So in general it really doesn't embed new fonts, it just embeds pictures of the glyphs. This scales really badly and, because the bounding box depends on the resolution, can result in significant inaccuracies.
You should really avoid doing this unless there is no alternative. If you really must do it, be prepared to accept compromises, do not use archaic versions of Ghostscript, and don't use the crappy old pswrite device.
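For illustration, a current Ghostscript can do the same conversion directly from the command line with the ps2write device; a minimal sketch (the exact flags may need adjusting for your job; -dFIXEDMEDIA pins the output to the requested paper size):
gs -q -dNOPAUSE -dBATCH \
   -sDEVICE=ps2write \
   -sPAPERSIZE=a4 -dFIXEDMEDIA \
   -sOutputFile=out.ps noborder.ps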

Thank you KenS;
By upgrading Ghostscript to version 9.16 and using the following command, everything was fine:
user#server:/$ ps2ps -sPAPERSIZE=a4 noborder.ps whyborder916.ps
This is an old and stable system (PHP/Bash/Ghostscript/CUPS), used for many years as a factory labeling system with old PostScript printers. More recently there was a need to change the font style; ps2ps was the best choice to "embed" the glyphs and barcodes (PostScript language) that change dynamically with the production line and packaging, without needing to change the printers (different models from different manufacturers). Since PostScript is a language with few changes over the years, it never crossed my mind to change the GS version.
At this point this is a real money saver! Thanks again!


Rendering with Chromium direct to my own 'canvas' (e.g. GDI+)

First, a quick description of the end-goal:
I'm building a cross-platform, .NET Core-based printing app. This app will be able to print all sorts of file types with custom page settings, such as headers, footers, and margins. A key feature is that it supports multiple-pages-up (e.g. a landscape sheet of paper with two portrait pages rendered side by side... called "2up").
Printing HTML is important not just in its own right; I also want to use all the great HTML-based syntax highlighting out there for source code (e.g. www.prismjs.com).
The app is basically done but for one major problem: I can't get the HTML to render well enough. So far I've implemented source-code printing three ways:
1) As plain text with my own line-numbering and line-wrap engine. This works wonderfully for everything I can throw at it, but it does not support syntax highlighting.
2) Using Html-Renderer (https://github.com/ArthurHub/HTML-Renderer/issues), an OSS .NET-based HTML renderer. This implementation is the weakest because Html-Renderer's CSS support is really weak. It can hardly handle anything prismjs or highlight.js generates.
3) Using litehtml (www.litehtml.com) via LiteHtmlSharp. This was very promising, and I almost have it working with some major hacks, but litehtml also does not support key modern HTML/CSS features.
Neither Html-Renderer nor litehtml supports the CSS page-break-before feature that, when combined with the print media type, would let me ensure lines are not split between pages.
What I really want to use is the Chromium rendering engine. litehtml provides a fantastic API for this sort of problem: It calls me whenever it needs to render, and I draw (text, table borders, images, etc...) using GDI+. My dream is to find something in Chromium (CEF, Puppeteer, ???) that provides a similar API.
Or, an alternative, an API that will let me pass in a GDI+ Graphics (or HDC) and the renderer will render to that surface.
With Html-Renderer I calculate the number of pages like this:
SizeF size = HtmlRender.MeasureGdiPlus(g, html, containingSheet.GetPageWidth());
int numPages = (int)(size.Height / containingSheet.GetPageHeight());
My page rendering code (e.g. OnPaint) looks like this:
SizeF size = new SizeF(containingSheet.GetPageWidth(), containingSheet.GetPageHeight());
HtmlRender.RenderGdiPlus(g, html, new PointF(0, 0), size);
With litehtml the OnPaint code looks like this:
// Set the clip such that any extraLines are clipped off bottom
g.SetClip(new Rectangle(0, 0, (int)Math.Round(PageSize.Width), (int)Math.Round(PageSize.Height - remainingPartialLineHeight)));
LiteHtmlSize size = new LiteHtmlSize(Math.Round(PageSize.Width), Math.Ceiling(PageSize.Height));
litehtml.Document.Draw(0, (int)-yPos, new position {
    x = 0,
    y = 0,
    width = (int)size.Width,
    height = (int)size.Height
});
And in this case the call to litehtml.Document.Draw causes a bunch of callbacks into my app that I process using the same Graphics the OnPaint is called with.
Most discussions of CEF and Chromium point to ScreenshotAsync etc., which will not do because I need to be rendering to a PRINTER'S HDC (or GDI+), and blitting bitmaps will lose quality.
I have pored over the Chromium source and I cannot find an obvious way to say to CEF/Chromium "render page 1 (defined as Height/Width) to this GDI+ Graphics object", then "render page 2...", etc. The printing support (and how pdfium is integrated) comes close!
Chromium issue 311308 indicates I'm hosed until this work gets picked up again.
Note: I have full access to nodejs within my app. I have built a dotnet/nodejs bridge, which is how I convert the raw text of a source code file to richly formatted, line-numbered, syntax-highlighted HTML via prismjs. This means I could easily use puppeteer/Headless Chrome if I could just figure out the right APIs.
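For example, the closest I've sketched out on that route is something like this puppeteer snippet (page.pdf gives paginated, print-quality vector output rather than a screenshot; the html value and the pdfium hand-off are assumptions about my own pipeline):
const puppeteer = require('puppeteer');

(async () => {
  // hypothetical input: the prismjs-formatted HTML from my nodejs bridge
  const html = '<pre class="language-js">/* formatted source */</pre>';
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setContent(html);
  // page.pdf() paginates for print with real page geometry
  const pdf = await page.pdf({
    format: 'A4',
    landscape: true,          // e.g. for a 2up sheet
    printBackground: true,
    margin: { top: '1in', bottom: '1in' }
  });
  await browser.close();
  // `pdf` is a Buffer; it would still have to be rendered as vectors
  // to the printer HDC (e.g. via pdfium), which is the missing piece
})();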
Does anyone have a suggestion that might help? I'm willing to contribute to Chromium if it's not major heart surgery.

mxClient renders different shapes all as squares

Several weeks ago I was asked to upgrade a web application based on a very old version of the mxGraph library (version 2.4). The application also integrated the 'grapheditor', a sort of demo application that evolved later into Diagramly and then into Draw.io. Recently I completed the most problematic step, the transition from the old 'grapheditor' to Draw.io, so I am now able to open all the previous diagrams (saved as plain XML), modify them, and save them consistently.
Ok, this is the nice part. The bad side is the 'read-only' section of the application, where the users can, more or less, only view the graph.
This page is based on the mxClient.js that renders the graph described in the xml through this code:
var graph = new mxGraph(container);                       // create the graph inside the container element
var diagram = mxUtils.parseXml(xml);                      // parse the saved XML into a DOM
var codec = new mxCodec(diagram);
codec.decode(diagram.documentElement, graph.getModel());  // populate the model from the XML
graph.fit();                                              // scale the view to fit the container
After upgrading the mx library to the latest version (3.9.10), the same code works, but some shapes are not rendered properly: they appear as squares instead of circles, ellipses, etc. The two following images are an example of this misbehavior.
Graph in draw.io:
Same graph rendered by mxClient:
After some tries I discovered that the old mxClient is able to render the same graph perfectly (as draw.io does), so I think there has to be something wrong (or missing) in my code or in my mxGraph installation/configuration.
As a temporary workaround I can keep the old version of mxGraph in place, but obviously I'd like to use the new one.
Can someone give me a hint on this? Any help would be greatly appreciated.
The tape shape isn't part of core mxGraph; it's part of the GraphEditor example, in the additional shapes JavaScript.
If you look at the style of the ellipse, it's probably not the one in the core, most likely another one from Shapes.js.
Either pull in Shapes.js, or use the viewer in draw.io.
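A minimal sketch of the first option (the paths are assumptions; Shapes.js ships with the GraphEditor example in the mxGraph distribution):
<script type="text/javascript" src="js/mxClient.js"></script>
<!-- load the GraphEditor's extra shape definitions before decoding the XML -->
<script type="text/javascript" src="js/Shapes.js"></script>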

Appbar guidance results in "Invalid qualifier: SCALE-240"

The Windows universal app guidance for app bars suggests you include app bar icons at 100 scale (32x32), 140 scale (45x45), and 240 scale (77x77).
The issue is that when I include a 240-scale icon I get the following warning when I compile: Invalid qualifier: SCALE-240
It seems to me that the scale is not supported. My question then is: should I include it, remove it, or change to a different scale (perhaps 180)?
Scale-240 is recommended for Windows Phone apps and works there (the default templates provide Scale-240 assets). Windows Store apps typically use scales 80, 100, 140, and 180. See How to name resources using qualifiers.
The Scale-240 asset won't cause any problems at runtime, but it will be ignored on Windows. You'll definitely want to include the other scales instead of or along with it.
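For reference, a sketch of how the scale qualifiers end up in the asset file names (the base name is just an illustration, and the pixel sizes are the rounded values from the guidance):
Assets/AppBarIcon.scale-100.png  (32x32)
Assets/AppBarIcon.scale-140.png  (45x45)
Assets/AppBarIcon.scale-180.png  (approx. 58x58, Windows)
Assets/AppBarIcon.scale-240.png  (77x77, Windows Phone)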

How can I display a JPG image so that a very low resolution displays first?

I've noticed that on some sites, a very low resolution version of an image gets displayed underneath the final version before it's done loading, to give the impression that the page is loading faster. How is this done?
This is called progressive JPEG. When you save a picture using a tool like Photoshop, you need to specify that you want this JPEG flavor; in Photoshop's "Save for Web" dialog you will find a Progressive option to enable.
What you are asking for depends upon the decoder and display software used. As noted, it occurs in progressive JPEG images. In that type of JPEG, the coefficients are broken down into separate scans.
The decoder then needs to update the image between decoding scans, rather than just at the end of the image.
There was more need for this in the days of dial-up modems. Unless the image is really large, it is usually faster just to wait and display the whole image.
If you are programming, the display software you use may have an option to update after scans.
Most libraries now use a model where you decode an image file stream into a generic image buffer, and then you display the image buffer. In this model, there is generally no place to display the image on the fly.
In short, you enable this by creating progressive JPEG images. Whether the image displays fading in depends entirely on what is used to display it.
As an alternative, you can batch optimize all your images using ImageMagick's convert command like this:
convert -strip -interlace plane input.jpg output.jpg
You can use the other -interlace values (line, partition) instead of plane.
Or just prefix the output filename with PJPEG:
convert -strip input.jpg PJPEG:output.jpg
Along with a proper file search or filename expansion, e.g.:
for i in images/*.jpg; do
  # the output name here is just an illustration; adjust to your layout
  convert -strip -interlace plane "$i" "${i%.jpg}-progressive.jpg"
done
The -strip option strips any profiles or comments, to make the conversion "cleaner". You may also want to set the -quality option to reduce the quality loss.
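For example (the quality value here is just an illustration; pick whatever suits your images):
convert -strip -interlace plane -quality 85 input.jpg output.jpg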

how to create gif animation from a stack of jpgs

I have around 200 jpg images. I need to stack them so that I can convert them into a simple animated gif image. Are there any free tools available to do that job? My OS is Windows.
I'm not so bothered about the quality of the output.
Try using ImageMagick's convert utility. I have used it to create animated gifs from a set of images (in any format) in the past.
Use the command below (-delay is in hundredths of a second, so 20 gives 0.2 s per frame; -loop 0 loops forever):
convert -delay 20 -loop 0 *.jpg animated.gif
Might want to look at GiftedMotion: http://www.onyxbits.de/giftedmotion
In theory this would work
ffmpeg -f image2 -i image%d.jpg video.avi
ffmpeg -i video.avi -pix_fmt rgb24 -loop_output 0 out.gif
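Note that -loop_output comes from older ffmpeg builds; on current versions the GIF muxer takes a -loop option instead (a sketch along the same lines):
ffmpeg -i video.avi -loop 0 out.gif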
If you'd like a flexible on-line solution, I just used GIFmaker.me and it worked great. It lets you change the frame order, change the size, set the speed, and set the repeat cycles. You can view the animated GIF and download it when you're finished.
Edit: I just used another on-line tool that GIFmaker refers to on their site. GIFcreator is even more flexible, letting you duplicate frames, change the delay for each frame, remove frames, and reverse frames. It also has a more flexible resize capability.
ImageJ and FIJI provide a powerful GUI for doing this (FIJI is a repackaging of ImageJ that also includes some widely used plugins). These powerful (but free!) programs may be overkill, but depending on your needs this may be the way to go, since this is a somewhat common and crucial task for biologists.
FIJI can also open a large array of different image types, can save to GIF or AVI, and is very easily scriptable (internally, in Python or Java) for automating custom tasks, etc.
Step-by-step instructions (from Here and Here) are as follows:
1) Put your images in a folder and name them in sequence (e.g. make sure they open alphabetically in the right order, perhaps by adding the desired frame numbers to the start of the filenames). (On MacOS, this Automator action could help.)
2) In FIJI, select File > Import > Image Sequence...
3) Choose your folder, and then any options (e.g. scaling the images).
4) Preview the video with the Play button in the resulting window's corner.
5) To change the frame rate, choose Image > Stacks > Animation > Animation Options...
6) Select File > Save As > Animated GIF... or AVI... and you're done.
For GIF you can choose the delay between frames (i.e. frame rate) and the looping option. For AVI you can choose the frame rate.
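If you end up scripting it, a minimal ImageJ macro sketch (the paths are placeholders; the Macro Recorder will give you the exact option strings for your setup):
// open the folder of JPGs as a stack, sorted by file name
run("Image Sequence...", "open=/path/to/images sort");
// save the whole stack as an animated GIF
saveAs("Gif", "/path/to/out.gif");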