I'm trying to use modesty/pdf2json and the output is very useful, but I'm trying to figure out the measurement units the library uses. They call them "Page Units", and according to the PDF specs this isn't equal to 1/72 inch (a point), because an entire page is 51 Page Units in height.
Does anybody know what this Page Unit is? Where can I find info about this measurement?
Many thanks in advance.
TL;DR
The important thing to understand is that x, y and element width/height are relative units: they relate to the page width/height by a ratio, so they can be translated to any destination unit by dividing out the existing units and multiplying by the desired ones.
Here are the boring details:
PDFs don't have a standard "size" -- you can print anything you like to PDF, which may include landscape or portrait orientation, different page sizes (Standard, A0-A5, Legal, Tabloid, Custom), etc. The size of a PDF is expressed in inches, so the translation to pixels (including with pdf2json) is not a fixed "24px" as indicated in #async5's answer.
The key to programmatically getting the results you want is to utilize the parsed PDF information (page width and page height) along with how you need to render it (pixel count varies by density of display resolution but an "inch" is always an "inch") and how that translates to the destination resolution you're targeting.
Since the same physical device often supports multiple resolutions (changing the logical DPI), there may be a difference between the native pixel density and the synthesized density set by the user. So the basis for translating from PDF units to a local display is a scale factor made up of the difference between the PDF file's DPI and the target DPI of the physically rendered version of it. The same idea applies to PDF parsing libraries, which may use a different DPI than the native 72dpi of the PDF file itself.
While 96dpi is the Microsoft standard (72dpi is Apple's), choosing either doesn't give you a correct pixel offset, because pdf2json and pdf.js don't know anything about the end-user's display. pdf2json coordinates (x/y) are simply relative measurements of a position on a plane (which is defined by a width/height). Standardizing to an 8.5"x11" page at 72dpi would be done as follows:
pdfRect.x = pdfRect.x * ((8.5 * 72) / parsedPdf.formImage.Width);            // 8.5in * 72dpi / page width in page units
pdfRect.y = pdfRect.y * ((11 * 72) / parsedPdf.formImage.Pages[0].Height);   // 11in * 72dpi / page height in page units
This kind of formula works no matter what pdf2json's internal DPI is -- or, frankly, whichever other PDF parsing library you choose -- because it cancels out those units by dividing them out and multiplying by whatever units you need. Even if today pdf2json internally uses 96dpi and downscales by 1/4, and later changes to 72dpi and downscaling by 1/2, the math above for converting to a pixel offset and DPI keeps working independent of that code change.
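To make that concrete, here's a rough sketch of the same cancellation as a reusable function (this assumes the same output shape used above -- formImage.Width and formImage.Pages[n].Height -- and takes the target page size and DPI as parameters; the helper name is just mine):

// Rough sketch: convert a pdf2json x/y into pixel offsets for a chosen physical page size and DPI.
// Assumes the output shape used above (parsedPdf.formImage.Width, parsedPdf.formImage.Pages[n].Height).
function pdfRectToPixels(parsedPdf, pageIndex, rect, pageWidthInches, pageHeightInches, targetDpi) {
    var xScale = (pageWidthInches * targetDpi) / parsedPdf.formImage.Width;
    var yScale = (pageHeightInches * targetDpi) / parsedPdf.formImage.Pages[pageIndex].Height;
    return { x: rect.x * xScale, y: rect.y * yScale };
}
// e.g. an 8.5" x 11" page rendered at 72dpi:
// var px = pdfRectToPixels(parsedPdf, 0, pdfRect, 8.5, 11, 72);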
Hope this is helpful. When I was dealing with this problem it seemed the Internet was missing a spelled-out version of it: many people were solving specific concrete source/destination resolution issues (often specific to one library) or talking about it in the abstract, but not explaining the relationship very clearly.
Whatever pdf2json produces is not related to PDF.js units (PDF.js uses the standard PDF user-space unit as its base).
So based on https://github.com/modesty/pdf2json/blob/3fe724db05659ad12c2c0f1b019530c906ad23de/lib/pdfunit.js :
pdf2json gets data from PDF.js in 96dpi units
scales every unit by 1/4
So one page unit equals (96px/inch * 1inch / 4) = 24px.
In your example the height is 51 * 24px = 1,224px, or 51 * 0.25inch = 12.75inch.
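Or, expressed as code (a quick sketch based on the 96dpi / quarter-scale internals described above):

// Sketch: one page unit = 96dpi / 4 = 24px, per the pdfunit.js linked above
var PX_PER_PAGE_UNIT = 96 / 4;                      // 24
var heightUnits  = 51;
var heightPx     = heightUnits * PX_PER_PAGE_UNIT;  // 1224 px
var heightInches = heightPx / 96;                   // 12.75 inch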
I've followed the answer in this post Cesium label blurred but have had no luck.
I've made sure that viewer.scene.fxaa = false and nothing seems to change. Please see my screenshot attached.
Does anyone have a fix for this?
Thanks so much!
Be wary of hard-coding something like viewer.resolutionScale = 2. There's a baked-in assumption on this line of code that the user probably has a high-DPI screen, and their browser is scaling up the webpage accordingly. Running this line of code on a system that is already using a 1:1 pixel ratio may cause it to render twice as wide and twice as tall as what the device can actually display.
Here's an alternate suggestion:
viewer.resolutionScale = window.devicePixelRatio
It's not perfect, but better than a hard-coded 2. It will attempt to get the Cesium viewer to exactly match the device's native pixels, which may not be the same size as "CSS pixels", particularly on high-DPI screens.
A value of "1" here (the default) means that Cesium's viewer canvas pixels are the same size as the webpage's idea of CSS pixels, which may be larger and chunkier than the screen's own native pixels. Higher numbers act as a multiplier on the WebGL canvas resolution, taking more graphics memory and performance. You may find that the machine you're testing this on already has a window.devicePixelRatio of 1.5 or 2.0, so the line above may not act any differently from a hard-coded 2 on your particular machine. But checking the local devicePixelRatio is better than making assumptions.
This can be fixed by adding:
viewer.resolutionScale = 2
This may impact performance, but it seems fine so far.
What size should I use for my textures so they look good on Android AND desktop, with good performance on Android? Do I need to create one texture for Android and one for desktop?
For a typical 2D game you usually want to use the maximum texture size that all of your target devices support. That way you can pack the most images (TextureRegions) within a single texture and avoid multiple draw calls as much as possible. For that you can check the maximum texture size of all devices you want to support and take the lowest value. The devices with the lowest maximum size usually also have lower performance, therefore using a different texture size for the other devices is not necessary to increase the overall performance.
Do not use a bigger texture than you need, though. E.g. if all of your images fit in a single 1024x1024 texture then there is no gain in using, say, a 2048x2048 texture, even if all your devices support it.
The spec guarantees a minimum of 64x64, but practically all devices support at least 1024x1024 and most newer devices support at least 2048x2048. If you want to check the maximum texture size on a specific device then you can run:
// Asks the GL driver for the maximum supported texture dimension, in texels.
private static int getMaxTextureSize () {
    IntBuffer buffer = BufferUtils.newIntBuffer(16);   // com.badlogic.gdx.utils.BufferUtils
    Gdx.gl.glGetIntegerv(GL20.GL_MAX_TEXTURE_SIZE, buffer);
    return buffer.get(0);
}
The maximum is always square. E.g. this method might give you a value of 4096 which means that the maximum supported texture size is 4096 texels in width and 4096 texels in height.
Your texture size should always be a power of two, otherwise some functionality like the wrap functions and mipmaps might not work. It does not have to be square, though. So if you only have two images of 500x500 then it is fine to use a texture of 1024x512.
Note that the texture size is not directly related to the size of your individual images (TextureRegion) that you pack inside it. You typically want to keep the size of the regions within the texture as "pixel perfect" as possible. Which means that ideally it should be exactly as big as it is projected onto the screen. For example, if the image (or Sprite) is projected 100 by 200 pixels on the screen then your image (the TextureRegion) ideally would be 100 by 200 texels in size. You should avoid unneeded scaling as much as possible.
The projected size varies per device (screen resolution) and is not related to your world units (e.g. the size of your Image or Sprite or Camera). You will have to check (or calculate) the exact projected size for a specific device to be sure.
If the screen resolution of your target devices varies a lot then you will have to use a strategy to handle that. Although that's not really what you asked, it is probably good to keep in mind. There are a few options, depending on your needs.
One option is to use one size somewhere in the middle. A common mistake is to use way too large images and downscale them a lot, which looks terrible on low-res devices, eats way too much memory and causes a lot of render calls. Instead you can pick a resolution where both the upscaling and the downscaling are still acceptable. This depends on the type of images, e.g. straight horizontal and vertical lines scale very well, while fonts or other highly detailed images don't. Just give it a try. As a rule of thumb you never want a scale factor of more than 2; either upscaling or downscaling by more than 2 will quickly look bad. The closer to 1, the better.
As #Springrbua correctly pointed out, you could use mipmaps to get better downscaling beyond a factor of 2 (mipmaps don't help for upscaling). There are two problems with that, though. The first is that mipmapping causes bleeding from one region into another; to prevent that you can increase the padding between the regions in the atlas. The other is that it causes more render calls, because devices with a lower resolution usually also have a lower maximum texture size, and even though you will never use that maximum it still has to be loaded on that device. That will only be an issue if you have more images than can fit in the lowest maximum size, though.
Another option is to divide your target devices into categories, for example "HD", "SD" and such. Each group has a different average resolution and usually a different maximum texture size as well. This gives you the best of both worlds: it allows you to use the maximum texture size while not having to scale too much. Libgdx comes with the ResolutionFileResolver, which can help you decide which texture to use on which device. Alternatively you can use e.g. a different APK based on the device specifications.
The best way (regarding performance + quality) would be to use mipmaps.
That means you start with a big texture (for example 1024*1024 px) and repeatedly downsample it to a quarter of its size (half the width, half the height) until you reach a 1x1 image.
So you have a 1024*1024, a 512*512, a 256*256... and a 1*1 Texture.
As far as I know you only need to provide the biggest texture (1024*1024 in the example above) and Libgdx will create the mipmap chain at runtime.
OpenGL under the hood then decides which Texture to use, based on the Pixel to Texel ratio.
Taking a look at the Texture API, it seems there is a two-param constructor where the first param is the FileHandle for the texture and the second param is a boolean indicating whether you want to use mipmaps or not.
As far as I remember you also have to set the right TextureFilter.
To know which TextureFilter to use, I suggest reading this article.
What is the intended difference between these 2 functions:
var size = cc.Director.getInstance().getWinSize();
var sizePx = cc.Director.getInstance().getWinSizeInPixels();
In my case they both return the exact same value.
In which cases should they return different values?
In recent versions of Cocos2dx the given answer is no longer accurate, specifically since the framework has dropped support for the explicit retina mode in favor of letting the programmer set the resolution of the game independently of the screen and asset resolution, performing scaling when necessary.
Strictly speaking, getWinSize() returns whatever design resolution you choose (using CCGLView::setDesignResolution(float, float, ResolutionPolicy)); getWinSizeInPixels() returns the design resolution multiplied by the content scale factor, which is, again, provided by you with CCDirector::setContentScaleFactor(float). If you do not provide these values, Cocos2dx will choose the design resolution based on an arbitrary value depending on the current platform. For example, on iOS it will use the size of the provided CAEAGLView in pixels (which may be less than the real device resolution in some cases), and both getWinSize() and getWinSizeInPixels() will return the same value.
getWinSize() and getWinSizeInPixels() will return different values if you are scaling your resources to the game resolution. In that case getWinSizeInPixels() indicates what the resolution would be if you didn't have to scale the resources.
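For illustration, here is a small sketch of that relationship (the function names are taken from the question and the C++ calls above; the exact JS entry points differ between Cocos2d-x/JS versions):

// Sketch: the two calls diverge once a content scale factor is involved
var director = cc.Director.getInstance();
director.setContentScaleFactor(2);             // assets authored at 2x the design resolution
var size   = director.getWinSize();            // design resolution, e.g. {width: 480, height: 320}
var sizePx = director.getWinSizeInPixels();    // design resolution * 2, e.g. {width: 960, height: 640}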
Some possible setups to illustrate how the system works:
1 set of images, 1 design resolution, many device resolutions = images are scaled whenever design != device (lower visual quality for upscaling, unnecessary memory/CPU usage for downscaling), single-resolution layout code (presumably simpler)
multiple sets of images, 1 design resolution, many device resolutions = less need to scale the images because the different assets cover a wider range of targets, and single-resolution layout code is preserved
multiple sets of images, multiple/adaptable design resolutions, many device resolutions = less need to scale, but code must be explicitly agnostic of the resolution (presumably more complex)
It is possible I've got something wrong, since I started looking into Cocos2dx not so long ago, but those are the results I've got after some testing.
One returns points (logical pixels), the other physical pixels. On Retina displays the two values differ (by a factor of 2).
I am generating a PDF from HTML using a library, and all the size parameters I am giving are in pixels. This seems kind of odd. I just googled A4 size in pixels; can I just use those values everywhere?
Is this how it should be done? Will the generated PDF look correct?
Otherwise, do I need to somehow compute the pixel size using information from the screen?
Then, how do PDFs work if they can be sent to others and still look roughly the same?
PDF internally uses the same graphics model as PostScript. PDF is derived from PostScript. Basically,...
...it uses the very same operators that are available in PostScript, but renames them from being long and fully readable to short 1-, 2- or 3-letter abbreviations;
...however, it strips all features that make PostScript a full-blown programming language;
...and it adds a few new graphics capabilities, such as transparencies and direct TrueType font embedding.
PDF also uses the same basic measurement unit as PostScript: 72 points == 1 inch. You may also use fractions of points. This is the device independent way of stating dimensions.
You can use pixels if you want. If you do, the absolute size of a graphic object on the display or the printed paper depends on the current resolution of the display or printer. A square of 72px x 72px is 1inch x 1inch at a 72dpi resolution, but it is 0.1inch x 0.1inch at a 720dpi resolution. Therefore using pixels is a device dependent way of stating dimensions.
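As a small worked example (plain arithmetic, no library assumed):

// Device-independent points vs device-dependent pixels
var POINTS_PER_INCH = 72;
function pointsToPixels(pt, dpi) { return pt / POINTS_PER_INCH * dpi; }
console.log(pointsToPixels(72, 72));   //  72 px -> a 1 inch square on a 72dpi device
console.log(pointsToPixels(72, 720));  // 720 px -> still 1 inch, but on a 720dpi device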
A4 dimensions are 'width x height = 595 x 842 pt'.
PDF is inherently a print medium, and its internal coordinates work in terms of 'points' (72 pt per inch). The PDF rendering software (Acrobat, FoxIt, Ghostscript, etc.) will query the output device for its DPI rating and internally convert all the point-based coordinates into device-specific pixel sizes when it comes time to render the PDF for display/print.
You can specify sizes in pixels while building a PDF, certainly. But remember that pixel sizes differ. A 300x300 pixel image will be 1" x 1" square on a 300dpi printer, but 3" by 3" on a 100 dpi monitor.
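Or, as throwaway code showing the same arithmetic:

// Physical size of a raster image when drawn 1:1 at a device's DPI
function imageSizeInInches(pixels, dpi) { return pixels / dpi; }
console.log(imageSizeInInches(300, 300)); // 1 inch on a 300dpi printer
console.log(imageSizeInInches(300, 100)); // 3 inches on a 100dpi monitor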
I sometimes edit my files in GIMP, when I want to edit my pictures and such. When I export for PDF it depends on the quality I need.
I just use this chart (for A4):
I've just gotten into web development seriously, and I'm trying to make a page that appears the same physical size (in inches) across all browsers and platforms.
I believe a combination of percentage values and inch values can make a consistent UI.
My own system is a 15.4 inch screen with 1920x1200 pixels, i.e. 144 DPI.
Here is the simplest HTML code that fails to appear the right size on any browser except Firefox (tried on Chrome 3 and 4, Opera 10.5, IE7):
<html><head></head>
<body>
<div style="position:absolute; width:2in; height:1in; border:1px solid">
hello world</div>
</body></html>
Chrome, Opera and IE render a .67 inch box ( They seem to be assuming a 96 DPI screen )
I am running Windows XP, but I see no reason why that would make a difference. I see similar incorrect rendering on other machines I have tested.
I thought when I say "1in" in HTML it means one actual inch in the real world....
How do I handle this?
Thanks in advance,
Vivek
Edit:
In 2006 I developed an ActiveX control that did live video editing for a website. In 2008 we started seeing lots of Vista use and higher-DPI screens, which made the UI unusable, so I reworked the application so that everything scaled according to DPI. Since then everyone's happy that they don't need glasses to use the feature.
The whole reason that Win7 and Vista have this "DPI scaling" mode is to allow non-DPI-aware apps to display (but since it basically scales up the app's canvas, the apps look blurry).
I can't believe that calling GetDeviceCaps() or the X-Windows equivalent is harder than hardcoding 96 DPI. Anyway, it wouldn't affect any page that measures in pixels.
Can't be done. Period.
Screens are a grid of pixels and that is the only unit recognized by the physical display. Any other measurement must be converted to pixels for display.
This conversion is done by the operating system's graphics subsystem, which has an internal "DPI" setting - quoted because it's arbitrary and does not necessarily correspond to the actual physical DPI of any real-world device. Windows, for example, defaults to 96 DPI regardless of the display that it's connected to.
When you look at a page with something with a height of "1in", your machine looks at it, calculates that, since it's set for 144 DPI, "1in" means 144 pixels and makes it 144 pixels tall, regardless of the physical distance that those 144 pixels will occupy on your display.
When the typical Windows user with the default 96 DPI setting looks at it, their computer calculates that "1in" = 96px and makes it 96 pixels tall, again regardless of the physical distance that will correspond to those 96 pixels.
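To make the arithmetic concrete (just a sketch; "logicalDpi" here stands for whatever the OS/graphics subsystem is set to, not the panel's true physical density):

// "1in" becomes (logical DPI setting) device pixels, whatever the panel physically measures
function cssInchesToDevicePixels(inches, logicalDpi) { return inches * logicalDpi; }
console.log(cssInchesToDevicePixels(1, 144)); // 144 px with the asker's 144 DPI setting
console.log(cssInchesToDevicePixels(1, 96));  //  96 px with the default Windows setting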
And you know what? This is a good thing. It means that a user with poor vision can lower their video driver's DPI setting and everything sized in inches, cm, point, em, or other 'real-world' units (i.e., basically anything other than percent or pixels) will get bigger so that they can see it more easily. Conversely, those of us who don't mind small text can artificially increase our DPI settings to shrink it all and get more onto our screens.
The tyranny of the printed page, forcing things to be a certain size whether we like it or not, is no more. Embrace the power this gives your users over their experience. Don't try to take that away - you won't succeed and you'll just annoy people in the process.
Instead of giving the size in px, give it as a percentage, so that it can fit on any screen based on the percentage.
I don't really think you can, to be honest.
The DPI of the screen is determined by the hardware, i.e. a 15.4" screen with 1920x1200 resolution = 144 DPI.
Why do you need the site to appear at the same physical dimensions? Surely the same size, proportional to the screen, is enough? I.e. if my resolution is 1920x1080, your site should take up 50% of the width; if I'm using 1600x1050, it should take up 60%?
In short — you can't, at least for use on screen.
Using real world units depends on having clients knowing the correct DPI for the system. Between clients that don't bother, and systems which are not configured correctly, it is a hopeless situation.
If you want something to scale so it is a reasonable size for the users system, then you are likely best off using em units to scale based on the user's font size. This still has problems where the defaults are not suitable and the user hasn't changed them, but it is right more often than physical units.