Consistency of CSS Pixel Font Sizes - html

Suppose that I want to render two capital 'T's, both precisely one inch tall. One 'T' is rendered on a MacBook Pro with Retina display (220 DPI, CSS pixel ratio 2), and the other is rendered on an iPhone 4S (326 DPI, CSS pixel ratio 2). To do this, I set the font size on the MacBook Pro to 220 / 2 = 110px, and the font size on the iPhone 4S to 326 / 2 = 163px. However, the two 'T's are still not the same size. Why? Also, what can I do to make sure that the two 'T's are the same physical size?
Edit: Yes, I am using media queries. I didn't want to get into the gory details, but I hinted at it by mentioning the use of the CSS pixel ratio. Assume that I am somehow magically able to obtain the DPI and CSS pixel ratio of the client monitor in all cases.
Thanks for your insight!

I think the concept you need here is media queries, and the question doesn't mention whether you are using them. You can target both the MacBook and the iPhone with media queries to get the desired effect.
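For illustration, a minimal sketch of such media queries, using the font sizes derived in the question (220 / 2 = 110px and 326 / 2 = 163px). The .one-inch-cap class, the device-width breakpoints, and the reliance on the -webkit-min-device-pixel-ratio feature are assumptions made for the sketch, not part of the original question:

/* Sketch only: class name and breakpoints are placeholders. */

/* iPhone 4S: small device width at a device pixel ratio of 2 */
@media only screen and (max-device-width: 480px) and (-webkit-min-device-pixel-ratio: 2) {
  .one-inch-cap { font-size: 163px; }
}

/* Retina MacBook Pro: large device width at a device pixel ratio of 2 */
@media only screen and (min-device-width: 1024px) and (-webkit-min-device-pixel-ratio: 2) {
  .one-inch-cap { font-size: 110px; }
}

In newer browsers, the standard (min-resolution: 2dppx) feature can be listed alongside the prefixed one.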
More is here

Related

Why does 1px in CSS not match with 1 px from screen resolution?

If you inspect the body/html of this site (or any other) in dev tools, you can clearly see that the width of the page is 15-20% smaller than your screen resolution.
Where does the difference come from?
I understand why there could be a small difference of ±50px because of the browser's own borders, scrollbars, etc., but that doesn't account for 400 missing pixels of width.
I also checked that my browser is set to 100% scale.
Calculated Sizes
It's pretty common these days to have calculated sizes, clamps, or min/max definitions. So, the first thing to check is whether these are affecting the numbers you expect.
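To make that concrete, here is a small hypothetical sketch of the kinds of rules meant here; the selectors are placeholders, and any one of these can make the rendered size differ from the raw number you expect:

.container {
  width: calc(100% - 40px);   /* subtracts fixed gutters from the full width */
  max-width: 1200px;          /* caps the width on large screens */
}

.hero-title {
  font-size: clamp(1.5rem, 4vw, 3rem);   /* scales with the viewport, within bounds */
}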
Pixel Density
Modern devices have different screen densities. Basically, this means more physical blinky lights are used to represent one "theoretical pixel", which allows for super high resolution rendering.
Mathematically, there will be a ratio of real pixels (blinky lights) to theoretical pixels; for example, the iPhone 11 Pro has a 3x pixel density, and the Samsung Galaxy S10 has a 4x pixel density. This is "how many device pixels there are for each CSS pixel".
To look at this another way, the 1440 physical pixels across the width of the Samsung represent 360 CSS pixels.
A full rundown is available in this article.
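As a small, hedged illustration of using that ratio directly from CSS, resolution media queries match on the device-pixel-to-CSS-pixel ratio (the .card selector is a placeholder):

/* On 2x-and-denser screens, a 0.5px CSS border can still map to a
   whole device pixel, giving a crisper hairline. */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 2dppx) {
  .card { border-width: 0.5px; }
}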

Why are media queries defined in small values like 500px when phones are greater than that? [duplicate]

When designing websites we often use media queries to make the website more responsive on different devices. However, I noticed that most media queries are defined with min-width, and min-width usually has small values like 768px, 920px, etc.
I just do not see how this makes the website mobile friendly, because most phones have very high resolutions; take the iPhone X, for example: its resolution is 2436 x 1125 pixels. Aren't the media queries useless then? The resolution of some phones is almost the same as, if not better than, some monitors. Wouldn't that mean websites should look the same on some phones and monitors? What exactly am I missing?
Pixels here refer to CSS pixels rather than physical pixels. From MDN:
The CSS pixel—denoted in CSS with the suffix px—is a unit of length which roughly corresponds to the width or height of a single dot that can be comfortably seen by the human eye without strain, but is otherwise as small as possible.
When a website uses the <meta name='viewport' content='width=device-width'> declaration, the actual width in CSS pixels is a much smaller number than the declared resolution of the device.
For example, for my iPhone you'd get approximately 376px in width and 635px in height. The phone itself has a resolution three times larger, so you can say it has a device pixel ratio of 3.
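A short sketch of what that means for media queries, assuming the viewport meta tag above is present (the .sidebar class is a placeholder). Width queries compare against CSS pixels, so an iPhone X (1125 physical pixels wide, but 375 CSS pixels wide) falls under the phone rule rather than the desktop rule:

/* Desktop-and-up rule: does NOT match an iPhone X, despite its
   1125-pixel-wide panel, because the phone is only 375 CSS pixels wide. */
@media (min-width: 768px) {
  .sidebar { display: block; }
}

/* Phone rule: this is the one the iPhone X matches. */
@media (max-width: 480px) {
  .sidebar { display: none; }
}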

Why does the iPhone 5 have a width of only 320 pixels? [duplicate]

From the UIImage reference:
@property(nonatomic, readonly) CGSize size
The dimensions of the image, taking orientation into account.
Discussion
In iOS 4.0 and later, this value reflects the logical size of the image and is measured in points. In iOS 3.x and earlier, this value always reflects the dimensions of the image measured in pixels.
What's the difference between pixels and points in iOS?
A pixel on iOS corresponds to the full resolution of the device, which means that if I have an image that is 100x100 pixels in size, a standard non-Retina device will render it at 100x100 physical pixels. However, because newer iPhones have four times the pixel density (twice in each dimension), that same image would still be drawn at 100x100 physical pixels, but look half as wide and half as tall. The iOS engineers solved this a long time ago (way back in OS X with Quartz) when they introduced Core Graphics' point system. A point is a standard length equivalent to 1x1 pixels on a non-Retina device and 2x2 pixels on a Retina device. That way, your 100x100 image is rendered at twice the pixel dimensions on a Retina device, which basically normalizes what the user sees.
It also provides a standard system of measurement on iOS devices because no matter how the pixel density changes, there have always been 320x480 points on an iPhone screen and 768x1024 points on an iPad screen.*
At the same time, you can mostly disregard that note in the documentation, since Retina devices were introduced with iOS 4 at a minimum, and not many people are still running iOS 3 on a newer iPhone. But if such a case arises, your UIImage would need to be rendered at exactly twice its pixel dimensions on a Retina iPhone to make up for the pixel-density difference.
*Starting with the iPhone 5, the iPhone's dimensions are no longer standardized. Please use the appropriate APIs to retrieve the screen's dimensions, or use layout constraints.
The Ultimate Guide To iPhone Resolutions
These guys did an awesome job, take a look here — The Ultimate Guide To iPhone Resolutions.
Using data from http://www.paintcodeapp.com/news/ultimate-guide-to-iphone-resolutions I set up the formula sqrt(pointWidth^2+pointHeight^2)/diagonalInches to figure out how many points each phone displayed per inch.
Results:
iPhone 2G, 3G, 3GS, 4, 4s = 164.825201164068082 Points Per Inch
iPhone 5, 5s = 162.9846618550346903
iPhone 6 = 162.8061416117083255
iPhone 6 Plus = 153.535954278463216
As you can tell, a point is roughly the same size on each phone. Using the same webpage, you can set up the same formula with the pixel values, and you'll notice large irregularities due to the higher pixel densities of the newer phones.
iOS pixels, points, units
We should review them in the context of
coordinate systems
absolute and relative positioning
Differences
pixels (absolute) - the pixel-based coordinate system of the current screen. It is used in very specific cases, for example CGImage (the width/height, in pixels, of the required image) and CGContext.
points (virtual pixels, logical pixels) (absolute) - the default point-based coordinate system. It is a kind of density-independent pixel, like dp on Android.
units - the unit coordinate system (relative), ranging from 0.0 to 1.0. For example, CALayer.anchorPoint.
[iOS Frame vs Bounds]

Retina - Correlation between device pixel ratio and size of image?

I don't quite understand what the window.devicePixelRatio value is, and how it dictates what size image (2x, 3x, etc) I need for that device.
For instance, on an iMac 5K Retina (Late 2015), I'd expect the pixel ratio to be at least 3 or so, but it's actually 2, the same as an iPad Air and iPhone 6s. Does that mean it prefers a 2x bitmap? 3x?
devicePixelRatio is the ratio between physical pixels and device-independent pixels (dips) on a given device. You can think of dips as what the display "acts" like.
For example: a non-retina 27" iMac has a width of 2560 physical pixels. Everything is displayed 1:1, so it's also 2560 dips wide, so the devicePixelRatio is 1.
On your retina 27" iMac, the width is 5120 physical pixels. But the display "acts" like it's only 2560 pixels wide, so that everything is shown at the same physical size as the non-retina iMac. Therefore, it's still 2560 dips wide, so the devicePixelRatio is 2 (5120 / 2560), and you would serve 2x images.
(You can look up what the dips values are for your system – if you have a retina display – by going to System Preferences > Displays > Display and switching the Resolution toggle to Scaled, then hovering over the different options. For Default, on the 5K iMac, it'll say "Looks like 2560 x 1440").
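As a sketch of what serving 2x images can look like in CSS, image-set() lets the browser pick an asset based on the device pixel ratio; on the 5K iMac (ratio 2), the 2x file would be chosen. The class and file names are placeholders:

.hero {
  /* Prefixed form for older WebKit, then the standard form. */
  background-image: -webkit-image-set(url("hero.png") 1x, url("hero@2x.png") 2x);
  background-image: image-set(url("hero.png") 1x, url("hero@2x.png") 2x);
}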
To date, standard practice for graphics destined for Retina displays is still to provide an image that's twice the usual, non-Retina size.
Reminder: it is good "bandwidth hygiene" to serve an image only as large as needed for the current user's device size and resolution. Solutions to that are outside the scope of this question.

iPad Retina Resolution vs Normal Resolution

So I am about to create a full-screen game for iPad. The Retina display (new iPad) has a resolution of 2048 x 1536 pixels, while the old iPad has a resolution of 1024x768 pixels.
My question is: should I set the width of my container div to 2048 pixels and then use media queries to change the dimensions and positions of different HTML elements based on window width? I will be using lots of absolute positioning and I am not sure how that will work between the old and new iPad.
Or will the retina display still return 1024 in media queries?
Also, do I need two sets of images? One set of images for the Retina display and one set of images (at half the size) for the old iPad display?
I am slightly confused. Thank you for explaining this to me.
You should set it to 1024, and then target Retina and other high-resolution displays with device-pixel-ratio media queries.
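A minimal sketch of that approach, with placeholder file names: the layout stays at 1024 CSS pixels, and a device-pixel-ratio media query swaps in the 2048-pixel artwork on Retina iPads while keeping the same layout size.

#game {
  width: 1024px;
  height: 768px;
  background-image: url("board-1024.png");
  background-size: 1024px 768px;
}

@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
  #game {
    background-image: url("board-2048.png");
    background-size: 1024px 768px;   /* still sized in CSS pixels */
  }
}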
Read more here:
http://menacingcloud.com/?c=highPixelDensityDisplays