Chrome auto-rotates any image from a file input drawn to a canvas based on its EXIF data. This is great, but iOS doesn't do the same. Is there a way to prevent this behavior so I can just transform the image myself? With a fix I wrote it works on iOS, and disabling that fix makes it work on Android; I would rather toggle the fix than play the browser-identification game.
I've tried setting the style of the image to image-orientation: none;, but that didn't do anything. The image was still rotated.
Edit: I detected this by checking whether 'imageOrientation' on the style object of a newly created img tag was undefined or an empty string. Maybe not a perfect test, but it worked in the situations I tested. I'm not sure how future-proof it is.
This should be future proof:
// returns a promise that resolves to true if the browser automatically
// rotates images based on exif data and false otherwise
function browserAutoRotates () {
  return new Promise((resolve, reject) => {
    // load an image with exif rotation and see if the browser rotates it
    const image = new Image();
    image.onload = () => {
      resolve(image.naturalWidth === 1);
    };
    image.onerror = reject;
    // this jpeg is 2x1 with orientation=6 so it should rotate to 1x2
image.src = 'data:image/jpeg;base64,/9j/4QBiRXhpZgAATU0AKgAAAAgABQESAAMAAAABAAYAAAEaAAUAAAABAAAASgEbAAUAAAABAAAAUgEoAAMAAAABAAIAAAITAAMAAAABAAEAAAAAAAAAAABIAAAAAQAAAEgAAAAB/9sAQwAEAwMEAwMEBAMEBQQEBQYKBwYGBgYNCQoICg8NEBAPDQ8OERMYFBESFxIODxUcFRcZGRsbGxAUHR8dGh8YGhsa/9sAQwEEBQUGBQYMBwcMGhEPERoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoaGhoa/8IAEQgAAQACAwERAAIRAQMRAf/EABQAAQAAAAAAAAAAAAAAAAAAAAf/xAAUAQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIQAxAAAAF/P//EABQQAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQEAAQUCf//EABQRAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQMBAT8Bf//EABQRAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQIBAT8Bf//EABQQAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQEABj8Cf//EABQQAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQEAAT8hf//aAAwDAQACAAMAAAAQH//EABQRAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQMBAT8Qf//EABQRAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQIBAT8Qf//EABQQAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQEAAT8Qf//Z';
  });
}
The only way to find out for sure whether the browser rotates based on EXIF data is to load an image with an EXIF rotation and see how it comes out.
This is due to an update in Chrome 81, which now supports and respects the 'image-orientation' CSS property. https://developer.mozilla.org/en-US/docs/Web/CSS/image-orientation
Chrome now defaults all images to 'from-image', meaning it reads the EXIF data to determine the image's rotation. Below is basically what I did to detect whether the browser supports this functionality, since future versions of iOS and other browsers are expected to do this as well.
function browserImageRotationSupport() {
  let imgTag = document.createElement('img');
  return imgTag.style.imageOrientation !== undefined;
}
I was able to use this test to differentiate the browsers:
if (CSS.supports("image-orientation", "from-image")) {
  // ...
}
const iOS = !!navigator.platform && /iPad|iPhone|iPod/.test(navigator.platform);
I use this snippet to check whether it is iOS, and only rotate the canvas ctx if it is. I think older versions of Android don't auto-rotate the image either, because I still have bug reports coming in from Android users.
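For reference, here is a rough sketch of the manual rotation I apply when the browser does not auto-rotate (assumptions: the EXIF orientation value has already been read out separately, e.g. with a library such as exif-js, and only the common orientation=6 case is handled):
// Sketch only: draw `img` into a canvas, rotating manually when the browser
// does NOT auto-rotate (e.g. older iOS). `orientation` is assumed to have been
// read from the EXIF data elsewhere; 6 means the image needs a 90° clockwise turn.
function drawWithManualRotation(img, orientation, browserRotates) {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  if (browserRotates || orientation !== 6) {
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    ctx.drawImage(img, 0, 0);
  } else {
    // swap dimensions and rotate 90° clockwise around the new origin
    canvas.width = img.naturalHeight;
    canvas.height = img.naturalWidth;
    ctx.translate(canvas.width, 0);
    ctx.rotate(Math.PI / 2);
    ctx.drawImage(img, 0, 0);
  }
  return canvas;
}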
Setting the CSS on the canvas element as opposed to the img will fix this if you're drawing to a canvas that is part of the DOM.
canvas {
  image-orientation: none;
}
As of writing, the element has to be in the DOM because the browser uses the computed style, which only exists in a DOM context. You can read more in the issue on the Chromium tracker:
https://bugs.chromium.org/p/chromium/issues/detail?id=158753
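To illustrate that requirement, a rough sketch of the ordering described above (this assumes the canvas CSS rule shown earlier is in a stylesheet on the page and that img is an already-loaded image element):
// Sketch only: append the canvas before drawing, so that its computed style
// (including image-orientation: none from the rule above) is in effect when
// drawImage() runs.
const canvas = document.createElement('canvas');
document.body.appendChild(canvas); // must be in the DOM first
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext('2d').drawImage(img, 0, 0);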
My team and I have been struggling lately to find an explanation for why Firefox produces larger WebM/VP8 video files than Chrome when using the MediaRecorder API in our project.
In short, we record a MediaStream from an HTMLCanvas via the captureStream method. In an attempt to isolate everything in our app that might affect this, I developed a small dedicated test app which records a <canvas> and produces WebM files. I've been performing tests with the same footage, video duration, codec, A/V bit rate, and frame rate. However, Firefox still ends up creating files up to 4 times larger than Chrome's. I also tried a different MediaStream source, such as the web camera, but the results were similar.
Here is a fiddle which should demonstrate what I am talking about: https://jsfiddle.net/nzwasv8k/1/ https://jsfiddle.net/f2upgs8j/3/.
You can try recording 10- or 20-second videos in both Firefox and Chrome and notice the difference between the file sizes. Note that I am using only 4 relatively simple frames/images in this demo. In real-world usage, such as in our app where we record a video stream of a desktop, we reached a staggering 9x difference.
I am not a video codec guru in any way, but I believe browsers should follow the same specifications when implementing a given technology, so such a tremendous difference shouldn't occur. Since my knowledge is limited, I cannot tell whether this is a bug or something totally expected. That is why I am asking here: my research on the topic has so far led to absolutely nothing. I'd be really glad if someone could point out the logical explanation behind it. Thanks in advance!
Because they don't use the same settings...
The WebM encoder has many more parameters than the ones we get access to from the MediaRecorder API.
These parameters can all influence the output file's size, and they are up to implementers to set.
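For context, this is roughly the entire surface the MediaRecorder API exposes today; anything beyond the MIME type and the bitrate hints (keyframe interval, quality/speed presets, etc.) is left to each browser's encoder defaults. A minimal sketch, assuming a canvas element with id "source" exists on the page:
// Minimal sketch of the options web code can actually pass to MediaRecorder.
const canvas = document.getElementById('source'); // assumed to exist
const stream = canvas.captureStream(30); // 30 fps

const recorder = new MediaRecorder(stream, {
  mimeType: 'video/webm;codecs=vp8',
  videoBitsPerSecond: 2500000 // a hint only; browsers may adjust or ignore it
});

const chunks = [];
recorder.ondataavailable = e => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  console.log('recorded size:', blob.size, 'bytes');
};

recorder.start();
setTimeout(() => recorder.stop(), 10000); // record 10 seconds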
Here are snapshots I took of the videos generated from your updated fiddle (Chrome 1, Firefox 1, Chrome 2, Firefox 2; images not reproduced here).
I hope you can appreciate the difference in quality; it's about the same as the difference between WebP's 0.15 and 0.8 quality params, and the file sizes reflect these changes as well. The snippet below illustrates that WebP quality comparison:
const supportWebPExport = document.createElement('canvas').toDataURL('image/webp').indexOf('webp') > -1;
const mime = supportWebPExport ? 'image/webp' : 'image/jpeg';

const img = new Image();
img.onload = doit;
img.crossOrigin = 'anonymous';
img.src = "https://i.imgur.com/gwytj0N.jpg";

function doit() {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  canvas.width = this.width;
  canvas.height = this.height;
  ctx.drawImage(this, 0, 0);
  // export the same frame at two different quality settings
  canvas.toBlob(b => appendToDoc(b, '0.15'), mime, 0.15);
  canvas.toBlob(b => appendToDoc(b, '0.8'), mime, 0.8);
}

function appendToDoc(blob, qual) {
  const p = document.createElement('p');
  p.textContent = `quality: ${qual} size: ${blob.size / 1000}kb`;
  // appendChild returns the appended node, so we can set its src directly
  p.appendChild(new Image()).src = URL.createObjectURL(blob);
  document.body.appendChild(p);
}
So yes, that's how it is... One or the other could be better for your use cases, but the best outcome would be for us web devs to get access to these parameters. Unfortunately, that's not an easy thing to do from the spec's point of view...
If an asset (for example, a .jpg image) is found in the DOM, but marked with 'display: none' CSS, which browsers will download that asset, even though it doesn't technically display it to the user?
This is a website load speed question. I want to know how CSS display properties affect page load time. This question has been asked before on StackOverflow. However, that was two years ago, and I've heard rumors that things have changed since then.
Internet Explorer versions 6 and up all appear to load the image. Firefox doesn't appear to load the image in versions 3-5, but does load it from version 6 onward. As for Chrome, the image is loaded at least as far back as version 14. Safari 4 and up also load the image.
Run the test yourself: http://jsfiddle.net/jonathansampson/4L9adwcu/
(function () {
  var image = document.createElement( "img" ),
      timeout = setTimeout(function timeout () {
        alert( "Image was not loaded." );
      }, 3000);

  function loaded () {
    clearTimeout( timeout );
    alert( "Image was loaded." );
  }

  if ( image.addEventListener ) {
    image.addEventListener( "load", loaded );
  } else if ( image.attachEvent ) {
    image.attachEvent( "onload", loaded );
  }

  image.style.display = "none";
  image.src = "http://gravatar.com/avatar/a84a8714fd33f6676743f9734a824feb";
  document.body.appendChild( image );
}());
If I had to speculate as to why this is the case, I suspect it has something to do with loading DOM resources as quickly as possible so they are ready when they're needed. If the image is not added to the document (meaning we remove the document.body.appendChild call), it will not be requested.
You can keep images from being loaded by using a data-* attribute instead. When the image is needed, swap the src value out for the data-src value, and at that point the browser will load the image:
<img data-src="http://example.com/dont-load-me.png" />
The actual swap would be fairly straightforward:
imageReference.src = imageReference.getAttribute( "data-src" );
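For example, a minimal sketch that swaps in all deferred images at once (the loadDeferredImages helper and the trigger below are just illustrative; you would swap each image whenever it is actually needed):
// Hypothetical helper: swap data-src into src for every deferred image,
// which is the point at which the browser issues the network requests.
function loadDeferredImages() {
  var images = document.querySelectorAll( "img[data-src]" );
  for ( var i = 0; i < images.length; i++ ) {
    images[ i ].src = images[ i ].getAttribute( "data-src" );
    images[ i ].removeAttribute( "data-src" );
  }
}
// e.g. defer all non-critical images until the page has finished loading
window.addEventListener( "load", loadDeferredImages );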
I should mention that I am an engineer on the Internet Explorer team.
With specific regard to images, this test has already been done by the W3C: http://www.w3.org/2009/03/image-display-none/results
I'm facing a CORS issue that is driving me insane. Allow me to share an example URL:
http://www.jungledragon.com/image/19905/mature_female_eastern_forktail.html/zoom
As the issue can only be reproduced once per page, here is a list of other images:
http://www.jungledragon.com/all/recent
From that overview, you can open any photo page. Next, from that photo page click the image once more to launch it fullscreen, as that is where the issue lies.
Now allow me to explain the setup, and the problem. The site itself is hosted on a Linux server within my control. The site is at www.jungledragon.com. The images, however, are stored at Amazon S3, where the image bucket has an alias of media.jungledragon.com.
The basic situation is simple:
<div id="slideshow-image-container">
<div class="slideshow-image-wrapper">
<img src="http://media.jungledragon.com/images/1755/19907_large.jpg?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1409788810&Signature=QH26XDrVuhyr1Qimd7IOBsnui5s%3D" id="19907" class="img-slideshow img-sec wide" data-constrained="true" data-maxheight="2056" crossorigin="anonymous">
</div>
</div>
As you can see, I'm just loading the image the normal 'HTML' way. The image URL is signed and can time out, but that shouldn't be relevant. It is my understanding that CORS does not apply to this situation, since loading images from an external domain this way has been supported for decades. The image is not loaded using JavaScript, after all.
Just to be sure though, the crossorigin attribute is set in HTML. Furthermore, as a way of testing, I have set a very liberal CORS policy on the image bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Content-Type</AllowedHeader>
    <AllowedHeader>x-amz-acl</AllowedHeader>
    <AllowedHeader>origin</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
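As an aside, a quick way to check whether the bucket actually returns a usable CORS response for an image URL is a sketch like this (illustrative only; the real URL includes the signed query string shown earlier):
// Rough check: a cross-origin fetch only resolves here if the response
// carries an Access-Control-Allow-Origin header the browser accepts.
var testUrl = 'http://media.jungledragon.com/images/1755/19907_large.jpg'; // plus the signed query string
fetch(testUrl, { mode: 'cors' })
  .then(function (res) { console.log('CORS OK, status: ' + res.status); })
  .catch(function (err) { console.log('CORS request failed: ' + err); });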
Now, the situation gets a bit more complicated. The fullscreen image viewer is supposed to get a background color that is the dominant/average color of the actual image on screen. That color is calculated using canvas, yet it is only calculated once. The first time it is calculated for that image, the result is communicated to the back-end using an ajax call and then stored forever. Subsequent visits to the image will not run the calculation logic again, it will simply set the background color of the body element and all is good.
Here is the logic that does the calculation:
<script>
$( document ).ready(function() {
  <?php if (!$bigimage['dominantcolor']) { ?>
    $('#<?= $bigimage['image_id'] ?>').load(function(){
      var rgb = getAverageRGB(document.getElementById('<?= $bigimage['image_id'] ?>'));
      document.body.style.backgroundColor = 'rgb('+rgb.r+','+rgb.g+','+rgb.b+')';
      if (rgb !== false) {
        $.get(basepath + "image/<?= $bigimage['image_id'] ?>/setcolor/" + rgb.r + "-" + rgb.g + "-" + rgb.b);
      }
    });
  <?php } ?>
});
</script>
Yes, I'm mixing back-end code with front-end code. The above code says that if we do not yet know the dominant color, calculate it. The load handler is used because at document ready the actual image from the normal HTML may not have been loaded completely yet. So, if the dominant color is not known yet and the image has loaded, we trigger the function that calculates the dominant color. Here it is:
function getAverageRGB(imgEl) {
  var blockSize = 5, // only visit every 5th pixel
      defaultRGB = {r:0, g:0, b:0}, // for non-supporting environments
      canvas = document.createElement('canvas'),
      context = canvas.getContext && canvas.getContext('2d'),
      data, width, height,
      i = -4,
      length,
      rgb = {r:0, g:0, b:0},
      count = 0;

  if (!context) {
    return defaultRGB;
  }

  height = canvas.height = imgEl.naturalHeight || imgEl.offsetHeight || imgEl.height;
  width = canvas.width = imgEl.naturalWidth || imgEl.offsetWidth || imgEl.width;

  imgEl.crossOrigin = "anonymous";
  context.drawImage(imgEl, 0, 0);

  try {
    data = context.getImageData(0, 0, width, height);
  } catch(e) {
    /* security error, img on diff domain */
    return false;
  }

  length = data.data.length;
  while ( (i += blockSize * 4) < length ) {
    ++count;
    rgb.r += data.data[i];
    rgb.g += data.data[i+1];
    rgb.b += data.data[i+2];
  }

  // ~~ is used to floor the values
  rgb.r = ~~(rgb.r / count);
  rgb.g = ~~(rgb.g / count);
  rgb.b = ~~(rgb.b / count);

  return rgb;
}
The following line is CORS-relevant:
data = context.getImageData(0, 0, width, height);
Although I believe I have set up CORS correctly, I can live with this code failing in some browsers. It seems to work fine in Firefox and IE11, for example. If it fails, I would expect it to fail at calculating the dominant color. However, something far worse is happening in highly specific cases: the image is not shown at all.
My thinking is that my 'classic' loading of the image via an img src tag should have nothing to do with this script working or failing; in all cases at least the image should just load, irrespective of the canvas trick.
Here are the situations I discovered where the image does not load at all, which I consider a major issue:
On iOS 7 on an iPhone 5, the first load works fine. The calculation may fail, but the image loads. Refreshing the page often breaks the image. The 3rd and 4th tries then succeed again, and so on.
Worse, at work in Chrome 36 the image does not load at all. I say at work, since at home it is not an issue; possibly a proxy makes the difference. I can refresh all I want, but for images that have not had the calculation run yet, it keeps failing.
The natural thing to do is to debug it using Chrome's inspector. Guess what? With the inspector open, it always succeeds. The image always loads, and the CORS request headers and responses look perfectly fine. This leaves me with virtually no way to debug it. I can tell, though, that opening the inspector after the image has failed to load shows the "CORS error" in the console from the previous request. Refreshing with the inspector open then makes it go away.
From reading other questions I've learned that caching may be an influence, but more likely the issue lies in the Origin header not being sent by the browser. I believe the issue may be in that direction, yet I fail to understand:
How it influences my "normal" loading of the image using img tags
How it is only an issue behind a proxy (supposedly) in Chrome, and only when the inspector window is closed
How it works so unreliably and inconsistently in Safari on iOS
As said, I can live with only some browsers succeeding with the canvas part, but I can't live with the image not loading normally in some cases. That part should just work.
I realize the situation is incredibly hard for you to debug, but I hope my explanation triggers some much-needed help.
Update: I've discovered that when I remove crossorigin="anonymous" from the img tag, the image will load correctly in the specific scenarios I mentioned. However, the consequence of that move is that the color calculation will no longer work in Chrome, not at home and not at work. It continues to work in Firefox though. I'm investigating what to do next.
I managed to solve the issue myself. I still cannot fully explain cause and effect here, but this is what I did:
I removed crossorigin="anonymous" from the html's img element. This will at least make sure that the image is always loaded.
The color calculation part I solved by basically rewriting its logic:
var imgSrc = $('#<?= $bigimage['image_id'] ?>').attr('src');
var cacheBurstPrefix = imgSrc.indexOf("?") > -1 ? '&' : '?';
imgSrc += cacheBurstPrefix + 'ts=' + new Date().getTime();

var imagePreloader = new Image();
imagePreloader.crossOrigin = "Anonymous";
imagePreloader.src = imgSrc;

$(imagePreloader).imagesLoaded(function() {
  var rgb = getAverageRGB(imagePreloader);
  document.body.style.backgroundColor = 'rgb('+rgb.r+','+rgb.g+','+rgb.b+')';
  if (rgb !== false) {
    $.get(basepath + "image/<?= $bigimage['image_id'] ?>/setcolor/" + rgb.r + "-" + rgb.g + "-" + rgb.b);
  }
});
Instead of reusing the img element from the HTML, I create a new in-memory image element. Using a cache-busting technique, I make sure it is freshly loaded. Then I use imagesLoaded (a 3rd-party plugin) to detect when this in-memory image has loaded, which is far more reliable than jQuery's load() event.
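For completeness, roughly the same preload-and-measure flow without the plugin would use the image's own load/error events (a sketch only; I stuck with imagesLoaded because it proved more reliable for me):
var imagePreloader = new Image();
imagePreloader.crossOrigin = "Anonymous";
imagePreloader.onload = function () {
  var rgb = getAverageRGB(imagePreloader);
  document.body.style.backgroundColor = 'rgb(' + rgb.r + ',' + rgb.g + ',' + rgb.b + ')';
  if (rgb !== false) {
    // persist the color exactly as before, e.g. via the same $.get call
  }
};
imagePreloader.onerror = function () {
  // leave the default background in place if the preload fails
};
// attach the handlers before setting src; imgSrc is the cache-busted URL built above
imagePreloader.src = imgSrc;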
I've tested extensively and can confirm that in no case does normal image loading ever break again. It works in every browser and proxy situation. As an added bonus, the color calculation part now seems to work in far more browsers, including several mobile browsers.
Although I am still not confident on the root cause, after much frustration I'm very happy with the new situation.
I've created a small JavaScript game and tested it on my local computer in all major browsers, where it works fine. After I uploaded the game to my hosting server, it won't display in Chrome: the canvas area is grey. It works fine in Firefox, though. Does anyone know why? Here is the link for the demo:
http://djordjepetrovic.rs/igrica/
In your catcher_game.js file I found at least one instance of this:
draw: function(){
  basket_catcherImg = new Image();
  basket_catcherImg.src = 'images/basket.png';
  ctx.drawImage(basket_catcherImg, this.x, this.y, this.w, this.h);
  // ...
This won't work reliably. It works locally on your computer because the image is cached on disk.
Loading images is an asynchronous operation, so your drawImage call needs to wait until the image has loaded. The proper way is:
draw: function(){
  var me = this;
  basket_catcherImg = document.createElement('img');
  basket_catcherImg.onload = function() {
    ctx.drawImage(basket_catcherImg, me.x, me.y, me.w, me.h);
  };
  basket_catcherImg.src = 'images/basket.png';
  // ...
You need to do this with other such instances of img as well.
The reason you need me here is that this changes to the image element inside the onload callback, so you need to keep a reference to the original this context.
Also, replace new Image() with document.createElement('img'), as there is currently an issue in Chrome that doesn't handle new Image() correctly in this situation.
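As an example, here is a rough sketch of preloading everything once before starting the game (the file list and the startGame hook are placeholders for whatever your game actually uses):
// Preload every sprite once, then start the game; drawImage can then use the
// cached elements synchronously. The paths below are illustrative only.
function preloadImages(paths, onReady) {
  var images = {};
  var remaining = paths.length;
  paths.forEach(function (path) {
    var img = document.createElement('img');
    img.onload = function () {
      if (--remaining === 0) onReady(images);
    };
    img.src = path;
    images[path] = img;
  });
}

preloadImages(['images/basket.png', 'images/apple.png'], function (images) {
  // startGame(images); // placeholder for your game's real entry point
});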
Nice graphics by the way!
I have a couple of HTML elements (div/span etc.). I want my HTML5 canvas to be able to read that HTML and render it as-is. Later on I will convert it into bytes with the help of canvas.toDataURL() and save it as an image.
I am not looking for plugin-based solutions, and this is specifically targeted at IE9/10.
I appreciate any help towards this!
You can perhaps use this solution:
https://github.com/niklasvh/html2canvas
html2canvas( [ document.body ], {
  onrendered: function(canvas) {
    /* canvas is the actual canvas element;
       to append it to the page, call for example
       document.body.appendChild( canvas );
    */
  }
});
Note: if images are loaded from a different origin (and the response doesn't have the appropriate access headers set), they won't show up.
Optionally, you can use your server as a proxy to fetch the images and serve them to the client:
<img src="http://myserver.com/getexternalimage?http....