The problem that I am having is that the images are being pulled through to my website in different sizes, which means they are sometimes blurred.
For example, here are two videos on Vimeo which have exactly the same thumbnail image:
https://vimeo.com/315599618/settings
https://vimeo.com/335868910/settings
I'm attaching screenshots of the two videos embedded on my site.
Video 315599618 (Test123 on our site) has a well-defined image, and 335868910 (Swiped on our site) has a blurred image.
These are the two functions I am using to get the image from the Vimeo API:
public static function getVimeoData1($vimeo_url)
{
    if (!$vimeo_url) return false;
    $data = json_decode(file_get_contents('http://vimeo.com/api/oembed.json?url=' . $vimeo_url));
    if (!$data) return false;
    //return $data->thumbnail_url;
    return $data;
}

public static function getVimeoData2($vimeo_id)
{
    if (!$vimeo_id) return false;
    $data = unserialize(file_get_contents("http://vimeo.com/api/v2/video/$vimeo_id.php"));
    if (!$data) return false;
    //return $data[0];
    return $data;
}
When I open those images separately, it is clear that for Swiped it is pulling through a smaller version of the thumbnail, which is why it is blurred when enlarged (see Test 123 and Swiped images attached). But there is no obvious reason why it would pull through a smaller version of the thumbnail as we are using the same code to pull through the images in both cases.
The site is built in PHP (Laravel), and the Vimeo videos are embedded using the following format: https://player.vimeo.com/video/335868910. If it helps, I have included the code that is used to display the images at the end of this email.
Please, can anyone help me to understand why this is happening and what we can do about it?
If you're using oEmbed to get a video's thumbnail, you should specify the media dimensions you want returned. Without specifying dimensions, you'll get a default size back, or some other unknown size.
Your oEmbed request should look like this:
https://vimeo.com/api/oembed.json?url=https://vimeo.com/335868910&width=1920&height=1080
The full list of oEmbed arguments is found here: https://developer.vimeo.com/api/oembed/videos
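For instance, the first helper from the question could pass the dimensions along. A minimal sketch — width and height are documented oEmbed parameters, but the 1920x1080 defaults here are just an example:

public static function getVimeoData1($vimeo_url, $width = 1920, $height = 1080)
{
    if (!$vimeo_url) return false;
    // Ask oEmbed for media sized to the player we actually embed
    $url = 'https://vimeo.com/api/oembed.json?url=' . urlencode($vimeo_url)
         . '&width=' . (int) $width . '&height=' . (int) $height;
    $data = json_decode(file_get_contents($url));
    return $data ?: false;
}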
I am developing a project that lets the user create a custom graphic and download it as an image, but I am stuck on a big problem.
I don't know exactly where the problem is, but it happens when I try to download the image on a different screen.
Suppose the user makes this graphic:
When I try to download it on the 27" screen, it comes out like the image above; on the 14" screen, however, it looks like this.
Edit: another issue. Right now I work on two screens, a 27" on the left and a 14" on the right. On the left screen the image downloads clearly, but when I drag the window (without changing its size) to the 14" screen, it is again downloaded smaller.
const onDownloadImage = () => {
  var node = document.getElementById("section");
  htmlToImage
    .toPng(node)
    .then(function (base64) {
      let encoded = base64?.split(",");
      setBase64(encoded[1]);
      triggerBase64Download(base64, "My Canvas");
      hideEditElements(false);
    })
    .catch(function (error) {
      console.error("oops, something went wrong!", error);
    });
};
The libraries I used:
html-to-image to get the elements by id and convert them to base64.
and react-base64-downloader to download the base64 image.
Thanks all !!
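A likely cause, though not confirmed in the thread: html-to-image rasterizes at the current window.devicePixelRatio, which differs between a 27" and a 14" screen, so the PNG dimensions change with whichever screen the window is on. The library documents a pixelRatio option that pins the scale; a minimal sketch:

// Sketch: fix the rasterization scale so the output size no longer
// depends on which screen the window is on (pixelRatio is a
// documented html-to-image option).
htmlToImage
  .toPng(node, { pixelRatio: 2 })
  .then(function (base64) {
    triggerBase64Download(base64, "My Canvas");
  });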
I'm facing a CORS issue that is driving me insane. Allow me to share an example URL:
http://www.jungledragon.com/image/19905/mature_female_eastern_forktail.html/zoom
As the issue can only be reproduced once per page, here is a list of other images:
http://www.jungledragon.com/all/recent
From that overview, you can open any photo page. Next, from that photo page click the image once more to launch it fullscreen, as that is where the issue lies.
Now allow me to explain the setup, and the problem. The site itself is hosted on a Linux server within my control. The site is at www.jungledragon.com. The images, however, are stored at Amazon S3, where the image bucket has an alias of media.jungledragon.com.
The basic situation is simple:
<div id="slideshow-image-container">
  <div class="slideshow-image-wrapper">
    <img src="http://media.jungledragon.com/images/1755/19907_large.jpg?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1409788810&Signature=QH26XDrVuhyr1Qimd7IOBsnui5s%3D" id="19907" class="img-slideshow img-sec wide" data-constrained="true" data-maxheight="2056" crossorigin="anonymous">
  </div>
</div>
As you can see, I'm just using the normal 'html' way of loading an image. The image URL is signed and can time out, but that shouldn't be relevant. It is my understanding that CORS does not apply to this situation, since loading images from an external domain this way has been supported for decades. The image is not loaded using javascript, after all.
Just to be sure though, the crossorigin attribute is set in HTML. Furthermore, as a way of testing, I have set a very liberal CORS policy on the image bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Content-Type</AllowedHeader>
        <AllowedHeader>x-amz-acl</AllowedHeader>
        <AllowedHeader>origin</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Now, the situation gets a bit more complicated. The fullscreen image viewer is supposed to get a background color that is the dominant/average color of the actual image on screen. That color is calculated using canvas, yet it is only calculated once. The first time it is calculated for that image, the result is communicated to the back-end using an ajax call and then stored forever. Subsequent visits to the image will not run the calculation logic again, it will simply set the background color of the body element and all is good.
Here is the logic that does the calculation:
<script>
$( document ).ready(function() {
  <?php if (!$bigimage['dominantcolor']) { ?>
  $('#<?= $bigimage['image_id'] ?>').load(function(){
    var rgb = getAverageRGB(document.getElementById('<?= $bigimage['image_id'] ?>'));
    // only use the result if the calculation succeeded
    if (rgb !== false) {
      document.body.style.backgroundColor = 'rgb('+rgb.r+','+rgb.g+','+rgb.b+')';
      $.get(basepath + "image/<?= $bigimage['image_id'] ?>/setcolor/" + rgb.r + "-" + rgb.g + "-" + rgb.b);
    }
  });
  <?php } ?>
});
</script>
Yes, I'm mixing in back-end code with front-end code. The above code says that if we do not yet know the dominant color in the scene, calculate it. The load function is used because at document ready, the actual image from the normal html may not have been loaded completely. Next, if the dominant color is not known yet, and the image is loaded, we trigger the function that calculates the dominant color. Here it is:
function getAverageRGB(imgEl) {
    var blockSize = 5,                  // only visit every 5th pixel
        defaultRGB = {r:0,g:0,b:0},     // for non-supporting environments
        canvas = document.createElement('canvas'),
        context = canvas.getContext && canvas.getContext('2d'),
        data, width, height,
        i = -4,
        length,
        rgb = {r:0,g:0,b:0},
        count = 0;

    if (!context) {
        return defaultRGB;
    }

    height = canvas.height = imgEl.naturalHeight || imgEl.offsetHeight || imgEl.height;
    width = canvas.width = imgEl.naturalWidth || imgEl.offsetWidth || imgEl.width;

    imgEl.crossOrigin = "anonymous";
    context.drawImage(imgEl, 0, 0);

    try {
        data = context.getImageData(0, 0, width, height);
    } catch(e) {
        /* security error: image on a different domain without CORS */
        return false;
    }

    length = data.data.length;
    while ( (i += blockSize * 4) < length ) {
        ++count;
        rgb.r += data.data[i];
        rgb.g += data.data[i+1];
        rgb.b += data.data[i+2];
    }

    // ~~ is used to floor the values
    rgb.r = ~~(rgb.r/count);
    rgb.g = ~~(rgb.g/count);
    rgb.b = ~~(rgb.b/count);

    return rgb;
}
The following line is CORS-relevant:
data = context.getImageData(0, 0, width, height);
Although I believe I have set up CORS correctly, I can live with this code failing in some browsers. It seems to work fine in Firefox and IE11, for example. If it fails, I would expect it to fail at calculating the dominant color. However, something far worse is happening in highly specific cases: the image is not shown at all.
My thinking is that my 'classic' loading of the image via img src tags should have nothing to do with this script working or failing; in all cases the image should at least just load, irrespective of the canvas trick.
Here are the situations I discovered where the image does not load at all, which I consider a major issue:
On iOS7 on iPhone 5, the first load works fine. The calculation may fail but the image loads. Refreshing the page often breaks the image. 3rd and 4th tries then continue to succeed, and so on.
Worse, at work in Chrome 36 the image does not load at all. I say at work, since at home it is not an issue; possibly a proxy makes the difference. I can refresh all I want; for images that have not had the calculation run yet, it keeps failing.
The natural thing to do then is to debug it using Chrome's inspector. Guess what? With the inspector open, it always succeeds. The image will always load, and the CORS request headers and responses look perfectly fine. This leaves me with virtually no way to debug this. I can tell, though, that opening the inspector after the image fails to load shows me the "CORS error" in the console from the previous request I made. Refreshing with the inspector open then makes it go away.
From reading other questions I've learned that caching may be an influence, yet more likely the issue lies in the Origin header not being sent by the browser. I believe the issue may be in that direction, yet I fail to understand this:
How it influences my "normal" loading of the image using img tags
How it is only an issue behind a proxy (supposedly) in Chrome, and only when the inspector window is closed
How it works so unreliably and inconsistently in Safari on iOS
As I said, I can live with only some browsers succeeding at the canvas part, but I can't live with the image not loading normally in some cases. That part should just work.
I realize the situation is incredibly hard for you to debug, but I hope my explanation triggers some much-needed help.
Update: I've discovered that when I remove crossorigin="anonymous" from the img tag, the image will load correctly in the specific scenarios I mentioned. However, the consequence of that move is that the color calculation will no longer work in Chrome, not at home and not at work. It continues to work in Firefox though. I'm investigating what to do next.
I managed to solve the issue myself. I still cannot fully explain cause and effect here, but this is what I did:
I removed crossorigin="anonymous" from the html's img element. This will at least make sure that the image is always loaded.
The color calculation part I solved by basically rewriting its logic:
var imgSrc = $('#<?= $bigimage['image_id'] ?>').attr('src');
var cacheBurstPrefix = imgSrc.indexOf("?") > -1 ? '&' : '?';
imgSrc += cacheBurstPrefix + 'ts=' + new Date().getTime();

var imagePreloader = new Image();
imagePreloader.crossOrigin = "Anonymous";
imagePreloader.src = imgSrc;

$(imagePreloader).imagesLoaded(function() {
  var rgb = getAverageRGB(imagePreloader);
  if (rgb !== false) {
    document.body.style.backgroundColor = 'rgb('+rgb.r+','+rgb.g+','+rgb.b+')';
    $.get(basepath + "image/<?= $bigimage['image_id'] ?>/setcolor/" + rgb.r + "-" + rgb.g + "-" + rgb.b);
  }
});
Instead of reusing the img element from the HTML, I'm creating a new in-memory image element. Using a cache-busting query parameter I'm making sure it is freshly loaded. Next, I'm using imagesLoaded (a third-party plugin) to detect when this in-memory image has loaded, which is far more reliable than jQuery's load() event.
I've tested extensively and can confirm that in no case does normal image loading ever break again. It works in every browser and proxy situation. As an added bonus, the color calculation part now seems to work in far more browsers, including several mobile browsers.
Although I am still not confident about the root cause, after much frustration I'm very happy with the new situation.
I thought the thumbnail component of Bootstrap automatically took an image and cut out a specific-size thumbnail to slap on the site. Either I'm using this feature wrong or this is completely false. Every time I use class="thumbnail", I just get a slightly smaller version of the photo.
As a photographer, my photos are huge, and the "thumbnails" are taking up half the page! Certainly I could resize everything in Photoshop to 200x200px and upload these as thumbnails, but I feel like there must be a better way of doing this.
I tried, in the HTML itself, putting width="200" and height="200" on the img tag, but the problem is that instead of cutting a 200x200 square out of the image, the image was scaled down proportionally along one dimension. That is, the image fits within a 200x200 square, but rather than filling up the entire square (since those aren't its original proportions), it fills up a 200x100-type area.
Can anyone help?
Bootstrap won't resize your images. It is a CSS framework; it can only add styles to your web page, and you can't use it to do any backend work like thumbnail creation. Setting the dimensions in the HTML img tag doesn't resize your image either: it is still the same image that is sent to the browser. So in order to boost your website's performance, you need another solution; you don't want to send huge images over the network if you're only going to show them at thumbnail size.
What kind of website are you running?
For example, WordPress automatically handles thumbnail creation when you upload files through the Media Manager. I'm pretty sure most CMSs have this functionality.
If you have a PHP website without a CMS, you could try using a library such as phpthumb, that would require some coding though.
If you have a static HTML website, then you have no other choice but to resize the images manually in Photoshop.
By the way, do you optimize your images too? (Compression with tools such as tinypng or jpeg-optimizer.)
I've got a little piece of code to generate a thumbnail:
public function generateThumb($pathToImage, $pathToThumb, $extension, $maxDim)
{
    // Create a new image according to its MIME type
    switch ($extension) {
        case "gif":
            $img = imagecreatefromgif($pathToImage);
            break;
        case "jpeg":
            $img = imagecreatefromjpeg($pathToImage);
            break;
        case "png":
            $img = imagecreatefrompng($pathToImage);
            break;
        default:
            return false; // unsupported type
    }

    // Get the source image size
    $width = imagesx($img);
    $height = imagesy($img);

    // Calculate the thumbnail size, keeping the aspect ratio
    // for both vertical and horizontal orientations
    $new_width = ($width > $height) ? $maxDim : floor($width * ($maxDim / $height));
    $new_height = ($height > $width) ? $maxDim : floor($height * ($maxDim / $width));

    // Create a new temporary image
    $tmp_img = imagecreatetruecolor($new_width, $new_height);

    // Copy and resize the old image into the new one
    imagecopyresized($tmp_img, $img, 0, 0, 0, 0, $new_width, $new_height, $width, $height);

    // Save the thumbnail to a file
    imagejpeg($tmp_img, $pathToThumb);

    // Free both image resources
    imagedestroy($img);
    imagedestroy($tmp_img);
}
I'm sorry, but I can't remember where I found the original script (on SO and other websites).
Using that you can generate a thumb each time you add a new image and then use this thumb in your thumbnail preview.
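To illustrate, here is a hypothetical call site (the $imageHelper object and the paths are made up; generateThumb is the method above):

// Hypothetical usage: make a 200px-max thumbnail next to the uploaded original
$imageHelper->generateThumb('uploads/photo123.jpg', 'uploads/thumbs/photo123.jpg', 'jpeg', 200);
// The page then references the small file in the thumbnail markup:
// <img src="uploads/thumbs/photo123.jpg" class="thumbnail" alt="...">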
Nope, Bootstrap will not resize files for you.
Even if you're the only user of the site, you can make a file-upload form for yourself. Take a look at Paperclip; it's awesome. And do not forget to protect this form so that other network users can't upload unwanted images to your site.
Bootstrap is not a resizing API. You can use one of the many image-compression apps that are available.
I would recommend http://shrinkjpeg.com since it does not upload the images to a server; it compresses them locally in the browser.
How can I enable getUserMedia/HTML5 webcam access in Chromium on Raspbian? I have only found answers about streaming pictures to HTML5 sites, but I actually need this on the device itself. I already have the code running to get the pictures with JS. Moreover, raspivid shows me correct pictures. So how do I make Chromium notice the camera?
Thank you!
If you just want to stream video from the camera into a web page then this is straightforward.
You need to use Firefox or Chrome as the browser (and possibly Opera), create a <video> element in your web page, and then include JS code similar to this:
// Assumes a <video> element, e.g. var video = document.querySelector('video'),
// and a mediaStream variable declared in the enclosing scope.
navigator.getUserMedia(
  {
    video: true,
    audio: false
  },
  function(stream) {
    if (navigator.mozGetUserMedia) {
      video.mozSrcObject = stream;
    } else {
      var url = window.URL || window.webkitURL;
      video.src = url ? url.createObjectURL(stream) : stream;
    }
    mediaStream = stream;
    video.play();
  },
  function(error) {
    console.log("ERROR: " + error);
  }
);
There are details to deal with such as resizing the output window to match the input stream.
Take a look at my Tutorial on this which includes a simple working demo and complete code - as well as static image capture from the video feed.
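A side note: the callback-style navigator.getUserMedia shown above is deprecated. Current Chromium exposes the promise-based navigator.mediaDevices.getUserMedia instead, which also requires a secure context (https:// or localhost). A minimal sketch, assuming a hypothetical <video id="preview"> element in the page:

// Promise-based capture into a <video> element
var video = document.getElementById("preview");
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then(function (stream) {
    video.srcObject = stream; // modern replacement for createObjectURL(stream)
    return video.play();
  })
  .catch(function (error) {
    console.log("ERROR: " + error);
  });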
You can use a bit of a JavaScript workaround, as the methods for recording directly from the browser using getUserMedia aren't implemented yet. Whammy.js is a good place to start (https://github.com/antimatter15/whammy) and there's a good guide here: http://www.html5rocks.com/en/tutorials/getusermedia/intro/ (too much code for me to put here!)
Not sure if that's what you're asking, but it's there should you need it.
I am pre-loading some images and then using them in a lightbox. The problem I have is that although the images are loading, they aren't being displayed by the browser.
This issue is specific to Chrome. It has persisted through Chrome 8 - 10, and I've been trying on and off to fix it all this time and have got nowhere.
I have read these similar questions,
Chrome not displaying images though assets are being delivered to browser
2 Minor Crossbrowser CSS Issues. Background images not displaying in Google Chrome?
JavaScript preloaded images are getting reloaded
which all detail similar behaviour, but in Chrome for Mac, whereas this is happening on Windows.
All other browsers seem to be fine.
If you have Firefox and Chrome open, load the page in Firefox, and then in Chrome, the images appear.
Once you have manually loaded the images, using the Webkit webdev toolbar thingy, they always show up
All the links to the images and such are fine and working
Clearing everything from Chrome doesn't seem to make any difference (cache, history, etc)
If anyone has any ideas it would be fantastically helpful, as I'm literally all out of options here.
PS, Apologies if there are late replies, I'm off on holiday for a week tomorrow! :D
Update
Here is the JavaScript function which is preloading the images.
var preloaded = new Array();
function preload_images() {
  for (var i = 0; i < arguments.length; i++) {
    document.write('<');
    document.write('img src=\"'+arguments[i]+'\" style=\"display:none;\">');
  }
}
Update
I'm still having issues with this, and I've removed the whole image-preloading function. Perhaps delivering a style sheet via document.write() isn't the best way?
Chrome might not be preloading them as it's writing to the DOM with no display, so it might be intelligent enough to realise it doesn't need to be rendered. Try this instead:
var preloaded = new Array();
function preload_images() {
  for (var x = 0; x < arguments.length; x++) {
    // Create real Image objects so Chrome actually fetches and caches them
    preloaded[x] = new Image();
    preloaded[x].src = arguments[x];
  }
}
The JavaScript Image object also has a number of event hooks you might find useful:
http://www.javascriptkit.com/jsref/image.shtml
onabort() - code is executed when the user aborts the downloading of the image.
onerror() - code is executed when an error occurs while loading the image (e.g. not found).
onload() - code is executed when the image successfully and completely downloads.
And then you also have the complete property, which tells you (true/false) whether the image has fully (pre)loaded.
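For instance, here is a hypothetical variant of the preloader that fires a callback once every image has either loaded or failed, using the onload/onerror hooks above (function and variable names are made up):

// Preload images and invoke done() once all of them have settled.
function preloadImages(urls, done) {
  var remaining = urls.length;
  var images = urls.map(function (url) {
    var img = new Image();
    img.onload = img.onerror = function () {
      if (--remaining === 0) done(images);
    };
    img.src = url;
    return img;
  });
}

preloadImages(["a.jpg", "b.jpg"], function () {
  console.log("all images preloaded");
});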
It turns out that Chrome honours HTTP caching and discards any preloaded images immediately after the preload if caching is (incorrectly) set to expire immediately.
In my case I am generating the images dynamically and by default the response was sent to the browser with immediate expiration.
To fix it I had to set the following:
Response.Cache.SetExpires(DateTime.Now.AddYears(1));
Response.Cache.SetCacheability(HttpCacheability.Public);
return File(jpegStream, "image/jpeg");