Should I load images from my server or fetch them from a URL? - html

I am kind of new to web development. I want to create a Vue.js app for Dota 2: a grid of cards, each card showing a hero profile. I have all the hero data locally in a JSON file, with image paths included. The problem is that there are a lot of images: about 100 heroes, each with 4 abilities, so 100 hero avatars plus 400 ability icons, which is fairly large. Should I deploy these images to my hosting server, or change the image paths to fetch from dota2.com? Which is better and faster, and what is the effect on the client side? My main uncertainty is whether the client loads images on the go or needs to download all 500 images before my website works.
If I fetch the images from URLs, does that make my website static or dynamic? Can I publish it to GitHub, or do I need Firebase to host it?

Static or dynamic refers to whether the data is constant or changes given a set of conditions. It seems that you have a list of images based on a field of Valve's API result set. I would suggest using those images for at least the following reasons:
A site as popular as Valve's most likely has caching across multiple regions. This means that if you are in Montana and the user is in Thailand, there might be a copy of the image on a server closer to them than your single server, unless you also implement something similar, of course.
You will not have to pay for the bandwidth, which you are potentially authorized to use given that you are using their API and they have given you a specific URL. You should read their terms and conditions.
If the image of the character changes it will update accordingly.
It is possible to serve multiple versions of the images based on the device. For instance, a different image may be served for a mobile device than for a desktop computer.
Note that the third reason could be thought of as being dynamic, but it is not dynamic on your side: it is static in the sense that you will always use the image URL retrieved from the API. Also, consider the downside of your local JSON file if Valve changes an image; you should consider regenerating the JSON file every so often.
The client side will be the same either way: a request will be issued for each image to either your server or the Valve server, and the page does not need to wait for all 500 images before it renders. The speed at which each image is served will be the determining factor.
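For example, a minimal Vue sketch of such a card grid (the file name, data shape, and class names are made up; loading="lazy" additionally lets the browser defer images that are far off-screen):
<!-- HeroGrid.vue - hypothetical component; the heroes.json shape is assumed -->
<script setup lang="ts">
import heroes from './heroes.json' // your local JSON with names and image paths/URLs

interface Hero {
  name: string
  img: string         // either a path on your own host or a full dota2.com URL
  abilities: string[] // ability icon URLs (not rendered in this sketch)
}

const heroList = heroes as Hero[]
</script>

<template>
  <div class="grid">
    <div class="card" v-for="hero in heroList" :key="hero.name">
      <!-- the browser requests each image individually; loading="lazy"
           defers the ones that are not near the viewport yet -->
      <img :src="hero.img" :alt="hero.name" loading="lazy" />
      <p>{{ hero.name }}</p>
    </div>
  </div>
</template>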

Related

Spring Application - Getting images on Amazon S3 to client

I am building a Spring Web Application hosted on Elastic Beanstalk. I use S3 to store user-uploaded images, which works great. What I don't understand is how fetching images from S3 to the client works. I found three alternatives.
1. Get the image in a controller and send it to the client. Like this:
S3Object object = amazonS3Client.getObject("bucketname", "path/to/image"); // then stream object.getObjectContent() to the response
2. Open up all images and reach them directly by a URL in the client. Something like this:
<img src="http://aws.amazon.com/bucket/path/to/image.jpg">
3. Use signed download URLs that only work for a certain time. Like this:
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest("bucketname", "path/to/image");
String url = amazonS3Client.generatePresignedUrl(request).toString();
I'm not sure which approach to go for. Routing the image through the web server seems unnecessary, since it loads the server. Opening the URLs up to anyone might increase requests and costs, since anyone can use the images. And the third way is new to me; I haven't really seen anyone practising it, which makes me unsure whether this is really the way to go.
So, how is this usually done?
And how is this handled in the development environment versus the production environment? I guess it doesn't change? Or is it common to use Spring profiles to change the location of static content while developing and only use S3 for production?
If you're hosting JavaScript and CSS on S3, is it then most common to go for approach 2 and open them up to everyone?
For me it depends upon the requirements you have for access control for images uploaded by a user.
If the images are non-sensitive i.e. it wouldn't really matter if someone else got hold of another user's images, then I would go for approach 2.
If on the other hand it would be a disaster if someone managed to get hold of another user's images, then I would go for approach 3 (or some other form of expiring token access to the images).
The last time I did this I went for approach 2 because the images were non-sensitive. To try and prevent people from discovering images, we did apply a hashing function to the name of the image, but again I wasn't massively concerned about this. In either case, a well defined bucket structure that can be easily worked out by the application when constructing the URL for an image is useful. So for you, perhaps consider something like:
s3:bucket_name/images/users/<hashed_and_salted_user_name>/<user_images>
As for your question regarding dev vs prod environments, matching a path prefix within the bucket to the Spring profile is the approach we used. So for example:
s3:bucket_name/prod/images/users/user/foo.jpg
s3:bucket_name/dev/images/users/user/foo.jpg
As you can probably guess, we had Spring profiles named "prod" and "dev". The code for building image URLs took the name of the current Spring profile into account when creating the URL, which gives a nice separation between environments.
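A rough sketch of that URL-building idea (written in TypeScript purely for illustration; the bucket name, path-style URL, and hash value are assumptions, and in the Spring app this would live in a small service that reads the active profile):
const BUCKET = "bucket_name";
// Hypothetical helper: env comes from the active profile ("prod" or "dev"),
// hashedUser is the hashed-and-salted user name from the structure above.
function userImageUrl(env: "prod" | "dev", hashedUser: string, file: string): string {
  // e.g. https://s3.amazonaws.com/bucket_name/prod/images/users/<hash>/foo.jpg
  return `https://s3.amazonaws.com/${BUCKET}/${env}/images/users/${hashedUser}/${file}`;
}
console.log(userImageUrl("prod", "3f9a1c7e", "foo.jpg"));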
In terms of CSS and JavaScript, I tend to host obfuscated/minified versions in the production S3 buckets, and full versions in the dev/test buckets (mainly for performance rather than trying to hide code). In addition, I'd use some sort of versioning/naming structure for how you host CSS/JavaScript in S3 so that you can determine which "version" of the resources your app is using. So for example:
s3:bucket_name/css/app-1.css
s3:bucket_name/css/app-2.css
The version of the CSS/Javascript resources is updated each time you push a new version into production.
By going down this path you kind of treat S3 as the final resting place for a piece of JavaScript/CSS once it is ready to go out into the wide world of production. Once there, you know it will never change. If the CSS/JavaScript does change, the user has to fetch a new resource from S3, because the version will have been incremented. You can hook this into your build process so that your main app always references the latest version of the CSS/JavaScript. I found this has two useful benefits:
Makes it very easy to determine which version of a resource your application is running with
Makes it very easy to cache resources (either with browser or something like CloudFront) as you know they will never change
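As a tiny illustration of the versioning idea (the names and base URL are made up; the real version number would be stamped in by your build):
const ASSET_VERSION = 2;                                  // bumped by the build on each release
const ASSET_BASE = "https://s3.amazonaws.com/bucket_name";
function cssUrl(name: string): string {
  // e.g. https://s3.amazonaws.com/bucket_name/css/app-2.css
  return `${ASSET_BASE}/css/${name}-${ASSET_VERSION}.css`;
}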
Hope that helps.

Windows Store app with photos as static content

I am making a Windows Store application and one of the requirements is that it must not use a data connection. The application includes a lot of photos (around 400, all compressed as much as possible, around 8 KB each).
What would be the best practice for handling this situation? How should I "preload" them into the application?
If your requirement for zero net connectivity is real, then you have two options.
ONE. You can ship your app, and request that the user get the images from some other location and then give them to your app, loading the image in a somewhat manual way.
I know this does not sound ideal, but I wanted to acknowledge it was an option.
TWO. You can ship your app with the images embedded inside the installation package. This means as soon as your app installs, the user has all the images, no net necessary.
Here's how:
Create a new folder in your project, call it "Images" or something.
Drag your images into that folder (using Visual Studio).
Refer to your images like this "ms-appx:///Images/MyImage.jpg"
Note: it's up to you how you keep track of all your images. You could iterate through the folder over and over again, but it's probably best to just hard-code the list in some class.
It's really that easy.
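If your Store app happens to be an HTML/JavaScript one, a minimal sketch of that hard-coded list might look like this (the file names are placeholders):
// Images shipped inside the package; ms-appx:/// resolves to the install
// location, so no network connection is ever needed.
const packagedImages: string[] = [
  "ms-appx:///Images/photo1.jpg",
  "ms-appx:///Images/photo2.jpg",
];
for (const uri of packagedImages) {
  const img = document.createElement("img");
  img.src = uri; // loaded straight from the app package
  document.body.appendChild(img);
}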
Best of luck!
Can you just store them in your project solution and refer to them in your code using the ms-appx:/// format?
Also if you are using so many images, prefer jpg images or compress your images using
https://tinypng.com/
Hope that helps.

Using data URI for images in IE 6 and 7

I am currently developing a web app. It contains images which are generated dynamically on the server (and thus take some time to appear after being requested) and then dished out. So I thought I would use the HTML5 localStorage API to cache the images, so that on subsequent requests for the same image it can be served instantly. For that, I plan to use the base64 encoding of the image as the source instead of a source URL.
Instead of requesting the image from the server, the JS will first check whether that image's data is currently available in localStorage (say an image with ID 123 is stored with 123 as the key and the base64 encoding as the value). If it is, it just sets the image's source to the value obtained from there; otherwise it asks the server to send the encoding and, upon receiving it, stores it in the cache.
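A minimal sketch of that flow (written with modern APIs for brevity; the endpoint URL, key scheme, and PNG content type are placeholders, and IE6/7 would need the shims discussed in the answers below):
// Check localStorage for a cached data URI first; otherwise fetch the base64
// string from the server, cache it, and use it as the image source.
async function loadImage(imgEl: HTMLImageElement, id: string): Promise<void> {
  const cached = localStorage.getItem(id);
  if (cached) {
    imgEl.src = cached;                                 // served instantly from the cache
    return;
  }
  const response = await fetch(`/image-data/${id}`);    // placeholder endpoint
  const base64 = await response.text();
  const dataUri = `data:image/png;base64,${base64}`;    // assuming PNG output
  try {
    localStorage.setItem(id, dataUri);                  // may throw if the quota is full
  } catch {
    /* cache full - skip caching */
  }
  imgEl.src = dataUri;
}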
The problem is that IE6 and IE7 don't support it. There is a workaround, as described here, but that involves a server-side CSS file containing the image data. Since the images will be generated on the fly, that won't serve our purpose. How else can I achieve this in IE6 and IE7?
Alternatively, don't try to cache anything client-side. Cache the generated images on the server side and host those images like normal. You don't need localStorage or client-side caching at all.
In other words:
generate image server side using your script
cache it somewhere like /httpdocs/cache/images/whatever-hash.jpg
serve the image in your document <img src="/cache/images/whatever-hash.jpg">
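A rough sketch of that flow in Node/TypeScript (purely illustrative; the generator function and directory layout are placeholders):
import { existsSync } from "node:fs";
import { writeFile } from "node:fs/promises";
import path from "node:path";

// Stand-in for whatever script actually renders the image bytes.
async function generateImage(hash: string): Promise<Buffer> {
  return Buffer.from(`rendered image for ${hash}`); // real code would produce a JPEG
}

const CACHE_DIR = "/httpdocs/cache/images";

// Returns the public path to put in <img src="...">, generating and caching
// the file only the first time that hash is requested.
async function cachedImagePath(hash: string): Promise<string> {
  const file = path.join(CACHE_DIR, `${hash}.jpg`);
  if (!existsSync(file)) {
    const bytes = await generateImage(hash); // the expensive step runs only once
    await writeFile(file, bytes);
  }
  return `/cache/images/${hash}.jpg`;
}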
If generating an image takes 5 seconds, you have 120 concurrent users requesting 100 unique pages, and your server script can only process 4 threads at any given time, that comes out to:
5 seconds x (120 / 4) / 60 = 2.5 minutes of server processing time before the last user in the queue gets their image and the data is stored in localStorage.
The same will be true if all users request the exact same page: there is no real benefit from caching per user, since every user still has to ask the server to generate their own image. Also, since localStorage will often get invalidated, users will feel a considerably slower experience the more they do, and in my opinion will bail on your app.
Caching the file on the server will have many more benefits IMHO. Sure, it takes up server storage space, but these days that's rather cheap, and you can put a cloud CDN (for example www.maxcdn.com) in front of it to combat the load.
If you still decide you need to cache client-side, then since IE6/IE7 don't support localStorage or data URIs, check out the following:
You'll need a Web Storage shim for IE6/IE7; there's a list at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills
You'll need a way to store the generated image as a blob temporarily and stick it in the storage. Example: http://robnyman.github.com/html5demos/localstorage/
You could also use a Canvas Shim and a toBlob shim: https://github.com/eligrey/canvas-toBlob.js/
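For example, a quick feature check before relying on Web Storage, so the shim path only kicks in when it is actually needed:
// Returns true when localStorage is usable; IE6/7 (without a shim) and some
// private-browsing modes will end up in the catch branch.
function storageAvailable(): boolean {
  try {
    const testKey = "__storage_test__";
    window.localStorage.setItem(testKey, "1");
    window.localStorage.removeItem(testKey);
    return true;
  } catch {
    return false;
  }
}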
Set the headers to inform the browser that the resource is cacheable:
header("Last-Modified: " . gmdate("D, d M Y H:i:s", getlastmod()) . " GMT");
in PHP, or
Response.Cache.SetLastModified(DateTime.Now);
in .NET.
This way the browser will cache the resource.

Quick question regarding CSS sprites and memory usage

Well, it's more to do with images and memory in general. If I use the same image multiple times on a page, will the instances be consolidated in memory, or will each use a separate amount of memory?
I'm concerned about this because I'm building a skinning system for a Windows Desktop Gadget, and I'm looking at spriting the images in the default skin so that I can keep the file system looking clean. At the same time I want to try and keep the memory footprint to a minimum. If I end up with a single file containing 100 images and re-use that image 100 times across the gadget I don't want to have performance issues.
Cheers.
What about testing it? Create a simple application with and without spriting, and monitor your Windows memory usage to see which approach is better.
I'm suggesting you test it because of this interesting post from Vladimir, endorsed even by Mozilla's "use sprites wisely" entry:
(...) where this image is used as a sprite. Note that this is a 1299x15,000 PNG. It compresses quite well — the actual download size is around 26K — but browsers don't render compressed image data. When this image is downloaded and decompressed, it will use almost 75MB in memory (1299 * 15000 * 4).
(At the end of Vladimir's post there are some other great references to check)
Since I don't know how Windows renders its gadgets (and whether it handles compressed image data), it's difficult IMHO to say exactly which approach is better without testing.
EDIT: The official Windows Desktop blog (not updated since 2007) says the HTML runtime used for Windows Gadgets is MSHTML, so I think a test is really needed to know how your application would handle the CSS sprites.
However, if you read some of the official Windows Desktop Gadgets and Windows Sidebar documentation, there's an interesting point relevant to your decision not to use CSS sprites, in "The GIMAGE Protocol" section:
This protocol is useful for adding images to the gadget DOM more efficiently than the standard HTML img tag. This efficiency results from improved thumbnail handling and image caching (it will attempt to use thumbnails from the Windows cache if the requested size is smaller than 256 pixels by 256 pixels) when compared with requesting an image using the file:// or http:// protocols. An added benefit of the gimage protocol is that any file other than a standard image file can be specified as a source, and the icon associated with that file's type is displayed.
I would try to use this protocol instead of CSS sprites and do some testing too.
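For example, a gadget script might load a skin image through gimage rather than file:// (the path handling and file name here are assumptions; System.Gadget.path is the Sidebar API for the gadget's install folder):
// Hypothetical gadget code: build an img element whose source goes through
// the gimage protocol so the Sidebar can use its thumbnail cache.
declare const System: any; // provided by the Windows Sidebar runtime
const img = document.createElement("img");
img.src = "gimage:///" + System.Gadget.path + "/images/skin.png";
document.body.appendChild(img);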
If none of this information helps you, I would try asking at the official Windows Desktop Gadgets forums.
Good luck!
The image will show up one time in the cache (as long as the url is the same and there's no query string appended to the file name). Spriting is the way to go.
Web browsers identify cacheable resources by their ETag response header. If it is absent or differs between requests, the image may be downloaded and stored in the cache multiple times. If you (actually, the web server) supply a unique, stable ETag header for each unique resource, then any decent web browser is smart enough to keep one copy in cache and reuse it for as long as its Expires header allows.
Any decent web server will supply the ETag header automatically for static resources; it is often generated from a combination of the local filename, the file length, and the last-modified timestamp. But servers often don't add the Expires header, so you need to add it yourself. Judging by your post history here at Stack Overflow, I'll assume you're familiar with Apache HTTPD as a web server, so I'd suggest having a look at the mod_expires documentation to learn how to configure it optimally.
In a nutshell, serve the sprite image along with an ETag and a far future Expires header and it'll be okay.
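The same idea sketched with a Node/Express static handler instead of Apache (the paths and max-age value are arbitrary; Express adds the ETag automatically, and the long max-age plays the role of the far-future Expires header):
import express from "express";

const app = express();
// Serve the sprite and other static assets with long-lived caching headers.
app.use("/assets", express.static("public/assets", { maxAge: "365d", etag: true }));
app.listen(3000);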

Pros and Cons of a separate image server (e.g. images.mydomain.com)?

We have several images and PDF documents that are available via our website. These images and documents are stored in source control and are copied content on deployment. We are considering creating a separate image server to put our stock images and PDF docs on - thus significantly decreasing the bulk of our deployment package.
Does anyone have experience with this approach?
I am wondering about any "gotchas" - like XSS issues and/or browser issues delivering content from the alternate sub-domain?
Pro:
Many browsers will only allocate two sockets for downloading assets from a single host. So if index.html is downloaded from www.domain.com and it references 6 image files, 3 JavaScript files, and 3 CSS files (all on www.domain.com), the browser will download them 2 at a time, with the others blocking until a socket is free.
If you pull the 6 image files off onto a separate host, say images.domain.com, you get an extra two sockets dedicated to downloading your images. This parallelizes the asset download process so, in theory, your page could render twice as fast.
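A tiny illustration of the split (host names and file names are placeholders):
// The page itself comes from www.domain.com; images are referenced from a
// separate host so the browser opens additional connections for them.
const assetHost = "https://images.domain.com";
function imgTag(file: string): string {
  return `<img src="${assetHost}/${file}">`;
}
document.body.innerHTML += ["logo.png", "banner.jpg"].map(imgTag).join("");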
Con:
If you're using SSL, you would need to either get an additional single-host SSL certificate for images.domain.com or a wildcard SSL certificate for *.domain.com (matches any subdomain). Failure to do so will generate a warning in the browser saying the page contains mixed secure and insecure content.
You will also, with a different domain, not send the cookie data with every request. This can improve performance.
Another thing not yet mentioned is that you can use different web servers to serve different sorts of content. For example, your static content could be served via lighttpd or nginx while still serving your dynamic content off Apache.
Pros:
- load balancing
- isolating different functionality
Cons:
- more work (when you create a page on the main site you would have to maintain the resources on the separate server)
Things like XSS are a problem of code not sanitizing input (or output, for that matter). The only issue that could arise is if you have subdomain-specific cookies that are used for authentication... but that's really a trivial fix.
If you're serving over HTTPS and you serve an image from an HTTP domain, then you'll get browser security warnings popping up when the page is used.
So if you do HTTPS, you'll need to buy an HTTPS certificate for your image domain as well if you don't want to annoy the hell out of your users :)
There are other ways around this, but it's not particularly in the scope of this answer - it was just a warning!