HTML5 Video Tag vs. Streaming Server

Thanks to HTML5 it's very easy to stream a video to the customer by just adding it to the page with the <video> tag.
Still, lots of big services (YouTube etc.) are using streaming servers and Flash to stream videos.
What's the benefit of these technologies?
Does it make any difference for performance or for the user's perception? How does HTML5 video handle many requests at a time?

At the moment the main benefits of Flash are:
Easy to add ads
Browser compatibility
No need to support many different codecs
So if you need any of these, you might choose Flash.

Related

Basic architecture to serve, stream and consume large audio files to minimize client-side resource consumption and latency

I am trying to build a web application which will need to have audio streaming functionality implemented in some way. Just to give you some context: it is designed to be a purely auditory experience/game/whatever you'd call it, with lots of different sound assets varying in length and thus file size. The sound assets will consist of ambient sounds, spoken bits of conversation, but also long music sets (up to a couple of hours). The reason I think I won't be able to just host these audio files on some server or CDN and serve them from there is that the sound assets will need to be fetched and played dynamically (depending on user interaction) and as instantly as possible.
Most importantly, consuming larger files (like the music sets and long ambient loops) as a whole doesn't seem client-friendly at all to me (data consumption on mobile networks and client-side memory usage).
Also, without any buffering or streaming mechanism, the client won't be able to start playing these files before they are downloaded completely, right? Which would add the issue of high latencies.
I've tried to do some online research on how to properly implement a good server-side infrastructure to stream bigger audio files to clients and found HLS and MPEG-DASH. I have some experience with consuming HLS streams with web players, and if I understand it correctly, I would use some sort of one-time transformation process (on or after file upload) to split up the files into chunks and create the playlist, and then just serve these files via HTTP. From what I understand, the process should be more or less the same for MPEG-DASH. My issue with these two techniques is that I couldn't really find any documentation on how to implement JavaScript/TypeScript clients (particularly using the Web Audio API) without reinventing the wheel. My best guess would be to use something like hls.js, bind the HLS streams to freshly created audio elements, and use these elements to create AudioSources in my Web Audio graph. How far off am I? I'm trying to get at least an idea of a best practice.
To sum up, what I would really appreciate getting some clarity about:
Would HLS or MPEG-DASH really be the way to go or am I missing a more basic chunked file streaming mechanism with good libraries?
How - theoretically - would I go about limiting the amount of chunks downloaded in advance on the client side to save client-side resources, which is one of my biggest concerns?
I was looking into hosting services as well, but figured that most of them are specialized in hosting podcasts (fewer but very large files). Does anyone have an opinion on whether I could use these services to host and stream possibly thousands of files with sizes ranging from very small to rather large?
Thank you so much in advance to everyone who takes the time to help me out. Really appreciate it.
The reason I think I won't be able to just host these audio files on some server or CDN and serve them from there is that the sound assets will need to be fetched and played dynamically (depending on user interaction) and as instantly as possible.
Your long-running ambient sounds can stream using a normal HTMLAudioElement. When you play them, there may be a little lag before they start, since they have to begin streaming, but note that the browser will generally prefetch the metadata and maybe even the beginning of the media data.
For short sounds where latency is critical (like one-shot user interaction sound effects), load those into buffers with the Web Audio API for playback. You won't be able to stream them, but they'll play as instantly as you can get.
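For illustration, here is a minimal sketch of those two paths in TypeScript. The file paths are made up, and the AudioContext should be created (or resumed) inside a user-gesture handler so the browser allows it to start:

```typescript
const ctx = new AudioContext();

// 1) Long ambient/music files: let the browser stream them via a media
//    element, optionally routed into the Web Audio graph.
const music = new Audio("/audio/music-set.opus"); // hypothetical path
music.preload = "metadata"; // fetch headers/duration now, media data on demand
ctx.createMediaElementSource(music).connect(ctx.destination);
music.play(); // the browser streams and buffers on its own

// 2) Short, latency-critical one-shots: decode fully into memory up front.
async function loadOneShot(url: string): Promise<AudioBuffer> {
  const bytes = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(bytes);
}

let click: AudioBuffer | undefined;
loadOneShot("/audio/ui-click.opus").then((buf) => (click = buf));

function playClick(): void {
  if (!click) return;                   // not loaded yet
  const src = ctx.createBufferSource(); // buffer source nodes are single-use
  src.buffer = click;
  src.connect(ctx.destination);
  src.start();                          // starts essentially instantly
}
```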
Most importantly, consuming larger files (like the music sets and long ambient loops) as a whole doesn't seem client-friendly at all to me (data consumption on mobile networks and client-side memory usage).
If you want to play the audio, you naturally have to download that audio. You can't play something you haven't loaded in some way. If you use an audio element, you won't be downloading much more than what is being played. And, that downloading is mostly going to occur on-demand.
Also, without any buffering or streaming mechanism, the client won't be able to start playing these files before they are downloaded completely, right? Which would add the issue of high latencies.
If you use an audio element, the browser takes care of all the buffering and whatnot for you. You don't have to worry about it.
I've tried to do some online research on how to properly implement a good infrastructure to stream bigger audio files to clients on the server side and found HLS and MPEG-DASH.
If you're only streaming a single bitrate (which for audio is usually fine) and you're not streaming live content, then there's no point to HLS or DASH here.
Would HLS or MPEG-DASH really be the way to go or am I missing a more basic chunked file streaming mechanism with good libraries?
The browser will make ranged HTTP requests to get the data it needs out of the regular static media file. You don't need to do anything special to stream it. Just make sure your server is configured to handle ranged requests... most any should be able to do this right out of the box.
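If you want to verify that your server (or CDN) has range support enabled, a quick probe like this works; the file path is just an example:

```typescript
// Ask for headers only and see whether the server advertises byte ranges.
async function supportsRangeRequests(url: string): Promise<boolean> {
  const res = await fetch(url, { method: "HEAD" });
  return res.headers.get("Accept-Ranges") === "bytes";
}

supportsRangeRequests("/audio/music-set.opus").then((ok) =>
  console.log(ok ? "byte ranges supported" : "no range support"),
);
```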
How - theoretically - would I go about limiting the amount of chunks downloaded in advance on the client side to save client-side resources, which is one of my biggest concerns?
The browser does this for you if you use an audio element. Additionally, data saving settings and the detected connectivity speed may impact whether or not the browser pre-fetches. The point is, you don't have to worry about this. You'll only be using what you need.
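If you're curious, you can watch how conservative the browser is being by logging the element's buffered TimeRanges (element and path illustrative):

```typescript
const el = new Audio("/audio/music-set.opus");
el.preload = "metadata";
el.addEventListener("progress", () => {
  // Each TimeRange is a span of media the browser has actually fetched.
  for (let i = 0; i < el.buffered.length; i++) {
    console.log(`buffered ${el.buffered.start(i)}s to ${el.buffered.end(i)}s`);
  }
});
```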
Just make sure you're compressing your media as efficiently as you can for the required audio quality. Use a good codec like Opus or AAC.
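Since codec support varies across browsers (Safari has historically lacked Opus in some containers), one common pattern is to feature-detect at runtime and fall back; the file names here are made up:

```typescript
// canPlayType returns "", "maybe", or "probably" for a given MIME string.
const probe = document.createElement("audio");
const src =
  probe.canPlayType('audio/ogg; codecs="opus"') !== ""
    ? "/audio/music-set.opus" // smaller at equal quality
    : "/audio/music-set.m4a"; // AAC fallback
```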
I was looking into hosting services as well, but figured that most of them are specialized in hosting podcasts (fewer but very large files). Has anyone an opinion about whether I could use these services to host and stream possibly 1000s of files with sizes ranging from very small to rather large?
Most any regular HTTP CDN will work just fine.
One final note for you... beware of iOS and Safari. Thanks to Apple's restrictive policies, all browsers under iOS are effectively Safari. Safari is incapable of playing more than one audio element at a time. If you use the Web Audio API you have more flexibility, but the Web Audio API has no real provision for streaming. You can use a media element source node, but this breaks lock screen metadata and outright doesn't work on some older versions of iOS. TL;DR: Safari is all but useless for audio on the web, and Apple's business practices have broken any alternatives.

Is there a way to offer multiple video qualities (resolutions) without uploading multiple videos in HTML5 video player?

I'm trying to add a few videos to my website using HTML5. My videos are all 1080p, but I want to give people the option to watch in a lower quality if needed. Can I do this without having to upload multiple videos (one for each quality) and without the usage of a server-side language?
I've been searching extensively for this. I haven't found anyone say that it can't be done, but no one has said it can either. I am using Blogger as my host, which is why I can't use server-side languages.
Thank you.
without the usage of a server-side language?
Yes, of course. The client can choose what version of the video to download.
Can I do this without having to upload multiple videos (one for each quality)
Not practically, no. You need to transcode that video and upload those different versions.
I haven't found anyone say that it can't be done
A couple things to consider... first is that a video file can contain many streams. I don't know what your aversion is to multiple files, but yes it is possible to have several bitrates of video in a single container. A single MP4, for example, could easily contain a 768 kbps video, a 2 Mbps video, and an 8 Mbps video, while having a single 256 kbps audio track.
To play such a file, a client (implemented with Media Source Extensions and the Fetch API) would need to know how to parse the container and make ranged requests for specific chunks out of the file. To my knowledge, no such client exists as there's little point to it when you can simply use DASH and/or HLS. The browser certainly doesn't do this work for you.
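To make concrete why nobody builds this, here's a very rough sketch of what such a client would involve with MSE and Fetch. It's illustrative only: the byte range is hypothetical (you'd have to parse the MP4 boxes yourself to find each stream's chunks), and MSE additionally requires fragmented MP4, which is one more reason plain multi-stream files aren't used this way:

```typescript
const video = document.querySelector("video")!;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');
  // Hypothetical byte range for a chunk of the 2 Mbps stream, which your
  // own container-parsing code would have had to compute.
  const res = await fetch("/videos/multi-bitrate.mp4", {
    headers: { Range: "bytes=1048576-2097151" },
  });
  sb.appendBuffer(await res.arrayBuffer());
});
```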
Some video codecs, like H.264, support the concept of scaling. The idea here is that rather than having multiple encodings, there's just one where additional data enhances the previous video that was sent. There is significant overhead with this mechanism, and even more work you'd have to do. Not only does your code now need to understand the container, but now it has to handle the codec in use as well... and it needs to do it efficiently.
To summarize, is it possible to use one file? Technically, yes. Is there any benefit? None. Is there anything off-the-shelf for this? No.
Edit: I see now your comment that the issue is one of storage space. You should really put that information in your question so you can get a useful answer.
It's common for YouTube and others to transcode things ahead of time. This is particularly useful for videos that get a ton of traffic, as the segments can be stored on the CDN, with nodes closer to the clients. Yes, it's also possible to transcode on-demand as well. You need fast hardware for this.
No.
I can't fathom how this could ever be possible. Do you have an angle in mind?
Clients can either download all or part(s) of a file. But to do this you would have to somehow download only select pixels of each frame. Even if you had knowledge of which byte-ranges of each frame were which pixels, the overhead involved in requesting each byte-range would be greater than the size of the full 1080p video.
What is your aversion to hosting multiple qualities? Is it about storage space, or complexity/time of conversion?

Web Audio Streaming/Seeking (Mobile + Desktop) with Tracking

It's been a long time since I've needed to build a streaming media solution for the web since there are so many good services providing basic needs. Ages ago I had done streaming deployments with Red5 and a frontend player like Flow or JW, but despite a lot of searching I haven't been able to determine what the best modern options are to achieve a simple web solution that has good native compatibility with mobile and desktop. I'm hoping someone can suggest a good open source stack that would be a good fit for what I'm trying to do:
Mobile + Desktop Web Audio Support (with the possibility of some video at some point)
Streaming (not live on-air streaming, but the ability to seek and listen without downloading the whole file, buffering only enough for good quality so as not to waste bandwidth)
Track how much of a stream has been listened to (not the amount downloaded)
Protecting streams (this should be pretty easy, either using headers or URL patterns that expire the link/connection after a certain amount of time, by writing that logic into the server side, but I wanted to mention it anyway)
Lightweight, maybe 10 to 20 concurrent audio streams for now.
My initial thought, since my knowledge of the streaming space is dated, was to do it with RTMP, RTSP or HLS and something like JW Player for the frontend, depending on which protocol seemed to have the best support across the most devices. Or, if the client-side support is good, I could probably skip the media server and go with pseudo-streaming via the web server. On the server side I could handle access control to the streams, and in the front-end player I could use the API to keep track of the number of seconds played and just ping that data in an AJAX request to track how much has actually been listened to (as opposed to downloaded/buffered).
The thing is, I have a feeling that is probably an obsolete way to go about it in 2017. Even though I haven't turned up anything specific yet, I had a feeling there might be a better solution out there utilizing Node and perhaps WebSockets to accomplish the same thing with a server and player coupled more tightly (server aware of playhead/actual amount played vs. front end only), or a solution that leverages newer standards (MPEG-DASH?). Plus, in the past, getting the other streaming protocols to be compatible across all devices was a pain, and IIRC there is no universal HTML5 support for any one of the older protocols across all browsers.
So what are the best current approaches to tackle something like this? Is a media server + a front end like JW or Flow still one of the better ways to go or are there better/easier ways to deploy a solution like this?
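For what it's worth, the played-time tracking described in the question can be done entirely client-side with the media element's played TimeRanges, which cover only what was actually played (not what was buffered). A minimal sketch, with a hypothetical /api/listened endpoint:

```typescript
const audio = document.querySelector("audio")!;

// Sum the ranges the user has actually listened to.
function secondsPlayed(): number {
  let total = 0;
  for (let i = 0; i < audio.played.length; i++) {
    total += audio.played.end(i) - audio.played.start(i);
  }
  return total;
}

// Report periodically instead of on every timeupdate event.
setInterval(() => {
  navigator.sendBeacon(
    "/api/listened",
    JSON.stringify({ track: audio.currentSrc, seconds: secondsPlayed() }),
  );
}, 15000);
```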

Video recording/playback/storage for website

I would like to implement video recording/playback/storage capability for my website. I've done a bit of research: for HTML5 recording there is RecordRTC, which is based on WebRTC, and for playback there's video.js. I want to be able to store the recordings on S3 but I haven't figured out how.
1) Is this the best way to do it without paying for cloud-based commercial services such as Ziggeo, Nimbb and Pipe?
2) Are there any alternatives that I should look into?
3) How does storage work after recording with RecordRTC and uploading to S3? Do I need to do any sort of compression?
Any help would be great! Really appreciate it
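As a point of reference, here is a minimal sketch of the browser-side recording flow using the standard MediaRecorder API (RecordRTC wraps similar machinery). The upload endpoint is hypothetical, and browsers typically emit WebM here, which is one reason server-side transcoding comes up in the answer below:

```typescript
async function recordClip(ms: number): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const stopped = new Promise((resolve) => (recorder.onstop = resolve));
  recorder.start();
  await new Promise((resolve) => setTimeout(resolve, ms));
  recorder.stop();
  await stopped;
  stream.getTracks().forEach((t) => t.stop()); // release camera/mic

  return new Blob(chunks, { type: "video/webm" });
}

// Upload to your own server, which can forward to S3 (e.g. via a pre-signed URL).
recordClip(5000).then((blob) =>
  fetch("/api/upload", { method: "POST", body: blob }),
);
```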
Video recording is the future of all websites in our eyes - and by our I mean here at Ziggeo (full disclosure, I work at Ziggeo :) ).
In regards to recording, there are many ways to do it, and it is up to you to go with a specific one or implement several of them: you could do it through Flash, WebRTC (https://webrtc.org/), or ORTC (https://ortc.org/).
We currently offer recording using WebRTC with a Flash fallback, and we are working on implementing ORTC as well.
Now, as mentioned above, there are many ways to do it and it is up to you; however, it is also up to your end users, since they might not be able to record over Flash due to company policy, or your website might be served over plain HTTP so you cannot use WebRTC, etc.
With your own implementation you need to run the numbers and combine it all together (and work on keeping it up and running), while here at Ziggeo we do that for you and keep improving our SDKs and features.
Furthermore, we also allow you to push the videos to S3 buckets, FTP, YouTube and Facebook - and soon to Dropbox as well.
So if you are like us, you will probably want to go down the do-it-yourself road. If, however, you want to have time to work on your website, apps, and other things and just have the video, I do suggest using some service.
In regards to compression: it is worth mentioning that we transcode all videos that are uploaded to our servers (you can see more here: https://ziggeo.com/features/transcoding). The original video is kept, and next to it the transcoded video (which can have a watermark or some effects, etc., though it does not need to).
In general you want to 'standardize' the uploaded videos, since different browsers will give you different video containers. This gives you the upper hand, making it easier to adjust them later for preview depending on the browser that is used.
To summarize:
1) This depends on what kind of recording/playback and storage you need. If it is professional, then using a service such as Ziggeo will help you focus on the important parts of your service - like website design, functionality and similar - while if it is for fun and play you still have a free plan on Ziggeo, or you could roll up your sleeves and do some coding :)
2) I would personally look into WebRTC and ORTC if I were making the implementation myself, to see which one I would need more (or which would be easier for me to implement). Once you find the one that you like, their forums usually offer suggestions on what works best. (Be prepared, however, to need a Flash implementation at some point as well if it is a business-related setup.)
3) It is best to standardize what you store in terms of resolution, video containers and similar, and it is often good to keep the original videos as well, so that you can always re-encode them if needed (which can happen in the early stages of development).

Can you use HTML 5 to stream video 24/7 on a website?

I've got a website and I've been looking for ways to embed a 24/7 webcast. I've looked at options such as Ustream and Justin.tv; however, these do not work on mobile devices, which is what I really need.
I don't have much knowledge of how streaming works, but I've read that the streaming engine Wowza is another option. I also found that an HTML5 player works cross-platform and on any mobile device as well.
If I were to use Wowza, would it work with an HTML5 player? And am I even going down the right path here? I also have a dedicated home server for streaming, so a cloud wouldn't be required.
I'm very much an amateur, just trying to broadcast my television program on my website for viewing. Any advice would help here. Thanks.
Wowza can packetize video as HTTP Live Streaming (HLS) which, although an Apple invention, works on most HTML5-capable browsers except IE11: http://www.jwplayer.com/html5/hls/. Many players will fall back to Flash for browsers which don't support native HLS or H.264 playback. Flash uses HTTP Dynamic Streaming (HDS) rather than HLS, so you would add that as another packetizer in Wowza. (Wowza calls these packetizers "cupertinostreamingpacketizer" and "sanjosestreamingpacketizer" respectively.)
You would then point your preferred HTML5 video player (jwplayer, flowplayer, etc.) at the URL http://your-wowza-server.com:1935/live/yourstreamname/playlist.m3u8 [1]. For Flash fallback in flowplayer you can use the f4m resolver and the http-streaming plugin, as in the first example here, to access the subtly different URL http://your-wowza-server.com:1935/live/yourstreamname/manifest.f4m. I'm sure something similar applies in players like jwplayer and others.
The main problem with Wowza is how much it costs: for your own server you're looking at around $55 per month per channel [2]. At least during testing, you may find it cheaper to get Wowza on Amazon EC2 devpay: $5/month rental plus an extra couple of cents per hour on your normal EC2 instance costs.
[1] Assuming you're using Wowza's default /live/ application on port 1935
[2] A channel is roughly the number of streams you're sending to the server to be re-broadcast
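As a more modern alternative to the Flash fallback, here is a client-side sketch using hls.js, which plays HLS via Media Source Extensions in browsers without native support; the URL shape assumes Wowza's default /live/ application from footnote [1]:

```typescript
import Hls from "hls.js";

const video = document.querySelector("video")!;
const src =
  "http://your-wowza-server.com:1935/live/yourstreamname/playlist.m3u8";

if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = src; // Safari/iOS play HLS natively
} else if (Hls.isSupported()) {
  const hls = new Hls(); // everywhere else, hls.js feeds MSE
  hls.loadSource(src);
  hls.attachMedia(video);
}
video.play();
```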
We developed a custom HTML5 player which we wanted to make compatible with HLS and fragmented MP4 for live events. We started on Zencoder but realized they were not able to generate fragmented MP4.
I would like to explore the Flash fallback solution and Wowza (probably on AWS) for the packaging.
Would you be available to consult on this project?
We use www.bitcodin.com for event-based or 24/7 live transcoding and streaming. It generates DASH - which can be played back natively in HTML5 using the bitdash MPEG-DASH player - as well as HLS for iOS devices. You can find an example here: http://www.dash-player.com/demo/live-streaming/