I am developing a video chat application using twilio. I would like to check the bitrate of a video stream playing in the browser to study how the bitrate will be affected at different bandwidths. How can I do this?
Twilio developer evangelist here.
You can measure various data about the incoming and outgoing streams using the WebRTC getStats API. There's a really good article that walks through the available stats, which is worth reading to understand them. I could try to write more about it here, but reading the spec and checking out that article will be more accurate and useful to you.
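For bitrate specifically, the gist is to sample the inbound-rtp stats twice and divide the byte delta by the time delta. Here is a minimal sketch against the standard API, assuming you can get at the underlying RTCPeerConnection (the function names are illustrative):

```javascript
// Rough sketch: sample the inbound video stats twice and derive the
// bitrate from the byte/time deltas. `pc` is assumed to be the
// underlying RTCPeerConnection.
async function sampleInboundVideo(pc) {
  const stats = await pc.getStats();
  for (const report of stats.values()) {
    // Older browsers report `mediaType` instead of `kind`.
    if (report.type === 'inbound-rtp' &&
        (report.kind || report.mediaType) === 'video') {
      return { bytes: report.bytesReceived, ts: report.timestamp };
    }
  }
  return null;
}

async function incomingVideoBitrate(pc, intervalMs = 1000) {
  const a = await sampleInboundVideo(pc);
  await new Promise((r) => setTimeout(r, intervalMs));
  const b = await sampleInboundVideo(pc);
  if (!a || !b) return NaN;
  // bytes -> bits, milliseconds -> seconds, giving bits per second
  return ((b.bytes - a.bytes) * 8) / ((b.ts - a.ts) / 1000);
}
```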
Hope this helps.
Many videos actually have a variable bit rate, so you can either get an average by simply dividing the file size by the duration (for example, a 150 MB file that runs 10 minutes averages 150 × 8 / 600 ≈ 2 Mbps), or use a tool like VLC, which will show you the bit rate changing over time (on a Mac it shows the numbers; I believe on Windows it shows a graph).
If you are more interested in the download bandwidth itself, you can use developer tools in Chrome to see the bit rate.
If you open developer tools and go to the network tab you should see a waterfall column.
Hover over the timeline in a row that corresponds to your video download and you can see the details of the request and response, including the time it took. That time, combined with the size shown in the same row, gives you the actual achieved download bit rate: for example, a 5 MB response that took 20 seconds downloaded at 5 × 8 / 20 = 2 Mbps.
For example, you can watch the media segment requests for a YouTube video this way.
I'm trying to add a few videos to my website using HTML5. My videos are all 1080p, but I want to give people the option to watch in a lower quality if needed. Can I do this without having to upload multiple videos (one for each quality) and without the use of a server-side language?
I've been searching extensively for this. I haven't found anyone saying it can't be done, but no one has said it can either. I am using Blogger as my host, which is why I can't use server-side languages.
Thank you.
without the use of a server-side language?
Yes, of course. The client can choose what version of the video to download.
Can I do this without having to upload multiple videos (one for each quality)?
Not practically, no. You need to transcode that video and upload those different versions.
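To illustrate what "the client can choose" looks like in practice, here's a rough sketch of a manual quality picker; the file names are hypothetical, and each encoding still has to be uploaded separately:

```javascript
// Hypothetical file names: three separately uploaded encodings of one video.
const sources = {
  '1080p': 'movie-1080p.mp4',
  '720p':  'movie-720p.mp4',
  '480p':  'movie-480p.mp4',
};

function setQuality(video, label) {
  const t = video.currentTime;      // remember the playback position
  const wasPlaying = !video.paused;
  video.src = sources[label];       // swap in the chosen encoding
  video.load();
  video.addEventListener('loadedmetadata', () => {
    video.currentTime = t;          // resume roughly where we left off
    if (wasPlaying) video.play();
  }, { once: true });
}

// e.g. setQuality(document.querySelector('video'), '480p');
```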
I haven't found anyone saying it can't be done
A couple of things to consider... first, a video file can contain many streams. I don't know what your aversion to multiple files is, but yes, it is possible to have several bitrates of video in a single container. A single MP4, for example, could easily contain a 768 kbps video, a 2 Mbps video, and an 8 Mbps video, while having a single 256 kbps audio track.
To play such a file, a client (implemented with Media Source Extensions and the Fetch API) would need to know how to parse the container and make ranged requests for specific chunks out of the file. To my knowledge, no such client exists as there's little point to it when you can simply use DASH and/or HLS. The browser certainly doesn't do this work for you.
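To give a sense of the work such a client would have to do, here's the bare MSE-plus-Fetch skeleton; the codec string and byte range below are made up, since in reality you'd have to parse the container to learn them:

```javascript
// Skeleton only: a real client would have to parse the MP4 itself to learn
// the codec parameters and the byte ranges of the sub-stream it wants.
// The codec string and range below are made up.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sb = mediaSource.addSourceBuffer(
    'video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  // Fetch just the chunk belonging to the chosen bitrate.
  const res = await fetch('movie.mp4',
    { headers: { Range: 'bytes=0-1048575' } });
  sb.appendBuffer(await res.arrayBuffer());
});
```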
Some video codecs support scalable coding; H.264, for example, has a Scalable Video Coding (SVC) extension. The idea is that rather than having multiple independent encodings, there's a single base layer plus additional data that enhances it. There is significant overhead with this mechanism, and even more work you'd have to do: not only does your code need to understand the container, it now has to handle the codec in use as well... and it needs to do so efficiently.
To summarize, is it possible to use one file? Technically, yes. Is there any benefit? None. Is there anything off-the-shelf for this? No.
Edit: I see now your comment that the issue is one of storage space. You should really put that information in your question so you can get a useful answer.
It's common for YouTube and others to transcode things ahead of time. This is particularly useful for videos that get a ton of traffic, as the segments can be stored on the CDN, with nodes closer to the clients. It's also possible to transcode on demand, but you need fast hardware for that.
No.
I can't fathom how this could ever be possible. Do you have an angle in mind?
Clients can download either all of a file or parts of it. But to do what you're describing, you would have to somehow download only select pixels of each frame. Even if you knew which byte ranges of each frame corresponded to which pixels, the overhead involved in requesting each byte range would be greater than the size of the full 1080p video.
What is your aversion to hosting multiple qualities? Is it about storage space, or complexity/time of conversion?
I've seen that, using the HTML5 API, it's possible to record and upload video content straight from the browser. The issue in a project I'm currently working on is that the recordings can be very long/big, and I'd like to mitigate upload time for the user.
Ideally the video would be uploaded in one of two ways:
As it's being recorded (streaming upload).
For worse network connections, upload the video in smaller chunks (so store locally and then upload a chunk every 5 minutes, let's say).
Does anyone have any guidance on whether these could practically work with the current level of HTML5 functionality and, if so, whether there are any good resources on the subject?
WebRTC-based MediaStream Recording (http://www.w3.org/TR/mediastream-recording/) sounds like what you are looking for, as Robert suggests in the comments.
There is a JavaScript library available on GitHub that looks like it should meet your needs:
https://github.com/streamproc/MediaStreamRecorder
In particular they note:
MediaStreamRecorder is useful in scenarios where you're planning to submit/upload recorded blobs in realtime to the server! You can get blobs after specific time-intervals.
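For reference, the interval-chunk pattern the library describes can be sketched with the standard MediaRecorder API (which grew out of that spec); the /upload endpoint here is an assumption:

```javascript
// Record the webcam and upload a chunk every 5 minutes.
// The '/upload' endpoint is an assumption; adapt it to your server.
async function recordAndUpload() {
  const stream = await navigator.mediaDevices.getUserMedia(
    { video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      fetch('/upload', { method: 'POST', body: event.data });
    }
  };

  // The timeslice argument makes ondataavailable fire every 5 minutes.
  recorder.start(5 * 60 * 1000);
}
```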
I'm developing a Chrome Packaged App with video playback feature.
First of all, I want to allow the user to stream online media (e.g. an MP4 video) and, at the same time, save the video to a location chosen by the user. Is there a way to achieve this?
Also, I want to save the locations of media played by the user and allow the user to play them later without locating them again. Does anyone have ideas on that?
Thank you guys very much.
You should be able to do what you want. Your best bet currently is the chrome.fileSystem API, which lets you save files to a location chosen by the user. You can also use retainEntry and restoreEntry to play the files back in later sessions; however, I don't believe that is available in the stable channel yet (it is currently restricted to the dev channel, but should be available for general use in version 31).
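A rough sketch of that retain/restore flow (the storage key savedVideoId is illustrative):

```javascript
// Let the user pick a save location, then retain the entry for later
// sessions. `savedVideoId` is an illustrative storage key.
chrome.fileSystem.chooseEntry({ type: 'saveFile' }, (entry) => {
  const id = chrome.fileSystem.retainEntry(entry);
  chrome.storage.local.set({ savedVideoId: id });
  // ... write the downloaded video data to `entry` here ...
});

// In a later session, restore the entry without prompting the user again.
chrome.storage.local.get('savedVideoId', (items) => {
  chrome.fileSystem.restoreEntry(items.savedVideoId, (entry) => {
    // `entry` can now be read and played back.
  });
});
```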
Also check out the chrome.mediaGalleries API. It is designed to provide access to media; however, it doesn't yet provide the write capabilities you need.
Streaming can be done using the HTML5 video tag.
Please check:
http://html5doctor.com/the-video-element/
Also, you can use player libraries like:
http://www.videojs.com/
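For completeness, a minimal video element looks like this (the URL is a placeholder):

```html
<!-- A minimal example; the src URL is a placeholder. -->
<video src="https://example.com/movie.mp4" controls width="640">
  Your browser does not support the video element.
</video>
```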
I'm currently stuck with this problem and I hope somebody can help me out. I'm trying to create some sort of decoder that will convert a video stream into a video input device so I can use it in Wirecast (a video streaming program).
At this stage I use MJPEG IP cameras as video sources via a neat little program that converts a raw IP address:port into an input device. This works perfectly with unlimited cameras, but it does not support RTSP or H.264, and I have since upgraded a few cameras so I can get access to HD video.
I have tried a number of RTSP source filters from all over the net, and some programs like xpwebcam to get access to their H.264 filter, but no luck as yet. I have tried to create my own filter using GraphStudio, but it is beyond my understanding.
The IP cameras' video feed URL looks like this:
Video feed:
rtsp://xxx.xxx.xxx.xxx/0/videoX
where X = 0, 1, 2 selects the resolution. With credentials:
rtsp://user:pass@10.0.0.10/0/video0
or rtsp://10.0.0.10/0/video0 for non-protected cameras; it's a private network, so it does not matter. Whatever works.
I can successfully stream the video feed live using VLC, but not much else; I'm not sure if there's a way to turn a stream into an input device.
I have been trying to do this for weeks now but had very little luck in getting it to work.
Please help me :)
As a professional photographer with many years in the field, I found this question rather interesting. The answer you are looking for can be found at this site:
http://alax.info/blog/1416
The site lists the update you need for your equipment.
If you have no source filter, can't you simply read from the source and write to a file, and have your other program read from that file simultaneously? I have used this trick many times on Unix; I can't see why it wouldn't work here.
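For example, something like FFmpeg can do the read-and-dump part without re-encoding; a sketch using the camera URL from the question (the flags may need tuning for your cameras):

```
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@10.0.0.10/0/video0" -c copy dump.ts
```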
I have an interesting project wherein I need to allow users to capture video of themselves with a webcam at a kiosk, after which I email them a link to their video. The trick is that the resulting video needs to be a 'slow motion' version of the captured video. So, for example, if someone creates a 2-minute movie, the resulting movie will be 4 minutes.
I'd like to build this in Flex / AS3 if possible. I don't have issues capturing the video and storing it / generating and emailing a link, but slowing down the video is the real mind bender. I'm unsure how to approach 'batch post-processing' a set of videos using Adobe tools.
Has anyone had a project similar to this or have suggestions on routes to take in order to do this?
Thanks!
-Josh
This is absolutely feasible from the client side, contrary to what some may believe. :)
http://code.google.com/p/flvrecorder/
Just adjust the capture rate, which shouldn't be too difficult since all the source is there.
Alternatively, you could write an AIR app that, after writing the file, launches Adobe Media Encoder with a preset that includes FTP info, etc. Or you can just use the socket class to connect and upload over FTP.
http://code.google.com/p/fl-ftp/
It is not feasible to do this client-side.
Capture the video and send it to the server.
Use a library like FFmpeg to do your conversions.
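For the slow-motion step itself, here's a sketch of what that FFmpeg conversion might look like (file names are placeholders): setpts stretches the video to twice its duration, and atempo=0.5 slows the audio to match without changing its pitch.

```
ffmpeg -i input.flv -filter:v "setpts=2.0*PTS" -filter:a "atempo=0.5" output.mp4
```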