Is it possible to pass raw video frames TO a browser?

Is it possible to pipe raw video frames to a browser/website? For instance, the decoding could be done locally in GStreamer, and the result could then be forwarded somehow to a browser.
EDIT:
I realize that my description was a bit shaky. The use case I would like to have is to send encoded video to someone, decode it on their computer, do some advanced filtering that cannot be done in the browser, and then pipe the frames to the browser. Obviously, re-encoding it would just be a waste of time and energy.
All I can find is people saying that video frames can be grabbed FROM a browser; no one seems to be interested in SENDING them TO a browser. The horrible option would be to use WebRTC, re-encode the frames into VP8, and send that to the browser.
So my final question is whether it is possible to write to the rendering pipeline of a browser. I know next to nothing about web programming; I usually just deal with images and video.
Thank you for your support :)
PS: Forgive my lack of knowledge: is it possible to have a client on someone's computer writing to a local TCP port, and to access that TCP port from a website in the browser (potentially asking the user to allow the connection)?

Yes, this is possible. Since you're running a local GStreamer pipeline, you might look into this project: https://github.com/Samsung/ChromiumGStreamerBackend Basically, they're using GStreamer as the native renderer in-browser.
Aside from that, you can create a browser extension which executes a native application and gets data from GStreamer to shuttle to your page. https://developer.chrome.com/extensions/nativeMessaging
If you don't want to make an extension, you can instead create a small WebSocket server.
Either way, you can write the raw pixel data to a Canvas... no need to re-encode/decode the video. https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
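For example, here is a minimal sketch of the WebSocket-to-canvas route, assuming your local client sends one raw RGBA frame per binary message at a known resolution (the port and dimensions below are made up):

```javascript
// Minimal sketch: draw raw RGBA frames arriving over a local WebSocket.
// Assumes the native client sends one frame per binary message, RGBA8888,
// at a known resolution. Port and dimensions are placeholders.
const WIDTH = 640, HEIGHT = 360;

const canvas = document.querySelector('canvas');
canvas.width = WIDTH;
canvas.height = HEIGHT;
const ctx = canvas.getContext('2d');

const ws = new WebSocket('ws://127.0.0.1:9999'); // local frame source
ws.binaryType = 'arraybuffer';

ws.onmessage = (event) => {
  // Each message is WIDTH * HEIGHT * 4 bytes of raw pixel data.
  const pixels = new Uint8ClampedArray(event.data);
  ctx.putImageData(new ImageData(pixels, WIDTH, HEIGHT), 0, 0);
};
```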

Related

Basic architecture to serve, stream and consume large audio files to minimize client-side resource consumption and latency

I am trying to build a web application which will need to have audio streaming functionality implemented in some way. Just to give you guys some context: it is designed to be a purely auditory experience/game/idkhowtocallit with lots of different sound assets varying in length and thus file size. The sound assets to be provided will consist of ambient sounds, spoken bits of conversation, but also long music sets (up to a couple of hours). The reason I think I won't be able to just host these audio files on some server or CDN and serve them from there is that the sound assets will need to be fetched and played dynamically (depending on user interaction) and as instantly as possible.
Most importantly, consuming larger files (like the music sets and long ambient loops) as a whole doesn't seem to be client-friendly at all to me (data consumption on mobile networks and client-side memory usage).
Also, without any buffering or streaming mechanism, the client won't be able to start playing these files before they are downloaded completely, right? Which would add the issue of high latencies.
I've tried to do some online research on how to properly implement a good infrastructure to stream bigger audio files to clients on the server side and found HLS and MPEG-DASH. I have some experience with consuming HLS streams with web players, and if I understand it correctly, I would use some sort of one-time transformation process (on or after file upload) to split up the files into chunks and create the playlist, and then just serve these files via HTTP. From what I understand, the process should be more or less the same for MPEG-DASH. My issue with these two techniques is that I couldn't really find any documentation on how to implement JavaScript/TypeScript clients (particularly using the Web Audio API) without reinventing the wheel. My best guess would be to use something like hls.js, bind the HLS streams to freshly created audio elements, and use these elements to create AudioSources in my Web Audio graph. How far off am I? I'm trying to get at least an idea of a best practice.
To sum up what I would really appreciate to get some clarity about:
Would HLS or MPEG-DASH really be the way to go or am I missing a more basic chunked file streaming mechanism with good libraries?
How - theoretically - would I go about limiting the amount of chunks downloaded in advance on the client side to save client-side resources, which is one of my biggest concerns?
I was looking into hosting services as well, but figured that most of them are specialized in hosting podcasts (fewer but very large files). Does anyone have an opinion about whether I could use these services to host and stream possibly 1000s of files with sizes ranging from very small to rather large?
Thank you so much in advance to everyone who will be bothered with helping me out. Really appreciate it.
The reason I think I won't be able to just host these audio files on some server or CDN and serve them from there is that the sound assets will need to be fetched and played dynamically (depending on user interaction) and as instantly as possible.
Your long-running ambient sounds can stream using a normal HTMLAudioElement. When you play them, there may be a little lag before they start, since they have to begin streaming, but note that the browser will generally prefetch the metadata and maybe even the beginning of the media data.
For short sounds where latency is critical (like one-shot user interaction sound effects), load those into buffers with the Web Audio API for playback. You won't be able to stream them, but they'll play as instantly as you can get.
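As a rough sketch of both playback paths (the file URLs are placeholders):

```javascript
// Sketch of both playback paths described above.

// Long-running content: let the browser stream it via a media element.
const music = new Audio('/media/ambient-set.opus');
music.preload = 'none'; // fetch nothing until playback is requested
// music.play();        // call this from a user-interaction handler

// Latency-critical one-shots: decode fully into memory up front.
const audioCtx = new AudioContext();

async function loadEffect(url) {
  const response = await fetch(url);
  return audioCtx.decodeAudioData(await response.arrayBuffer());
}

function playEffect(buffer) {
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(); // plays essentially instantly, no network involved
}
```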
Most importantly, consuming larger files (like the music sets and long ambient loops) as a whole doesn't seem to be client-friendly at all to me (data consumption on mobile networks and client-side memory usage).
If you want to play the audio, you naturally have to download that audio. You can't play something you haven't loaded in some way. If you use an audio element, you won't be downloading much more than what is being played. And, that downloading is mostly going to occur on-demand.
Also, without any buffering or streaming mechanism, the client won't be able to start playing these files before they are downloaded completely, right? Which would add the issue of high latencies.
If you use an audio element, the browser takes care of all the buffering and what not for you. You don't have to worry about it.
I've tried to do some online research on how to properly implement a good infrastructure to stream bigger audio files to clients on the server side and found HLS and MPEG-DASH.
If you're only streaming a single bitrate (which for audio is usually fine) and you're not streaming live content, then there's no point to HLS or DASH here.
Would HLS or MPEG-DASH really be the way to go or am I missing a more basic chunked file streaming mechanism with good libraries?
The browser will make ranged HTTP requests to get the data it needs out of the regular static media file. You don't need to do anything special to stream it. Just make sure your server is configured to handle ranged requests... most any should be able to do this right out of the box.
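If you want to sanity-check your server, a partial request should come back as 206 Partial Content. A quick sketch (the URL is a placeholder):

```javascript
// Sketch: request the first two bytes; a server that supports ranged
// requests answers 206 Partial Content with a Content-Range header.
async function supportsRanges(url) {
  const response = await fetch(url, { headers: { Range: 'bytes=0-1' } });
  return response.status === 206 && response.headers.has('content-range');
}

supportsRanges('/media/ambient-set.opus')
  .then(ok => console.log(ok ? 'ranged requests OK' : 'Range header ignored'));
```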
How - theoretically - would I go about limiting the amount of chunks downloaded in advance on the client side to save client-side resources, which is one of my biggest concerns?
The browser does this for you if you use an audio element. Additionally, data saving settings and the detected connectivity speed may impact whether or not the browser pre-fetches. The point is, you don't have to worry about this. You'll only be using what you need.
Just make sure you're compressing your media as efficiently as you can for the required audio quality. Use a good codec like Opus or AAC.
I was looking into hosting services as well, but figured that most of them are specialized in hosting podcasts (fewer but very large files). Does anyone have an opinion about whether I could use these services to host and stream possibly 1000s of files with sizes ranging from very small to rather large?
Most any regular HTTP CDN will work just fine.
One final note for you... beware of iOS and Safari. Thanks to Apple's restrictive policies, all browsers under iOS are effectively Safari. Safari is incapable of playing more than one audio element at a time. If you use the Web Audio API, you have more flexibility, but the Web Audio API has no real provision for streaming. You can use a media element source node, but this breaks lock screen metadata and outright doesn't work on some older versions of iOS. TL;DR: Safari is all but useless for audio on the web, and Apple's business practices have broken any alternatives.
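For reference, the media element source node pattern mentioned above looks roughly like this (the URL is a placeholder):

```javascript
// Sketch: route a streaming <audio> element through the Web Audio graph.
// Once connected this way, the element's output goes through the graph,
// which is exactly the part that misbehaves on older iOS.
const audioCtx = new AudioContext();
const element = new Audio('/media/music-set.opus'); // placeholder URL

const source = audioCtx.createMediaElementSource(element);
const gain = audioCtx.createGain();
gain.gain.value = 0.8;

source.connect(gain).connect(audioCtx.destination);
element.play(); // must usually follow a user gesture
```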

Playing a growing MP3 file on a web page

need your advice.
I have a web service which generates MP3 files out of WAVs.
The process of conversion takes time, and I want the visitor to be able to start listening right away, while the conversion is still going on.
Having tried the HTML5 <audio> tag, I found that I can only play the part of the MP3 file which was ready at the moment the file was fetched. I mean, it doesn't seem to care that the file might have grown since it was fetched.
What is the right way to approach this situation?
Thanks in advance for any info.
I believe that you can use jPlayer to play them. One of its features is that it does not preload.
EDIT: The HTML5 audio element also has the preload attribute, which can have the following values:
"none": will not prefetch anything
"metadata": will prefetch only basic information, like duration and sample rate
"auto": may prefetch the entire MP3
You need to get a bit more control over the serving process, rather than just leaving it up to your web server.
When your web server responds to an HTTP request, it includes a Content-Length: header that tells the client how big the requested resource is, in bytes. Your web server will only send up to the length available at the time of the request, because it doesn't know the file is about to be appended to. The client will download all of that data, but from the client's perspective it will have downloaded the entire file, when really the file wasn't even done being encoded yet.
To work around this issue, you need to pipe the output of your encoder both to a file and to your client at the same time. For the response data to the client, do not include a Content-Length: header at all. Most clients will work with chunked encoding, allowing you to be compliant with HTTP/1.1. Some clients (early Android, old browsers, old VLC) cannot handle chunked encoding; for those, omit the length entirely and they will just stream the data as it comes in.
How you do this specifically depends entirely on what you're using server-side, which you didn't specify in your question. Personally, I use Node.js for this, as it makes the process very easy: you can simply pipe to both streams. Be aware that if you use the multiple-pipe method, the pipes only run as fast as the slowest one. Some streaming clients (such as VLC) will lower the TCP window size so that too much data doesn't have to be buffered client-side. When this occurs, your writes to disk will run at the speed of the client. This may not matter to you, but it is something to be aware of.
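As an illustration only, here is a minimal Node.js sketch of the dual-pipe idea; the encoder command line is hypothetical, and real code would need error handling and per-request file paths:

```javascript
// Node.js sketch: pipe a hypothetical encoder's stdout both to a file on
// disk and to the HTTP response. No Content-Length is set, so Node falls
// back to chunked transfer encoding and the client keeps reading as the
// file grows.
const http = require('http');
const fs = require('fs');
const { spawn } = require('child_process');

http.createServer((req, res) => {
  // Hypothetical command line: encode a WAV to MP3, writing to stdout.
  const encoder = spawn('lame', ['input.wav', '-']);

  res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
  encoder.stdout.pipe(fs.createWriteStream('output.mp3')); // archived copy
  encoder.stdout.pipe(res);                                // live listener
}).listen(8080);
```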

Sending HD movie frames through sockets to Flash

I was wondering if someone has ever done something like this. I have an HD movie (or even a 720p one) and I want to send it to a Flash client. I was thinking of using OpenCV in C++ for the decoding and sending part. I have even implemented some of this, but have problems with wrong packet sizes.
But my question is different: has anyone done anything similar to this? Could this give a chance for performance improvement? I have strong doubts about this, because I think the sending and decoding will still be difficult for the Flash machine. Looking forward to hearing some opinions from more experienced people.
Not a real answer, more like thoughts about your problem:
Yes, you must encode HD images; sending 25 fps x 1.5 MB over the net is a no-go.
GStreamer was built for exactly that purpose. Complicated, maybe, but look at it anyway!
Why write a program when VLC can do all of this already? (even headless/scripted!)
If there's audio to stream too, forget OpenCV. It's a computer-vision lib, not built for your problem there.
There are essentially two network protocols that are commonly used to send video from a server to a flash client, HTTP, and RTMP.
HTTP is a well-known standard, easily implemented because it is a plain-text protocol, which allows Flash Player to play on-demand video files or do what is called pseudo-streaming.
RTMP is a proprietary protocol created by Adobe that allows real-time streaming as well as video on demand, and can also transport structured binary data (the AMF format) to act as a remote procedure call protocol.
Although now documented, it is much more complicated to implement than HTTP, but there is an open-source library that implements this protocol, librtmp, found at http://rtmpdump.mplayerhq.hu/.
Please note that I have used librtmp with success, on the client side, to have a C program act as a Flash client to publish video on a FMS server. I have no experience of using it on the server side, I don't even know if it's possible at all.
In your case I certainly recommend using HTTP.
Now there is another problem to overcome, it is the fact that for video frames to be properly recognized, they must be embedded in a container that the Flash player can read.
Flash currently supports two container formats, FLV and F4V, the latter being a subset of the MPEG-4 container format.
Also, the video stream must be readable by Flash, and so it must be properly encoded into a format supported on the client-side, for example H.264, Sorensen, or VP6.
It is possible to directly send GIF, JPEG or PNG images as frames, as seen on page 8 of the official Flash Video Specification, but you must realize that at HD resolution this will be extremely inefficient; just imagine that at 25 FPS, a single 1920x1080 JPEG image is much bigger than the equivalent H.264 frame.
So, in the end, my advice is: do not decode the video on the server, make sure it is in a format compatible with Flash, and use a well-documented protocol to send it as-is.

Prevent stealing HTML5 video in the browser?

I'm looking for a way to securely deliver video to mobile devices. There are two options:
HLS in a <video> tag. This works very nicely on iOS and supports adaptive bitrate, perfect for mobile. However, it seems to only work well on iOS. There seems to be only fragmented support for it on Android. I've read that Android has officially supported it since 3.0, but on all the Android devices I've tested (>3.0), HLS hasn't played back in the browser.
Progressive download in a <video> tag. This will work fine on iOS and Android devices, but the concern is that since it's just a progressive download of the video, the user could find a way to just grab that video once the browser has finished downloading it. This may be more difficult on iOS, but I'm sure it's not that hard to figure out where the browser stored the video download, in a tmp folder somewhere.
Either method, I'd say, can be protected from deeplinking by using an expiring-token approach, where the token is generated server-side with a secret key that only the content server knows about. The video request would only be valid for 5 or 10 minutes, which would kill off deeplinking.
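For what it's worth, a minimal sketch of such an expiring-token scheme in Node.js (the secret and the query parameter names are illustrative):

```javascript
// Node.js sketch of an expiring, HMAC-signed video URL. The content server
// shares the secret and re-checks the signature before serving the file.
const crypto = require('crypto');
const SECRET = 'known-only-to-the-content-server';

function signUrl(path, ttlSeconds = 600) {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = crypto.createHmac('sha256', SECRET)
                    .update(`${path}:${expires}`)
                    .digest('hex');
  return `${path}?expires=${expires}&sig=${sig}`;
}

function verify(path, expires, sig) {
  if (Math.floor(Date.now() / 1000) > Number(expires)) return false; // expired
  const expected = crypto.createHmac('sha256', SECRET)
                         .update(`${path}:${expires}`)
                         .digest('hex');
  return sig.length === expected.length &&
         crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```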
Is anyone aware of any way around these issues? Even if I was able to prevent deeplinking, the user could still get the video itself and re-distribute. Perhaps it's just not possible?
Thanks
Rule #1 of the internet:
If you don't want someone stealing it, don't put it online.
Welcome to the circumvention arms race. Brought to you by DownloadHelper.
There's nothing you can do to stop someone who really wants to pirate your video. There are various measures, like those you mention, that make it more difficult, but someone who really wants to copy it could find a way to capture it from memory, or even just point a camera at the screen and record the playback of the video.
It's the same way you protect your car. You install a steering lock, an alarm and an engine immobiliser, and then someone comes along, pulls the car onto a flat-bed truck and drives away with it.
Bottom line - you can't stop a determined thief, but you can make theft more difficult so that you're not the most attractive target.
As I was reading the above, I realized I could get past all these techniques pretty quickly.
For a project I can't describe too much because of an NDA, we created our own protocol based on a well-known encryption method (can't mention that either; military grade), encoded packets on the server into the protocol, and decoded them on the device.
Unfortunately this isn't perfect either, because a lot of mobile apps can be reverse-engineered, and once you get the key it's game over (very easy on Android). Of course, you could periodically recycle the key, in which case even if they decompiled the Android app and got the key, it wouldn't work for very long.
This is a lot of work and can't be implemented with HTML5 or HLS or even RTSP.
It also requires a custom server application that takes the video stream and re-transmits it with the custom protocol.
On the other hand, the protocol was transport agnostic, which meant we could use a variety of transports: TCP, IAP and Bluetooth. It would also work on all mobile/desktop platforms.
The other little requirement is that it couldn't use a browser; it has to be a custom app.

How do I produce a screenshot of a flash swf on the server?

I'm writing a flash app using the open source tools. I would like to load a data file in to the app and capture a screenshot of the stage on the server. The only part that seems mysterious is running the app on the server. In fact, I don't even care if it's the same app running on the server and in the browser--if I can use the flash stage and drawing routines to produce an image server-side, I'm happy. If I have to delve in to flex, fine. Right now I'm having problems finding any starting point at all.
I gather Adobe has some commercial products that may fit the bill, but I'd like to stick with open source, apache, and linux. I know this is probably possible with haxe/neko, but I'd like to use more mainstream tools if possible. Am I asking too much?
EDIT/CLARIFICATION: Many thanks for the responses so far, but I think I've been a bit muddy in my description. I've already written the actual stage-grabbing stuff using the same PNGEncoder class as was suggested. The problem is in actually running the SWF on the server side. I don't want to let the client take the screenshot itself, because this opens up the possibility of the client maliciously submitting a screenshot which does not correspond to what is on the stage; that is, I don't want users uploading porn. If I could run the ActionScript code on the server, then I could generate the screenshot from my data files and be sure that the screenshot matches the data, but I have no idea how to run the ActionScript or SWF on the server.
SWFs run on a client computer, not on the server. The only way one would run on the server would be if you set up a special environment on your server so that it ran a web browser, opened up the page, and ran the SWF. But even then, it would have no correlation to what an external user was doing.
You'll need to run it client-side. As far as your security concerns go, the best way to address them is to have the PHP that writes the actual image only accept an encrypted form of the image file, which the Flash can produce. That way they can't simply use the PHP file to upload whatever image they want, unless they happened to encrypt it exactly the same way your SWF did. Next, encrypt the SWF itself (I recommend SWF Shield) so that a potential hacker cannot read the code to learn how to encrypt the image.
We just completed a similar project where we rendered JPGs from SWFs that loaded dynamic data; we used IECapt.
Did you try the ActionScript print commands?
Try and look at this:
http://www.phpclasses.org/browse/package/4312.html
I know this question is long dead, but I had a similar problem and ended up writing a script using AppleScript + UI scripting to grab the inside area of the preview window of the standalone Flash Player on OS X. You can grab it off GitHub here.
How about swfdec-thumbnailer from the swfdec-gnome package? It's used to create thumbnails of SWF files but can accept arbitrarily large resolutions with the -s argument.
EDIT: swfdec-gnome has been deprecated in Ubuntu 10.10 in favour of Gnash. Here is a guide on taking screenshots with Gnash (note that certain features, like gradients, are not yet properly supported).