Is there any way to buffer video in a Windows Phone 8 app?
I want to create an app that buffers the last 30 seconds or so of video, so that when the user taps the screen they get a video file containing the 30 seconds recorded just before the tap.
I've looked at both the .NET CaptureSource API and the WP8-only AudioVideoCaptureDevice; both look like they record directly to a file in IsolatedStorage:
For CaptureSource you use a FileSink object to write an MP4 file of your recorded video.
For AudioVideoCaptureDevice you can write to a RandomAccessStream. WP8 doesn't have InMemoryRandomAccessStream though, so the only way I see to get a RandomAccessStream is to open one from a storage file.
For CaptureSource you could write your own VideoSink class to buffer your video and use that instead of FileSink, but then you would be stuck working with the raw video data, and you'd have to write your own encoder to get it into a format like MP4 (a sketch of such a sink follows below).
Is there anything I'm missing, or is buffering video just not possible on WP8 unless you write your own encoder?
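If you did go the custom-VideoSink route, a minimal sketch might look like the following. RingBufferVideoSink and Snapshot are invented names; note it buffers uncompressed frames, so memory use is heavy, and you would still need your own encoder to get an MP4 out of the buffer:

// Assumes: using System.Collections.Generic; using System.Windows.Media;
public class RingBufferVideoSink : VideoSink
{
    private readonly Queue<byte[]> frames = new Queue<byte[]>();
    private int maxFrames = 1;

    protected override void OnCaptureStarted() { }
    protected override void OnCaptureStopped() { }

    protected override void OnFormatChange(VideoFormat videoFormat)
    {
        // Keep roughly 30 seconds' worth of frames at the reported frame rate.
        maxFrames = (int)(videoFormat.FramesPerSecond * 30);
    }

    protected override void OnSample(long sampleTimeInHundredNanoseconds,
                                     long frameDurationInHundredNanoseconds,
                                     byte[] sampleData)
    {
        if (frames.Count >= maxFrames)
            frames.Dequeue();                        // drop the oldest frame
        frames.Enqueue((byte[])sampleData.Clone());  // buffer a copy of this one
    }

    // Grab the buffered raw frames when the user taps the screen.
    public byte[][] Snapshot() { return frames.ToArray(); }
}
// Usage: attach it with sink.CaptureSource = yourCaptureSource;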
I'm not sure you can do this... for various reasons. Maybe you could cache video in memory by making your own implementation of IRandomAccessStream, but... as you noted, you'd be dealing with raw video in the first instance, and depending on the resolution, 30 seconds of raw video and audio can weigh more than the total memory allowed for the application (640x480 NV12 frames at 30 fps are already about 14 MB per second, so roughly 400 MB for 30 seconds), so your app could be closed by the system.
I don't know if you could use a MediaElement to play the video without showing it to the user, and when the user clicks play, rewind to the start position and show it to the user, since the OS automatically caches streamed videos. (This is just a wild idea... I haven't tested it in any way.)
Sorry for not being more useful :(
Related
I'm looking for some suggestions or pointers on where to look or how to get started with a project for Windows Phone 8.1. The idea is pretty simple in my mind: I want to constantly record video to a memory stream, keeping only, say, the last five seconds; then an event will trigger saving the video stream to a file on the phone.
I was originally thinking I could save raw frames to a ring buffer and define its size based on the raw frame size * sample rate. Now I realize that might not work, because the video provided by the MediaCapture class will be encoded. Digging around on Stack Overflow, I came across the idea of using MFTs, but that sounds a lot more complicated than what I originally had in mind.
Looking around the development reference material on MSDN, I'm guessing the MediaCapture class will be my friend. Can I somehow define a fixed-size stream for use with MediaCapture.StartRecordToStreamAsync, and then on my event connect it to MediaCapture.StartRecordToStorageFileAsync? Or is there a more appropriate way to do this that I should investigate?
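For what it's worth, here is a rough sketch of the record-to-memory half of that idea (WinRT 8.1 APIs; mediaCapture is assumed to be an already-initialized MediaCapture, and the code must live in an async method). Note this keeps the whole session in RAM rather than acting as a true five-second ring buffer; trimming to just the last five seconds is the part that would still need MFT-style work:

// Start recording into RAM instead of a file.
var buffer = new InMemoryRandomAccessStream();
var profile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto);
await mediaCapture.StartRecordToStreamAsync(profile, buffer);

// ... later, when the trigger event fires, stop and flush to a file:
await mediaCapture.StopRecordAsync();
var file = await ApplicationData.Current.LocalFolder.CreateFileAsync(
    "clip.mp4", CreationCollisionOption.GenerateUniqueName);
using (var fileStream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    await RandomAccessStream.CopyAndCloseAsync(
        buffer.GetInputStreamAt(0), fileStream.GetOutputStreamAt(0));
}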
I am trying to develop a test-taking website for students. In this website, students should be able to answer the questions (displayed in text format) using a webcam, in one go. Currently I have implemented this feature using Flash: it captures the frames and simultaneously sends them to the server. The problem with this technique is that the quality (FPS) of my video is restricted and depends on the bandwidth of the internet connection. Also, I am not in favor of using Flash.
I want that, as soon as the student clicks the start button, a timer starts and the video begins recording. The video should get saved on the client's machine (without asking the client for a path), and on completion it should automatically get uploaded to the server; when the upload completes, the video should be automatically deleted from the client's machine.
In short, can anyone give me a starting point so I can proceed with the work? Any help will be highly appreciated. Thanks!
Here is a good example of how to get the webcam working with HTML5:
http://blog.teamtreehouse.com/accessing-the-device-camera-with-getusermedia
It doesn't cover uploading the video to the server, though.
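The core of what that post shows fits in a few lines; here it is in the modern, unprefixed form (the 'preview' element id is made up):

// Ask for camera + microphone and show a live preview in a <video> element.
var video = document.getElementById('preview'); // hypothetical element id
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) {
        video.srcObject = stream;
        video.play();
    })
    .catch(function (err) {
        console.error('Camera access denied or unavailable:', err);
    });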
Currently I have implemented this feature using Flash: it captures the frames and simultaneously sends them to the server. The problem with this technique is that the quality (FPS) of my video is restricted and depends on the bandwidth of the internet connection.
That is actually incorrect.
The fps you're getting depends 100% on:
the webcam quality
the light available in the room (the more light the better)
the resolution you're recording at (lower resolutions result in higher fps, even with low-quality webcams in low light)
The video should get saved on the client's machine (without asking the client for a path), and on completion it should automatically get uploaded to the server; when the upload completes, the video should be automatically deleted from the client's machine.
Flash records by streaming (through RTMP) the audio/video data to a media server (Red5, AMS, Wowza). After the recording is stopped, you could move the file to a web server and trigger an HTTP download.
As far as HTML goes, the MediaStream Recording API has been implemented by Firefox and by Chrome since version 49; it allows you to record to local RAM and download the file as .webm (the audio/video codecs might differ between browsers).
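A minimal sketch of that recording path, with the upload bolted on (the '/upload' endpoint and the 10-second cutoff are placeholders for your own setup):

// Record the camera stream to RAM, then POST the result to a server.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) {
        var chunks = [];
        var recorder = new MediaRecorder(stream);
        recorder.ondataavailable = function (e) { chunks.push(e.data); };
        recorder.onstop = function () {
            var blob = new Blob(chunks, { type: 'video/webm' });
            var form = new FormData();
            form.append('video', blob, 'answer.webm');
            fetch('/upload', { method: 'POST', body: form }); // placeholder endpoint
        };
        recorder.start();
        setTimeout(function () { recorder.stop(); }, 10000); // stop after 10 s, say
    });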
Disclaimer: I work at Pipe, which handles video recording.
I'm currently working on a dynamic MP3 player in AS3. The player will also support continuous (in length) radio streams.
Because my player will include a seek bar, I allow the user to seek through the Sound object's data. Now, with a continuous stream, the data stored in the user's RAM never stops growing, because downloading never stops on a continuous stream. This means that after a few hours of streaming, a lot of RAM is being used by my app. I've tested the app on my own machine, a very high-spec one, and the app crashes in my browser. When I say the app crashes, I mean the whole of Flash, meaning I have to restart my browser in order to use Flash again. I know my app is the cause, as Flash has never crashed for me in the past; it only does it when my app has been streaming for 2+ hours.
So what I want to do is only allow the user to cache up to an hour's worth of audio. After an hour, I want to clear the first half of the Sound object's data, so that only the most recent half hour of audio is stored and available for seeking.
So I have my stream:
var soundObj:Sound = new Sound();
soundObj.load(new URLRequest('stream.mp3'));
// etc.
and soundObj is where the data is stored. So my question: how would I clear the first 30 minutes of audio from that object?
Perhaps the Sound class is not meant to reliably play "unlimited" MP3 streams, which seems to be your case; it is made to play normal MP3 "songs". Two hours of MP3 audio can easily accumulate to more than 200 megabytes of data (at 224 kbps, two hours comes to roughly 200 MB).
But there is a good solution: use the NetConnection and NetStream classes to stream audio instead. There are many tutorials out there. You will still be able to stream your MP3s, just a bit differently: a central server is involved, which transcodes the MP3s on the fly, delivering them to you in a true "streaming" manner. One such server is Adobe Flash Media Server, an overpriced piece of work from Adobe. A lot of free and open-source alternatives exist that will work fine for your purposes: Red5 and nginx-rtmp, to name a couple that I have tested myself.
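The client side then looks roughly like this (a sketch; the server URL and stream name are placeholders for your own setup):

import flash.net.NetConnection;
import flash.net.NetStream;
import flash.events.NetStatusEvent;

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://example.com/myapp"); // placeholder RTMP application URL

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        ns.client = {};          // avoids onMetaData reference errors
        ns.bufferTime = 5;       // only seconds are buffered ahead, not hours
        ns.play("mp3:stream");   // "mp3:" prefix is the usual FMS/Wowza convention
    }
}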
So, I'm working with a video source that I'm feeding into my Adobe AIR application via some native extension work, with the goal of ultimately getting it to a Flash Media Server. The video is H.264 encoded and muxed into a FLV container, which aligns me with supported Flash Media Server codecs and NetStream (appendBytes) requirements. I can get the data into AIR just fine.
The mine I stepped on today, however, is that the documentation for NetStream.appendBytes states I must call NetStream.play(null):
Call this method on a NetStream in "Data Generation Mode". To put a NetStream into Data Generation Mode, call NetStream.play(null) on a NetStream created on a NetConnection connected to null. Calling appendBytes() on a NetStream that isn't in Data Generation Mode is an error and raises an exception.
NetStream.play() called with a null parameter yields local FLV playback. I can't publish the stream to FMS in this mode. But my research into Flash seems to indicate NetStream's byte access is my only real hope here when dealing with non-camera or non-web video data.
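For reference, the data-generation-mode setup those docs describe looks like this (a sketch; getFlvBytesFromANE() is a hypothetical helper standing in for the native-extension feed):

import flash.net.NetConnection;
import flash.net.NetStream;
import flash.net.NetStreamAppendBytesAction;
import flash.utils.ByteArray;

var nc:NetConnection = new NetConnection();
nc.connect(null);               // connected to null: local playback only
var ns:NetStream = new NetStream(nc);
ns.client = {};
ns.play(null);                  // puts the NetStream into Data Generation Mode

ns.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);
var flvBytes:ByteArray = getFlvBytesFromANE(); // hypothetical: FLV header + tags
ns.appendBytes(flvBytes);
// The catch described above: a NetStream in this mode plays locally and
// cannot be published to FMS.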
Q: Can I latch onto the video playback buffer and publish it to FMS? Can I create a sort of pipeline of NetStreams or NetConnections to achieve this? Or is there an alternate approach here for transmitting H.264/FLV data to FMS? (The source of my video cannot communicate with FMS directly.)
The answer to your question is quite simply no. This is apparently implemented as a security feature, though it is probably less a security issue and more a sales issue: Adobe likes to block certain capabilities intentionally in order to create the possibility of, or need for, another product, i.e. more revenue.
I tried looking into this for you to see if there was some dirty hack where you could attach a camera or something and override the binary data being sent to the stream, like you can with audio, but unfortunately, to my knowledge, no such hack is possible. More info here: NetStream.appendBytes
Update
You might be able to do something hackish by using ManyCam, which is a virtual webcam driver (from what I understand). This provides a valid camera you can select from Flash, and you can also select a video file as the source for ManyCam. See http://manycam.com/user_guide/#HowtoSelectaVideofileasthePictureSourceforManyCam
Update #2
If you're looking for something open source that will do the same thing as ManyCam, check out the following:
http://code.google.com/p/webcamstudio/wiki/VideoSourceMovie (GPL Licensed)
I have an interesting project wherein I need to allow users to capture video of themselves with a webcam at a kiosk, after which I email them a link to their video. The trick is that the resulting video needs to be a 'slow motion' version of the captured video. So, for example, if someone creates a 2-minute movie, the resulting movie will be 4 minutes.
I'd like to build this in Flex / AS3 if possible. I don't have issues capturing the video, storing it, or generating and emailing a link, but slowing down the video is the real mind-bender. I'm unsure how to approach 'batch post-processing' a set of videos using Adobe tools.
Has anyone had a project similar to this or have suggestions on routes to take in order to do this?
Thanks!
-Josh
This is absolutely feasible from the client side, contrary to what some may believe. :)
http://code.google.com/p/flvrecorder/
Just adjust the capture rate, which shouldn't be too difficult; all the source is there.
Alternatively, you could write an AIR app that launches Adobe Media Encoder after writing a file, launching it with a preset that has FTP info etc. Or you could just use the Socket class to connect and upload over FTP.
http://code.google.com/p/fl-ftp/
It is not feasible to do this client-side.
Capture the video and send it to the server.
Use a library like FFmpeg to do your conversions.
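For the slow-motion step itself, an FFmpeg invocation along these lines would double the duration (file names are placeholders; setpts stretches the video timestamps, and atempo slows the audio to match):

ffmpeg -i input.flv -filter:v "setpts=2.0*PTS" -filter:a "atempo=0.5" output.mp4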