FFmpeg: record a live stream and produce an MP4 file close to "now", playable in an HTML5 video element and updated quickly - html

I am developing a video clipping system with PHP and FFmpeg.
I record a live stream from a media server over the RTMP protocol.
Suppose we are recording a football match and I want to produce a highlight soon after it happens,
so I can watch the live stream and also clip video from it.
Please help me if anyone has an idea.
Reference URL: https://www.youtube.com/watch?v=x61rge3Q3mw
Thank you.
Angel

It's really quite a complicated task, but here is how I did it:
Started recording from the given URL.
Wrote TS files, one per minute (this format is easy to convert to MP4).
Concatenated the TS files into an MP4 (this took very little time).
Then I played the MP4 files.
Worked great.
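The steps above can be sketched with two ffmpeg invocations, using the segment and concat muxers. The stream URL and file names here are placeholders, not the answerer's real setup:

```shell
#!/bin/sh
# Sketch of the record-then-concat workflow described above.
# STREAM_URL and the seg_*.ts names are hypothetical.
STREAM_URL="rtmp://example.com/live/match"

# 1) Record the live stream into one-minute MPEG-TS segments.
#    -c copy avoids re-encoding; -f segment starts a new .ts file every 60 s.
RECORD_CMD="ffmpeg -i $STREAM_URL -c copy -f segment -segment_time 60 -reset_timestamps 1 seg_%03d.ts"

# 2) Concatenate whatever segments exist so far into an MP4.
#    Because -c copy only remuxes, this finishes in seconds.
printf "file '%s'\n" seg_000.ts seg_001.ts seg_002.ts > list.txt
CONCAT_CMD="ffmpeg -f concat -safe 0 -i list.txt -c copy highlight.mp4"

# Echoed here rather than executed, since the input stream is a placeholder.
echo "$RECORD_CMD"
echo "$CONCAT_CMD"
```

Run the record command in the background; each time you want an up-to-date highlight, regenerate list.txt from the segments on disk and re-run the concat command.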

Related

Is it necessary to convert database of mp3's to ogg and wave to use html audio?

I have thousands of mp3 files in a database and a website that lets people listen to them. I'm using a flash player but want to move to html5 audio player. Does this mean that I need to make ogg and wave versions of all my audio files? What is a good approach to making these files more accessible?
In short, yes you need to support multiple formats. (Assuming you care about decent browser support.)
If you are lacking disk space, don't get a lot of traffic, and don't mind some delay before the data gets to the user, you can convert these on the fly. Just write some code so that on request, it checks the conversion cache to see if you have already converted the file. If not, convert it on the fly (with something like FFMPEG) and write the data to disk at the same time you are writing it to the client.
As Imre pointed out, browser support is changing all the time. Look up what codecs are supported from time to time.
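The convert-on-request idea above can be sketched as a cache check followed by a one-time ffmpeg conversion. The cache directory and file paths are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the "convert on request, then cache" idea described above.
# CACHE_DIR and SRC are hypothetical paths, not a real layout.
CACHE_DIR="./ogg-cache"
SRC="tracks/song123.mp3"

mkdir -p "$CACHE_DIR"
OUT="$CACHE_DIR/$(basename "$SRC" .mp3).ogg"

if [ -f "$OUT" ]; then
    # Already converted once; serve the cached file directly.
    echo "cache hit: $OUT"
else
    # Convert once with ffmpeg (libvorbis encoder for the .ogg variant).
    # Echoed rather than executed here; in production you would run this and
    # stream $OUT to the client as it is written to disk.
    echo "ffmpeg -i $SRC -c:a libvorbis -q:a 4 $OUT"
fi
```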

Is it possible to convert an avi file to mp4 in real time?

I have an AVI file that I would like to be played inside Flowplayer. I understand it uses HTML5 which requires movie files to be converted to MP4/OGV, so I was wondering if there was a framework that exists which will convert an AVI file to an MP4 file in real-time (and without necessarily being stored on the server)
...the more I think about this, the more I'm beginning to think this isn't possible. Please prove me wrong.
The video can be transcoded in (sort of) real time by hardware or even software, but it's never a practical approach, since you'd spend a lot of processing power for each client for each video. That's madness. Just as it's advisable to cache pages, videos need to be cached too.
A simple way is to upload or place the video in a folder on the server, then trigger a transcode (using FFmpeg) to a file, and serve that file.
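The transcode-once-at-upload step can be sketched like this; the input and output names are placeholders:

```shell
#!/bin/sh
# Sketch of the one-time AVI-to-MP4 transcode described above.
# IN and OUT are hypothetical paths.
IN="uploads/movie.avi"
OUT="serve/movie.mp4"

# H.264 video + AAC audio plays in HTML5 <video>; -movflags +faststart
# moves the index to the front of the file so playback can begin before
# the whole file has downloaded.
CMD="ffmpeg -i $IN -c:v libx264 -c:a aac -movflags +faststart $OUT"

# Echoed rather than executed, since the input file is a placeholder.
echo "$CMD"
```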

RTSP H.264 IP camera as video source/input in windows

I'm currently stuck with this problem and I hope somebody can help me out. I'm trying to create some sort of decoder that will convert a video stream that will act as a video input device so I can use it in Wirecast (video streaming program).
At this stage I use MJPEG IP cameras as video sources, via a neat little program that turns a raw IP address:port into an input device. This works perfectly with unlimited cameras, but it supports neither RTSP nor H.264. I have since upgraded a few cameras so I can get access to HD video.
I have tried a number of RTSP source filters from all over the net, and some programs like xpwebcam to get access to their H.264 filter but no luck as yet. I have tried to create my own filter using GraphStudio but it is beyond my understanding.
The IP cameras' video feed URLs look like this:
Video Feed:
rtsp://xxx.xxx.xxx.xxx/0/video0
where videoX = 0, 1, 2 selects the resolution, e.g.
rtsp://user:pass@10.0.0.10/0/video0
or rtsp://10.0.0.10/0/video0 for non-protected cameras; it's a private network, so whatever works is fine.
I can successfully play the video feed live in VLC but not much else; I'm not sure if there's a way to turn a stream into an input device.
I have been trying to do this for weeks now but have had very little luck getting it to work.
Please help me :)
As a professional photographer with many years in the field this question struck me as rather interesting. The answer you are looking for can be found at this site:
http://alax.info/blog/1416
The site lists the update you need for your equipment.
If you have no source filter, can't you simply read from the source and write to a file, and have your other program read from that file simultaneously? I have used this trick many times on Unix; I can't see why it wouldn't work here.

How is soundcloud player programmed?

This may be too broad a question, but how is soundcloud actually programmed?
To be more specific,
What language was used to program it?
How does it display the frequency data?
If a user uploads a file in a format other than MP3, is it converted to MP3 or played as is? If the former, how does the conversion work?
How does it appear "graphically" in the browser the way it does? Is that also an HTML5 thing, which I don't know anything about?
I'm a big fan of SoundCloud and can't stop wondering how all of this works!
Please help me out :)
SoundCloud developer here,
The API and the current website are built with Rails. For information about the architecture/infrastructure and how it evolved over the last 5 years, check out Evolution of SoundCloud's Architecture. The "next" version of the website (still in private beta) is built entirely with Javascript, and just uses the API to get its data. There's a lot more detail available in Building The Next SoundCloud.
I'm not sure exactly what language/libraries are used to process the audio, but many audio libraries do provide frequency data, and we just extract that.
Users can upload AIFF, WAVE (WAV), FLAC, OGG, MP2, MP3, AAC, AMR or WMA files. The originals are kept exactly as is for the download option, but for streaming on the site, they're converted to 128kbps MP3 files. Again, I'm not sure of the software/libraries, but I'm pretty sure it'd be ffmpeg.
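A streaming transcode like the one described, a fixed 128 kbps MP3, could look like this with ffmpeg (the answer only guesses ffmpeg is used, and the file names here are placeholders):

```shell
#!/bin/sh
# Sketch: convert an uploaded original to a 128 kbps MP3 for streaming,
# keeping the original untouched for the download option.
# IN and OUT are hypothetical paths.
IN="originals/upload.flac"
OUT="streams/upload.mp3"

# -b:a 128k gives the constant 128 kbps bitrate mentioned in the answer.
CMD="ffmpeg -i $IN -c:a libmp3lame -b:a 128k $OUT"

# Echoed rather than executed, since the input file is a placeholder.
echo "$CMD"
```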
For displaying the waveforms, on the back-end when the audio files are processed when they're uploaded, the waveform data is saved into a PNG file. On the current version of the website, we simply load that file. On Next, the png is processed to get the original data back out, and then it is drawn to a canvas at the exact dimensions needed (which keeps the image crisp). We're currently experimenting with getting waveform data in a JSON format to speed up this process.
I am copying the following info posted by David Noël somewhere else in 2010.
Web tier: Varnish, nginx, haproxy, thin
Data Management: Cassandra, MongoDB, mySQL master/slave cluster, memcached
Web framework: Ruby on Rails
CDN: Akamai and Edgecast
Transcoding/storage: AWS EC2/S3

Post-processing captured video in AS3, creating slow motion

I have an interesting project wherein I need to allow users to capture video of themselves with a webcam at a kiosk, after which I email them a link to their video. The trick is the resulting video needs to be a 'slow motion' version of the captured video. So for example, if someone creates a 2 minute movie, the resulting movie will be 4 minutes.
I'd like to build this in Flex / AS3 if possible. I don't have issues capturing the video and storing it / generating and emailing a link, but slowing down the video is the real mind bender. I'm unsure how to approach 'batch post-processing' a set of videos using Adobe tools.
Has anyone had a project similar to this or have suggestions on routes to take in order to do this?
Thanks!
-Josh
This is absolutely feasible from the client side, contrary to what some may believe. :)
http://code.google.com/p/flvrecorder/
Just adjust the capture rate, which shouldn't be too difficult; all the source is there.
Alternatively, you could write an AIR app that launches Adobe Media Encoder after writing a file and launch it with a preset that has FTP info etc. Or you can just use the socket class to connect and upload over FTP.
http://code.google.com/p/fl-ftp/
It is not feasible to do this client-side.
Capture the video and send it to the server.
Use a library like FFmpeg to do your conversions.
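The server-side slow-motion step can be sketched with ffmpeg's setpts and atempo filters. The file names and the 2x factor are illustrative:

```shell
#!/bin/sh
# Sketch: halve the playback speed of a captured video on the server.
# IN and OUT are hypothetical paths.
IN="captured.flv"
OUT="slow.mp4"

# setpts=2.0*PTS doubles every video timestamp (half speed, so a 2-minute
# capture becomes 4 minutes); atempo=0.5 slows the audio to match without
# changing its pitch.
CMD="ffmpeg -i $IN -filter:v setpts=2.0*PTS -filter:a atempo=0.5 $OUT"

# Echoed rather than executed, since the input file is a placeholder.
echo "$CMD"
```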