I want to play a Flash Media Server stream via GStreamer. My video is published from a camera to FMS with H.264 encoding (720x480, Main profile, level 3.0).
My command on Ubuntu is:
gst-launch-1.0 rtmpsrc location="rtmp://192.168.1.153:1935/appname/mp4:cameraFeed44.mp4 live=1" ! decodebin name=decoder decoder. ! queue ! videoconvert ! queue ! xvimagesink
At 720x480 resolution it throws:
ERROR: from element /GstPipeline:pipeline0/GstRTMPSrc:rtmpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2812): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTMPSrc:rtmpsrc0:
streaming task paused, reason error (-5)
ERROR: pipeline doesn't want to preroll.
However, it works fine at low resolutions such as 320x240, but I need resolutions even higher than Full HD.
Thanks,
Stan
Here is my question:
I use the Android API MediaMuxer to combine an H.264 stream and an AAC stream into an MP4 file. When I want to stop recording, I call mMediaMuxer.stop(), and the MP4 file plays well.
But sometimes something unexpected happens, like a sudden power loss, so there is no time to call mMediaMuxer.stop(), and the resulting file cannot be played at all.
Does anybody know how to fix this problem? I want to be able to play the video even when mMediaMuxer.stop() was never called. Or is there another API or SDK that can mux H.264 + AAC streams more robustly?
I have an application based on WebRTC. Currently I need to capture the system audio (via WASAPI), but the mixed captured audio contains my own application's audio stream; if I send this stream to a peer, he will hear an echo.
The article Audio and Video / Core Audio APIs / Stream Management / Loopback Recording says
WASAPI provides loopback mode primarily to support acoustic echo cancellation (AEC).
How should I understand this? How can I remove the audio produced by my own application?
In other words, I find that Chrome does not have this issue: when I call getDisplayMedia, the captured audio stream does not contain audio produced by Chrome.
The quoted statement means that, in order to remove echo, it is necessary to have access to the audio signal played through the speakers, because that signal comes back through the microphone or other recording hardware and creates an echo unless it is effectively subtracted.
Windows provides API to get this mixed audio signal going out to audio output devices: "Loopback Recording".
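The subtraction described above can be sketched in a deliberately simplified form. Real AEC uses adaptive filters (e.g. NLMS) and estimates the echo path itself; here the delay and gain are assumed known, which is the key simplification:

```python
def cancel_echo(mic, loopback, delay, gain):
    """Naively subtract the loopback (render) signal from the mic capture.

    mic      - samples recorded from the microphone
    loopback - samples captured via loopback (what the speakers played)
    delay    - echo path delay in samples (assumed known; real AEC estimates it)
    gain     - echo path attenuation (assumed known; real AEC adapts it)
    """
    out = []
    for i, s in enumerate(mic):
        j = i - delay
        ref = loopback[j] if 0 <= j < len(loopback) else 0.0
        out.append(s - gain * ref)
    return out
```

When the model is exact, the echo cancels completely; in practice the echo path changes over time, which is why the DSP components below adapt continuously.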
Also, Windows provides another software component and API: Voice Capture DSP:
The voice capture DMO includes the following DSP components:
Acoustic echo cancellation (AEC)
...
Currently the voice capture DMO supports only single-channel acoustic echo cancellation (AEC), so the output from the speaker line must be single-channel. If microphone array processing is disabled, multi-channel input is folded down to one channel for AEC processing. If both microphone array processing and AEC processing are enabled, AEC is performed on each microphone element before microphone array processing.
Together, these can be used to capture audio and address the echo-cancellation challenge.
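One detail from the quote, the fold-down of multi-channel input to a single channel for AEC, can be illustrated with a simple averaging downmix (an assumption for illustration; the DMO's actual fold-down algorithm is not documented here):

```python
def downmix_to_mono(samples, channels):
    """Fold interleaved multi-channel audio down to one channel.

    Averages each frame's channel samples; a placeholder for whatever
    fold-down the voice capture DMO applies internally.
    """
    frames = len(samples) // channels
    return [
        sum(samples[f * channels : (f + 1) * channels]) / channels
        for f in range(frames)
    ]
```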
AECMicArray sample gives you some code and further information:
The sample supports acoustic echo cancellation (AEC) and microphone array processing by using the AEC DMO, also called the Voice Capture DSP, provided by Microsoft.
I have created a simple multicast player using ActionScript 3 which works fine and plays the multicast stream well. However, when the stream fails, or even if I stop Flash Media Server, it still returns the success codes "NetConnection.Connect.Success", "NetStream.Connect.Success" and "NetStream.Play.Start". I can't detect in Flash Player when the multicast stream fails. I need to switch to a unicast stream if multicast fails, and I can't detect the failure because of the wrong status.
Why am I getting the wrong status?
I was finally able to figure this out from this link: http://forums.adobe.com/message/4999537#4999537. I now use NetStream.MulticastStream.Reset along with a timeout of 10 seconds.
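The resulting failover logic amounts to a watchdog (the original is ActionScript; this is a language-neutral sketch in Python): if no NetStream.MulticastStream.Reset event arrives within the timeout, treat the multicast stream as dead and switch to unicast.

```python
import time

class MulticastWatchdog:
    """Switch to unicast if no stream event arrives within the timeout."""

    def __init__(self, timeout=10.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock          # injectable clock, useful for testing
        self.last_event = clock()
        self.source = "multicast"

    def on_stream_event(self):
        # Call on every NetStream.MulticastStream.Reset (or data) event.
        self.last_event = self.clock()

    def check(self):
        # Call periodically; returns which source should be playing.
        if self.source == "multicast" and self.clock() - self.last_event > self.timeout:
            self.source = "unicast"   # failover, since status codes can't be trusted
        return self.source
```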
This question is the follow up question to this thread: AR Drone 2 and ffserver + ffmpeg streaming
We are trying to get a stream from our AR Drone through a Debian server and into a flash application.
The big picture looks like this:
AR Drone --> Gstreamer --> CRTMPServer --> Flash Application
We are using the PaveParse plugin for Gstreamer found in this thread: https://projects.ardrone.org/boards/1/topics/show/4282
As seen in the thread, the AR Drone uses PaVE (Parrot Video Encapsulation) headers, which are unrecognizable by most players like VLC. The PaVEParse plugin removes these headers.
We have used different pipelines and they all yield the same error.
Sample pipeline:
GST_DEBUG=3 gst-launch-0.10 tcpclientsrc host=192.168.1.1 port=5555 ! paveparse ! queue ! ffdec_h264 ! queue ! x264enc ! queue ! flvmux ! queue ! rtmpsink location='rtmp://0.0.0.0/live/drone' --gst-plugin-path=.
The PaVEParse plugin needs to be located at the gst-plugin-path for it to work.
A sample error output from Gstreamer located in the ffdec_h264 element can be found at: http://pastebin.com/atK55QTn
The same thing happens if the decoding takes place in the player/dumper, e.g. VLC, FFplay or rtmpdump.
The problem comes down to missing headers: the PPS reference is non-existent. We know that the PaVEParse plugin removes PaVE headers, but we suspect that once these are removed there are no H.264 headers left for the decoder/player to identify the frames by.
Is it possible to "restore" these H264 headers either from scratch or by transforming the PaVE headers?
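The restoration asked about would amount to prepending the SPS/PPS parameter sets as Annex-B NAL units ahead of the frame data. A minimal sketch of the idea; the SPS/PPS bytes in the test are placeholders, not the drone's real parameter sets, which would have to come from the encoder's configuration:

```python
START_CODE = b"\x00\x00\x00\x01"

def prepend_parameter_sets(stream, sps, pps):
    """Prepend SPS and PPS NAL units (Annex-B byte stream) to raw H.264 data.

    sps/pps must be the actual parameter sets the encoder used; without
    them a decoder cannot interpret the slices that reference them.
    """
    return START_CODE + sps + START_CODE + pps + stream

def nal_type(nal):
    # The NAL unit type is the low 5 bits of the first header byte
    # (7 = SPS, 8 = PPS, 5 = IDR slice).
    return nal[0] & 0x1F
```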
Can you please share a sample of the traffic between gstreamer and crtmpserver?
You can always use the LiveFLV support built inside crtmpserver. Here are more details:
Re-Stream a MPEG2 TS PAL Stream with crtmpserver
I'm trying to receive an RTP/RTCP video stream with HTML5; the stream is generated by GStreamer. I used the GStreamer examples, so I can send over RTP port 5000 and RTCP port 5001 and receive the streams with GStreamer, but I could not receive them with HTML5. So I read a bit about HTML5 and saw that it can play Theora/Ogg, WebM/VP8 and MP4/AVC, and that the transport can be HTTP, RTP, RTCP, UDP and others, but I only managed to receive over HTTP, not RTP, RTCP or UDP. I did get a very satisfactory result using the VLC plugin for Mozilla Firefox with the UDP protocol. I wonder if anyone has any tips. I don't want to use a file as the source, like src="/tmp/test.avi"; it needs to be a video stream that can be UDP, RTP or RTCP. Thank you!
If you don't need to stream at low fps, you can use GStreamer to transcode your stream to MJPEG and stream it over TCP, and then use VLC to pick up this TCP stream and re-stream it over HTTP. It works very well (0.5 s of latency), but if you decrease the frame rate (to 1 fps) VLC introduces a latency of around 11 seconds.
Here are some test commands that should work out of the box, using the GStreamer videotestsrc :
GStreamer :
gst-launch -v videotestsrc horizontal-speed=1 ! deinterlace ! videorate ! videoscale ! video/x-raw-yuv, framerate=15/1, width=256,
height=144 ! jpegenc quality=20 ! multipartmux
boundary="--videoboundary" ! tcpserversink host=localhost port=3000
VLC :
vlc -vvv -I rc tcp://localhost:3000 --sout
'#standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=localhost:8081}'
Then open a browser at http://localhost:8081 (or create an HTML page with an img tag whose "src" attribute is http://localhost:8081).