Use live555 to stream H.264 to VLC

I am trying to stream H.264 to VLC via RTP without RTSP; that is, to receive an H.264 stream from an IP camera and send it to VLC on another host. VLC opens the URL "rtp://@:12345".
Note that openRTSP does almost the same thing, but writes the data to a file using the H264VideoFileSink class, so I replaced that part of the code:
if (strcmp(subsession->mediumName(), "video") == 0) {
    if (strcmp(subsession->codecName(), "H264") == 0) {
        // For H.264 video stream, we use a special sink that adds 'start codes',
        // and (at the start) the SPS and PPS NAL units:
        //fileSink = H264VideoFileSink::createNew(*env, outFileName,
        //                                        subsession->fmtp_spropparametersets(),
        //                                        fileSinkBufferSize, oneFilePerFrame);
        char const* outputAddressStr = "192.168.1.123"; // this could also be unicast
        struct in_addr outputAddress;
        outputAddress.s_addr = our_inet_addr(outputAddressStr);
        const Port outputPort(12345);
        unsigned char const outputTTL = 255;
        Groupsock outputGroupsock(*env, outputAddress, outputPort, outputTTL);
        rtpSink = H264VideoRTPSink::createNew(*env, &outputGroupsock, 96);
    }
…
then,
subsession->sink = rtpSink;
subsession->sink->startPlaying(*(subsession->readSource()),
                               subsessionAfterPlaying,
                               subsession);
The result is that openRTSP runs, but VLC receives nothing. I checked with Wireshark: no packets are sent to the destination IP and port.
I also tried testMP3Streamer, replacing its multicast address with the unicast address above; VLC could play that stream.
Could anybody give me some suggestions?

There are several errors in your code. First, the Groupsock's scope is too narrow: it is a local variable that is destroyed as soon as the enclosing block ends. Next, an H.264 framer is needed to feed an H264VideoRTPSink, as you can see in H264VideoRTPSink.cpp:
Boolean H264VideoRTPSink::sourceIsCompatibleWithUs(MediaSource& source) {
    // Our source must be an appropriate framer:
    return source.isH264VideoStreamFramer();
}
Putting it all together gives something like:
char const* outputAddressStr = "192.168.1.123";
struct in_addr outputAddress;
outputAddress.s_addr = our_inet_addr(outputAddressStr);
const Port outputPort(12345);
unsigned char const outputTTL = 255;
Groupsock* outputGroupsock = new Groupsock(*env, outputAddress, outputPort, outputTTL);
rtpSink = H264VideoRTPSink::createNew(*env, outputGroupsock, 96);
subsession->addFilter(H264VideoStreamDiscreteFramer::createNew(*env, subsession->readSource()));
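After inserting the filter, subsession->readSource() returns the framer, so the sink will accept it. You can then attach the sink and start streaming exactly as in your original code:

subsession->sink = rtpSink;
subsession->sink->startPlaying(*(subsession->readSource()),
                               subsessionAfterPlaying,
                               subsession);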

Related

Reading YUY2 data from an IMFSample that appears to have improper data on Windows 10

I am developing an application that uses IMFSourceReader to read data from video files. I am using DXVA for improved performance. I am having trouble with one specific full-HD H.264-encoded AVI file. Based on my investigation so far, I believe that the IMFSample contains incorrect data. My workflow is below:
Create a source reader with a D3D manager to enable hardware acceleration.
Set the current media type to YUY2, as DXVA does not decode to any RGB colorspace.
Call ReadSample to get an IMFSample. Works fine.
Use VideoProcessorBlt to perform the YUY2 to BGRA32 conversion. For this specific file it errors out with an E_INVALIDARGS error code, so I decided to do the conversion myself.
Use IMFSample::ConvertToContiguousBuffer to receive an IMFMediaBuffer. When locking this buffer, the pitch is reported as 1280 bytes. I believe this is incorrect, because for a full-HD YUY2 video the pitch should be 1920 + 960 + 960 = 3840 bytes.

I dumped the raw memory and extracted the Y, U and V components based on my understanding of the YUY2 layout. So the data is there, but I do not believe it is laid out as YUY2. I need some help interpreting the data.
My code for reading is below:
// Direct3D surface that stores the result of the YUV2RGB conversion
CComPtr<IDirect3DSurface9> _pTargetSurface;
IDirectXVideoAccelerationService* vidAccelService;
initVidAccelerator(&vidAccelService); // Omitting the code for this.
// Create a new surface for doing the color conversion, set it up to store X8R8G8B8 data.
hr = vidAccelService->CreateSurface( static_cast<UINT>( 1920 ),
                                     static_cast<UINT>( 1080 ),
                                     0,                                // no back buffers
                                     D3DFMT_X8R8G8B8,                  // data format
                                     D3DPOOL_DEFAULT,                  // default memory pool
                                     0,                                // reserved
                                     DXVA2_VideoProcessorRenderTarget, // to use with the Blt operation
                                     &_pTargetSurface,                 // surface used to store frame
                                     NULL);
GUID processorGUID;
DXVA2_VideoDesc videoDescriptor;
D3DFORMAT processorFmt;
UINT numSubStreams;
IDirectXVideoProcessor* _vpd;
initVideoProcessor(&_vpd); // Omitting the code for this
// We get the videoProcessor parameters on creation, and fill up the videoProcessBltParams accordingly.
_vpd->GetCreationParameters(&processorGUID, &videoDescriptor, &processorFmt, &numSubStreams);
RECT targetRECT; // { 0, 0, width, height } as left, top, right, bottom
targetRECT.left = 0;
targetRECT.right = videoDescriptor.SampleWidth;
targetRECT.top = 0;
targetRECT.bottom = videoDescriptor.SampleHeight;
SIZE targetSIZE; // { width, height }
targetSIZE.cx = videoDescriptor.SampleWidth;
targetSIZE.cy = videoDescriptor.SampleHeight;
// Parameters that are required to use the video processor to perform
// YUV2RGB and other video processing operations
DXVA2_VideoProcessBltParams _frameBltParams;
_frameBltParams.TargetRect = targetRECT;
_frameBltParams.ConstrictionSize = targetSIZE;
_frameBltParams.StreamingFlags = 0; // reserved.
_frameBltParams.BackgroundColor.Y = 0x0000;
_frameBltParams.BackgroundColor.Cb = 0x0000;
_frameBltParams.BackgroundColor.Cr = 0x0000;
_frameBltParams.BackgroundColor.Alpha = 0xFFFF;
// copy attributes from videoDescriptor obtained above.
_frameBltParams.DestFormat.VideoChromaSubsampling = videoDescriptor.SampleFormat.VideoChromaSubsampling;
_frameBltParams.DestFormat.NominalRange = videoDescriptor.SampleFormat.NominalRange;
_frameBltParams.DestFormat.VideoTransferMatrix = videoDescriptor.SampleFormat.VideoTransferMatrix;
_frameBltParams.DestFormat.VideoLighting = videoDescriptor.SampleFormat.VideoLighting;
_frameBltParams.DestFormat.VideoPrimaries = videoDescriptor.SampleFormat.VideoPrimaries;
_frameBltParams.DestFormat.VideoTransferFunction = videoDescriptor.SampleFormat.VideoTransferFunction;
_frameBltParams.DestFormat.SampleFormat = DXVA2_SampleProgressiveFrame;
// The default values are used for all these parameters.
DXVA2_ValueRange pRangePABrightness;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Brightness, &pRangePABrightness);
DXVA2_ValueRange pRangePAContrast;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Contrast, &pRangePAContrast);
DXVA2_ValueRange pRangePAHue;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Hue, &pRangePAHue);
DXVA2_ValueRange pRangePASaturation;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Saturation, &pRangePASaturation);
_frameBltParams.ProcAmpValues = { pRangePABrightness.DefaultValue, pRangePAContrast.DefaultValue,
                                  pRangePAHue.DefaultValue, pRangePASaturation.DefaultValue };
_frameBltParams.Alpha = DXVA2_Fixed32OpaqueAlpha();
_frameBltParams.DestData = DXVA2_SampleData_TFF;
// Input video sample for the Blt operation
DXVA2_VideoSample _frameVideoSample;
_frameVideoSample.SampleFormat.VideoChromaSubsampling = videoDescriptor.SampleFormat.VideoChromaSubsampling;
_frameVideoSample.SampleFormat.NominalRange = videoDescriptor.SampleFormat.NominalRange;
_frameVideoSample.SampleFormat.VideoTransferMatrix = videoDescriptor.SampleFormat.VideoTransferMatrix;
_frameVideoSample.SampleFormat.VideoLighting = videoDescriptor.SampleFormat.VideoLighting;
_frameVideoSample.SampleFormat.VideoPrimaries = videoDescriptor.SampleFormat.VideoPrimaries;
_frameVideoSample.SampleFormat.VideoTransferFunction = videoDescriptor.SampleFormat.VideoTransferFunction;
_frameVideoSample.SrcRect = targetRECT;
_frameVideoSample.DstRect = targetRECT;
_frameVideoSample.PlanarAlpha = DXVA2_Fixed32OpaqueAlpha();
_frameVideoSample.SampleData = DXVA2_SampleData_TFF;
CComPtr<IMFSample> sample; // Assume that this was read in from a call to ReadSample
CComPtr<IMFMediaBuffer> buffer;
HRESULT hr = sample->GetBufferByIndex(0, &buffer);
CComPtr<IDirect3DSurface9> pSrcSurface;
// From the MediaBuffer, we get the Source Surface using MFGetService
hr = MFGetService( buffer, MR_BUFFER_SERVICE, __uuidof(IDirect3DSurface9), (void**)&pSrcSurface );
// Update the videoProcessBltParams with frame specific values.
LONGLONG sampleStartTime;
sample->GetSampleTime(&sampleStartTime);
_frameBltParams.TargetFrame = sampleStartTime;
LONGLONG sampleDuration;
sample->GetSampleDuration(&sampleDuration);
_frameVideoSample.Start = sampleStartTime;
_frameVideoSample.End = sampleStartTime + sampleDuration;
_frameVideoSample.SrcSurface = pSrcSurface;
// Run videoProcessBlt using the parameters setup (this is used for color conversion)
// The returned code is E_INVALIDARGS
hr = _vpd->VideoProcessBlt( _pTargetSurface,    // target surface
                            &_frameBltParams,   // parameters
                            &_frameVideoSample, // video sample structure
                            1,                  // one sample
                            NULL);              // reserved
After a call to IMFSourceReader::ReadSample, or inside the OnReadSample callback of your IMFSourceReaderCallback implementation, you might receive the MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED flag. It means that the current media type has changed for one or more streams. To get the current media type, call the IMFSourceReader::GetCurrentMediaType method.
In your case you need to query (again) the IMFMediaType's MF_MT_FRAME_SIZE attribute for the video stream, in order to obtain the new correct video resolution. You should use the new video resolution to set the "width" and "height" values of the VideoProcessorBlt parameters' source and destination rectangles.
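For illustration, here is a minimal sketch (C++; reader is assumed to be your IMFSourceReader instance) of detecting the flag after ReadSample and re-querying the frame size:

DWORD streamIndex = 0, streamFlags = 0;
LONGLONG timestamp = 0;
CComPtr<IMFSample> sample;
HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                0, &streamIndex, &streamFlags, &timestamp, &sample);
if (SUCCEEDED(hr) && (streamFlags & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED))
{
    // The decoder renegotiated its output type; fetch the new frame size.
    CComPtr<IMFMediaType> mediaType;
    hr = reader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, &mediaType);
    if (SUCCEEDED(hr))
    {
        UINT32 width = 0, height = 0;
        MFGetAttributeSize(mediaType, MF_MT_FRAME_SIZE, &width, &height);
        // Rebuild targetRECT / targetSIZE (and, if needed, the target surface)
        // from 'width' and 'height' before calling VideoProcessBlt again.
    }
}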

UVM - Using my own configuration files vs. using the config db

I wrote a sequence which can be generic to a variety of tests. I want to do this by adding a configuration file for each test.
The code for the sequence:
//----------------------------------------------------------------------
// Sequence
//----------------------------------------------------------------------
class axi_sequence extends uvm_sequence#(axi_transaction);
    `uvm_object_utils(axi_sequence)

    //new
    function new (string name = "axi_sequence");
        super.new(name);
    endfunction: new

    //main task
    task body();
        int file_p, temp, len;
        byte mode;
        bit [31:0] addr;
        string str;
        axi_transaction axi_trx;
        bit [31:0] transfers [$];
        bit [31:0] data;

        //open file
        file_p = $fopen("./sv/write_only.txt", "r"); //the name of the file should be the same as the name of the test
        //in case the file doesn't exist
        `my_fatal(file_p != 0, "FILE OPEN FAILED")

        //read file
        while ($feof(file_p) == 0) begin
            temp = $fgets(str, file_p);
            axi_trx = axi_transaction::type_id::create(.name("axi_trx"), .contxt(get_full_name()));
            // ~start_item~ and <finish_item> together will initiate operation of
            // a sequence item.
            start_item(axi_trx);
            transfers = {};
            $sscanf(str, "%c %d %h", mode, len, addr);
            //assign the data to str
            str = str.substr(12, str.len()-1);
            //create and assign to the transfers queue
            if (mode == "w") begin
                for (int i = 0; i <= len; i++) begin
                    temp = $sscanf(str, "%h", data);
                    `my_fatal(temp > 0, "THE LENGTH PARAM IS WRONG - too big")
                    transfers.push_back(data);
                    str = str.substr(13+(i+1)*8, str.len()-1);
                end//end for
                `my_fatal($sscanf(str, "%h", temp) <= 0, "THE LENGTH PARAM IS WRONG - too small")
            end//if
            axi_trx.init(mode, len, addr, transfers);
            if (to_random == 1) //to_random should be a part of the configuration file.
                axi_trx.my_random();
            else
                axi_trx.delay = const_delay; //const_delay should be a part of the configuration file.
            //finish_item sends the request item to the sequencer, which will
            //forward it to the driver.
            finish_item(axi_trx);
        end//while
        $fclose(file_p); //close the file
    endtask: body
endclass: axi_sequence
Should I do it with a different configuration file per test, or can I do it with values that are passed from the test to the agent through the config db?
And how can I pass a different path (for the file_p = $fopen()) for each test?
You shouldn't need a separate configuration file for each test. Ideally, you would just pass the configuration from the test level down into the env through the config_db (or through a separate configuration object for your agent).
When you create your sequence in your test (or virtual sequencer), you should be able to set your variables as needed. A sketch of this approach is shown below.
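For example, a minimal sketch with illustrative names and hierarchy paths (file_path, to_random and const_delay are assumptions, not taken from your code; `my_fatal is assumed to fatal when its first argument is false, as in your snippet). The test puts the values into the config db, and the sequence retrieves them through its sequencer:

// In the test's build_phase (the hierarchy path is illustrative):
uvm_config_db#(string)::set(this, "env.agent.sequencer", "file_path", "./sv/write_only.txt");
uvm_config_db#(bit)::set(this, "env.agent.sequencer", "to_random", 1);
uvm_config_db#(int)::set(this, "env.agent.sequencer", "const_delay", 10);

// In axi_sequence::body(), before opening the file
// (m_sequencer is the sequencer this sequence is running on):
string file_path;
bit to_random;
int const_delay;
`my_fatal(uvm_config_db#(string)::get(m_sequencer, "", "file_path", file_path), "file_path WAS NOT SET")
void'(uvm_config_db#(bit)::get(m_sequencer, "", "to_random", to_random));
void'(uvm_config_db#(int)::get(m_sequencer, "", "const_delay", const_delay));
file_p = $fopen(file_path, "r");

This way each test sets its own file path and knobs, and the sequence itself stays generic.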

CGI won't display variables through HTML in C (Eclipse)

I have used a FIFO pipe to read some data (weather data) into a char buffer. The console displays this variable correctly. However, when I try to display it through HTML on the CGI page, it simply does not display. Code below -
int main(void) {
    int fd;
    char *myfifo = "pressure.txt";
    char buff[BUFFER];
    long fTemp;

    //open and read message
    fd = open(myfifo, O_RDONLY);
    read(fd, buff, BUFFER);
    printf("Received: %s\n", buff);
    close(fd);

    printf("Content-type: text/html\n\n");
    puts("<HTML>");
    puts("<BODY>");
    printf("Data is: %s", buff);
    puts("</BODY>");
    puts("</HTML>");
    return EXIT_SUCCESS;
}
As you can see, in the console it displays correctly -
Received: 2014-08-13 16:54:57
25.0 DegC, 1018.7 mBar
Content-type: text/html
<HTML>
<BODY>
Data is 2014-08-13 16:54:57
25.0 DegC, 1018.7 mBar
</BODY>
</HTML>
logout
But on the CGI webpage it does not display the weather data; it only displays "Data is:".
Two important things when writing a CGI program: first, the program will be run by the webserver, which is normally started as a different user (the 'www' user, for example); second, it's possible that the program is started from within another directory, which can cause different behaviour if you don't specify the full path of a file you want to open.
Since both these things can cause problems, it can be helpful to add some debug information. Of course, it's always a good idea to check the return values of the functions you use.
To make it easier to display debug or error messages, I'd first move the following code up, so that all output that comes after it will be rendered by the browser:
printf("Content-type: text/html\r\n\r\n");
puts("<HTML>");
puts("<BODY>");
It may be useful to know what the webserver uses as the directory from which the program is started. The getcwd call can help here. Let's use a buffer of size BUFFER to store the result in, and check if it worked:
char curpath[BUFFER];
if (getcwd(curpath, BUFFER) == NULL)
    printf("Can't get current path: %s<BR>\n", strerror(errno));
else
    printf("Current path is: %s<BR>\n", curpath);
The getcwd function returns NULL in case of an error, and sets the value of errno to a number which indicates what went wrong. To convert this value to something readable, the strerror function is used. For example, if BUFFER was not large enough to store the path, you'll see something like:

Can't get current path: Numerical result out of range
The open call returns a negative number if it didn't work, and sets errno again. So, to check if this worked:
fd = open(myfifo, O_RDONLY);
if (fd < 0)
    printf("Can't open file: %s<BR>\n", strerror(errno));
In case the file can be found, but the webserver does not have permission to open it, you'll see:

Can't open file: Permission denied

If the program is started from another directory than you think, and it's unable to locate the file, you would get:

Can't open file: No such file or directory
Adding such debug info should make it more clear what's going on, and more importantly, what's going wrong.
To make sure the actual data is read without problems as well, the return value of the read function should be checked and appropriate actions should be taken. If read fails, a negative number is returned. To handle this:
numread = read(fd, buff, BUFFER);
if (numread < 0)
    printf("Error reading from file: %s<BR>\n", strerror(errno));
Any other value indicates success, and is the number of bytes that were read. If a full BUFFER bytes were read, it's not at all certain that the last byte in buff is a 0, which is needed for printf to know where the string ends. To make sure the string is in fact null-terminated, the last byte in buff is set to 0:
if (numread == BUFFER)
    buff[BUFFER-1] = 0;
Note that this actually overwrites one of the bytes that were read in this case.
If fewer bytes were read, it's still not certain that the last byte that was read was a 0, but now we can place our own 0 after the bytes that were read, so none of them are overwritten:
else
    buff[numread] = 0;
To make everything work, you may need the following additional include files:
#include <unistd.h>
#include <string.h>
#include <errno.h>
The complete code of what I described is shown below:
// Includes needed for the code below (fcntl.h for open, unistd.h for
// read/close/getcwd, string.h and errno.h for the error reporting).
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

#define BUFFER 1024 /* assumed value; use whatever your original code defines */

int main(void)
{
    int fd, numread;
    char *myfifo = "pressure.txt";
    char buff[BUFFER];
    char curpath[BUFFER];
    long fTemp;

    // Let's make sure all text output (even error/debug messages)
    // will be visible in the web page
    printf("Content-type: text/html\r\n\r\n");
    puts("<HTML>");
    puts("<BODY>");

    // Some debug info: print the current path
    if (getcwd(curpath, BUFFER) == NULL)
        printf("Can't get current path: %s<BR>\n", strerror(errno));
    else
        printf("Current path is: %s<BR>\n", curpath);

    // Open the file
    fd = open(myfifo, O_RDONLY);
    if (fd < 0)
    {
        // An error occurred, let's see what it is
        printf("Can't open file: %s<BR>\n", strerror(errno));
    }
    else
    {
        // Try to read 'BUFFER' bytes from the file
        numread = read(fd, buff, BUFFER);
        if (numread < 0)
        {
            printf("Error reading from file: %s<BR>\n", strerror(errno));
        }
        else
        {
            if (numread == BUFFER)
            {
                // Make sure the last byte in 'buff' is 0, so that the
                // string is null-terminated
                buff[BUFFER-1] = 0;
            }
            else
            {
                // Fewer bytes were read, make sure a 0 is placed after
                // them
                buff[numread] = 0;
            }
            printf("Data is: %s<BR>\n", buff);
        }
        close(fd);
    }

    puts("</BODY>");
    puts("</HTML>");
    return EXIT_SUCCESS;
}

Video4Linux2: get/set properties of images encoded by the camera

I am trying to set properties of the captured image in Linux. For example, format, width and height, which can be achieved with:

VIDIOC_S_FMT/VIDIOC_G_FMT + struct v4l2_format fmt;

But I am blocked on getting/setting more detailed parameters, like the H.264 key-frame period.
I found there are APIs to reach that goal: v4l2_ext_controls, v4l2_ext_control and VIDIOC_G_EXT_CTRLS/VIDIOC_S_EXT_CTRLS. I have tried them, but they did not work in my example code.
My code is like this:
struct v4l2_ext_control extCtrl;
memset(&extCtrl, 0, sizeof(struct v4l2_ext_control));
extCtrl.id = V4L2_CID_MPEG_VIDEO_H264_I_PERIOD;
extCtrl.size = 0;
extCtrl.value = 2;

struct v4l2_ext_controls extCtrls;
memset(&extCtrls, 0, sizeof(struct v4l2_ext_controls)); // zero the reserved fields too
extCtrls.controls = &extCtrl;
extCtrls.count = 1;
extCtrls.ctrl_class = V4L2_CTRL_CLASS_MPEG;

ret = ioctl(fd, VIDIOC_S_EXT_CTRLS, &extCtrls);
if (ret < 0) // ioctl returns -1 on failure, so test for a negative value
{
    printf("VIDIOC_S_EXT_CTRLS setting (%s)\n", strerror(errno));
    return -3;
}/*if*/

ret = ioctl(fd, VIDIOC_G_EXT_CTRLS, &extCtrls);
if (ret < 0)
{
    printf("VIDIOC_G_EXT_CTRLS setting (%s)\n", strerror(errno));
    return -4;
}/*if*/
printf("extCtrl.value = %d\n", extCtrl.value);
That seems to work: the reported key-frame period is 2 (extCtrl.value).
But when I use
ffplay -skip_frame nokey -i saved_raw_h264
the key-frame interval is obviously much greater than 2.
Can anyone help me?
By the way: the Logitech C920 is the only camera I know of on the consumer market that supports H.264 output. Does anyone know of other cameras supporting H.264?
Assuming you are setting the parameters correctly, it's very possible that the Logitech C920 Linux driver is ignoring some, if not many, of the control parameters you are passing in via V4L2. Do you have the driver source for the C920? Or is it using a generic Linux USB camera driver? You could at least see which V4L2 controls are supported by the driver, as in the sketch below.
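For instance, a minimal sketch in C (assuming fd is your open device handle) that enumerates the controls the driver actually exposes, using the standard V4L2_CTRL_FLAG_NEXT_CTRL iteration:

struct v4l2_queryctrl qctrl;
memset(&qctrl, 0, sizeof(qctrl));
qctrl.id = V4L2_CTRL_FLAG_NEXT_CTRL;
while (ioctl(fd, VIDIOC_QUERYCTRL, &qctrl) == 0) {
    if (!(qctrl.flags & V4L2_CTRL_FLAG_DISABLED))
        printf("control 0x%08x: %s\n", qctrl.id, qctrl.name);
    // Move on to the next control the driver supports
    qctrl.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
}

If V4L2_CID_MPEG_VIDEO_H264_I_PERIOD does not appear in that list, the driver simply does not implement it.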
edit:
Have you seen these threads which talk about adding C920 support to gstreamer?
http://sourceforge.net/p/linux-uvc/mailman/linux-uvc-devel/thread/505D0DAE.7020907#collabora.co.uk/
http://kakaroto.homelinux.net/2012/09/uvc-h264-encoding-cameras-support-in-gstreamer/

AS3 / AIR readObject() from socket - How do you check all data has been received?

If you write a simple object to a socket:
var o:Object = new Object();
o.type = e.type;
o.params = e.params;
_socket.writeObject(o);
_socket.flush();
Then on the client do you simply use:
private function onData(e:ProgressEvent):void
{
    var o:Object = _clientSocket.readObject();
}
Or do you have to implement some way of checking that all of the data has been received before calling .readObject()?
There are two approaches:
If you're confident that your object will fit into one packet, you can do something like:

var fromServer:ByteArray = new ByteArray();
while ( socket.bytesAvailable )
    socket.readBytes( fromServer );
fromServer.position = 0;
var myObj:* = fromServer.readObject();
If you have the possibility of multiple-packet messages, then a common practice is to prepend each message with its length. Something like (pseudo-code):
var fromServer:ByteArray = new ByteArray();
var msgLen:int = 0;

while ( socket.bytesAvailable > 0 )
{
    // if we don't have a message length, read it from the stream
    if ( msgLen == 0 )
        msgLen = socket.readInt();

    // if our message is too big for one push
    var toRead:int = ( msgLen > socket.bytesAvailable ) ? socket.bytesAvailable : msgLen;
    msgLen -= toRead; // msgLen will now be 0 if it's a full message

    // read the number of bytes that we want.
    // fromServer.length will be 0 if it's a new message, or if we're adding more
    // to a previous message, it'll be appended to the end
    socket.readBytes( fromServer, fromServer.length, toRead );

    // if we still have some message to come, just break
    if ( msgLen != 0 )
        break;

    // it's a full message, create your object, then clear fromServer
}
Reading from your socket like this means that multi-packet messages will be read properly, and that you won't miss any messages when two small messages are sent almost simultaneously (otherwise the first read would treat them both as one message, thereby missing the second one).
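For completeness, a minimal sketch of the matching sender side (assuming the _socket from the question; the payload object is illustrative): serialize the object into a temporary ByteArray first, so its length can be written before the payload:

var o:Object = { type: "example", params: null }; // illustrative payload
var payload:ByteArray = new ByteArray();
payload.writeObject(o);           // AMF-serialize into the buffer
_socket.writeInt(payload.length); // length prefix read by readInt() above
_socket.writeBytes(payload);      // then the serialized object itself
_socket.flush();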
Rule #1 when dealing with TCP: it is an octet-stream transfer protocol. You may never ever assume anything about how many octets (8-bit values, commonly called bytes) you get in one go; always write code that can deal with any amount, both too few and too many. There is no guarantee that a single write will not be split across multiple reads, and no guarantee that a single read will come from a single write.
The way I handled it was to have the server send a callback telling the client that the null byte was received. The null byte is appended to the end of the data string you are sending to the server:

String.fromCharCode(0)

Also, in your case you are doing

_socket.writeObject(o);

You should be sending a string, not an object. So, like this:

_socket.writeUTFBytes( "Hello World" + String.fromCharCode(0) );
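A minimal sketch of the receiving side under this scheme (assuming the same _clientSocket handler as the question): accumulate bytes until the null terminator arrives, then decode the complete string:

var acc:ByteArray = new ByteArray();

private function onData(e:ProgressEvent):void
{
    while (_clientSocket.bytesAvailable)
    {
        var b:int = _clientSocket.readByte();
        if (b == 0)
        {
            // Terminator seen: the accumulated bytes form one full message
            acc.position = 0;
            var msg:String = acc.readUTFBytes(acc.length);
            trace("Received:", msg);
            acc.clear();
        }
        else
        {
            acc.writeByte(b);
        }
    }
}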
NOTE *************
And one thing that most first-time socket creators overlook is the fact that the first request from the client to the server, over the port that the socket is connected on, is a request for the cross-domain policy file (crossdomain.xml).
If you only wish to send objects, the simplest solution is to send an int (the size) before every object. It's not important to send the exact size; you can send a bit less. In my case, I sent a BitmapData plus the width and height of the object; the BitmapData dominates the size, so it's okay to count only that and ignore the rest.
var toRead:int = 0;

protected function onSocketData(event:ProgressEvent):void
{
    if (toRead == 0)
    {
        toRead = socket.readInt();
    }
    if (socket.bytesAvailable >= toRead) // '>=': the object may be exactly toRead bytes
    {
        var received:Object = socket.readObject();
        /* do stuff with the received object */
        toRead = 0;
    }
}