Windows Phone 8.1 Media Foundation H264 max resolution

I'm trying to encode a video on Windows Phone 8.1 using the Media Foundation library and the sink writer.
I have been able to achieve this by setting MFVideoFormat_H264 as the MF_MT_SUBTYPE of my media output and using resolutions such as 720p and 480p.
But when I change the resolution to 1920x1080 (or 1920x1088) I get an "Incorrect Parameter" error, so I guess my maximum resolution for the H.264 codec is 1280x720.
I tried changing the codec to HEVC, MPEG-2, etc., but no luck.
This is the C++ code where I set up the output type and add it to the stream:
// Setup the output video type
ComPtr<IMFMediaType> spvideoTypeOut;
CHK(MFCreateMediaType(&spvideoTypeOut));
CHK(spvideoTypeOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
GUID _vformat = MFVideoFormat_H264;
CHK(spvideoTypeOut->SetGUID(MF_MT_SUBTYPE, _vformat));
CHK(spvideoTypeOut->SetUINT32(MF_MT_AVG_BITRATE, _bitrate));
CHK(spvideoTypeOut->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
CHK(MFSetAttributeSize(spvideoTypeOut.Get(), MF_MT_FRAME_SIZE, _width, _height));
CHK(MFSetAttributeRatio(spvideoTypeOut.Get(), MF_MT_FRAME_RATE, framerate, 1));
CHK(MFSetAttributeRatio(spvideoTypeOut.Get(), MF_MT_PIXEL_ASPECT_RATIO, ASPECT_NUM, ASPECT_DENOM));
CHK(_spSinkWriter->AddStream(spvideoTypeOut.Get(), &_streamIndex));
And this is where I set up the input type:
// Setup the input video type
ComPtr<IMFMediaType> spvideoTypeIn;
CHK(MFCreateMediaType(&spvideoTypeIn));
CHK(spvideoTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
CHK(spvideoTypeIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32));
CHK(spvideoTypeIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
CHK(MFSetAttributeSize(spvideoTypeIn.Get(), MF_MT_FRAME_SIZE, _width, _height));
CHK(MFSetAttributeRatio(spvideoTypeIn.Get(), MF_MT_FRAME_RATE, framerate, 1));
CHK(MFSetAttributeRatio(spvideoTypeIn.Get(), MF_MT_PIXEL_ASPECT_RATIO, ASPECT_NUM, ASPECT_DENOM));
CHK(_spSinkWriter->SetInputMediaType(_streamIndex, spvideoTypeIn.Get(), nullptr));
CHK(_spSinkWriter->BeginWriting());
To add samples to the sink writer I use this function, and this is where the exception occurs:
void PictureWriter::AddFrame(const Platform::Array<uint8>^ videoFrameBuffer, int imageWidth, int imageHeight)
{
    // Create a media sample
    ComPtr<IMFSample> spSample;
    CHK(MFCreateSample(&spSample));
    CHK(spSample->SetSampleDuration(_duration));
    CHK(spSample->SetSampleTime(_hnsSampleTime));
    _hnsSampleTime += _duration;

    // Add a media buffer
    ComPtr<IMFMediaBuffer> spBuffer;
    CHK(MFCreateMemoryBuffer(_bufferLength, &spBuffer));
    CHK(spBuffer->SetCurrentLength(_bufferLength));
    CHK(spSample->AddBuffer(spBuffer.Get()));

    // Copy the picture into the buffer; the source is read from its last row
    // upward (negative stride), so the image is flipped vertically into the destination
    unsigned char *pbBuffer = nullptr;
    CHK(spBuffer->Lock(&pbBuffer, nullptr, nullptr));
    BYTE* buffer = (BYTE*)videoFrameBuffer->begin() + 4 * imageWidth * (imageHeight - 1);
    CHK(MFCopyImage(pbBuffer + 4 * _width * (_height - imageHeight),
        4 * _width, buffer, -4 * imageWidth, 4 * imageWidth, imageHeight));
    CHK(spBuffer->Unlock());

    // Write the media sample
    CHK(_spSinkWriter->WriteSample(_streamIndex, spSample.Get()));
}
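For completeness, the members that AddFrame relies on (_bufferLength, _duration, _hnsSampleTime) are not shown being initialized. For an RGB32 input they would presumably be set up along these lines; this is only a sketch under that assumption, with a hypothetical InitTiming helper, not the asker's actual code:
// Hypothetical initialization of the members AddFrame relies on.
// RGB32 is 4 bytes per pixel; Media Foundation sample times and durations
// are expressed in 100-nanosecond units.
void PictureWriter::InitTiming(UINT32 width, UINT32 height, UINT32 framerate)
{
    _width = width;
    _height = height;
    _bufferLength = 4 * width * height;        // bytes in one RGB32 frame
    _duration = 10 * 1000 * 1000 / framerate;  // one frame, in 100-ns units
    _hnsSampleTime = 0;                        // first sample starts at t = 0
}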
Why do you think I get the exception and how can I fix this?
Thank you.

I found the solution by looking up default bitrates for each resolution:
1080p works with a bitrate of 5.0 Mbps,
1600x900 works with a bitrate of 2.5 Mbps,
720p works with a bitrate of 1.25 Mbps...
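In code, this just means deriving _bitrate from the target frame size before building the output type. A minimal sketch (the helper name is mine; the values are the ones listed above):
// Hypothetical helper: pick an H.264 bitrate the Windows Phone 8.1 encoder
// accepts for the given frame width (values from the answer above).
static UINT32 DefaultBitrateForResolution(UINT32 width)
{
    if (width >= 1920) return 5000000;   // 1080p: 5.0 Mbps
    if (width >= 1600) return 2500000;   // 1600x900: 2.5 Mbps
    return 1250000;                      // 720p and below: 1.25 Mbps
}
// Usage, before AddStream:
// _bitrate = DefaultBitrateForResolution(_width);
// CHK(spvideoTypeOut->SetUINT32(MF_MT_AVG_BITRATE, _bitrate));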

Related

Nvidia NVDEC - copy decoded frame to D3D11 NV12 texture

I'm trying to copy the NV12 NVDEC-decoded buffer directly into an NV12 D3D11 texture. No luck so far. What I've managed to do is a two-pass copy using 2 D3D11 textures (luma + chroma), 2 cuGraphicsMapResources, 2 cuGraphicsSubResourceGetMappedArray, 2 CUDA_MEMCPY2D and a pixel shader to merge it all. I've found no way to perform a single-pass copy, and no response from the NVIDIA forum so far.
I've found this old question facing a very similar problem, but there was no solution there either.
Perhaps you need something like this. This code snippet is taken from the FFmpeg project (open source), from the libavutil/hwcontext_cuda.c file:
for (i = 0; i < FF_ARRAY_ELEMS(src->data) && src->data[i]; i++) {
    CUDA_MEMCPY2D cpy = {
        .srcMemoryType = CU_MEMORYTYPE_HOST,
        .dstMemoryType = CU_MEMORYTYPE_DEVICE,
        .srcHost       = src->data[i],
        .dstDevice     = (CUdeviceptr)dst->data[i],
        .srcPitch      = src->linesize[i],
        .dstPitch      = dst->linesize[i],
        .WidthInBytes  = FFMIN(src->linesize[i], dst->linesize[i]),
        .Height        = src->height >> (i ? priv->shift_height : 0),
    };

    ret = CHECK_CU(cu->cuMemcpy2DAsync(&cpy, hwctx->stream));
    if (ret < 0)
        goto exit;
}
I'm not sure how this can be done with NVIDIA/CUDA, as I'm not familiar with it. But this is how I managed to do it with Direct3D (D3D11VA), which might help you translate it to your situation:
1. (NV12 NVDEC device).CopySubresourceRegion(src NV12 NVDEC texture, srcSubresourceArrayIndex, dst NV12 shared texture)
2. (Get a shared handle for the newly created NV12 shared texture)
3. (Your device).OpenSharedResource(NV12 shared handle)
4. (Prepare VideoProcessorInputView, VideoProcessorOutputView and streams)
5. (Your device).VideoProcessorBlt(src NV12 shared texture, dst your RGBA/BGRA render texture)
This whole process is video acceleration and happens entirely on the GPU (no CPU/RAM involved). You should also make sure the GPU adapter supports it.
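If it helps, step 1 (the subresource copy) in plain D3D11 C++ might look roughly like the sketch below. The texture and index names are illustrative, and it assumes the shared NV12 texture was already created with the shared-resource flag on the decoder device:
#include <d3d11.h>

// Copy one decoded slice of the decoder's NV12 texture array into a shared,
// single-slice NV12 texture (luma and chroma planes are copied together).
void CopyDecodedFrameToShared(ID3D11DeviceContext* decoderContext,
                              ID3D11Texture2D* decoderTextureArray,
                              UINT decodedSliceIndex,
                              ID3D11Texture2D* sharedNV12Texture)
{
    decoderContext->CopySubresourceRegion(
        sharedNV12Texture, 0,        // destination subresource
        0, 0, 0,                     // destination x, y, z
        decoderTextureArray,
        decodedSliceIndex,           // source subresource = decoded array slice
        nullptr);                    // nullptr = copy the entire subresource
}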

Cuda, two streams created by a NPP function

I'm working on an image processing project with CUDA 7.5 and a GeForce GTX 650 Ti. I decided to use two streams: one where I apply the algorithms responsible for enhancing the image, and another where I apply an algorithm that is independent of the rest of the processing.
I wrote an example to show my problem. In this example I created a stream and then used nppSetStream.
I invoked the function nppiThreshold_LTValGTVal_32f_C1R, but two streams are used when the function is executed.
Here is a code example:
#include <npp.h>
#include <cuda_runtime.h>
#include <cuda_profiler_api.h>
int main(void) {
    int srcWidth = 1344;
    int srcHeight = 1344;
    int paddStride = 0;

    float* srcArrayDevice;
    float* srcArrayDevice2;
    unsigned char* dstArrayDevice;

    int status = cudaMalloc((void**)&srcArrayDevice, srcWidth * srcHeight * 4);
    status = cudaMalloc((void**)&srcArrayDevice2, srcWidth * srcHeight * 4);
    status = cudaMalloc((void**)&dstArrayDevice, srcWidth * srcHeight);

    cudaStream_t testStream;
    cudaStreamCreateWithFlags(&testStream, cudaStreamNonBlocking);
    nppSetStream(testStream);

    NppiSize roiSize = { srcWidth, srcHeight };
    //status = cudaMemcpyAsync(srcArrayDevice, &srcArrayHost, srcWidth*srcHeight*4, cudaMemcpyHostToDevice, testStream);

    int yRect = 100;
    int xRect = 60;
    float thrL = 50;
    float thrH = 1500;
    NppiSize sz = { 200, 400 };

    for (int i = 0; i < 10; i++) {
        int status3 = nppiThreshold_LTValGTVal_32f_C1R(srcArrayDevice + (srcWidth*yRect + xRect)
            , srcWidth * 4
            , srcArrayDevice2 + (srcWidth*yRect + xRect)
            , srcWidth * 4
            , sz
            , thrL
            , thrL
            , thrH
            , thrH);
    }

    int length = (srcWidth + paddStride)*srcHeight;
    int status6 = nppiScale_32f8u_C1R(srcArrayDevice, srcWidth * 4, dstArrayDevice + paddStride, srcWidth + paddStride, roiSize, 0, 65535);
    //int status7 = cudaMemcpyAsync(dstPinPtr, dstTest, length, cudaMemcpyDeviceToHost, testStream);

    cudaFree(srcArrayDevice);
    cudaFree(srcArrayDevice2);
    cudaFree(dstArrayDevice);

    cudaStreamDestroy(testStream);
    cudaProfilerStop();
    return 0;
}
This is what I got from the NVIDIA Visual Profiler (screenshot: image_width1344).
Why are there two streams if I set only one stream? This causes errors in my original project, so I'm considering switching to a single stream.
I noticed that this behaviour depends on the size of the image; if srcWidth and srcHeight are set to 1500, the result is this (screenshot: image_width1500).
Why does changing the size of the image produce another stream?
Why are there two streams if I set only one stream?
It appears that nppiThreshold_LTValGTVal_32f_C1R creates its own internal stream for executing one of the kernels it uses. The other is launched either into the default stream, or the stream you specified with nppSetStream.
I think this is really a documentation oversight/user expectation problem. nppSetStream is doing what it says, but nowhere is it stated that the library is limited to using one stream. It probably should be more explicit in the documentation about how many streams the library uses internally, and how nppSetStream interacts with the library. If this is a problem for your application, I suggest you raise a bug report with NVIDIA.
Why does changing the size of the image produce another stream?
My guess would be that there are some performance heuristics at work, and whether the second stream is used depends on the image size. The library is closed source, however, so I can't say for sure.
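If the second stream breaks ordering assumptions in your project, one blunt workaround (my suggestion, not anything the NPP documentation prescribes) is to synchronize the whole device after each NPP call, so that later work cannot overlap with either stream. A minimal sketch:
#include <npp.h>
#include <cuda_runtime.h>

// Run the threshold on the chosen stream, then wait for all device work,
// including anything NPP launched on its own internal stream.
NppStatus thresholdSerialized(const Npp32f* src, Npp32f* dst, int stepBytes,
                              NppiSize roi, Npp32f thrL, Npp32f thrH,
                              cudaStream_t stream)
{
    nppSetStream(stream);
    NppStatus status = nppiThreshold_LTValGTVal_32f_C1R(src, stepBytes, dst, stepBytes,
                                                        roi, thrL, thrL, thrH, thrH);
    cudaDeviceSynchronize();  // serializes the user stream and NPP's internal stream
    return status;
}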

How can I record sound with 16 bits per sample (16 bit depth)?

I'm trying to record PCM sound from Flash (using the Microphone class). I use the org.bytearray.micrecorder.MicRecorder helper class.
In the Microphone class I cannot find a property like bitDepth or bitsPerSample.
I always get 32 bits.
Is it possible to do this?
UPDATE: The asker John812 was able to solve this by using bit16_bytes.writeShort( data.readFloat() * 32767 ); see the comments below for context.
METHOD #2: Based on my experience with using the loadPCMFromByteArray method.
I have something you could try, but I've only used it with an actual 32-bit WAVE file played back via loadPCMFromByteArray.
The AS3 Microphone class records 32 bits. You have to write the conversion of samples to a different bit depth yourself. I have no idea how many samples you are processing, but the general code below shows you how to convert. Note: * 512 means use your actual sample amount (for example * 4096 or * 8192). If you get the numbers wrong there will be hiss/distortion, so either experiment starting from small values or provide the full details in your question for a more helpful edit/answer.
CONVERT: Assuming your recorded byteArray is called data
public var bit16_bytes : ByteArray; // will hold the 16-bit version

public function convert_to16Bit () : void
{
    bit16_bytes = new ByteArray();
    data.position = 0;

    // if you get noise/distortion try either: 256, 512, 1024, 2048, 4096 or 8192
    while (data.bytesAvailable >= 4)
    {
        bit16_bytes.writeShort( data.readInt() * 512 ); // multiply by samples amount
    }

    data = new ByteArray();         // recycle for re-use
    bit16_bytes.position = 0;       // reset or else E-O-File error
    bit16_bytes.readBytes( data );  // copy the 16-bit data back into the data byte-array
}
To run the above function whenever you're ready, just add the line convert_to16Bit(); inside whatever function deals with your "recording complete" situation.

Multiple Resolution Support in Cocos2d-x v2.2.5 in Portrait mode

I am working on a game in cocos2d-x that runs in portrait mode.
For a long time now I've been trying to properly achieve multi-resolution support in cocos2d-x, but I have failed. I followed a great tutorial on the forum, but it wasn't enough; I also searched a lot but couldn't find a solution.
I also tried the different resolution policies that are available in cocos2d-x.
I went through all the following links and tutorials.
Using these links I could cover all iOS resolutions, but not all Android screens.
http://becomingindiedev.blogspot.in/2014/05/multi-resolution-support-in-ios-with.html
http://discuss.cocos2d-x.org/t/porting-ios-game-to-android-multi-resolution-suppor/5260/5
https://www.youtube.com/watch?v=CH9Ct4R0nBM
https://github.com/SonarSystems/Cocos2d-x-v3-C---Tutorial-4---Multi-Resolution-Support
I even tried newer versions of cocos2d-x, but they don't provide anything that supports both iOS and Android screens either.
I use the following bit of code in my AppDelegate:
void AppDelegate::multiresolutionSupport()
{
    auto director = Director::getInstance();
    auto glview = director->getOpenGLView();

    cocos2d::Size designSize = cocos2d::Size(320, 480);
    cocos2d::Size resourceSize = cocos2d::Size(320, 480);
    cocos2d::Size screenSize = glview->getFrameSize();

    float margin1 = (320 + 640) / 2;
    float margin2 = (768 + 1536) / 2;

    if (screenSize.width < margin1) {
        FileUtils::getInstance()->addSearchResolutionsOrder("SD");
    } else if (480 <= screenSize.width && screenSize.width < margin2) {
        FileUtils::getInstance()->addSearchResolutionsOrder("HD");
        designSize = cocos2d::Size(screenSize.width / 2, screenSize.height / 2);
    } else {
        FileUtils::getInstance()->addSearchResolutionsOrder("HDR");
        designSize = cocos2d::Size(screenSize.width / 4, screenSize.height / 4);
    }

    resourceSize = screenSize;

    director->setContentScaleFactor(resourceSize.width / designSize.width);
    glview->setDesignResolutionSize(designSize.width, designSize.height, ResolutionPolicy::NO_BORDER);
}
Call it before loading any assets and you should have it working properly. I call it right after my glview is created in AppDelegate::applicationDidFinishLaunching().
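For reference, a minimal sketch of the call site; the view creation is the usual cocos2d-x 3.x boilerplate, and the scene name is just a placeholder for whatever your first scene is:
bool AppDelegate::applicationDidFinishLaunching()
{
    auto director = Director::getInstance();
    auto glview = director->getOpenGLView();
    if (!glview) {
        glview = GLViewImpl::create("MyGame");   // window title is arbitrary
        director->setOpenGLView(glview);
    }

    multiresolutionSupport();                    // before any assets are loaded

    director->runWithScene(MainScene::createScene());  // placeholder first scene
    return true;
}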
How it works:
- it uses a bunch of magic numbers to determine (roughly) which texture resolution should be used: SD (x1), HD (x2) or HDR (x4), and then adjusts the design size accordingly by dividing it by the content scale factor
- it adds the appropriate search paths to FileUtils
- it sets the design resolution and content scale factor for the engine
We are the team who made the multi-resolution tutorial in the GitHub link; it does support Android, but the folders are just named using iOS naming conventions, that's all.
Hope this helps.
Regards,
Sonar Systems
It's working for me this way for portrait mode.
Below the header:
typedef struct tagResource
{
    cocos2d::CCSize size;
    char directory[100];
} Resource;

static Resource smallResource  = { cocos2d::CCSizeMake(640, 960),   "iPhone" };
static Resource mediumResource = { cocos2d::CCSizeMake(768, 1024),  "iPad" };
static Resource largeResource  = { cocos2d::CCSizeMake(1536, 2048), "iPadhd" };
static cocos2d::CCSize designResolutionSize = cocos2d::CCSizeMake(768, 1024);
In the applicationDidFinishLaunching() method:
// initialize director
CCDirector* pDirector = CCDirector::sharedDirector();
CCEGLView* pEGLView = CCEGLView::sharedOpenGLView();
pDirector->setOpenGLView(pEGLView);

CCSize frameSize = pEGLView->getFrameSize();
std::vector<std::string> searchPaths;

// Set the design resolution
pEGLView->setDesignResolutionSize(designResolutionSize.width, designResolutionSize.height, kResolutionExactFit);

if (frameSize.height <= smallResource.size.height)
{
    searchPaths.push_back(mediumResource.directory);
    CCFileUtils::sharedFileUtils()->setSearchPaths(searchPaths);
    pDirector->setContentScaleFactor(mediumResource.size.height / designResolutionSize.height);
}
else if (frameSize.height <= mediumResource.size.height)
{
    searchPaths.push_back(mediumResource.directory);
    CCFileUtils::sharedFileUtils()->setSearchPaths(searchPaths);
    pDirector->setContentScaleFactor(mediumResource.size.height / designResolutionSize.height);
}
else
{
    searchPaths.push_back(largeResource.directory);
    CCFileUtils::sharedFileUtils()->setSearchPaths(searchPaths);
    pDirector->setContentScaleFactor(largeResource.size.height / designResolutionSize.height);
}

Incorrect buffer length gstreamer

I have the following function that processes a buffer object containing a video frame supplied by GStreamer:
def __handle_videoframe(self, appsink):
    """
    Callback method for handling a video frame

    Arguments:
    appsink -- the sink to which gst supplies the frame (not used)
    """
    buffer = self._videosink.emit('pull-buffer')
    (w, h) = buffer.get_caps()[0]["width"], buffer.get_caps()[0]["height"]
    reqBufferLength = w * h * 3  # Required buffer length for a raw RGB image of these dimensions
    print "Buffer length: " + str(len(buffer.data))
    print "Needed length: " + str(reqBufferLength)
    img = pygame.image.frombuffer(buffer.data, self.vidsize, "RGB")
    self.screen.blit(img, self.vidPos)
    pygame.display.flip()
When running this code, however, pygame crashes because the supplied buffer is larger than required, and these sizes need to match. I know this is probably caused by faulty encoding of the movie being played (most movies do run fine), but is there a way to account for this contingency? Is there a way to resize the buffer on the fly to the correct size? I have tried simply cutting off the tail of the buffer at the required length; the movie then plays, but the output is corrupted.
OK, a better solution was to use buffer proxies. They are less fussy about the length of the buffer.
img_sfc = pygame.Surface(video_dimensions, pygame.SWSURFACE, 24, (255, 65280, 16711680, 0))
img_buffer = img_sfc.get_buffer()
Then for each new frame:
img_buffer.write(buffer.data, 0)
pygame.display.get_surface().blit(img_sfc.copy(), vid_pos)
And voilà, even incorrectly formatted buffers appear on screen without problems.