How to get the NV12 file in the Nvidia CUDA 5.0 SDK's cudaDecodeGL project?

Lately, I've been reading the cudaDecodeGL project in the Nvidia CUDA 5.0 SDK. This project decodes an MPEG-2 file to NV12 frames, the NV12 frames are then converted to ARGB in a kernel function, and finally the ARGB data is rendered and displayed in an OpenGL window. However, the intermediate NV12 data is never written out, and I want to obtain the NV12 file.
I would appreciate it if someone could tell me what to do.

Referring to the whitepaper:
Post processing on a frame is done by mapping the frame through cudaPostProcessFrame(). This returns a pointer to a NV12 decoded frame.
This function is contained (and used) in the source file videoDecodeGL.cpp, which is part of the sample project.
There is only one actual use (function call) of this function: it is called from the copyDecodedFrameToTexture function. The decoded frame in this function is what you want. If you look through this function prior to the call to cudaPostProcessFrame, you'll see the following code:
// If streams are enabled, we can perform the readback to the host while the kernel is executing
if (g_bReadback && g_ReadbackSID)
{
    CUresult result = cuMemcpyDtoHAsync(g_bFrameData[active_field], pDecodedFrame[active_field], (nDecodedPitch * nHeight * 3 / 2), g_ReadbackSID);
This shows how/where/when to grab the decoded frame back to the host if you want to. At that point you will have to queue up the frames and save to a file if that is what you want to do.
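If it helps, here is a minimal sketch (not part of the SDK sample) of what saving that buffer could look like. It assumes g_bFrameData[active_field] holds the pitched NV12 frame on the host after the asynchronous copy has completed (e.g. after cuStreamSynchronize(g_ReadbackSID)), that nHeight and nDecodedPitch are the values used in the cuMemcpyDtoHAsync call above, and that nWidth is the frame width in pixels; the helper name appendNV12Frame is made up for this example. It strips the row pitch so the file contains tightly packed NV12:
#include <cstdio>

// Append one pitched NV12 frame (Y plane followed by the interleaved UV plane)
// to an already opened file, writing only the nWidth useful bytes of each row.
void appendNV12Frame(FILE *fp, const unsigned char *frameData,
                     int nWidth, int nHeight, int nDecodedPitch)
{
    // Luma (Y) plane: nHeight rows
    for (int y = 0; y < nHeight; ++y)
        fwrite(frameData + y * nDecodedPitch, 1, nWidth, fp);

    // Chroma (interleaved UV) plane: nHeight / 2 rows
    const unsigned char *uvPlane = frameData + nDecodedPitch * nHeight;
    for (int y = 0; y < nHeight / 2; ++y)
        fwrite(uvPlane + y * nDecodedPitch, 1, nWidth, fp);
}
A raw file written this way can then be checked with something like ffplay -f rawvideo -pixel_format nv12 -video_size WxH out.nv12 (with WxH replaced by the actual frame size).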

Related

Can't write into iomem region in qemu using gdb

I'm trying to add a new device in QEMU.
In the respective CPU file, I used sysbus_mmio_map to set the base address:
sysbus_mmio_map(SYS_BUS_DEVICE(&s->brif), 0, BASE_ADDRESS);
In the newly created device file,
memory_region_init_io(&s->iomem, obj, &ops, s, "brif", SIZE);
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
The ops structure has the corresponding read and write handlers.
My read handler is getting called when I access the IO memory region using gdb, but my write handler is not getting called when I write to the IO memory region using gdb.
What am I missing?
Update: I do get the write handler calls if I write to the IO memory region from code running inside the guest; the problem occurs only when I access it from gdb.
I believe it's just a bug. See this bug report (with a patch included).

How to use standard output from Flash AIR?

I want to make a command-line tool with Flash AIR, but there isn't any AS3 API for writing content to standard output.
So I tried to use an ANE to solve my problem (by making a Windows ANE and using C's printf function to output the content), but it doesn't work.
Is there any way to use standard output from Flash AIR, or to make a command-line tool with Flash AIR?
The code of the DLL, written in C++, is:
#include <stdio.h>
#include "FlashRuntimeExtensions.h"

FREObject add(FREContext ctx, void* functionData, uint32_t argc, FREObject argv[])
{
    int32_t x, y;
    FREGetObjectAsInt32(argv[0], &x);
    FREGetObjectAsInt32(argv[1], &y);
    int32_t result = x + y;
    FREObject resObj;
    FRENewObjectFromInt32(result, &resObj);
    // I want to use "printf" to print content to the console
    printf("print by dll: the result is %d\n", result);
    return resObj;
}
but there is not any api of AS3 to output content to standard output.
Only a running OS process can give back standard output (stdout in C).
The best you can do is create an app that looks like a command-line tool but in reality just runs and passes data to the actual (native) OS command-line tool. Meaning: in your tool you capture the user's command into a string and then start a NativeProcess with that (parsed) string as the process arguments.
Example: in your app the user types calc, and your AIR app runs C:\Windows\System32\calc.exe.
Anyways, on to your real question...
I try to use C's printf function to output content, but it doesn't work.
If you mean you made some test.exe with C, and when you get AIR to run it you want to capture the printf output, then you can try:
process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, CTest_OutputData);
or
process.addEventListener(ProgressEvent.STANDARD_ERROR_DATA, CTest_ErrorData);
To catch the output (it will be sent as bytes), make sure you have a public ByteArray and String created. Here's an example for STANDARD_ERROR_DATA (it's likely the output goes here too, since you claim that STANDARD_OUTPUT_DATA is not working).
The code inside the function below works the same whichever type of ProgressEvent you choose; just read from the "right" stream. temp_BA is the ByteArray variable you set up earlier.
public function CTest_ErrorData(event:ProgressEvent):void
{
    // We listen for STANDARD_ERROR_DATA, so read from standardError;
    // use process.standardOutput instead when handling STANDARD_OUTPUT_DATA.
    process.standardError.readBytes(temp_BA, temp_BA.length, process.standardError.bytesAvailable);
    if (temp_BA.length > 0)
    {
        temp_String = temp_BA.readUTFBytes(temp_BA.length);
        trace("temp_String is : " + temp_String); // if you want to check it
    }
}
Final TIP: You can get traces inside the Flash IDE by disabling "desktop" and keeping "extended desktop" ticked. Both must be ticked later when you make the installable app.

Making a custom reporter for JSCS results in Gulp4

Please correct me where I'm wrong (still learning Gulp, streams, etc.). I'd like to create a custom reporter for my gulp-jscs results. For example, let's say I have 3 files in my gulp.src() stream. To my knowledge, each is piped one at a time through jscs, which attaches a .jscs object onto the file with its results; one such property in that object is .errorCount.
What I'd like to do is have a variable I create, e.g. maxErrors, which I set to, say, 5. Since we're processing 3 files, let's say the first file passes with 0 errors, but the next has 3 errors. I don't want to stop processing prematurely, since the maxErrors tally has not been reached (3/5 currently). So it should continue to process the next file, which let's say has 3 errors as well, putting us over the max; at that point I want to stop jscs from processing more files, fail out, and let my custom reporter function access the files that have been processed so I can look at their .jscs objects and customize some output.
My problem here is that I don't understand the docs when they say .pipe(jscs.reporter('name-of-reporter')). How does a string value invoke my reporter (which currently exists as a function I've imported, called libs.reporters.myJSCSReporter)? I know pipe() expects stream objects, so I can't just put a function in the .pipe() call.
I hope I've explained myself well enough (please ask for clarifications otherwise).

(cocos2d-x 3.1 + VS2012) TextureCache::addImageAsync causes a crash occasionally

I load some textures asynchronously at the beginning of my game, about 40-50 of them.
vector<string> textureFileNames;
textureFileNames.push_back("textures/particle.png");
textureFileNames.push_back("textures/menu_title.png");
...
textureFileNames.push_back("textures/timer_bar.png");
for (auto fileName: textureFileNames)
{
    Director::getInstance()->getTextureCache()
        ->addImageAsync(fileName, CC_CALLBACK_1(LoadingLayer::textureLoadedCallback, this));
}
My textureLoadedCallback method does nothing funky; at this stage it simply increments a value and updates a progress timer. The callback is called from the main thread by cocos2d-x design, so I don't suspect any problems arise from there.
90% of the time this works fine. But sometimes it crashes in VS2012 midway through loading the textures:
Debug Assertion Failed!
Program: C:\Windows\system32\MSVCP110D.dll
File: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include\vector
Line: 1140
Expression: vector subscript out of range
Breaking at this point, I can see that it dies in the internals of vector, specifically the [] operator, and traces back through _Hash to the TextureCache::loadImage() method: auto it = _textures.find(asyncStruct->filename) on line 174 of CCTextureCache.cpp. _textures is defined as std::unordered_map<std::string, Texture2D*> _textures, a standard library unordered map. asyncStruct->filename resolves to the full path and filename of a texture to load.
Using the debugger, I can see that the filename is fine. I can see that _textures already contains the 19 textures before this one that it has processed just fine.
The fact that it seems to just be dying in the midst of the standard library doesn't strike me as right... but I'm unable to determine where CCTextureCache goes wrong. Only that it doesn't always fail, and that it's failing in an asynchronous thread. There's no concurrency bollocks going on with my code (as far as I know).
Is this a cocos2d-x bug, a VS2012 bug or a bug with my code I pasted above?
I think a potential cause could be that the for loop issues all of those async image loads at once, which may or may not be supported by that method. Try issuing the next async load only after your texture callback is called, so that no two async loads are performed simultaneously.
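As a rough sketch of that approach (the member names m_textureFileNames and m_nextTextureIndex below are made up; only textureFileNames and textureLoadedCallback come from your code), chaining the loads could look like this:
// Sketch only: serializes the async loads so at most one is in flight at a time.
// m_textureFileNames and m_nextTextureIndex are assumed LoadingLayer members.
void LoadingLayer::loadNextTexture()
{
    if (m_nextTextureIndex >= m_textureFileNames.size())
        return; // every texture has been requested

    Director::getInstance()->getTextureCache()->addImageAsync(
        m_textureFileNames[m_nextTextureIndex],
        CC_CALLBACK_1(LoadingLayer::textureLoadedCallback, this));
}

void LoadingLayer::textureLoadedCallback(Texture2D *texture)
{
    ++m_nextTextureIndex;
    // ... increment the loaded count and update the progress timer as before ...
    loadNextTexture(); // only now kick off the next async load
}
Calling loadNextTexture() once in place of the for loop starts the chain.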

How to save BitmapData to a Bitmap *.bmp file, or a much faster JPEG encoding method

I have a Flash / Actionscript 3 based desktop app wrapped in an *.exe using Zinc 4.0. I am using Flash Pro CS5.
I need to start saving very large image files locally. I have messed around with JPG-encoding these images before saving them to a local file via Zinc. I solved the ActionScript timeout issue using this "asynchronous-like" method. Encoding a 1.5 MP image takes about 5 seconds, which is alright, but encoding an 8 MP image takes about 40 seconds, which is not acceptable.
One idea I had is to save the BitmapData locally to a temporary Bitmap file (*.bmp), without making the end user wait for JPG encoding in Flash, and then use my already existing image processor (written in C#) to read the bitmap file and encode it without waiting on Flash to do it, effectively offloading the task away from the user.
I have used BitmapData.getPixels() to try and write the byte array directly to the file, using the same Zinc method as I do successfully with encoded JPGs, but the resulting *.bmp file is unreadable. Are there some file headers that would need to be included in addition to BitmapData.getPixels()'s byte array to successfully save a bitmap image? If so, how could I add them to the byte array before writing it to the file?
Any guidance, clarification or other solutions much appreciated.
I've found a solution for my needs, and just in case others have similar needs:
To save an actual Bitmap (*.bmp) file, Engineer's suggested Bitmap encoder class was awesome. Very fast on the actual encoding; however, since my file-writing call in Zinc is synchronous and bitmap files are a lot larger than JPGs, it really just moved my bottleneck from encoding to file saving, so I decided to look elsewhere. If Zinc had an asynchronous binary file-writing method that would not lock up the GUI I would have been happy, but until then this is not the solution for me.
I stumbled across a Flash Alchemy solution, with great results. Instead of waiting about 40 seconds to encode an 8 MP image, it now only takes a few seconds. This is what I did:
Downloaded the jpegencoder.swc from this page and saved it in my project directory
Added the swc: Publish Settings > Flash (tab) > Script: Actionscript 3.0 "Settings..." button > Library path (tab)> and added that .swc with Link Type = "Merged into code"
Then used it:
(below is my modified code with just the basics)
import flash.utils.ByteArray;
import flash.display.BitmapData;
import cmodule.aircall.CLibInit; // Important: this namespace changed from previous versions

private static var byteArrayResults:ByteArray; // Holds the encoded byte array results (static so both static functions can use it)

public static function startEncoding(bitmapData:BitmapData):void
{
    var jpeginit:CLibInit = new CLibInit();  // get the library
    var jpeglib:Object = jpeginit.init();    // initialize the library's exported class to an object
    var imageBA:ByteArray = bitmapData.getPixels(bitmapData.rect); // get the pixels of the bitmapData
    byteArrayResults = new ByteArray();
    imageBA.position = 0;
    jpeglib.encodeAsync(encodeComplete, imageBA, byteArrayResults, bitmapData.width, bitmapData.height, 80);
}

private static function encodeComplete(thing:*):void
{
    // Do stuff with byteArrayResults
}
You may find this link useful as well:
http://last.instinct.se/graphics-and-effects/using-the-fast-asynchronous-alchemy-jpeg-encoder-in-flash/640
My answer is late but maybe it helps.
I developed an AIR mobile app to save images from the device camera on the device and upload them to the server.
Since AIR 3.3 you have this BitmapData encode functionality:
var ba:ByteArray = new ByteArray();
var bd:BitmapData = new BitmapData(_lastCameraPhotoTmpBmp.width, _lastCameraPhotoTmpBmp.height);
bd.draw(_lastCameraPhotoTmpBmp);
bd.encode(new Rectangle(0, 0, 1024, 768), new JPEGEncoderOptions(80), ba);
var localFile:File = File.applicationStorageDirectory.resolvePath("bild.jpg");
var fileAccess:FileStream = new FileStream();
fileAccess.open(localFile, FileMode.WRITE);
fileAccess.writeBytes(ba, 0, ba.length);
fileAccess.close();
The encode to JPG takes ~100 ms on mobile devices in my tests.
Greetings, Stefan