MFT Encoder (H.264) high CPU utilization

I am able to encode data as H.264 using a Media Foundation Transform (MFT), but unfortunately CPU utilization is very high (when I comment out the call to this function, CPU usage is low). The encoding only takes a few steps, so is there nothing I can do to improve it? Any idea would help.
HRESULT MFTransform::EncodeSample(IMFSample *videosample, LONGLONG llVideoTimeStamp, MFT_OUTPUT_STREAM_INFO &StreamInfo, MFT_OUTPUT_DATA_BUFFER &encDataBuffer)
{
    HRESULT hr = S_OK;
    DWORD mftEncFlags = 0;
    DWORD processOutputStatus = 0;
    // holds the encoder's output sample
    IMFSample *mftEncodedSample = NULL;
    // holds the encoder's output buffer
    IMFMediaBuffer *mftEncodedBuffer = NULL;

    memset(&encDataBuffer, 0, sizeof encDataBuffer);

    if (videosample)
    {
        // 1 - set the time stamp for the sample
        hr = videosample->SetSampleTime(llVideoTimeStamp);
#ifdef _DEBUG
        printf("Passing sample to the H264 encoder with sample time %lld.\n", llVideoTimeStamp);
#endif
        if (SUCCEEDED(hr))
        {
            hr = MFT_encoder->ProcessInput(0, videosample, 0);
        }
        if (SUCCEEDED(hr))
        {
            MFT_encoder->GetOutputStatus(&mftEncFlags);
        }
        if (SUCCEEDED(hr) && mftEncFlags == MFT_OUTPUT_STATUS_SAMPLE_READY)
        {
            hr = MFT_encoder->GetOutputStreamInfo(0, &StreamInfo);
            // create an empty sample to receive the encoded data
            if (SUCCEEDED(hr))
            {
                hr = MFCreateSample(&mftEncodedSample);
            }
            if (SUCCEEDED(hr))
            {
                hr = MFCreateMemoryBuffer(StreamInfo.cbSize, &mftEncodedBuffer);
            }
            if (SUCCEEDED(hr))
            {
                hr = mftEncodedSample->AddBuffer(mftEncodedBuffer);
            }
            if (SUCCEEDED(hr))
            {
                encDataBuffer.dwStatus = 0;
                encDataBuffer.pEvents = 0;
                encDataBuffer.dwStreamID = 0;
                // encDataBuffer.pSample and mftEncodedSample now reference the same object
                encDataBuffer.pSample = mftEncodedSample;
                hr = MFT_encoder->ProcessOutput(0, 1, &encDataBuffer, &processOutputStatus);
            }
        }
    }
    SafeRelease(&mftEncodedBuffer);
    return hr;
}

The first key is to ensure you have configured the sink writer's attributes with MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, so a hardware encoder is used when one is available instead of a software encoder burning CPU. I also set the MF_LOW_LATENCY attribute.
// error checking omitted for brevity
hr = attributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
hr = attributes->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, TRUE);
hr = attributes->SetUINT32(MF_LOW_LATENCY, TRUE);
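In case it's useful, here is a minimal sketch of where that attribute store comes from and where it goes, assuming you create the sink writer yourself (the output file name is a placeholder):
// Hedged sketch, not from the original code: the attributes above live in a
// store created with MFCreateAttributes and are handed to the sink writer.
IMFAttributes *attributes = NULL;
IMFSinkWriter *sinkWriter = NULL;
HRESULT hr = MFCreateAttributes(&attributes, 3);
// ... the three SetUINT32 calls shown above go here ...
if (SUCCEEDED(hr))
    hr = MFCreateSinkWriterFromURL(L"output.mp4", NULL, attributes, &sinkWriter);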
The other key is to ensure you are selecting the native format for the output of the source; otherwise a software format converter gets pulled into the pipeline and you will remain very disappointed. I describe this in detail here.
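To illustrate the idea (this is my paraphrase, not the linked write-up), a short sketch of selecting the native type via the source reader, assuming an IMFSourceReader named sourceReader:
// Hedged sketch: use the source's native type as the current type so the
// source reader doesn't insert a software format converter on the CPU.
IMFMediaType *nativeType = NULL;
hr = sourceReader->GetNativeMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, &nativeType);
if (SUCCEEDED(hr))
    hr = sourceReader->SetCurrentMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, nativeType);
SafeRelease(&nativeType);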
I should also mention that you should consider creating the transform's output sample and memory buffer once at the beginning, instead of recreating them on each sample received.
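A minimal sketch of that reuse, assuming hypothetical members m_reusableSample and m_reusableBuffer on your MFTransform class:
// Hedged sketch (member names are assumptions): allocate the output sample
// and buffer once, up front.
HRESULT MFTransform::CreateReusableOutputSample()
{
    MFT_OUTPUT_STREAM_INFO streamInfo = {};
    HRESULT hr = MFT_encoder->GetOutputStreamInfo(0, &streamInfo);
    if (SUCCEEDED(hr))
        hr = MFCreateSample(&m_reusableSample);
    if (SUCCEEDED(hr))
        hr = MFCreateMemoryBuffer(streamInfo.cbSize, &m_reusableBuffer);
    if (SUCCEEDED(hr))
        hr = m_reusableSample->AddBuffer(m_reusableBuffer);
    return hr;
}
// Then, in EncodeSample, set encDataBuffer.pSample = m_reusableSample and call
// m_reusableBuffer->SetCurrentLength(0) before each ProcessOutput, instead of
// creating a new sample and buffer per frame.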
Good luck. I hope this helps.

Related

I am very new to the microcontroller world. I am using a PIC16F877; please help solve the simple error in the description below

I am getting an error in this code. Can anybody tell me the solution?
void interrupt ISR (void)
{
    if (RCIF == 1) // error: expected ';' after top level declarator
    {
        UART_Buffer = RCREG; // Read The Received Data Buffer
        PORTB = UART_Buffer; // Display The Received Data On LEDs
        RCIF = 0;            // Clear The Flag
    }
}

How to hide library source code the way Google does?

For instance, I have a library and I would like to protect the source code from being viewed. The first method that comes to mind is to create public wrappers for private functions, like the following:
function executeMyCoolFunction(param1, param2, param3) {
    return executeMyCoolFunction_(param1, param2, param3);
}
Only the public part of the code is visible this way. That is fine, but all Google Service functions appear as function abs() {/* */}. I am curious: is there an approach to hide library source code the way Google does?
Edit 00: Do not try to "hide" library code by wrapping it in another library, i.e. a LibA with a known project key that uses a LibB with an unknown project key. It is still possible to get the code of LibB's public functions and even execute them. The code is:
function exploreLib_(lib, libName) {
    if (libName == null) {
        for (var name in this) {
            if (this[name] == lib) {
                libName = name;
            }
        }
    }
    var res = [];
    for (var entity in lib) {
        var obj = lib[entity];
        var code;
        if (obj["toSource"] != null) {
            code = obj.toSource();
        }
        else if (obj["toString"] != null) {
            code = obj.toString();
        }
        else {
            var nextLibCode = exploreLib_(obj, libName + "." + entity);
            res = res.concat(nextLibCode);
        }
        if (code != null) {
            res.push({ libraryName: libName, functionCode: code });
        }
    }
    return res;
}

function explorerLibPublicFunctionsCode() {
    var lstPublicFunctions = exploreLib_(LibA);
    var password = LibA.LibB.getPassword();
}
I don't know what Google does, but you could do something like this (not tested! just an idea):
function declarations:
var myApp = {
    foo: function () { /**/ },
    bar: function () { /**/ }
};
and then, in another place, an anonymous function defines foo() and bar():
(function (a) {
    a['\u0066\u006F\u006F'] = function () {
        // here code for foo
    };
    a['\u0062\u0061\u0072'] = function () {
        // here code for bar
    };
})(myApp);
You can pack or minify to obfuscate even more.
Edit: changed my answer to reflect the fact that an exception's stacktrace will contain the library project key.
In this example, MyLibraryB is a library included by MyLibraryA. Both are shared publicly to view (access controls), but only MyLibraryA's project key is made known. It appears it would be very difficult for an attacker to see the code in MyLibraryB:
//this function is in your MyLibraryA, and you share its project key
function executeMyCoolFunction(param1, param2, param3) {
    for (var i = 0; i < 1000000; i++) {
        debugger; //forces a breakpoint that the IDE cannot step over
    }
    //... your code goes here
    //don't share MyLibraryB project key
    MyLibraryB.doSomething(args...);
}
but as per @megabyte1024's comments, if you were to cause an exception in MyLibraryB.doSomething(), the stack trace would contain the project key of MyLibraryB.

net.rim.device.api.io.file.FileIOException: File system out of resources in BlackBerry

The code below throws net.rim.device.api.io.file.FileIOException: File system out of resources.
Can anyone tell me why this happens?
public Bitmap loadIconFromSDcard(int index) {
    FileConnection fcon = null;
    Bitmap icon = null;
    InputStream is = null;
    try {
        fcon = (FileConnection) Connector.open(Shikshapatri.filepath + "i"
                + index + ".jpg", Connector.READ);
        if (fcon.exists()) {
            byte[] content = new byte[(int) fcon.fileSize()];
            int readOffset = 0;
            int readBytes = 0;
            int bytesToRead = content.length - readOffset;
            is = fcon.openInputStream();
            while (bytesToRead > 0) {
                readBytes = is.read(content, readOffset, bytesToRead);
                if (readBytes < 0) {
                    break;
                }
                readOffset += readBytes;
                bytesToRead -= readBytes;
            }
            EncodedImage image = EncodedImage.createEncodedImage(content,
                    0, content.length);
            image = resizeImage(image, 360, 450);
            icon = image.getBitmap();
        }
    } catch (Exception e) {
        System.out.println("Error:" + e.toString());
    } finally {
        // Close the connections
        try {
            if (fcon != null)
                fcon.close();
        } catch (Exception e) {
        }
        try {
            if (is != null)
                is.close();
            is = null;
        } catch (Exception e) {
        }
    }
    return icon;
}
Thanks in advance...
Check this BB dev forum post - http://supportforums.blackberry.com/t5/Java-Development/File-System-Out-of-Resources/m-p/105597#M11927
Basically, you should make sure you always close all connections/streams as soon as you no longer need them, because the OS has a limited number of connection handles (be it file connections or HTTP connections). If you execute several loadIconFromSDcard() calls at the same time (from different threads), consider redesigning the code to call them sequentially.
UPDATE:
To avoid errors while reading the content, just use the following:
byte[] content = IOUtilities.streamToBytes(is);
And since you don't need the file connection and input stream any longer, just close them right after reading the content (before creating the EncodedImage):
is.close();
is = null; // let the finally block know there is no need to try closing it
fcon.close();
fcon = null; // let the finally block know there is no need to try closing it
Minor points:
It is also worth setting fcon = null; explicitly in the finally block after you close it. I believe this can help old JVMs (BB uses Java 1.3, a rather old one) decide more quickly that the object is ready to be garbage collected.
I also believe the order in which you close streams in the finally block may matter: I'd close is first and then fcon.

Objective-C and MySQL

I've been able to connect to a MySQL database in my app using the C API, which is almost exactly like the PHP commands (mysql_real_connect(), mysql_query(), mysql_fetch_array(), etc.) that I'm pretty comfortable with. I'm just not sure how the queried data is returned. Do I use an array or dictionary, and how would I parse it? For example, in PHP I would do something like this (after connecting):
$results = mysql_query("SELECT * FROM theDatabase");
if (mysql_num_rows($results) > 0) {
    while ($row = mysql_fetch_array($results)) {
        print $row;
    }
}
What would the Objective-C equivalent be? Thanks.
Edit:
OK, so I made some progress. I can make the query and get the number of fields/rows returned; I just can't seem to access the data itself. Here's my code, which I stitched together from the MySQL docs and a few other sites:
- (IBAction)dbConnect:(id)sender {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    MYSQL mysql;
    mysql_init(&mysql);
    if (!mysql_real_connect(&mysql, "10.1.1.99", "******", "******", "oldphotoarchive", 0, NULL, 0)) {
        NSLog(@"%@", [NSString stringWithUTF8String:mysql_error(&mysql)]);
    } else {
        MYSQL_RES *result;
        MYSQL_ROW row;
        unsigned int num_fields;
        unsigned int num_rows;
        unsigned long *lengths;
        if (mysql_query(&mysql, "SELECT * FROM photorecord")) {
            // error
        } else { // query succeeded, process any data returned by it
            result = mysql_store_result(&mysql);
            if (result) {
                num_fields = mysql_num_fields(result);
                while ((row = mysql_fetch_row(result))) {
                    lengths = mysql_fetch_lengths(result);
                    for (int i = 0; i < num_fields; i++) {
                        // the line below is my problem; printing row[i] fails, I get the GNU gdb error...
                        row[i] ? NSLog(@"%@", row[i]) : NSLog(@"wtf");
                    }
                }
            } else { // mysql_store_result() returned nothing; should it have?
                if (mysql_errno(&mysql)) {
                    NSLog(@"Error: %s\n", mysql_error(&mysql));
                } else if (mysql_field_count(&mysql) == 0) {
                    // query does not return data
                    // (it was not a SELECT)
                    num_rows = mysql_affected_rows(&mysql);
                }
            }
        }
    }
    [pool release];
}
There's no Apple-supplied Objective-C API for MySQL. There are a few third-party wrappers of the C API, though. Take a look at the MySQL-Cocoa Framework, for example.
Given your familiarity with the PHP and C API, it may be more straightforward for you simply to use the C API. You'll need to handle conversion between objects and C data types, but this isn't much work.
Edit
You're crashing because the row value returned by the MySQL API isn't an object, and your format string is telling NSLog to treat it as one. The %@ is a format-string placeholder for an object, not a C data type.
It's not clear what the value is in this case. The context seems to imply that it's image data. If that's the case, you'll likely want to create an NSData object from the blob returned by the query, e.g.:
NSData *imageData;
imageData = [[ NSData alloc ] initWithBytes: row[ i ] length: lengths[ i ]];
NSLog( @"imageData: %@", imageData );
/* ...create NSImage, CGImage, etc... */
[ imageData release ];
If your result fields are just strings, use NSString's -initWithBytes:length:encoding: method:
NSString *s;
s = [[ NSString alloc ] initWithBytes: row[ i ] length: lengths[ i ]
                             encoding: NSUTF8StringEncoding ];
NSLog( @"result column %d: %@", i, s );
[ s release ];

How to get raw data when using AudioQueue to record voice?

When I use AudioQueue to record voice to a file, it works fine.
In my MyInputBufferHandler function I can get the raw data via
AudioQueueBufferRef->mAudioData
but inside MyInputBufferHandler I can't call other objects, like oStream.
I want to get the AudioQueue buffer's raw data and send it over the internet. How can I do that?
You need to set the format in which you want to receive the data on the AudioQueue. Refer to the Core Audio data types reference:
http://developer.apple.com/library/mac/#documentation/MusicAudio/Reference/CoreAudioDataTypesRef/Reference/reference.html
One example, which fills an AudioStreamBasicDescription (here a variable named sRecordFormat, declared elsewhere) for 16 kHz mono 8-bit linear PCM:
FillOutASBDForLPCM (sRecordFormat,
                    16000,   // sample rate
                    1,       // channels per frame
                    8,       // valid bits per channel
                    8,       // total bits per channel
                    false,   // is float
                    false    // is big endian
                    );
See the answer to this question, which gives you the raw data. You can then bundle it as NSData or whatever, zip it, and upload.
You need to modify some code in MyInputBufferHandler. I created an Objective-C object to adapt the C++ code from Apple's SpeakHere sample.
Please feel free to use it:
MIP_StreamAudioRecorder.h
//
//  MIP_StreamAudioRecorder.h
//
//  Created by Dennies Chang on 12/10/3.
//  Copyright (c) 2012 Dennies Chang. All rights reserved.
//

#import <Foundation/Foundation.h>

#include <AudioToolbox/AudioToolbox.h>
#include <libkern/OSAtomic.h>

#include "CAStreamBasicDescription.h"
#include "CAXException.h"

#define kNumberRecordBuffers    3
#define kBufferDurationSeconds  0.5 // half a second per buffer (used by startRecord)

@protocol MIP_StreamAudioRecorderDelegate;

@interface MIP_StreamAudioRecorder : NSObject {
    CAStreamBasicDescription mRecordFormat;
    AudioQueueRef mQueue;
    AudioQueueBufferRef mBuffers[kNumberRecordBuffers];
    BOOL mIsRunning;

    id <MIP_StreamAudioRecorderDelegate> delegate;
}

@property (nonatomic, assign) id <MIP_StreamAudioRecorderDelegate> delegate;
@property (nonatomic, readonly) BOOL mIsRunning;

- (void)SetupAudioFormat:(UInt32)inFormatID;
- (void)startRecord;
- (void)stopRecord;
- (int)computeRecordBufferSize:(AudioStreamBasicDescription *)format duration:(float)second;

@end

@protocol MIP_StreamAudioRecorderDelegate <NSObject>
@optional
- (void)gotAudioData:(NSData *)audioData;
@end
And the .mm file: MIP_StreamAudioRecorder.mm
//
//  MIP_StreamAudioRecorder.mm
//
//  Created by Dennies Chang on 12/10/3.
//  Copyright (c) 2012 Dennies Chang. All rights reserved.
//

#import "MIP_StreamAudioRecorder.h"

@implementation MIP_StreamAudioRecorder

@synthesize delegate;
@synthesize mIsRunning;

- (id)init {
    self = [super init];
    return self;
}

- (void)dealloc {
    [super dealloc];
}

- (void)SetupAudioFormat:(UInt32)inFormatID {
    memset(&mRecordFormat, 0, sizeof(mRecordFormat));

    UInt32 size = sizeof(mRecordFormat.mSampleRate);
    XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                                          &size,
                                          &mRecordFormat.mSampleRate), "couldn't get hardware sample rate");

    size = sizeof(mRecordFormat.mChannelsPerFrame);
    XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                                          &size,
                                          &mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");

    mRecordFormat.mFormatID = inFormatID;
    if (inFormatID == kAudioFormatLinearPCM) {
        // if we want pcm, default to signed 16-bit little-endian
        mRecordFormat.mChannelsPerFrame = 1;
        mRecordFormat.mSampleRate = 8000;
        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        mRecordFormat.mBitsPerChannel = 16;
        mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
        mRecordFormat.mFramesPerPacket = 1;
    }
}

- (int)computeRecordBufferSize:(AudioStreamBasicDescription *)format duration:(float)second {
    int packets, frames, bytes = 0;
    try {
        frames = (int)ceil(second * format->mSampleRate);

        if (format->mBytesPerFrame > 0)
            bytes = frames * format->mBytesPerFrame;
        else {
            UInt32 maxPacketSize;
            if (format->mBytesPerPacket > 0)
                maxPacketSize = format->mBytesPerPacket;    // constant packet size
            else {
                UInt32 propertySize = sizeof(maxPacketSize);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
                                                    &propertySize), "couldn't get queue's maximum output packet size");
            }
            if (format->mFramesPerPacket > 0)
                packets = frames / format->mFramesPerPacket;
            else
                packets = frames;   // worst-case scenario: 1 frame in a packet
            if (packets == 0)       // sanity check
                packets = 1;
            bytes = packets * maxPacketSize;
        }
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
        return 0;
    }
    return bytes;
}

/*
- (void)myInputBufferHandler:(id)inUserData AudioQueue:(AudioQueueRef)inAQ BufferRef:(AudioQueueBufferRef)inBuffer withAudioTS:(AudioTimeStamp *)inStartTime andNumPackets:(UInt32)inNumPackets andDescription:(AudioStreamPacketDescription *)inPacketDesc {
*/
void MyInputBufferHandler(void *                               inUserData,
                          AudioQueueRef                        inAQ,
                          AudioQueueBufferRef                  inBuffer,
                          const AudioTimeStamp *               inStartTime,
                          UInt32                               inNumPackets,
                          const AudioStreamPacketDescription * inPacketDesc)
{
    MIP_StreamAudioRecorder *THIS = (MIP_StreamAudioRecorder *)inUserData;

    try {
        if (inNumPackets > 0) {
            // use delegate to handle;
            if (THIS.delegate) {
                NSMutableData *data = [[NSMutableData alloc] init];
                if ([THIS.delegate respondsToSelector:@selector(gotAudioData:)]) {
                    [data appendBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];
                    [THIS.delegate gotAudioData:data];
                }
                [data release];
            }

            /*
            // write packets to file
            XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
                                                inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
                          "AudioFileWritePackets failed");
            aqr->mRecordPacket += inNumPackets;
            */
        }

        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (THIS->mIsRunning)
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}

- (void)startRecord {
    int i, bufferByteSize;

    try {
        [self SetupAudioFormat:kAudioFormatLinearPCM];

        // create the queue
        XThrowIfError(AudioQueueNewInput(
                          &mRecordFormat,
                          MyInputBufferHandler,
                          self /* userData */,
                          NULL /* run loop */, NULL /* run loop mode */,
                          0 /* flags */, &mQueue), "AudioQueueNewInput failed");

        // get the record format back from the queue's audio converter --
        // the file may require a more specific stream description than was necessary to create the encoder.
        UInt32 size = sizeof(mRecordFormat);
        XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
                                            &mRecordFormat, &size), "couldn't get queue's format");

        // allocate and enqueue buffers
        bufferByteSize = [self computeRecordBufferSize:&mRecordFormat duration:kBufferDurationSeconds];   // enough bytes for half a second
        for (i = 0; i < kNumberRecordBuffers; ++i) {
            XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                          "AudioQueueAllocateBuffer failed");
            XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                          "AudioQueueEnqueueBuffer failed");
        }

        // start the queue
        mIsRunning = true;
        XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
    }
    catch (CAXException &e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
    catch (...) {
        fprintf(stderr, "An unknown error occurred\n");
    }
}

- (void)stopRecord {
    XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
    AudioQueueDispose(mQueue, true);
}

@end
Please note that you should change the sample rate and related settings as needed;
I set it to record mono (1 channel), 16-bit, 8 kHz.
You can get the raw data in the Objective-C class that implements MIP_StreamAudioRecorderDelegate, and either send it over an internet channel
or save it to a file.
Best regards,
Dennies.