Empty values in return for ctypes pointer to int array - ctypes

I'm currently trying to interface the following library (http://sol.gfxile.net/escapi/) using ctypes, but I'm unsure whether I'm doing something wrong or the library isn't working as I expect (the sample C applications seem to work).
There is a struct that you are meant to pass to initCapture, which looks like this:
struct SimpleCapParams
{
    /* Target buffer.
     * Must be at least mWidth * mHeight * sizeof(int) of size!
     */
    int * mTargetBuf;
    /* Buffer width */
    int mWidth;
    /* Buffer height */
    int mHeight;
};
This is my current code:
from ctypes import cdll, Structure, c_int, POINTER, cast, c_long, pointer

class SimpleCapParams(Structure):
    _fields_ = [
        ("mTargetBuf", POINTER(c_int)),
        ("mWidth", c_int),
        ("mHeight", c_int)
    ]

width, height = 512, 512
array = (width * height * c_int)()
options = SimpleCapParams()
options.mWidth = width
options.height = height
options.mTargetBuf = array

lib = cdll.LoadLibrary('escapi.dll')
lib.initCOM()
lib.initCapture(0, options)
lib.doCapture(0)
while lib.isCaptureDone(0) == 0:
    pass
print options.mTargetBuf
lib.deinitCapture(0)
However, all the values in mTargetBuf are 0. Am I calling this wrong, or is something else going on?
This is a C++ example of what I need to do (without the ASCII): https://github.com/jarikomppa/escapi/blob/master/simplest/main.cpp

So it seems I should have checked my code more carefully :)
options.height = height should be options.mHeight = height, as per my structure.
Passing the structure with byref also helped.
Working code:
from ctypes import *

width, height = 512, 512

class SimpleCapParms(Structure):
    _fields_ = [
        ("mTargetBuf", POINTER(c_int)),
        ("mWidth", c_int),
        ("mHeight", c_int),
    ]

array_type = (width * height * c_int)
array = array_type()
options = SimpleCapParms()
options.mWidth = width
options.mHeight = height
options.mTargetBuf = array

lib = cdll.LoadLibrary('escapi.dll')
lib.initCapture.argtypes = [c_int, POINTER(SimpleCapParms)]
lib.initCapture.restype = c_int
lib.initCOM()
lib.initCapture(0, byref(options))
lib.doCapture(0)
while lib.isCaptureDone(0) == 0:
    pass
print(array[100])
lib.deinitCapture(0)

Related

Looping through a circular buffer in Chisel

Let's say I have implemented a circular buffer with a head and a tail. Is there an elegant Scala way of looping through this buffer, starting from the head and ending at the tail (with a possible wrap-around at the end)?
class MyBuffer() extends Module
{
    val data = Reg(Vec(NUM_ELEMENTS, Bool()))
    val head = RegInit(0.U(NUM_ELEMENTS_W.W))
    val tail = RegInit(0.U(NUM_ELEMENTS_W.W))
}
I'm not sure what your looping goal is, but consider the following code example (with a few details left out). It exposes the contents of a RingBuffer as a Vec view with viewLength valid elements. I think this demonstrates a modestly elegant method for this definition of looping, though the emitted hardware (or the view idea) may not be elegant. Let me know if this is not quite the notion of looping you had in mind.
import chisel3._
import chisel3.util.log2Ceil

/**
 * This ring buffer presents its current contents through view
 *
 * @param depth
 * @param bitWidth
 */
class RingBuffer(depth: Int, bitWidth: Int) extends MultiIOModule {
  /*
   * You need a bunch of IOs here to push and pop and get full status
   */
  val view = IO(Output(Vec(depth, UInt(bitWidth.W))))
  val viewLength = IO(Output(UInt(log2Ceil(depth).W)))

  // Storage elements are bitWidth wide; head and tail are sized to index depth entries
  val data = Reg(Vec(depth, UInt(bitWidth.W)))
  val head = RegInit(0.U(log2Ceil(depth).W))
  val tail = RegInit(0.U(log2Ceil(depth).W))

  /* Need some code here to push and pop elements */

  // This constructs a mapping from the indices between the current head and tail
  // to the 0 to n indices of the view
  def mappedIndex(i: Int): UInt = {
    val out = Wire(UInt(log2Ceil(depth).W))
    when((i.U + head) < depth.U) {
      out := i.U + head
    }.otherwise {
      out := (i.U + head) - depth.U
    }
    out
  }

  // This creates the complicated Mux structure to map between
  // the ring buffer elements and the 0 to n style view
  view.zipWithIndex.foreach { case (viewElement, index) =>
    viewElement := data(mappedIndex(index))
  }

  // This presents the number of valid elements in the current view.
  // head and tail are UInts, so the wrapped case is detected by comparing
  // them rather than by testing for a negative difference.
  val difference = tail - head
  when(tail < head) {
    viewLength := difference + depth.U
  }.otherwise {
    viewLength := difference
  }
}
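For intuition, the wrap-around arithmetic in mappedIndex is just (i + head) mod depth, written with a comparison instead of a modulo operator. Here is the same mapping as a plain C++ sketch (my illustration, not part of the original answer):

#include <cstdio>

// Software analogue of mappedIndex: view index i maps to buffer slot
// (i + head) mod depth, computed with a comparison as in the Chisel code.
unsigned mappedIndex(unsigned i, unsigned head, unsigned depth) {
    unsigned sum = i + head;
    return (sum < depth) ? sum : sum - depth;
}

int main() {
    const unsigned depth = 8, head = 5;
    for (unsigned i = 0; i < depth; ++i)
        std::printf("view[%u] -> data[%u]\n", i, mappedIndex(i, head, depth));
    return 0;
}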

3D binary image to 3D mesh using itk

I'm trying to generate a 3D mesh from a 3D RLE binary mask.
In ITK I found a class named itkBinaryMask3DMeshSource; it's based on the Marching Cubes algorithm. Some examples use this class, e.g. ExtractIsoSurface.
In my case I have an RLE 3D binary mask, but represented in a 1D vector format.
I'm writing a function for this task. My function takes as parameters:
Inputs: crle 1d vector (computed RLE), dimension Int3
Output: coords + coord indices (or generate a single file containing both of those arrays, which I can then use to visualize the mesh)
As a first step, I decode this computed RLE.
Next, I use an image iterator to create an image compatible with BinaryMask3DMeshSource.
I'm blocked at the last step.
This is my code :
void GenerateMeshFromCrle(const std::vector<int>& crle, const Int3 & dim,
                          std::vector<float>* coords, std::vector<int>* coord_indices, int* nodes,
                          int* cells, const char* outputmeshfile) {
    std::vector<int> mask(crle.back());
    CrleDecode(crle, mask.data());
    // here we define our itk Image type with a 3 dimension
    using ImageType = itk::Image< unsigned char, 3 >;
    ImageType::Pointer image = ImageType::New();
    // an Image is defined by start index and size for each axes
    // By default, we set the first start index from x=0,y=0,z=0
    ImageType::IndexType start;
    start[0] = 0; // first index on X
    start[1] = 0; // first index on Y
    start[2] = 0; // first index on Z
    // until here, no problem
    // We set the image size on x,y,z from the dim input parameters
    // itk takes Z Y X
    ImageType::SizeType size;
    size[0] = dim.z; // size along X
    size[1] = dim.y; // size along Y
    size[2] = dim.x; // size along Z
    ImageType::RegionType region;
    region.SetSize(size);
    region.SetIndex(start);
    image->SetRegions(region);
    image->Allocate();
    // Set the pixels to value from rle
    // This is a fast way
    itk::ImageRegionIterator<ImageType> imageIterator(image, region);
    int n = 0;
    while (!imageIterator.IsAtEnd() && n < mask.size()) {
        // Set the current pixel to the value from rle
        imageIterator.Set(mask[n]);
        ++imageIterator;
        ++n;
    }
    // In this step, we launch itkBinaryMask3DMeshSource
    using BinaryThresholdFilterType = itk::BinaryThresholdImageFilter< ImageType, ImageType >;
    BinaryThresholdFilterType::Pointer threshold =
        BinaryThresholdFilterType::New();
    threshold->SetInput(image->GetOutput()); // here it's an error, since no GetOutput member for image
    threshold->SetLowerThreshold(0);
    threshold->SetUpperThreshold(1);
    threshold->SetOutsideValue(0);
    using MeshType = itk::Mesh< double, 3 >;
    using FilterType = itk::BinaryMask3DMeshSource< ImageType, MeshType >;
    FilterType::Pointer filter = FilterType::New();
    filter->SetInput(threshold->GetOutput());
    filter->SetObjectValue(1);
    using WriterType = itk::MeshFileWriter< MeshType >;
    WriterType::Pointer writer = WriterType::New();
    writer->SetFileName(outputmeshfile);
    writer->SetInput(filter->GetOutput());
}
Any ideas? I appreciate your time.
Since image is not a filter, you can plug it in directly: threshold->SetInput(image);. At the end of the function you also need writer->Update();. The rest looks good.
Side note: it looks like you might benefit from using an import filter instead of manually iterating over the buffer and copying values one at a time.
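To illustrate that side note: below is a minimal sketch using itk::ImportImageFilter to wrap the already-decoded buffer in one call, assuming the mask is a contiguous unsigned char buffer holding dim.x * dim.y * dim.z voxels (the function name and buffer layout are my assumptions, not from the question):

#include "itkImage.h"
#include "itkImportImageFilter.h"

// Sketch: expose a decoded RLE buffer as an ITK image without copying
// voxels one at a time through an iterator.
itk::Image<unsigned char, 3>::Pointer WrapMaskBuffer(unsigned char* buffer,
                                                     int nx, int ny, int nz) {
    using ImportFilterType = itk::ImportImageFilter<unsigned char, 3>;
    ImportFilterType::Pointer importFilter = ImportFilterType::New();

    ImportFilterType::IndexType start;
    start.Fill(0);
    ImportFilterType::SizeType size;
    size[0] = nx; // fastest-varying axis
    size[1] = ny;
    size[2] = nz;

    ImportFilterType::RegionType region;
    region.SetIndex(start);
    region.SetSize(size);
    importFilter->SetRegion(region);

    // false: the caller keeps ownership of the buffer
    const bool filterOwnsBuffer = false;
    importFilter->SetImportPointer(buffer,
                                   static_cast<itk::SizeValueType>(nx) * ny * nz,
                                   filterOwnsBuffer);
    importFilter->Update();

    itk::Image<unsigned char, 3>::Pointer image = importFilter->GetOutput();
    image->DisconnectPipeline(); // let the image outlive the filter
    return image;
}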

Load/init Button with in-memory imagery like Texture2D

I can only find methods to create Buttons from disk-based image files.
How would I set/change a button's normal/highlighted states from in-memory data like Texture2D, Image etc?
Cocos2D-x v3.5
Update 20160408:
I Googled this again, and Google suggested this page... and I have to thank my past self :-)
This time I wrote a handy tool function:
void Mjcocoshelper::rendernodetospriteframecache(Node* node, std::string nameincache) {
    const Size SIZE = node->getBoundingBox().size;
    auto render = RenderTexture::create(SIZE.width, SIZE.height);
    render->begin();
    const Vec2 POS_BEFORE = node->getPosition();
    const bool IGNORE_BEFORE = node->isIgnoreAnchorPointForPosition();
    const float SCALEY_BEFORE = node->getScaleY();
    node->ignoreAnchorPointForPosition(false);
    node->setPosition(SIZE.width * 0.5f, SIZE.height * 0.5f);
    node->setScaleY(-1.0f); // Or it gets upside down?
    node->visit();
    node->ignoreAnchorPointForPosition(IGNORE_BEFORE);
    node->setPosition(POS_BEFORE);
    node->setScaleY(SCALEY_BEFORE);
    render->end();
    auto spriteNew = render->getSprite();
    Texture2D* texture2d = spriteNew->getTexture();
    SpriteFrame* spriteframeOff = SpriteFrame::createWithTexture(texture2d, Rect(0, 0, texture2d->getContentSize().width, texture2d->getContentSize().height));
    SpriteFrameCache::getInstance()->addSpriteFrame(spriteframeOff, nameincache);
}
Old:
I figured out this kinda neat workaround:
std::string framenameOff;
{
    Texture2D* texture2d = mjpromobuttonx.textureOff();
    SpriteFrame* spriteframeOff = SpriteFrame::createWithTexture(texture2d, Rect(0, 0, texture2d->getContentSize().width, texture2d->getContentSize().height));
    framenameOff = "autobutton_off_" + std::to_string(++buttonsaddedtocacheoff);
    SpriteFrameCache::getInstance()->addSpriteFrame(spriteframeOff, framenameOff);
}
std::string framenameOn;
{
    Texture2D* texture2d = mjpromobuttonx.textureOff();
    SpriteFrame* spriteframeOn = SpriteFrame::createWithTexture(texture2d, Rect(0, 0, texture2d->getContentSize().width, texture2d->getContentSize().height));
    framenameOn = "autobutton_on_" + std::to_string(++buttonsaddedtocacheon);
    SpriteFrameCache::getInstance()->addSpriteFrame(spriteframeOn, framenameOn);
}
Button* item = Button::create(framenameOff, framenameOn, framenameOff, TextureResType::PLIST);
So what it does is add sprite frames on the fly to the sprite frame cache. We have to make sure that the names we store them under don't overwrite anything else, and we also have to keep creating new names for each image we add to the cache. On some extremely sad occasion this might overwrite some existing image, so I'll give this some more thought... maybe... sometime...
Bonus:
I had an ObjC framework I created earlier that fetches images from a remote server... for now I reused it, and I also needed to convert the UIImages the framework ended up with into something I could pass to C++ and then to Cocos2D-x... so I used this; it works pretty well:
Texture2D* getTexture2DFromUIImage(UIImage *photo) {
#warning TODO: Make use of this baby later? Does it work? Do we need to free after malloc?
    // https://stackoverflow.com/a/15134000/129202
    Image *imf = new Image();
    NSData *imgData = UIImagePNGRepresentation(photo);
    NSUInteger len = [imgData length];
    Byte *byteData = (Byte*)malloc(len);
    memcpy(byteData, [imgData bytes], len);
    imf->initWithImageData(byteData, imgData.length);
    imf->autorelease();
    Texture2D* pTexture = new Texture2D();
    pTexture->initWithImage(imf);
    pTexture->autorelease();
    return pTexture;
}
Now I'm just guessing that the malloc in that function should really have a free somewhere after it... but that's another issue.
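For what it's worth: assuming Image::initWithImageData decodes the bytes into the Image's own storage and does not keep the caller's pointer (my reading of Cocos2D-x v3, so treat it as an assumption), the buffer can be freed right after the init call. A sketch:

// Same helper, with the temporary buffer released once Image has consumed it.
// Assumption: initWithImageData copies/decodes the data internally.
Texture2D* getTexture2DFromUIImage(UIImage *photo) {
    NSData *imgData = UIImagePNGRepresentation(photo);
    NSUInteger len = [imgData length];
    Byte *byteData = (Byte*)malloc(len);
    memcpy(byteData, [imgData bytes], len);

    Image *imf = new Image();
    imf->initWithImageData(byteData, len);
    imf->autorelease();
    free(byteData); // safe here under the assumption above

    Texture2D* pTexture = new Texture2D();
    pTexture->initWithImage(imf);
    pTexture->autorelease();
    return pTexture;
}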

Trying to understand the workings of com.adobe.net.URIEncodingBitmap

I'm examining the URIEncodingBitmap class of the com.adobe.net package, and I'm having a hard time understanding exactly how it works internally. Here's the code:
package com.adobe.net
{
    import flash.utils.ByteArray;

    /**
     * This class implements an efficient lookup table for URI
     * character escaping. This class is only needed if you
     * create a derived class of URI to handle custom URI
     * syntax. This class is used internally by URI.
     *
     * @langversion ActionScript 3.0
     * @playerversion Flash 9.0
     */
    public class URIEncodingBitmap extends ByteArray
    {
        /**
         * Constructor. Creates an encoding bitmap using the given
         * string of characters as the set of characters that need
         * to be URI escaped.
         *
         * @langversion ActionScript 3.0
         * @playerversion Flash 9.0
         */
        public function URIEncodingBitmap(charsToEscape:String) : void
        {
            var i:int;
            var data:ByteArray = new ByteArray();

            // Initialize our 128 bits (16 bytes) to zero
            for (i = 0; i < 16; i++)
                this.writeByte(0);

            data.writeUTFBytes(charsToEscape);
            data.position = 0;
            while (data.bytesAvailable)
            {
                var c:int = data.readByte();
                if (c > 0x7f)
                    continue; // only escape low bytes

                var enc:int;
                this.position = (c >> 3);
                enc = this.readByte();
                enc |= 1 << (c & 0x7);
                this.position = (c >> 3);
                this.writeByte(enc);
            }
        }

        /**
         * Based on the data table contained in this object, check
         * if the given character should be escaped.
         *
         * @param char the character to be escaped. Only the first
         * character in the string is used. Any other characters
         * are ignored.
         *
         * @return the integer value of the raw UTF8 character. For
         * example, if '%' is given, the return value is 37 (0x25).
         * If the character given does not need to be escaped, the
         * return value is zero.
         *
         * @langversion ActionScript 3.0
         * @playerversion Flash 9.0
         */
        public function ShouldEscape(char:String) : int
        {
            var data:ByteArray = new ByteArray();
            var c:int, mask:int;

            // write the character into a ByteArray so
            // we can pull it out as a raw byte value.
            data.writeUTFBytes(char);
            data.position = 0;
            c = data.readByte();

            if (c & 0x80)
            {
                // don't escape high byte characters. It can make international
                // URI's unreadable. We just want to escape characters that would
                // make URI syntax ambiguous.
                return 0;
            }
            else if ((c < 0x1f) || (c == 0x7f))
            {
                // control characters must be escaped.
                return c;
            }

            this.position = (c >> 3);
            mask = this.readByte();

            if (mask & (1 << (c & 0x7)))
            {
                // we need to escape this, return the numeric value
                // of the character
                return c;
            }
            else
            {
                return 0;
            }
        }
    }
}
Although I understand the workings of ByteArray and of the various (bitwise) operators (>>, <<, &, |=, etc.), I'm almost at a complete loss as to what this class does exactly (or rather: why it does things the way it does).
Could somebody give a rundown of the purpose of all the bit-shifting and masking in this class? Particularly:
1. What is the constructor initializing exactly, and why?
   a. What is this.position = (c >> 3); doing, or rather, why?
   b. What is enc |= 1 << (c & 0x7); doing?
2. What is the mask doing exactly in ShouldEscape()?
ad 1. The constructor creates an escape-definition array (16 bytes = 128 bits long), with one bit per character. The position of the bit corresponds to the ordinal value of the character, and its value says whether the character should be escaped or not.
ad a. This line computes the byte of the escape-definition array that holds the given character's flag: each byte holds 8 flags, so the byte index is the character code divided by 8 (c >> 3).
ad b. This sets the bit corresponding to the character within that byte: c & 0x7 is the character code modulo 8, i.e. the bit position.
ad 2. mask holds the byte for the given character and is used to check whether the corresponding bit is set or not.
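To make the byte/bit arithmetic concrete, here is the same 128-bit table technique as a small C++ sketch (my illustration, not Adobe's code): c >> 3 selects the byte, and 1 << (c & 0x7) selects the bit within it.

#include <cstdint>
#include <string>

// One bit per ASCII character: 16 bytes = 128 bits.
struct EncodingBitmap {
    uint8_t bits[16] = {0};

    explicit EncodingBitmap(const std::string& charsToEscape) {
        for (unsigned char c : charsToEscape) {
            if (c > 0x7f) continue;          // only low (ASCII) bytes are tracked
            bits[c >> 3] |= 1 << (c & 0x7);  // byte = c / 8, bit = c % 8
        }
    }

    // Returns the character code if it must be escaped, 0 otherwise,
    // mirroring ShouldEscape(). (The AS3 original tests c < 0x1f,
    // which skips 0x1f; <= is used here.)
    int shouldEscape(unsigned char c) const {
        if (c & 0x80) return 0;               // leave high bytes alone
        if (c <= 0x1f || c == 0x7f) return c; // control characters
        return (bits[c >> 3] & (1 << (c & 0x7))) ? c : 0;
    }
};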

Thrust doesn't provide the expected result using thrust::minimum

Consider the following code, where p is a pointer to memory allocated on the GPU side.
thrust::device_ptr<float> pWrapper(p);
thrust::device_ptr<float> fDevPos = thrust::min_element(pWrapper, pWrapper + MAXX * MAXY, thrust::minimum<float>());
fRes = *fDevPos;
*fDicVal = fRes;
After doing the same thing on the CPU side:
float *hVec = new float[MAXX * MAXY];
cudaMemcpy(hVec, p, MAXX*MAXY*sizeof(float), cudaMemcpyDeviceToHost);
float min = 999;
int index = -1;
for(int i = 0 ; i < MAXX* MAXY; i++)
{
    if(min > hVec[i])
    {
        min = hVec[i];
        index = i;
    }
}
printf("index :%d a wrapper : %f, as vectorDevice : %f\n", index, fRes, min);
delete[] hVec;
I get that min != fRes. What am I doing wrong here?
thrust::min_element requires the user to supply a comparison predicate, that is, a function which answers the yes-or-no question "is x smaller than y?"
thrust::minimum is not a predicate; it answers the question "which of x or y is smaller?".
To find the smallest element using min_element, pass the thrust::less predicate:
ptr_to_smallest_value = thrust::min_element(first, last, thrust::less<T>());
Alternatively, don't pass anything. thrust::less is the default:
ptr_to_smallest_value = thrust::min_element(first, last);
If all you're interested in is the value of the smallest element (not an iterator pointing to the smallest element), you can combine thrust::minimum with thrust::reduce:
smallest_value = thrust::reduce(first, last, std::numeric_limits<T>::max(), thrust::minimum<T>());
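A complete, minimal program showing both variants side by side (my sketch, not part of the original answer; compile with nvcc):

#include <thrust/device_vector.h>
#include <thrust/extrema.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <limits>
#include <cstdio>

int main() {
    thrust::device_vector<float> d(4);
    d[0] = 3.0f; d[1] = -1.5f; d[2] = 2.0f; d[3] = 7.0f;

    // Variant 1: min_element with the default thrust::less comparison;
    // it returns an iterator to the smallest element.
    thrust::device_vector<float>::iterator it =
        thrust::min_element(d.begin(), d.end());
    float viaIterator = *it;

    // Variant 2: reduce with thrust::minimum folds the range down to
    // the smallest value directly.
    float viaReduce = thrust::reduce(d.begin(), d.end(),
                                     std::numeric_limits<float>::max(),
                                     thrust::minimum<float>());

    std::printf("min via iterator: %f, via reduce: %f\n", viaIterator, viaReduce);
    return 0;
}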