How to get the GIF transparency color in VC++ 6.0 and VC++ 2005

How to get the GIF transparency color in VC++ 6.0 and VC++ 2005?

See the GIF specification. GIFs have a palette of up to 256 possible colors. The palette index of the background color can be found at offset 11 from the beginning of the file, and consists of a single byte (value 0-255). To find the actual color that this corresponds to, look up that color in the Global Color Table. See the spec for more information on how to parse the Global Color Table.
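As a minimal sketch of that lookup (my own code, assuming the whole file is already in a memory buffer and that a Global Color Table is actually present), it boils down to reading the byte at offset 11 and indexing into the table that follows the Logical Screen Descriptor:

struct Rgb { unsigned char r, g, b; };

/* Sketch: buf holds the complete GIF file, len is its size in bytes. */
bool GetBackgroundColor(const unsigned char *buf, unsigned long len, Rgb *out)
{
    if (len < 13)
        return false;
    unsigned char packed = buf[10];                     /* Logical Screen Descriptor flags */
    if (!(packed & 0x80))
        return false;                                   /* no Global Color Table present */
    unsigned long gctEntries = 2UL << (packed & 0x07);  /* table holds 2^(N+1) entries */
    unsigned char bgIndex = buf[11];                    /* background color index at offset 11 */
    if (bgIndex >= gctEntries || len < 13 + 3 * gctEntries)
        return false;
    const unsigned char *gct = buf + 13;                /* GCT starts right after the descriptor */
    out->r = gct[3 * bgIndex + 0];
    out->g = gct[3 * bgIndex + 1];
    out->b = gct[3 * bgIndex + 2];
    return true;
}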

I just implemented a GIF decoder. Here are the details. The transparency index comes from the Graphic Control Extension:
if(Graphics_Render_Block->transperencyflag)
    FrameInfo->transperencyindex=Graph_Renderblk.Transp_Clr_Index;
else
    FrameInfo->transperencyindex='\0';
The logic is simple: while rendering to the display, if transperencyindex equals the color index at that point, don't render it; in fact, move on to the next location.
Here is a snippet of my display code. I am using the Linux framebuffer, but the logic will work for Microsoft VC++ as well. Note that here I am ignoring the logical screen descriptor.
void Display(FrameData *FrameInfo)
{
    /* short int ImageStartX = 0;
       short int ImageStartY = 0; */
    unsigned int ImageStartX = 0;
    unsigned int ImageStartY = 0;
    int Index = 0;

    printf("\r\n INFO: Display Called.\r\n");
    while(1)
    {
        Index = 0;
        ImageStartX = (FrameInfo->frameScreenInfo.LeftPosition);
        ImageStartY = (FrameInfo->frameScreenInfo.TopPosition);
        while(ImageStartY < ((FrameInfo->frameScreenInfo.ImageHeight)+(FrameInfo->frameScreenInfo.TopPosition)))
        {
            while(ImageStartX < ((FrameInfo->frameScreenInfo.ImageWidth)+(FrameInfo->frameScreenInfo.LeftPosition)))
            {
                if(FrameInfo->frame[Index] != FrameInfo->transperencyindex)
                {
#ifndef __DISPLAY_DISABLE
                    SetPixel(local_display_mem,ImageStartX,ImageStartY,((FrameInfo->CMAP)+(FrameInfo->frame[Index]))->Red,((FrameInfo->CMAP)+(FrameInfo->frame[Index]))->Green,((FrameInfo->CMAP)+(FrameInfo->frame[Index]))->Blue);
#endif
#ifdef DEBUG
                    count++;
#endif
                }
                Index++;
                ImageStartX++;
            }
            ImageStartY++;
            ImageStartX = (FrameInfo->frameScreenInfo.LeftPosition);
        }
#ifdef DEBUG
        printf("INFO:..Dumping Framebuffer\r\n");
        printf("Pixel hit=%d\r\n",count);
        count = 0;
        printf("the Frameinfo.leftposition=%d FrameInfo->frameScreenInfo.topposition=%d\r\n",FrameInfo->frameScreenInfo.LeftPosition,FrameInfo->frameScreenInfo.TopPosition);
        printf("the Frameinfo.ImageWidth=%d FrameInfo->frameScreenInfo.ImageHeight=%d\r\n",FrameInfo->frameScreenInfo.ImageWidth,FrameInfo->frameScreenInfo.ImageHeight);
#endif
#ifndef __DISPLAY_DISABLE
        memcpy(fbp,local_display_mem,screensize);
#endif
        /** Tune this multiplication to meet the right output on the display **/
        usleep((FrameInfo->InterFrameDelay)*10000);
        if( FrameInfo->DisposalMethod == 2)
        {
            printf("set the Background\r\n");
#ifndef __DISPLAY_DISABLE
            SetBackground(FrameInfo);
#endif
        }
        FrameInfo = FrameInfo->Next;
    }
}
The design I use is to decode all the frames first and build them into a circular singly linked list, then keep displaying the frames in a loop. You can download the decoder logic and details from the following link - http://www.tune2wizard.com/gif-decoder/
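As a rough sketch of that design (FrameData and its Next pointer are the same ones used in Display() above; keeping the decoded frames in an array is my own assumption), linking the frames into a circle looks like this:

/* Link n decoded frames into a circular singly linked list so that
   Display() can loop over the animation indefinitely. */
FrameData *BuildCircularList(FrameData *frames, int n)
{
    for (int i = 0; i < n - 1; i++)
        frames[i].Next = &frames[i + 1];
    frames[n - 1].Next = &frames[0];   /* close the circle */
    return &frames[0];
}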

Related

ESP32 and ArduinoJson library: reading analog values into an array

I'm trying to understand how to use JSON with the ESP32 or Arduino.
In the following code example the idea is to read the values from a potentiometer and display them on the Serial Monitor. I was expecting to see something like this while turning the potentiometer:
"Reading: 0,54,140,175,480,782"
"Reading: 600,523,320,175,48,2"
But I get this
"Reading: 54,54,54,54,54,54"
"Reading: 140,140,140,140,140,140"
#include <ArduinoJson.h>

void setup() {
  Serial.begin(9600);
}

void loop() {
  StaticJsonDocument<500> doc;
  JsonArray analogValues = doc.createNestedArray("analog");
  for (int pin = 0; pin < 6; pin++) {
    int value = analogRead(35);
    analogValues.add(value);
  }
  Serial.print(F("Reading: "));
  serializeJson(doc, Serial);
  Serial.println();
}
Your code takes six samples from the input pin very quickly - faster than you are likely to be able to change the potentiometer. You need to add a delay between the samples to give the potentiometer time to change. So:
for (int pin = 0; pin < 6; pin++) {
  int value = analogRead(35);
  analogValues.add(value);
  delay(200);
}
would wait two tenths of a second between samples.
For some very basic debugging you could also confirm that the issue is the samples themselves and not the way you're handling the JSON, by printing the sample values as you take them. In your original code this would be:
for (int pin = 0; pin < 6; pin++) {
  int value = analogRead(35);
  Serial.println(value);
  analogValues.add(value);
}
It's also possible that the act of outputting the samples slows things down enough that you start to see variation.
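Putting the two changes together (the delay and the debug print), the whole sketch would look something like this, still sampling GPIO 35 as in the original code:

#include <ArduinoJson.h>

void setup() {
  Serial.begin(9600);
}

void loop() {
  StaticJsonDocument<500> doc;
  JsonArray analogValues = doc.createNestedArray("analog");
  for (int i = 0; i < 6; i++) {
    int value = analogRead(35);
    Serial.println(value);   // debug: print each raw sample as it is taken
    analogValues.add(value);
    delay(200);              // give the potentiometer time to move
  }
  Serial.print(F("Reading: "));
  serializeJson(doc, Serial);
  Serial.println();
}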

How to create out-of-tree QEMU devices?

Two possible mechanisms come to mind:
IPC like the existing QMP and QAPI
QEMU loads a shared library plugin that contains the model
Required capabilities (of course all possible through the C API, but not necessarily IPC APIs):
inject interrupts
register callbacks for register access
modify main memory
Why I want this:
use QEMU as a submodule and leave its source untouched
additional advantages only present for IPC methods:
write the models in any language I want
use a non-GPL license for my device
I'm aware of in-tree devices as explained at: How to add a new device in QEMU source code? which are the traditional way of doing things.
What I've found so far:
interrupts: could only find NMI generation with the nmi monitor command
IO ports: IO possible with i and o monitor commands, so I'm fine there
main memory:
the ideal solution would be to map memory to host directly, but that seems hard:
http://kvm.vger.kernel.narkive.com/rto1dDqn/sharing-variables-memory-between-host-and-guest
https://www.linux-kvm.org/images/e/e8/0.11.Nahanni-CamMacdonell.pdf
http://www.fp7-save.eu/papers/SCALCOM2016.pdf
memory read is possible through the x and xp monitor commands
could not find how to write to memory with monitor commands. But I think the GDB API supports it, so it should not be too hard to implement.
The closest working piece of code I could find was: https://github.com/texane/vpcie , which serializes PCI on both sides, and sends it through QEMU's TCP API. But this is more inefficient and intrusive, as it requires extra setup on both guest and host.
This creates an out-of-tree PCI device; it just shows the device in lspci.
It would make PCI driver implementation faster, since it acts as a module.
Can we extend this to have functionality similar to QEMU's edu PCI device?
https://github.com/alokprasad/pci-hacking/blob/master/ksrc/virtual_pcinet/virtual_pci.c
/*
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/sysfs.h>
#include <linux/fs.h>
#include <linux/kobject.h>
#include <linux/device.h>
#include <linux/proc_fs.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/version.h>
#include <linux/kernel.h>

#define PCI_VENDOR_ID_XTREME 0x15b3
#define PCI_DEVICE_ID_XTREME_VNIC 0x1450

static struct pci_bus *vbus;
static struct pci_sysdata *sysdata;

static DEFINE_PCI_DEVICE_TABLE(vpci_dev_table) = {
    {PCI_DEVICE(PCI_VENDOR_ID_XTREME, PCI_DEVICE_ID_XTREME_VNIC)},
    {0}
};
MODULE_DEVICE_TABLE(pci, vpci_dev_table);

int vpci_read(struct pci_bus *bus, unsigned int devfn, int where,
              int size, u32 *val)
{
    switch (where) {
    case PCI_VENDOR_ID:
        *val = PCI_VENDOR_ID_XTREME | PCI_DEVICE_ID_XTREME_VNIC << 16;
        /* our id */
        break;
    case PCI_COMMAND:
        *val = 0;
        break;
    case PCI_HEADER_TYPE:
        *val = PCI_HEADER_TYPE_NORMAL;
        break;
    case PCI_STATUS:
        *val = 0;
        break;
    case PCI_CLASS_REVISION:
        *val = (4 << 24) | (0 << 16) | 1;
        /* network class, ethernet controller, revision 1 */ /* 2 or 4 */
        break;
    case PCI_INTERRUPT_PIN:
        *val = 0;
        break;
    case PCI_SUBSYSTEM_VENDOR_ID:
        *val = 0;
        break;
    case PCI_SUBSYSTEM_ID:
        *val = 0;
        break;
    default:
        *val = 0;
        /* sensible default */
    }
    return 0;
}

int vpci_write(struct pci_bus *bus, unsigned int devfn, int where,
               int size, u32 val)
{
    switch (where) {
    case PCI_BASE_ADDRESS_0:
    case PCI_BASE_ADDRESS_1:
    case PCI_BASE_ADDRESS_2:
    case PCI_BASE_ADDRESS_3:
    case PCI_BASE_ADDRESS_4:
    case PCI_BASE_ADDRESS_5:
        break;
    }
    return 0;
}

struct pci_ops vpci_ops = {
    .read = vpci_read,
    .write = vpci_write
};

void vpci_remove_vnic()
{
    struct pci_dev *pcidev = NULL;

    if (vbus == NULL)
        return;
    pci_remove_bus_device(pcidev);
    pci_dev_put(pcidev);
}
EXPORT_SYMBOL(vpci_remove_vnic);

void vpci_vdev_remove(struct pci_dev *dev)
{
}

static struct pci_driver vpci_vdev_driver = {
    .name = "Xtreme-Virtual-NIC1",
    .id_table = vpci_dev_table,
    .remove = vpci_vdev_remove
};

int vpci_bus_init(void)
{
    struct pci_dev *pcidev = NULL;

    sysdata = kzalloc(sizeof(void *), GFP_KERNEL);
    vbus = pci_scan_bus_parented(NULL, 2, &vpci_ops, sysdata);
    //vbus = pci_create_root_bus(NULL,i,& vpci_ops, sysdata,NULL);
    //if (vbus != NULL)
    //    break;
    memset(sysdata, 0, sizeof(void *));
    if (vbus == NULL) {
        kfree(sysdata);
        return -EINVAL;
    }
    if (pci_register_driver(&vpci_vdev_driver) < 0) {
        pci_remove_bus(vbus);
        vbus = NULL;
        return -EINVAL;
    }
    pcidev = pci_scan_single_device(vbus, 0);
    if (pcidev == NULL)
        return 0;
    else
        pci_dev_get(pcidev);
    pci_bus_add_devices(vbus);
    return 0;
}

void vpci_bus_remove(void)
{
    if (vbus) {
        pci_unregister_driver(&vpci_vdev_driver);
        device_unregister(vbus->bridge);
        pci_remove_bus(vbus);
        kfree(sysdata);
        vbus = NULL;
    }
}

static int __init pci_init(void)
{
    printk("module loaded");
    vpci_bus_init();
    return 0;
}

static void __exit pci_exit(void)
{
    printk(KERN_ALERT "unregister PCI Device\n");
    pci_unregister_driver(&vpci_vdev_driver);
}

module_init(pci_init);
module_exit(pci_exit);
MODULE_LICENSE("GPL");
There is at least one fork of QEMU I'm aware of that offers shared library plugins for QEMU... but it's a fork of QEMU 4.0.
https://github.com/cromulencellc/qemu-shoggoth
It is possible to build out-of-tree plugins with this fork, though it's not documented.
On Nov 11 2019 Peter Maydell, a major QEMU contributor, commented on another Stack Overflow question that:
Device plugins are specifically off the menu, because upstream does not want to provide a nice easy mechanism for people to use to have out-of-tree non-GPL/closed-source devices.
So it seems that the QEMU developers opposed this idea at that point in time. It is worth learning about the QEMU plugin system though, which might come in handy for related applications in any case: How to count the number of guest instructions QEMU executed from the beginning to the end of a run?
This is a shame. Imagine if the Linux kernel didn't have a kernel module interface! I suggest QEMU expose this interface but not make it stable, so that it doesn't impose a maintenance burden on developers, with the upside that out-of-tree devices that do get merged later won't have such painful rebases.

Image not rendering on web browser

I got kaleidoscope code from Gary George at OpenProcessing. I tried to modify it to meet my needs and export it to the web, but I'm having trouble rendering the image in the browser. It runs well on the desktop but not in the browser. I've been trying to fix the error but... no luck (yet, I hope).
Here is the code:
/**
 * Kaleidoscope by Gary George.
 *
 * Load an image.
 * Move around the mouse to explore other parts of the image.
 * Press the up and down arrows to add slices.
 * Press s to save.
 *
 * I had wanted to do a kaleidoscope and was inspired by Devon Eckstein's Hexagon Stitchery
 * and his use of mask. His sketch can be found at http://www.openprocessing.org/visuals/?visualID=1288
 */
PImage a;
int totalSlices=8; // the number of slices the image will start with... should be divisable by 4
int previousMouseX, previousMouseY; //store previous mouse coordinates

void setup()
{
  size(500,500, JAVA2D);
  background(0,0,0);
  smooth(); //helps with gaps inbetween slices
  fill(255);
  frameRate(30);
  a=loadImage("pattern.jpg");
}

void draw() {
  if(totalSlices==0){
    background(0,0,0);
    image(a,0,0);
  }
  else{
    if(mouseButton == LEFT){
      background(0,0,0);
      //the width and height parameters for the mask
      int w = int(width/3.2);
      int h = int(height/3.2);
      //create a mask of a slice of the original image.
      PGraphics selection_mask;
      selection_mask = createGraphics(w, h, JAVA2D);
      selection_mask.beginDraw();
      selection_mask.smooth();
      selection_mask.arc(0,0, 2*w, 2*h, 0, radians(360/totalSlices+.1)); //using 369 to reduce lines on arc edges
      selection_mask.endDraw();
      float wRatio = float(a.width-w)/float(width);
      float hRatio = float(a.height-h)/float(height);
      //println("ratio: "+hRatio+"x"+wRatio);
      PImage slice = createImage(w, h, RGB);
      slice = a.get(int((mouseX)*wRatio), int((mouseY)*hRatio), w, h);
      slice.mask(selection_mask);
      translate(width/2,height/2);
      float scaleAmt = 1.5;
      scale(scaleAmt);
      for(int k = 0; k<=totalSlices ;k++){
        rotate(k*radians(360/(totalSlices/2)));
        image(slice, 0, 0);
        scale(-1.0, 1.0);
        image(slice,0,0);
      }
    }
    resetMatrix();
  }
}
You need to change two things for loading a local image in JS mode:
Your images must be in a folder called data inside your sketch folder. You can just make the folder yourself and put the image in it. You still load the image as before, no need to specify the data folder.
In JS mode, you need to preload the image using this command: /* #pjs preload="pattern.jpg"; */
So your full image loading code would be:
/* #pjs preload="pattern.jpg"; */
a = loadImage("pattern.jpg");

How to select a GPU with CUDA?

I have a computer with 2 GPUs. I wrote a CUDA C program and I need to tell it somehow that I want to run it on just one of the two graphics cards; what is the command I need to type and how should I use it? I believe it is somehow related to cudaSetDevice, but I can't really find out how to use it.
It should be pretty clear from the documentation of cudaSetDevice, but let me provide the following code snippet.
bool IsGpuAvailable()
{
    int devicesCount;
    cudaGetDeviceCount(&devicesCount);
    for(int deviceIndex = 0; deviceIndex < devicesCount; ++deviceIndex)
    {
        cudaDeviceProp deviceProperties;
        cudaGetDeviceProperties(&deviceProperties, deviceIndex);
        if (deviceProperties.major >= 2
            && deviceProperties.minor >= 0)
        {
            cudaSetDevice(deviceIndex);
            return true;
        }
    }
    return false;
}
This is how I iterated through all available GPUs (cudaGetDeviceCount) looking for the first one with a compute capability of at least 2.0. If such a device was found, I used cudaSetDevice so that all CUDA computations were executed on that particular device. Without calling cudaSetDevice, your CUDA app would execute on the first GPU, i.e. the one with deviceIndex == 0, but which particular GPU that is depends on which GPU sits in which PCIe slot.
EDIT:
After clarifying your question in the comments, it seems to me that choosing the device based on its name should suit you. If you are unsure about your actual GPU names, run this code, which will print the names of all your GPUs to the console:
int devicesCount;
cudaGetDeviceCount(&devicesCount);
for(int deviceIndex = 0; deviceIndex < devicesCount; ++deviceIndex)
{
    cudaDeviceProp deviceProperties;
    cudaGetDeviceProperties(&deviceProperties, deviceIndex);
    cout << deviceProperties.name << endl;
}
After that, choose the name of the GPU that you want to use for computations, let's say it is "GTX XYZ". Call the following method from your main method; thanks to it, all the CUDA kernels will be executed on the device named "GTX XYZ". You should also check the return value - true if a device with that name is found, false otherwise:
bool SetGPU()
{
    int devicesCount;
    cudaGetDeviceCount(&devicesCount);
    string desiredDeviceName = "GTX XYZ";
    for(int deviceIndex = 0; deviceIndex < devicesCount; ++deviceIndex)
    {
        cudaDeviceProp deviceProperties;
        cudaGetDeviceProperties(&deviceProperties, deviceIndex);
        if (deviceProperties.name == desiredDeviceName)
        {
            cudaSetDevice(deviceIndex);
            return true;
        }
    }
    return false;
}
Of course you have to change the value of the desiredDeviceName variable to the desired value.
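For completeness, a minimal sketch of calling it from main (assuming SetGPU() is defined as above) might be:

#include <iostream>

int main()
{
    if (!SetGPU())
    {
        std::cerr << "Device \"GTX XYZ\" was not found." << std::endl;
        return 1;
    }
    // ... allocate device memory and launch kernels here; they run on the selected device ...
    return 0;
}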
Searching more carefully on the internet, I found these lines of code that select the GPU with the most cores among all the devices installed in the PC.
int num_devices, device;
cudaGetDeviceCount(&num_devices);
if (num_devices > 1) {
    int max_multiprocessors = 0, max_device = 0;
    for (device = 0; device < num_devices; device++) {
        cudaDeviceProp properties;
        cudaGetDeviceProperties(&properties, device);
        if (max_multiprocessors < properties.multiProcessorCount) {
            max_multiprocessors = properties.multiProcessorCount;
            max_device = device;
        }
    }
    cudaSetDevice(max_device);
}

Bullet Physics Library switching rigidbody between static_object and dynamic_object

I am fairly new to Bullet and my goal here is to be able to switch a btRigidBody between static and dynamic. To initialize my rigidbody I start out by doing this:
btGImpactMeshShape* triMesh=new btGImpactMeshShape(mTriMesh);
triMesh->setLocalScaling(btVector3(1,1,1));
triMesh->updateBound();
meshInfos[currentMesh].shape=triMesh;
meshInfos[currentMesh].motionState=new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1), btVector3(position.x,position.y,position.z)));
meshInfos[currentMesh].mass=mass;
btVector3 inertia;
meshInfos[currentMesh].shape->calculateLocalInertia(mass, inertia);
btRigidBody::btRigidBodyConstructionInfo rigidBodyCI(0, meshInfos[currentMesh].motionState, meshInfos[currentMesh].shape, inertia);
meshInfos[currentMesh].rigidBody=new btRigidBody(rigidBodyCI);
That sets it up as a static object because the "mass" variable I have is 0 to start. Later on I have a function that checks if a boolean was switched on, and it switches the rigidbody to a dynamic object like this:
if(gravityOn && !addedToWorld)
{
    if(mass>0)
    {
        world->removeRigidBody(body);
        btVector3 inertia;
        body->getCollisionShape()->calculateLocalInertia(mass, inertia);
        body->setMassProps(mass, inertia);
        body->setLinearFactor(btVector3(1,1,1));
        body->setAngularFactor(btVector3(1,1,1));
        body->updateInertiaTensor();
        world->addRigidBody(body);
        addedToWorld = true;
    }
    else
    {
        std::cout << "Mass must be set to a number greater than 0" << std::endl;
    }
}
else if(!gravityOn && addedToWorld)
{
    world->removeRigidBody(body);
    body->setMassProps(0, btVector3(0,0,0));
    body->setLinearFactor(btVector3(0,0,0));
    body->setAngularFactor(btVector3(0,0,0));
    body->updateInertiaTensor();
    world->addRigidBody(body);
    addedToWorld = false;
}
The addedToWorld boolean just makes sure that the if statement doesn't keep running through the code block every update.
From what I have researched this should work, but instead it does nothing. Am I missing something? From what I've seen, the best practice is to remove the rigidbody before making any changes to it. Then setMassProps changes the mass and inertia, setLinearFactor and setAngularFactor control whether the object moves when collided with depending on the vectors you pass in, updateInertiaTensor lets the inertia update properly, and then I add the rigidbody back. I might have misunderstood some of this; any help would be greatly appreciated.
After a long night I figured it out. First of all, for a reason unknown to me at this time, using a triangle mesh (the mesh of the object) can crash the application, so instead I used a convex collision shape. In addition, you need to set some flags during the switch to properly toggle between static and dynamic. The code is as follows:
btConvexShape* tmpConvexShape = new btConvexTriangleMeshShape(mTriMesh);
btShapeHull* hull = new btShapeHull(tmpConvexShape);
btScalar margin = tmpConvexShape->getMargin();
hull->buildHull(margin);
tmpConvexShape->setUserPointer(hull);

btConvexHullShape* convexShape = new btConvexHullShape();
bool updateLocalAabb = false;
for(int i=0; i<hull->numVertices(); i++)
{
    convexShape->addPoint(hull->getVertexPointer()[i], updateLocalAabb);
}
convexShape->recalcLocalAabb();
convexShape->setMargin(0.001f);
delete tmpConvexShape;
delete hull;

meshInfos[currentMesh].shape = convexShape;
meshInfos[currentMesh].motionState = new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1), btVector3(position.x,position.y,position.z)));
meshInfos[currentMesh].mass = mass;
btVector3 inertia;
meshInfos[currentMesh].shape->calculateLocalInertia(mass, inertia);
btRigidBody::btRigidBodyConstructionInfo rigidBodyCI(0, meshInfos[currentMesh].motionState, meshInfos[currentMesh].shape, inertia);
meshInfos[currentMesh].rigidBody = new btRigidBody(rigidBodyCI);
and switching:
if(gravityOn && !addedToWorld)
{
    if(mass>0)
    {
        world->removeRigidBody(body);
        btVector3 inertia(0,0,0);
        body->getCollisionShape()->calculateLocalInertia(mass, inertia);
        body->setActivationState(DISABLE_DEACTIVATION);
        body->setMassProps(mass, inertia);
        body->setLinearFactor(btVector3(1,1,1));
        body->setAngularFactor(btVector3(1,1,1));
        body->updateInertiaTensor();
        body->clearForces();
        btTransform transform;
        transform.setIdentity();
        float x = position.x;
        float y = position.y;
        float z = position.z;
        transform.setOrigin(btVector3(x, y, z));
        body->getCollisionShape()->setLocalScaling(btVector3(1,1,1));
        body->setWorldTransform(transform);
        world->addRigidBody(body);
        addedToWorld = true;
    }
    else
    {
        std::cout << "Mass must be set to a number greater than 0" << std::endl;
    }
}
else if(!gravityOn && addedToWorld)
{
    world->removeRigidBody(body);
    btVector3 inertia(0,0,0);
    body->getCollisionShape()->calculateLocalInertia(0, inertia);
    body->setCollisionFlags(btCollisionObject::CF_STATIC_OBJECT);
    body->setMassProps(0, inertia);
    body->setLinearFactor(btVector3(0,0,0));
    body->setAngularFactor(btVector3(0,0,0));
    body->setGravity(btVector3(0,0,0));
    body->updateInertiaTensor();
    body->setAngularVelocity(btVector3(0,0,0));
    body->setLinearVelocity(btVector3(0,0,0));
    body->clearForces();
    body->setActivationState(WANTS_DEACTIVATION);
    btTransform transform;
    transform.setIdentity();
    float x = position.x;
    float y = position.y;
    float z = position.z;
    transform.setOrigin(btVector3(x, y, z));
    body->getCollisionShape()->setLocalScaling(btVector3(1,1,1));
    body->setWorldTransform(transform);
    world->addRigidBody(body);
    addedToWorld = false;
}
Use a short function to change the body's mass (static if 0.0f, dynamic otherwise):
void InceptionPhysics::changeMassObject(btRigidBody* rigidBody, float mass) {
    logFileStderr(VERBOSE, "mass... %5.2f\n", mass);
    m_dynamicsWorld->removeRigidBody(rigidBody);
    btVector3 inertia;
    rigidBody->getCollisionShape()->calculateLocalInertia(mass, inertia);
    rigidBody->setMassProps(mass, inertia);
    m_dynamicsWorld->addRigidBody(rigidBody);
}
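A hypothetical call site (the physics pointer and the already-created static body are my own assumptions, not part of the answer) could look like this:

// Make the body dynamic with a mass of 5, then later make it static again.
physics->changeMassObject(body, 5.0f);
body->activate(true);                   // wake the body so gravity takes effect
// ... simulate ...
physics->changeMassObject(body, 0.0f);  // mass 0 makes it static again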