Get size of visible area inside browser window - firebreath

This is a continuation of an earlier question (http://goo.gl/a61CG).
I'm trying to retrieve the visible size of the DOM window or document (not sure which term is correct) that contains the plugin. I have been studying the FireBreath reference, but I'm coming up short of an answer.
For instance inside onWindowAttached I do this:
m_host->htmlLog("Attaching window.");
FB::DOM::ElementPtr element_ptr(m_host->getDOMElement());
if( element_ptr )
{
    int Width = element_ptr->getWidth();
    int Height = element_ptr->getHeight();
    std::stringstream ss;
    ss << "width: " << Width << "; height: " << Height << std::endl;
    m_host->htmlLog(ss.str());

    FB::DOM::ElementPtr parent_element_ptr = element_ptr->getParentNode();
    if( parent_element_ptr )
    {
        int Width = parent_element_ptr->getWidth();
        int Height = parent_element_ptr->getHeight();
        std::stringstream ss;
        ss << "parent props: width: " << Width << "; height: " << Height << std::endl;
        m_host->htmlLog(ss.str());
    }
}
m_host->htmlLog("Finished attaching window.");
Google Chrome (v23) gives me this now:
Attaching window.
width: 300; height: 300
Finished attaching window.
The 300x300 pixels refers to the size of the HTML object element that tells the browser to load the plugin.
So, what is the way to retrieve the visible area of the browser window that contains the plugin?
I'm using a recent FireBreath trunk version on Windows 7 with Visual Studio 2010.
Thanks,
Christian

Basically, what you should be looking for is not how to do this with FireBreath specifically, but how to do it with JavaScript. Then you just do the same thing using FireBreath's DOM element / window / document abstractions.
Many people don't realize that the best browser plugin developers are the ones who also understand JavaScript really well.
See screen width vs visible portion
Now, you'll want to make sure you test this on all browsers; IE doesn't expose some properties through IDispatch (which is what FireBreath uses by default), in which case a custom handler may need to be added to the DOM abstraction. Talk to me on IRC if that's the case (http://npapi.com/chat) and I'll help you.
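As a starting point, here is the usual JavaScript way to read the visible viewport size, with the fallbacks older IE needs. This is a sketch: the window and document are passed in as parameters (so the function can be exercised outside a browser), and in FireBreath you would read the same properties through the DOM window/document abstractions rather than running this in the page.

```javascript
// Visible viewport size, with fallbacks for older IE quirks/standards modes.
// Passing window/document in as parameters keeps the function testable.
function viewportSize(win, doc) {
    var d = doc.documentElement || {};
    var b = doc.body || {};
    return {
        width:  win.innerWidth  || d.clientWidth  || b.clientWidth  || 0,
        height: win.innerHeight || d.clientHeight || b.clientHeight || 0
    };
}
```

In the page you would simply call viewportSize(window, document).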

Related

How does a computer resize an image?

Image resizing is nearly universal in any GUI framework. In fact, one of the first things you learn when starting out in web development is how to scale images using CSS or the HTML img element's attributes. But how does this work?
When I tell the computer to scale a 500x500 image to 100x50, or the reverse, how does it know which pixels to draw from the original image? Lastly, is it reasonably easy for me to write my own "image transformer" in another programming language without significant drops in performance?
Based on a bit of research, I can conclude that most web browsers use nearest-neighbor or linear interpolation for image resizing. I've written a proof-of-concept nearest-neighbor algorithm that successfully resizes images, albeit VERY slowly.
using System;
using System.Drawing;

namespace Image_Resize
{
    class ImageResizer
    {
        public static Image Resize(Image baseImage, int newHeight, int newWidth)
        {
            var baseBitmap = new Bitmap(baseImage);
            int baseHeight = baseBitmap.Height;
            int baseWidth = baseBitmap.Width;

            // Nearest-neighbor interpolation maps each pixel in the resized image
            // to the closest pixel in the old image. Say we have a 2x2 image and
            // want to make it 9x9.
            // Step 1. Shrink the new coordinates back to the old scale: divide the
            // old width by the new width (i.e. 2/9).
            float widthRatio = (float)baseWidth / newWidth;
            float heightRatio = (float)baseHeight / newHeight;

            // Step 2. For each new pixel, truncate the scaled coordinate to an integer.
            // A new pixel at (4,5) maps to (4*2/9, 5*2/9) = (0.8888, 1.1111), which
            // SHOULD GO DOWN to (0,1) in the 2x2 source. Seems counterintuitive, but
            // imagining a 2x2 grid, (4,5) sits in the bottom-left quadrant of the 9x9,
            // so it makes sense for it to land on the (0,1) pixel.
            var watch = new System.Diagnostics.Stopwatch();
            watch.Start();
            Bitmap resized = new Bitmap(newWidth, newHeight);
            int oldX = 0; int oldY = 0;
            for (int i = 0; i < newWidth; i++)
            {
                oldX = (int)(i * widthRatio);
                for (int j = 0; j < newHeight; j++)
                {
                    oldY = (int)(j * heightRatio);
                    Color newColor = baseBitmap.GetPixel(oldX, oldY);
                    resized.SetPixel(i, j, newColor);
                }
            }
            // This works, but is ~100x slower than the standard library methods due to
            // the GetPixel() and SetPixel() calls. Resizing a 1920x1080 image takes
            // about a second on average.
            watch.Stop();
            Console.WriteLine("Resizing the image took " + watch.Elapsed.TotalMilliseconds + "ms.");
            return resized;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var img = Image.FromFile(@"C:\Users\kpsin\Pictures\codeimage.jpg");
            img = ImageResizer.Resize(img, 1000, 1500);
            img.Save(@"C:\Users\kpsin\Pictures\codeimage1.jpg");
        }
    }
}
I do hope that someone else can come along and provide either a) a faster algorithm for nearest neighbor because I'm overlooking something silly, or b) another way that image scalers work that I'm not aware of. Otherwise, question... answered?
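For what it's worth, the per-pixel GetPixel()/SetPixel() calls are the bottleneck, not the algorithm: each call crosses into GDI+ and does its own validation. Run over a raw pixel buffer (what Bitmap.LockBits gives you in C#), the same index mapping is cheap. Here is a minimal single-channel sketch of that mapping, written in JavaScript over a flat array for brevity:

```javascript
// Nearest-neighbour resize of a single-channel image stored as a flat array.
// Same mapping as the C# code above: each destination pixel (x, y) reads the
// source pixel at (floor(x * srcW / dstW), floor(y * srcH / dstH)).
function resizeNearest(src, srcW, srcH, dstW, dstH) {
    var dst = new Array(dstW * dstH);
    var xRatio = srcW / dstW;
    var yRatio = srcH / dstH;
    for (var y = 0; y < dstH; y++) {
        var srcY = Math.floor(y * yRatio);
        for (var x = 0; x < dstW; x++) {
            var srcX = Math.floor(x * xRatio);
            dst[y * dstW + x] = src[srcY * srcW + srcX];
        }
    }
    return dst;
}
```

Upscaling the 2x2 image [10, 20, 30, 40] to 4x4 simply repeats each source pixel over a 2x2 block of the destination.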

Image not rendering on web browser

I got a kaleidoscope sketch from Gary George on OpenProcessing. I tried to modify it to meet my needs and export it to the web, but I'm having trouble rendering the image in the browser. It runs well on the desktop but not in the browser. I've been trying to fix the error but no luck (yet, I hope).
Here is the code:
/**
 * Kaleidoscope by Gary George.
 *
 * Load an image.
 * Move around the mouse to explore other parts of the image.
 * Press the up and down arrows to add slices.
 * Press s to save.
 *
 * I had wanted to do a Kaleidoscope and was inspired by Devon Eckstein's
 * Hexagon Stitchery and his use of Mask. His sketch can be found at
 * http://www.openprocessing.org/visuals/?visualID=1288
 */
PImage a;
int totalSlices = 8; // the number of slices the image will start with... should be divisible by 4
int previousMouseX, previousMouseY; // store previous mouse coordinates

void setup()
{
  size(500, 500, JAVA2D);
  background(0, 0, 0);
  smooth(); // helps with gaps in between slices
  fill(255);
  frameRate(30);
  a = loadImage("pattern.jpg");
}

void draw() {
  if (totalSlices == 0) {
    background(0, 0, 0);
    image(a, 0, 0);
  }
  else {
    if (mouseButton == LEFT) {
      background(0, 0, 0);
      // the width and height parameters for the mask
      int w = int(width / 3.2);
      int h = int(height / 3.2);
      // create a mask of a slice of the original image
      PGraphics selection_mask;
      selection_mask = createGraphics(w, h, JAVA2D);
      selection_mask.beginDraw();
      selection_mask.smooth();
      selection_mask.arc(0, 0, 2 * w, 2 * h, 0, radians(360 / totalSlices + .1)); // the extra .1 reduces lines on arc edges
      selection_mask.endDraw();
      float wRatio = float(a.width - w) / float(width);
      float hRatio = float(a.height - h) / float(height);
      //println("ratio: " + hRatio + "x" + wRatio);
      PImage slice = createImage(w, h, RGB);
      slice = a.get(int((mouseX) * wRatio), int((mouseY) * hRatio), w, h);
      slice.mask(selection_mask);
      translate(width / 2, height / 2);
      float scaleAmt = 1.5;
      scale(scaleAmt);
      for (int k = 0; k <= totalSlices; k++) {
        rotate(k * radians(360 / (totalSlices / 2)));
        image(slice, 0, 0);
        scale(-1.0, 1.0);
        image(slice, 0, 0);
      }
    }
    resetMatrix();
  }
}
You need to change two things to load a local image in JavaScript mode:
Your images must be in a folder called data inside your sketch folder. You can just make the folder yourself and put the image in it. You still load the image as before; there's no need to specify the data folder.
In JavaScript mode, you need to preload the image using this directive: /* @pjs preload="pattern.jpg"; */
So your full image-loading code would be:
/* @pjs preload="pattern.jpg"; */
a = loadImage("pattern.jpg");

Understanding heisenbug example: different precision in registers vs main memory

I read the Wikipedia page about heisenbugs, but I don't understand this example. Can anyone explain it in detail?
One common example of a heisenbug is a bug that appears when the program is compiled with an optimizing compiler, but not when the same program is compiled without optimization (as is often done for the purpose of examining it with a debugger). While debugging, values that an optimized program would normally keep in registers are often pushed to main memory. This may affect, for instance, the result of floating-point comparisons, since the value in memory may have smaller range and accuracy than the value in the register.
Here's a concrete example recently posted:
Infinite loop heisenbug: it exits if I add a printout
It's a really nice specimen because we can all reproduce it: http://ideone.com/rjY5kQ
These bugs depend on very precise features of the platform, which is also why people find them so difficult to reproduce.
In this case, when the printout is omitted, the program performs the comparison at the higher precision of the CPU's floating-point registers (wider than a stored double).
But to print the value, the compiler decides to move the result out to main memory, which implicitly truncates it to double precision. When the comparison then uses that truncated value, it succeeds.
#include <iostream>
#include <cmath>

double up = 19.0 + (61.0 / 125.0);
double down = -32.0 - (2.0 / 3.0);
double rectangle = (up - down) * 8.0;

double f(double x) {
    return (pow(x, 4.0) / 500.0) - (pow(x, 2.0) / 200.0) - 0.012;
}

double g(double x) {
    return -(pow(x, 3.0) / 30.0) + (x / 20.0) + (1.0 / 6.0);
}

double area_upper(double x, double step) {
    return (((up - f(x)) + (up - f(x + step))) * step) / 2.0;
}

double area_lower(double x, double step) {
    return (((g(x) - down) + (g(x + step) - down)) * step) / 2.0;
}

double area(double x, double step) {
    return area_upper(x, step) + area_lower(x, step);
}

int main() {
    double current = 0, last = 0, step = 1.0;
    do {
        last = current;
        step /= 10.0;
        current = 0;
        for (double x = 2.0; x < 10.0; x += step) current += area(x, step);
        current = rectangle - current;
        current = round(current * 1000.0) / 1000.0;
        //std::cout << current << std::endl; //<-- COMMENT BACK IN TO "FIX" BUG
    } while (current != last);
    std::cout << current << std::endl;
    return 0;
}
Edit: verified that the bug and the fix still reproduce: 03-Feb-22, 20-Feb-17.
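The register-versus-memory effect itself needs x87 hardware and specific compiler settings to reproduce, but the underlying truncation is easy to imitate in any language that has two float widths. Here is a sketch in JavaScript, where numbers are 64-bit doubles and Math.fround() narrows to a 32-bit float, standing in for the narrower in-memory copy:

```javascript
// A double carries more significand bits than a float; narrowing it loses
// precision in the same way as spilling a wide x87 register value to a
// 64-bit double in memory.
var precise = 0.1 + 0.2;              // 0.30000000000000004, not exactly 0.3
var truncated = Math.fround(precise); // round to the nearest 32-bit float

var extraBitsSurvive = (precise !== 0.3);         // true
var narrowingChangedIt = (truncated !== precise); // true
```

A comparison done on `precise` and one done on `truncated` can therefore disagree, which is exactly what happens when the printout forces the loop variable out of the register.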
The name comes from the Uncertainty Principle, which basically states that there is a fundamental limit to the precision with which certain pairs of physical properties of a particle can be known simultaneously. If you observe a particle too closely (i.e., you know its position precisely), then you can't measure its momentum precisely; and if you know its momentum precisely, you can't tell its exact position.
Following this, a heisenbug is a bug that disappears when you are watching it closely.
In your example, if you need the program to perform well, you compile it with optimization, and the bug appears. But as soon as you enter debugging mode, you compile without optimization, which removes the bug.
So if you start observing the bug too closely, you become unable to pin down its properties (or to find it at all), which resembles Heisenberg's Uncertainty Principle; hence the name heisenbug.
The idea is that code is compiled into two states: one is the normal or debug mode, and the other is the optimized or production mode.
Just as it is important to know what happens to matter at the quantum level, we should also know what happens to our code at the compiler level!

openCV / unhandled exception or msvcp100d.dll

I do realise that this problem is pretty common, but I have spent around 4 days so far trying to fix it on my own, using all the smart advice I found on the Internet, and unfortunately I've failed.
I managed to make OpenCV 2.4.6 work with my Visual Studio 2012, or at least that's what I assumed after I was able to stream video from my webcam with this example:
#include "stdafx.h"
#include "opencv2/opencv.hpp"

int main( int argc, const char** argv )
{
    CvCapture* capture;
    IplImage* newImg;
    while (true)
    {
        capture = cvCaptureFromCAM(-1);
        newImg = cvQueryFrame( capture );
        cvNamedWindow("Window1", CV_WINDOW_AUTOSIZE);
        cvShowImage("Window1", newImg);
        int c = cvWaitKey(10);
        if( (char)c == 27 ) { exit(0); }
    }
    cvReleaseImage(&newImg);
    return 0;
}
Everything worked fine, so I decided to play around with it and made an attempt at a simple image-processing operation, such as converting RGB to grayscale. I modified my code to the following:
#include "stdafx.h"
#include "opencv2/opencv.hpp"

int main( int argc, const char** argv )
{
    CvCapture* capture;
    IplImage* img1;
    IplImage* img2;
    while (true)
    {
        capture = cvCaptureFromCAM(-1);
        img1 = cvQueryFrame( capture );
        img2 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);
        cvCvtColor(img1, img2, CV_RGB2GRAY);
        cvNamedWindow("Window1", CV_WINDOW_AUTOSIZE);
        cvNamedWindow("Window2", CV_WINDOW_AUTOSIZE);
        cvShowImage("Window1", img1);
        cvNamedWindow("Window2", CV_WINDOW_AUTOSIZE);
        int c = cvWaitKey(10);
        if( (char)c == 27 ) { exit(0); }
    }
    cvReleaseImage(&img1);
    cvReleaseImage(&img2);
    return 0;
}
And that's where the nightmare started. I keep getting:
Unhandled exception at 0x000007FEFD57AA7D in opencvbegginer.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000000030F920.
I did some research and tried a few solutions, such as exchanging opencv_core246.lib for opencv_core246d.lib, etc. For a second I hoped it might work, but reality punched me again with a missing msvcp100d.dll. I tried updating all the redistributable packages, but it didn't change the fact that I keep getting this error. While looking for another way to fix it, I found a forum post that said to go to the C/C++ properties and change the Runtime Library to /MTd, so I tried that as well, but, as you can tell by now, it didn't work.
At this point I have simply run out of ideas on how to fix this, so I would be really grateful for any help.
Cheers
PS. An important thing to add: when I got the unhandled exception, OpenCV 'spoke to me', saying:
OpenCV Error: Bad argument in unknown function, file ......\scr\opencv\modules\core\src\array.cpp, line 1238
However, I already assumed back then that I'm just not being clever enough with my supposedly idiot-resistant code, so I tried a few other pieces of code written by competent people; unfortunately, I keep getting exactly the same error (and everything else behaves the same as well, after I change the things mentioned above).
If img1 == NULL, the code crashes on cvGetSize(img1). Try enclosing the code after cvQueryFrame in an if (img1 != NULL) block.
However, if cvQueryFrame returns NULL for every frame, it means there is something wrong with your camera, your drivers, or the way you capture the frames.
You should also move the cvNamedWindow calls outside of the loop, since there is no need to recreate the windows for every frame.

Scaling Imagemaps in HTML

I have an existing HTML page which contains an image with corresponding image map.
I want to use the image in a different page, but need to scale it as the available space is smaller. The problem is that if I scale the image then the image map no longer works with it.
How can I achieve the same effect as an image and image map combination, but allow the image to be scaled?
Here's an image map resizer:
http://blog.outsharked.com/p/image-map-resizer.html
A simplified version of the essential code it uses looks like this:
// map: a jQuery object wrapping the <map> element;
// xAmount/yAmount: the x and y scale factors, computed elsewhere.
map.find('area').each(function() {
    var j, coordx, coordy,
        area = $(this),
        coords = area.attr('coords').split(','),
        isVisible = true,
        newCoords = '';
    for (j = 0; j < coords.length; j += 2) {
        coordx = parseInt(coords[j], 10);
        coordy = parseInt(coords[j + 1], 10);
        // ignore last coord if uneven numbers (meaning it's a circle)
        newCoords += (!newCoords ? '' : ',') + Math.round(coordx * xAmount) +
            (j + 1 < coords.length ? ',' + Math.round(coordy * yAmount) : '');
    }
    area.attr('coords', newCoords); // write the rescaled coordinates back
});
It basically iterates through each area, recalculating the coordinate data.
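The same arithmetic can be pulled out as a standalone function; this is a sketch rather than the plugin's actual code, where xAmount/yAmount are the ratios of the new display size to the size the coords were authored for:

```javascript
// Scale an <area> coords string ("x1,y1,x2,y2,...") by independent x/y factors.
// Even-indexed values are x coordinates, odd-indexed values are y coordinates.
function scaleCoords(coords, xAmount, yAmount) {
    return coords.split(',').map(function (value, i) {
        var n = parseInt(value, 10);
        return Math.round(n * (i % 2 === 0 ? xAmount : yAmount));
    }).join(',');
}
```

For example, halving the width and doubling the height of a rect area: scaleCoords('10,20,30,40', 0.5, 2) gives '5,40,15,80'.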
You should be able to coordinate this with a window.resize event to change the size of your imagemap and image simultaneously. Because window.resize fires many times as a user scales the window, you probably don't want to try to resize the imagemap every single time the event fires. Since it's computationally intensive, it could cause a slow user experience. Instead, buffer it and only do a resize every once in a while.
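One common way to buffer the events is a small debounce helper: the handler only runs once the resize events have stopped arriving for a moment. A minimal sketch (the handler name and the 150 ms delay are arbitrary choices):

```javascript
// Debounce: returns a wrapper that delays fn until `wait` ms have passed
// without another call, collapsing a burst of events into one invocation.
function debounce(fn, wait) {
    var timer = null;
    return function () {
        var self = this, args = arguments;
        clearTimeout(timer);
        timer = setTimeout(function () { fn.apply(self, args); }, wait);
    };
}

// Usage in the page (hypothetical handler name):
// $(window).on('resize', debounce(rescaleImageMapAndImage, 150));
```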
Here is a fiddle showing how to buffer window.resize events to do the same thing using the ImageMapster jquery plugin: http://jsfiddle.net/jamietre/jQG48/