Get Flex app's position on a web page? - html

Is it possible to get the x,y coordinates of a Flex app within an HTML page? I know you can use ExternalInterface.objectID to get the "id attribute of the object tag in Internet Explorer, or the name attribute of the embed tag in Netscape", but I can't seem to get past that step. It seems like it should be possible to get a handle on that embed object. Any suggestions?
Thanks.

I think the easiest thing to do is to include some kind of JavaScript library on the HTML page, say jQuery, and use its functions for determining the position and size of DOM nodes. I would do it more or less like this:
var jsCode : String = "function( id ) { return $('#' + id).offset(); }";
var offset : Object = ExternalInterface.call(jsCode, ExternalInterface.objectID);
trace(offset.left, offset.top);
Notice that this is ActionScript code, but it runs JavaScript code through ExternalInterface. It uses jQuery and in particular its offset method that returns the left and top offset of a DOM node.
You could do without jQuery if you looked at how the offset method is implemented and included that code in place of the call to jQuery. That way you wouldn't need to load jQuery in the HTML, and the Flex app would be self-contained. The reason I suggest using a library like jQuery is that browsers do these things differently. I'm not sure how much offset calculation varies from browser to browser, but it doesn't hurt to be insulated from the differences.
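For reference, a jQuery-free version could be inlined the same way. This is only a sketch: it walks the offsetParent chain, which is roughly what jQuery's offset() does, minus the cross-browser corrections jQuery applies:
var jsCode : String =
    "function( id ) {" +
    "    var el = document.getElementById(id);" +
    "    var left = 0, top = 0;" +
    "    while (el) {" +
    "        left += el.offsetLeft;" +
    "        top += el.offsetTop;" +
    "        el = el.offsetParent;" +
    "    }" +
    "    return { left: left, top: top };" +
    "}";
var offset : Object = ExternalInterface.call(jsCode, ExternalInterface.objectID);
trace(offset.left, offset.top);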
The JavaScript in my example is an anonymous function so that the ID of the embed/object tag can be passed in to it as a parameter to ExternalInterface.call, but you could just use string concatenation if you want:
var jsCode : String = "function() { return $('#" + ExternalInterface.objectID + "').offset(); }";
var offset : Object = ExternalInterface.call(jsCode);
That would work too, I just think the first version is more elegant.

If you are just trying to measure where it sits within the page as an external user, the only thing that comes to mind is a Firefox extension called MeasureIt; I've used it occasionally for various measuring on web pages.
Are you trying to do this programmatically from within the embedded page itself, and if so, in which language?

Related

html div nesting? using google fetchurl

I'm trying to grab a table from the following webpage
http://www.bloomberg.com/markets/companies/country/hong-kong/
I have some sample code which was kindly provided by Phil Bozak here:
grabbing table from html using Google script
which grabs the table for this website:
http://www.airchina.com.cn/www/en/html/index/ir/traffic/
As you can see from Phil's code, there are a lot of getElement() calls in it. If I look at the HTML code for the Air China website, it looks like the table is nested four levels deep - is that why there's a string of .getElement() calls?
Now I look at the source code for the Bloomberg page and it is loaded with "div"s...
The question is: can someone show me how to grab the table from the Bloomberg page?
A brief explanation of the theory would also be useful. Thanks a bunch.
Let's flip your question upside down, and start with the theory. Methodology might be a better word for it.
You want to get at something specific in a structured page. To do that, you either need a way to zap right to the element (which can be done if it's labeled in a unique way that we can access), OR you need to navigate the structure more-or-less manually. You already know how to look at the source of a page, so you're familiar with this step. Here's a screenshot of Firefox Inspector, highlighting the element we're interested in.
We can see the hierarchy of elements that lead to the table: html, body, div, div, div.ticker, table.ticker_data. We can also see the source:
<table class="ticker_data">
Neat! It's labeled! Unfortunately, that class info gets dropped when we process the HTML in our script. Bummer. If it was id="ticker_data" instead, we could use the getElementByVal() utility from this answer to reach it, and give ourselves some immunity from future restructuring of the page. Put a pin in that - we'll come back to it.
It can help to visualize this in the debugger. Here's a utility script for that - run it in debug mode, and you'll have your HTML document laid out to explore:
/**
* Debug-run this in the editor to be able to explore the structure of web pages.
*
* Set target to the page you're interested in.
*/
function pageExplorer() {
  var target = "http://www.bloomberg.com/markets/companies/country/hong-kong/";
  var pageTxt = UrlFetchApp.fetch(target).getContentText();
  var pageDoc = Xml.parse(pageTxt,true);
  debugger; // Pause in debugger - explore pageDoc
}
This is what our page looks like in the debugger:
You might be wondering what the numbered elements are, since you don't see them in the source. When there are multiples of an element type at the same level in an XML document, the parser presents them as an array, numbered 0..n. Thus, when we see 0 under a div in the debugger, that's telling us that there are multiple <div> tags in the HTML source at that level, and we can access them as an array, for example .div[0].
Ok, theory behind us, let's go ahead and see how we can access the table by brute-force.
Knowing the hierarchy, including the div arrays shown in the debugger, we could do this, ala Phil's previous answer. I'll do some weird indenting to illustrate the document structure:
...
var target = "http://www.bloomberg.com/markets/companies/country/hong-kong/";
var pageTxt = UrlFetchApp.fetch(target).getContentText();
var pageDoc = Xml.parse(pageTxt,true);
var table = pageDoc.getElement()
  .getElement("body")
    .getElements("div")[0]      // 0-th div under body, shown in debugger
      .getElements("div")[5]    // 5-th div under there
        .getElement("div")      // another div
          .getElement("table"); // finally, our table
As a much more compact alternative to all those .getElement() calls, we can navigate using dot notation.
var table = pageDoc.getElement().body.div[0].div[5].div.table;
And that's that.
Let's go back to that pinned idea. In the debugger, we can see that there are various attributes attached to elements. In particular, there's an "id" on that div[5] that contains the div that contains the table. Remember, in the source we saw "class" attributes, but note that they don't make it this far.
Still, the fact that a kindly programmer put this "id" in place means we can do this, with getDivById() from that earlier question:
var contentDiv = getDivById( pageDoc.getElement().body, 'content' );
var table = contentDiv.div.table;
If they move things around, we might still be able to find that table, without changing our code.
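If you don't have that helper handy, here is a sketch of what it might look like for the (now deprecated) Xml service - not necessarily the exact code from the earlier answer:
function getDivById(element, id) {
  // Recursively search <div> children for one whose id attribute matches.
  // Assumes the Xml service's getAttribute()/getValue() API; check the
  // earlier answer for the exact implementation.
  var divs = element.getElements("div");
  for (var i = 0; i < divs.length; i++) {
    var attr = divs[i].getAttribute("id");
    if (attr && attr.getValue() == id) return divs[i];
    var found = getDivById(divs[i], id); // descend into nested divs
    if (found) return found;
  }
  return null;
}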
You already know what to do once you have the table element, so we're done here!

How can I automate testing HTML5 elements in the UIAWebView in Instruments?

I am trying to automate testing of a mobile app, which is HTML5 embedded in a native app frame. I used the following code to get the elements in Instruments:
UIALogger.logStart("Log elements in the landing page");
UIATarget.localTarget().logElementTree();
UIALogger.logPass("done");
It shows the HTML5 component as a UIAWebView. But if there is, for example, a link in the HTML that I want to click, I can only locate it by its position. Is there any method I can call to get at the tags in the HTML5?
Thanks a lot!
There is no such method; you get elements with target.logElementTree(), as you used. If the elements are native, you can work with them through predefined methods like buttons() or links(); otherwise you have to address them by position (x,y coordinates) only. This problem always exists with hybrid apps.
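So for a link inside the UIAWebView, the usual fallback is to tap by coordinates. A sketch (the x/y values are placeholders you would read off the rect printed by logElementTree()):
var target = UIATarget.localTarget();
// Sometimes web links do surface in the element tree and can be tapped directly:
// target.frontMostApp().mainWindow().scrollViews()[0].webViews()[0].links()["Link name"].tap();
// Otherwise, fall back to tapping a fixed position inside the web view:
target.tap({x: 160, y: 240}); // placeholder coordinates taken from logElementTree()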

PhantomJS / CasperJS Canvas selector

Using PhantomJS V 1.8.1
Thanks in advance.
I am trying to run some tests on a website that I am developing which is using backbone.js.
One of my tests involves checking whether a canvas is present and clicking on it. My problem is that whatever selector I use for the canvas element, I cannot get it to find anything. I use the same CSS selector in Google Chrome when viewing the page and all is OK. At first I thought the issue might be that the element wasn't present on the page, but other elements that are inserted along with the canvas are present, so I am 99% sure that this is not the problem.
The selectors I have tried to use are:
document.querySelectorAll('#idOfCanvas');
document.querySelectorAll('canvas#idOfCanvas');
Also, if I use .className:nth(1) to select the tyre canvas, it still fails to work (it works in Google Chrome, though, as do the other examples provided).
The canvas has a class name which is picked up by the selector, but I would rather not use a class selector.
Any help would be much appreciated.
Cheers :)
Also:
Like I mentioned, I am almost certain that the canvas exists, since its container div exists. Also, I have four elements on the page with the same class name (two of which are canvases), and this check confirms all four are found:
return document.querySelectorAll('.className').length === 4;
Assuming you have something like this:
<canvas id="idOfCanvas"></canvas>
This should work:
canvas = document.getElementById("idOfCanvas");
// or
canvas = document.querySelector("#idOfCanvas"); // Only gets the first match; IDs should be unique anyway.
// or
canvas = document.querySelectorAll("#idOfCanvas")[0];
// or
canvas = document.getElementsByTagName("canvas")[0]; // Get the first <canvas> element.
However, you'll have to make sure your canvas element is actually loaded when the script is executed. Have a look at this onload tutorial, for example.
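Since the page is a Backbone app, the canvas is likely injected after the initial load, so in CasperJS it's safer to wait for it rather than query immediately. A sketch (the URL is a placeholder):
casper.start('http://example.com/page-under-test'); // placeholder URL
casper.waitForSelector('#idOfCanvas', function() {
    this.echo('canvas found');
    this.click('#idOfCanvas');
}, function onTimeout() {
    this.echo('canvas never appeared');
});
casper.run();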
Try this (note that getElementById() takes a plain id, so a pseudo-class selector needs querySelector() instead):
canvas = document.querySelector("#idOfCanvas:nth-child(1)");

Retrieving all address information along Google Maps route

I am developing a Windows Forms application using VB.NET that lets the user look up addresses on Google Maps through a WebBrowser control. I can also successfully show the directions between two points to the user, as well as allow the user to drag the route as he/she pleases. My question now is: is it possible for me to get the latitude/longitude information of the route, i.e. the overview_polyline array of encoded latitude/longitude points, and save it to e.g. a text file on my computer? Or is it possible to get a list of all the addresses located on both sides of the route over its entire length, and then save the data to a file on my computer? I'm using HTML files to access and display the Google Maps data in the WebBrowser control.
Thank you
This is actually pretty simple if you're just looking for the screen coordinates.
// this probably should be in your form initialization
this.MouseClick += new MouseEventHandler(MouseClickEvent);
void MouseClickEvent(object sender, MouseEventArgs e)
{
// do whatever you need with e.Location
}
If you're strictly looking for the point within the browser, you need to consider the functions
browser.PointToClient();
browser.PointToScreen();
So, this method is usable if you know exactly where your form is (easy to get its coordinates) and where your WebBrowser control is (easy as well, since it's just a control on your form). Then, as long as you know how many pixels from the left and top the image is displayed, once you get the global mouse click coordinates (which is easy) you can predict where the click landed on the image.
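For example, something along these lines (a sketch; browser stands in for your WebBrowser control):
void MouseClickEvent(object sender, MouseEventArgs e)
{
    // e.Location is relative to the form; convert to screen coordinates,
    // then into the WebBrowser control's own coordinate space.
    Point onScreen = this.PointToScreen(e.Location);
    Point inBrowser = browser.PointToClient(onScreen);
    // inBrowser now tells you where the click landed inside the browser area.
}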
Alternatively, there are some scarier or uglier ways to do it here...
You can use the ObjectForScripting property to embed code to do this in the webbrowser. It's ugly to say the least. MSDN has some documentation on the process here: http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser.objectforscripting.aspx
Because it's really ugly, maybe a better solution is to use AxWebBrowser - it's ugly too, but not so scary.
In addition, I found this post of someone wanting to do it on a PDF document, and an MSFT person saying it's not possible - but really what he is trying to say is that it isn't built in. Even with a PDF document it's still possible to predict, with high accuracy, where it was clicked if you use the first method I described. Here is the post anyway: http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/2c41b74a-d140-4533-9009-9fcb382dcb60
However, it is possible, and there are a few ways to do it, so don't get scared off by that last link I gave ya.
Also, this post may help if you want to do it in javascript:
http://www.devx.com/tips/Tip/29285
Basically, you can add an attribute to the image through methods available in the WebBrowser control - something like onclick="GetCoords();" - so that when the image is clicked, the JavaScript function gets the coords. You can then use JavaScript to place the values in a hidden input field (input type="hidden"), which you can add through the WebBrowser control, or use one that already exists on the page. Once you place the coords into that input field using JavaScript, you can easily grab its value through the WebBrowser control, e.g.:
webbrowser1.document.getElementById("myHiddenInputField").value
That gets the value of that field, which you've set through JavaScript. Also, the GetCoords() function I mentioned is called SetValues() in the JavaScript linked above (on the devx.com site); I renamed it GetCoords because it makes more sense, and I didn't want to confuse you with the name they used - you can change it to any name you want, of course. Here is the JavaScript they were using; it only gets the coords into a variable and doesn't put them into a hidden input field, so we need to add that ourselves at the end of the SetValues/GetCoords function:
function SetValues()
{
    var s = 'X=' + window.event.clientX + ' Y=' + window.event.clientY;
    document.getElementById('divCoord').innerText = s;
}
These guys are just saving it inside a div element, which is visible to users. You can make the div invisible if you want to stick with a div - there is no advantage or disadvantage either way; you would just need to hide it using JavaScript or CSS - but it is easier to use a hidden input field so you don't need to mess with any of that.
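Putting that together, a GetCoords() that writes into the hidden input could look like this (a sketch; myHiddenInputField matches the field name used above):
function GetCoords()
{
    // Store the click coordinates in the hidden input so the
    // WebBrowser control can read them back from .NET.
    var s = 'X=' + window.event.clientX + ' Y=' + window.event.clientY;
    document.getElementById('myHiddenInputField').value = s;
}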
Let me know how you get along.

Building HtmlElement object trees

I'm using the MSIE WebBrowser control in a C# desktop application and am looking for a way to build and maintain trees of HtmlElement objects outside of this control. I am trying to quickly switch between multiple complex pages without incurring the overhead of re-parsing the HTML each time (and I don't want to maintain multiple controls that are shown/hidden as needed).
I discovered that a) I can only create HtmlElement objects via the control's HtmlDocument and b) once I remove a "trunk" of HtmlElement objects from the control's HtmlDocument, it "dies off," even though I keep maintaining a strong reference to the root element.
How can I do this?
P.S. I am willing to consider alternative browser controls (e.g. Gecko) if they allow me to accomplish the above.
This will do it:
// On screen webbrowser control
webBrowserControl.Navigate("about:blank");
webBrowserControl.Document.Write("<div id=\"div1\">This will change</div>");
var elementToReplace = webBrowserControl.Document.GetElementById("div1");
var nodeToReplace = elementToReplace.DomElement as mshtml.IHTMLDOMNode;
// In-memory webbrowser control to load the fragment into.
// It needs this base object as it is a COM control.
var webBrowserFragment = new WebBrowser();
webBrowserFragment.Navigate("about:blank");
webBrowserFragment.Document.Write("<div id=\"div1\">Hello World!</div>");
var elementReplacement = webBrowserFragment.Document.GetElementById("div1");
var nodeReplacement = elementReplacement.DomElement as mshtml.IHTMLDOMNode;
// The magic happens here!
nodeToReplace.replaceNode(nodeReplacement);
I doubt this will improve performance, though - the text renderer is fast, and the memory consumed will be the same whether you have one large page with hidden divs or multiple divs held in memory in other objects.
You can use the MSHTML library (mshtml.dll) to achieve this. Basically you would use a single about:blank page and then dynamically write and remove content from it.
See this blog post on this subject
You can also write a custom interface wrapper that exposes just the functionality you need from mshtml rather than referencing the whole thing (nearly 8 MB); it is really easy to do using F12 in VS.
Do you really need to remove them entirely? How about leaving your "branch" in the DOM as the child of a DIV whose style="display:none"? That way they're real, live DOM objects, but not visible.
I think you could also use HtmlAgilityPack.
It allows you to parse once, query the HTML tree using XPath or via iterators, and re-write the tree with a save method when done.
Depending on your structure, you might just create an adapter around its classes, because it only works on an entire HTML document and you want to work on elements only, but this should not be too hard.
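A minimal sketch of that approach (assumes the HtmlAgilityPack package; the file names and XPath are placeholders):
using HtmlAgilityPack;

var doc = new HtmlDocument();
doc.Load("page.html"); // parse the HTML once

// Query the tree with XPath instead of walking live MSHTML nodes.
HtmlNode content = doc.DocumentNode.SelectSingleNode("//div[@id='content']");

// Nodes are plain objects that stay alive outside any browser control,
// so you can rearrange them and write the result back out when done.
doc.Save("page-modified.html");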