Can anyone give advice on how to implement in Sikuli an analog of Selenium's chained element search, like driver.findElement().findElement()?
First step: find an area.
Second step: find an element within that area.
Victor Iurkov,
1. Create a Region and assign it to a variable (use the Region button in the Sikuli IDE panel)
2. Call the find() method on that variable
It should look like:
yourRegion = Region(2157,169,1049,148)
yourRegion.find("desired-ui-element.png").highlight(2)
Note: Region(2157,169,1049,148) can be created manually or via the Region button in the Sikuli IDE
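Putting the two steps together, a chained search analogous to Selenium's findElement().findElement() could be sketched like this (Sikuli/Jython; the image file names are placeholders, and the calls are wrapped in a function because they only resolve inside a SikuliX environment):

```python
def find_in_area():
    # Step 1: define the enclosing area by coordinates...
    area = Region(2157, 169, 1049, 148)
    # ...or locate it by matching a screenshot of the area itself;
    # find() returns a Match, which is itself a Region you can search in:
    # area = find("containing-area.png")

    # Step 2: search only inside that region -- the Sikuli analog of
    # a chained findElement() call
    element = area.find("desired-ui-element.png")
    element.highlight(2)
    return element
```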
I have started making an advent calendar using Adobe Flash Professional. I have drawn each 'door' individually with the draw tool on separate layers. I need to know how to use ActionScript to wait for a click on one of the doors and then go to a specific layer and stop. I have tried different methods such as
button.addEventListener(MouseEvent.MOUSE_DOWN, mouseDownHandler);
but it throws up errors.
Any ideas would be appreciated.
Thanks
If you drew the door, it's still just a vector drawing that you can't really do anything with yet - you'll need to convert it to a MovieClip, a Sprite, or a Button. The simple way to do this in the interface is:
Use the pointer to select everything you want to be contained in your MovieClip.
Press F8 OR choose "Modify" from the top menu and then "Convert to Symbol."
From there, you'll get the "Convert to Symbol" dialog box.
You'll need to give it a name. This name will be the Class Name so call it something like "Door" or something descriptive like that. Leave the type as MovieClip and click "Ok."
Now you have the class, so you'll have to give it an instance name. Select the object that you just created on the stage and open its Properties panel.
Where it shows "instance name," delete that and type a name for your object. In your code example you called it button, so call it button here. Now you have an object that can register event listeners. Inside your handler you can write something like this.gotoAndStop(2) to get where you need to go.
Hope this helps!
I'm new to the Realtime API and I want to implement a trivial test project that will help me understand the structure of the API. Sorry if my question sounds stupid, but I don't think I really understand how to use the API.
What I want to do is create a button, and when you press the button in one app, the same button will be pressed in the other apps.
What I would like to know is how to model this button in the collaborative data model.
Could I use variables like isPressed or isRollOver to pass the button's state, or can I create a whole button object somehow?
I want to use Java or JavaScript.
Thank you!
The realtime data model can consist of a combination of CollaborativeMap, CollaborativeList, CollaborativeString, and primitive JavaScript types.
I think for your use case you'd probably want a CollaborativeMap where you map a button name to a boolean value. You can then add an event listener to the CollaborativeMap to find out when the button's value changes.
I have a Windows Store (Metro) application. I need to add support for scanning barcodes.
I tried using ZXing first. From what I was able to get working, you actually need to click and save an image for it to do the processing. There's no nice overlay of a red line "scanner" nor does it process a live feed. This isn't a very elegant solution. It works far better on Android. Basically, this won't work as I need a constant video and a constant search for a barcode to be in focus.
This blog (http://www.soulier.ch/?p=1275&lang=en) mentions that extrapolating a frame out of a WinRT video stream is not allowed in managed code which means I'd need to use C++.
So, are there any components out there that do this? Anything free or paid that I can get that would be written in C++ and can find and extrapolate a barcode? Learning C++ is not on my bucket list.
You can capture frames while displaying a preview with C# only. Here's an example control that does it:
https://winrtxamltoolkit.codeplex.com/SourceControl/latest#WinRTXamlToolkit/Controls/CameraCaptureControl/CameraCaptureControl.cs
Basically you need to create a MediaCapture object and associate it with a CaptureElement control to display the preview. Then you can use CapturePhotoToStreamAsync() to capture a frame to a stream of your selected encoding format and then have a go at it with your bar code reading code.
I made a lib for WinRT using ZXing & Imaging SDK.
It works well (but does not include any additional focus feature).
There is a lib and a sample app that you can try.
It works for barcodes and QR codes (barcode by default, but just change the optional parameter in the scan function to use QR codes).
Currently, I am working on automating maps. I want to select a region using the mouse pointer:
find region -> drag mouse pointer -> drop. Please suggest a Sikuli WebDriver script for this.
There are a couple of built-in Sikuli functions: dragDrop() encompasses both the drag and the drop (as the name suggests). Or, you can do the steps separately if needed (drag(), mouseMove(), dropAt()). These are all in the documentation here.
I don't know much about WebDriver or how it interacts with Sikuli, but hopefully it's a starting place...
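A minimal sketch of both approaches (Sikuli/Jython; the image name and target location are placeholders, and the calls are wrapped in functions because they only resolve inside a SikuliX environment):

```python
def drag_drop_one_call():
    # dragDrop() performs the whole gesture: press on the first target,
    # move to the second, release.
    dragDrop("map-corner.png", Location(1200, 800))

def drag_drop_stepwise():
    # The same gesture broken into separate steps, useful if the
    # application needs a pause mid-drag to register the gesture.
    drag("map-corner.png")        # press and start dragging
    wait(1)                       # give the app time to enter drag mode
    mouseMove(Location(1200, 800))
    dropAt(Location(1200, 800))   # release at the destination
```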
Here is my solution for enlarging an application window. I tested it on both Windows and Linux and it works.
corner = find(Pattern('test.png').targetOffset(-36,-22))
# dx, dy: how far to drag the corner, e.g. dx, dy = 200, 150
drop_point = corner.getTarget().offset(dx, dy)
dragDrop(corner, drop_point)
The -36,-22 in targetOffset(-36,-22) can be adjusted in the Sikuli IDE.
Here is another example:
region1 = find("1429562753142.png")
dropRegion = Location(104,800)
dragDrop(region1, dropRegion)
keyUp()
I defined the region where the image is located.
Then I defined the drop location.
By using dragDrop() the image is moved.
And keyUp() releases any keys that were being held down.
I'm using Sikuli IDE. I'd like to know what the command to take a screenshot is, so I can capture the screen at the end of a test.
Something like this:
try:
    if bla bla bla:
        print("blablabla")
    else:
        TAKESCREENSHOT()  # ------------------> What command do I put here?
        print("TEST_FAILED")
The function is capture, as in
screen = Screen()
file = screen.capture(screen.getBounds())
print("Saved screen as "+file)
It takes a screenshot, saves it to a file, and gives you back the path to that file.
See the Sikuli documentation on it for full details.
A cheap Sikuli trick for screencaps is to have a defined region, then capture the region.
So if you've got a Chrome browser you want to cap, just set it up something like this:
App.focus('Chrome.app')
ChromeWindow = App('Chrome.app').window()
That will both focus the computer to the target application, and define a region composed of the window parameters of the application. Then run this:
capture(ChromeWindow)
Then use shutil (import shutil) to move the file around to wherever you need it in your local directories. I usually put that code into a function I can call when needed, TakePicture(name), where name is what I want to call the screencap in a particular test. Sikuli is both powerful and easy!
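A sketch of such a TakePicture(name) helper. The capture() and App calls are Sikuli's and only work inside SikuliX, so they are kept in their own function; the file-moving half is plain Python, and the screenshots directory name is an assumption:

```python
import os
import shutil

def move_capture(src_path, name, dest_dir="screenshots"):
    # Plain-Python half: move a capture file into a screenshots
    # directory under a descriptive name, and return the final path.
    if not os.path.isdir(dest_dir):
        os.makedirs(dest_dir)
    dest = os.path.join(dest_dir, name + ".png")
    shutil.move(src_path, dest)
    return dest

def TakePicture(name, window=None):
    # Sikuli half: capture() and App only exist inside SikuliX.
    # capture() returns the path of a temporary .png, which we move.
    region = window if window is not None else App.focusedWindow()
    return move_capture(capture(region), name)
```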
To make a screenshot of the window that has focus, you can simply use:
import shutil
import os

focusWindow = App.focusedWindow()
regionImage = capture(focusWindow)
shutil.move(regionImage, os.path.join(r'C:\Screenshots', 'Dummy1.png'))