We are creating an experimental version of a desktop application with the new Office ribbon interface and want to do some usability tests. I am sure there are other engineers here who have faced this problem. Could you share your experience of what tests you did for that transition? My idea is to capture users' interactions with the application by logging their clicks and building heat maps. What sort of tools can I use to do that?
Usability testing analysis software will help you capture users' clicks on an experimental web app that is not yet ready to be published externally. I have used OvoLogger, by Ovo Studios. Its Event Tracker component captures user clicks and generates post-test reports in a variety of formats. It also highlights the user's cursor and mouse clicks, so observers watching the usability test can clearly see whether the user is just hovering over a link or actually clicking it.
Live-site analytics tools like Google Analytics, ClickTale and CrazyEgg all provide heat maps, but your mileage will vary. CrazyEgg claims to have more accurate heat maps than Google Analytics or ClickTale because it captures the precise location of the user's click rather than page-level clicks with multiple possible targets. If you are looking at doing multivariate analysis in the future, you will want to know exactly where the user clicked (target A or target B), so you can discern which experimental option garners more clicks.
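If you end up instrumenting the experimental build yourself rather than relying on one of those tools, the core idea is small. Below is a minimal sketch, assuming your prototype has a web-based front end; the /heatmap-clicks endpoint is a placeholder, not a real service.

```typescript
// Minimal click logger for building heat maps (a sketch, not production code).
// Assumes a web front end; "/heatmap-clicks" is a placeholder endpoint.
interface ClickEvent {
  x: number;        // click position relative to the page
  y: number;
  target: string;   // which element was hit, so target A vs. target B can be told apart
  timestamp: number;
}

const clicks: ClickEvent[] = [];

document.addEventListener("click", (e: MouseEvent) => {
  const el = e.target as HTMLElement;
  clicks.push({
    x: e.pageX,
    y: e.pageY,
    target: el.id || el.tagName,
    timestamp: Date.now(),
  });
});

// Flush the buffer periodically so a crash doesn't lose the whole session.
setInterval(() => {
  if (clicks.length === 0) return;
  navigator.sendBeacon("/heatmap-clicks", JSON.stringify(clicks.splice(0)));
}, 5000);
```

From the logged (x, y, target) tuples you can render the heat map offline and also count per-target clicks for later multivariate comparisons.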
I've written a simple web app to factory-reset Bluetooth devices that were accidentally turned on during shipping. The app scans for a class of Bluetooth devices (those made by the company I work for), renders a list of the devices found, and, when I click a button next to a device in the list, sends a reset message to that device.
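For reference, the manual flow looks roughly like this. It is a simplified sketch assuming the Web Bluetooth type definitions are available; the UUIDs and the name prefix are placeholders for our actual values.

```typescript
// Simplified version of the manual flow (UUIDs and name prefix are placeholders).
const SERVICE_UUID = "0000feed-0000-1000-8000-00805f9b34fb";
const RESET_CHAR_UUID = "0000beef-0000-1000-8000-00805f9b34fb";

async function resetDevice(): Promise<void> {
  // This call pops the Chrome chooser/pairing dialog that I can't drive from Puppeteer.
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ namePrefix: "ACME-" }],
    optionalServices: [SERVICE_UUID],
  });

  const gatt = await device.gatt!.connect();
  const service = await gatt.getPrimaryService(SERVICE_UUID);
  const characteristic = await service.getCharacteristic(RESET_CHAR_UUID);

  // Send the factory-reset command (the payload is device-specific).
  await characteristic.writeValue(Uint8Array.of(0x01));

  gatt.disconnect();
}
```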
This is a very manual process and I'd like to automate it. The problem is the Chrome dialog that asks for permission to pair with a device. I am trying to automate the app with Puppeteer, but I can't find a way to either (a) programmatically grant permission to pair with a device or (b) select the device in the dialog and click the "pair" button via Puppeteer. Does anyone know if what I'm trying to do is possible, or if there's a better way to achieve the goal? Thanks!
This is not possible in Chrome. (I work on Chrome.) The automation that does exist for Chrome's testing is layered such that actual Bluetooth connections aren't made.
Eventually we would like to enable this workflow via enterprise configuration controls, but that work hasn't started yet and there is no date commitment.
One alternative is to use Node.js, though you lose the easy browser interface. You might build the reset backend in a Node server and have it serve a web page as the interface.
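A rough sketch of that shape is below. It assumes an Express server; the BLE layer is left as a placeholder you would wire up to whichever Node Bluetooth library you choose, and none of the names are prescriptive.

```typescript
// Sketch of a Node-based reset service (Express assumed; the BLE layer is a placeholder).
import express from "express";

// Placeholder: implement with a Node BLE library that can scan for your
// company's devices and write the reset command directly, with no browser
// pairing dialog in the way.
async function resetDevice(deviceId: string): Promise<void> {
  throw new Error(`resetDevice(${deviceId}) not implemented yet`);
}

const app = express();

// The web page interface (or a script) can POST here for each device to reset.
app.post("/reset/:deviceId", async (req, res) => {
  try {
    await resetDevice(req.params.deviceId);
    res.json({ ok: true });
  } catch (err) {
    res.status(500).json({ ok: false, error: String(err) });
  }
});

app.listen(3000, () => console.log("Reset service listening on :3000"));
```

Since the whole flow then runs server-side, it can also be driven from a cron job or CI step instead of a human clicking through a list.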
I have done some research on Xamarin.Forms to find the best way to navigate to a driving destination (latitude, longitude) with excellent user controls. It seems there is no point in reinventing the wheel, especially when it's as complicated as Google's fantastic maps solution. So it seems there are two legal and simple ways: an embedded browser control / WebView, or using Device.OpenUri.
Device.OpenUri is the most responsive and easiest way to get someone to a destination, but how can I return the user to my app after they reach that destination? I would prefer not to rely on them pressing the 'back' button to exit Maps, as this might not be intuitive to the user.
What do you think?
EDIT: OK, so I realized my question is not really maps-related at all, because it's not hard to track the phone's GPS location. All I need to accomplish this is a way to re-activate (or 'show') my app. After I call Device.OpenUri, I think my app will still be running, so regardless of what program is currently 'on top' or active, what code can I run to bring my app back to the foreground? Or would this need to be a notification that lets the user manually switch back?
I am following the video from the Google keynote (https://www.youtube.com/watch?v=3nYyApSiSLQ). I also have the same beacon as in the demo (iBKS 105) and managed to provision it to serve UID. Using Google's Beacon Tools, I am able to detect and register the beacon in the Google Beacon Dashboard and add my attachments and URLs.
However, now that I am done with the procedure, I am not able to see any nearby messages/notifications on my device. The guy who presented the demo did it with ease and I am wondering where I went wrong. What am I missing? I have done pretty much what he described in the keynote.
I have tried serving an Eddystone-URL frame and successfully broadcast the URL. I would really like to get the UID frame to work as well.
I'm the guy in the video who did it with ease.
App-free solutions work with -UID, -EID, -TLM, -URL. On Android, you don't need an app to make your beacons useful.
If you do have an app, be sure to use Nearby Messages so that you get the most efficient scanning possible. (Also, no Bluetooth permission is required -- only location.)
The TLM frame will provide things like low-battery alerts on the dashboard. You don't need an app to see these; the battery level is reported to the service with any Nearby request (including for Nearby Notifications).
Choose an interleaving ratio of -UID to -TLM of about 10:1, depending on how much traffic you expect your beacon to get. (If it's in a busy place and you only want updates once a week, you can broadcast your -TLM frame much less frequently than 10:1.)
There was a question about iOS. There's a Nearby Messages CocoaPod that you can use with your iOS app. There's currently no equivalent to Nearby Notifications on iOS.
HTH!
I have two system-wide keyboards pre-installed on my Tizen wearable device: the first is the stock Samsung keyboard, the second a custom one. The first is the user's default, selected in Settings.
I don't want to change the system default, but I want my application to use the custom keyboard.
In the native API I've seen the Tizen::Ui::InputConnection object, which can be used as a property of Edit or TextArea controls, but I didn't see anything like it in the HTML5 API. Searching the Tizen forum didn't help.
I've also seen, in the Tizen SDK's IME WebHelperClient example, a number of undocumented commands used to talk to a Tizen service through a WebSocket. There is probably a command to select the active keyboard, but I didn't find it.
Any leads are appreciated.
IMO that is not possible for either web apps or native apps.
Reasons:
1. On the Gear, two keyboards can't be active at the same time.
2. Suppose there were an API you could use to switch to the custom keyboard while your app is running. If you then close your application not through the normal hardware exit (i.e. swiping down) but from "Recent Applications", the custom keyboard you activated for your app would stay set for other applications as well.
3. The documentation available here doesn't cover what you are asking:
https://developer.tizen.org/documentation/guides/web-application/tizen-features/ime-application
I would like to publish an app on Google Play, but I want to restrict downloads behind a password or something like that. Is that possible? Is there any alternative?
Many thanks in advance,
Short Answer:
No.
Slightly Long Answer:
Google Play cannot restrict an application so that it can be downloaded only after a user authenticates. Any such feature will have to be implemented within your application. At best, if this restriction is needed for monetary reasons, publish a paid application.
Since the question doesn't describe the motivation beyond the feature you are looking for, it is difficult to suggest an alternative that might suit your requirement. However, if you have a server the app can communicate with, you can require users, upon installing and running your app, to sign up for a new account and/or log in if already registered.
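To make that concrete, here is a minimal sketch of the server side of such a gate, assuming an Express backend; the /verify endpoint and the hard-coded access codes are placeholders for whatever scheme you actually use.

```typescript
// Sketch of a server-side access gate for the app (Express assumed).
// The "/verify" endpoint and hard-coded codes are placeholders, not a prescribed design.
import express from "express";

const app = express();
app.use(express.json());

// In a real deployment these would come from a database or an admin console.
const accessCodes = new Set(["ALPHA-2024", "BETA-TESTER"]);

// The Android app calls this on first launch with the code the user typed in,
// and only unlocks its main screen if the server says the code is valid.
app.post("/verify", (req, res) => {
  const code = req.body?.code;
  res.json({ allowed: typeof code === "string" && accessCodes.has(code) });
});

app.listen(3000, () => console.log("Access gate listening on :3000"));
```

The Play listing stays public either way; the gate only controls what the app shows after it has been installed.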
Again, I will circle back to the original point: any such feature will have to be implemented within the application itself. Google Play does not have such a feature.