Reliably detect if a user is using a mouse on a webpage? - html

On our page we have some iframes in a long horizontally-scrolling <div>. If the user is using a mouse, they can scroll with the scrollbars, and we would like them to be able to select text within the iframes. If they are using touch only, however, the scrollbars are a hassle, and I would like to overlay a transparent element over the whole thing so they can scroll easily by dragging. This of course sacrifices the text-selection feature, but it makes sense in that scenario.
Which gets me to my question: is there a way to reliably detect whether a user is interacting with a webpage via a mouse?
Everything I've seen on detecting touch or mouse says that touch will also fire mouse events, so it is very difficult to distinguish touch from mouse (not to mention that you can have both). My problem is simpler: I only need to know whether the user has interacted with the page via a mouse.
Can anyone think of a way to check this reliably?

A mouse can do one thing a touch device can never do: move without any buttons pressed. So I'd install an onMouseMove handler on page load, and if it fires without any buttons pressed, mark the user as a mouse user. You could then persist this in a cookie or localStorage, since the flag will not change within the same environment, and remove the handler right away. The precise way of implementing a single-fire handler depends on which library you use (if any at all), but it should be easy with the MooTools/jQuery docs in hand.
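A minimal sketch of that idea in plain JavaScript (the storage key and handler name are just examples, not anything standard):

function markMouseUser(e) {
    // Following the answer's premise: only a real mouse should produce
    // a mousemove with no buttons pressed.
    if (e.buttons === 0) {
        localStorage.setItem('isMouseUser', 'true');
        document.removeEventListener('mousemove', markMouseUser);
    }
}
document.addEventListener('mousemove', markMouseUser);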
In general, I'd recommend the easier route of just checking for a touch interface in most cases:
if ('ontouchstart' in window) {
    /* it's a touch device */
}

Related

Click does not work on iPad when pressed for too long

I have developed a webapp which we usually run in the browser on iPads, typically Chrome. There is a problem for older users when they need to press a button. They tend to press harder and longer than average users, so the click never gets registered.
The app is an Angular app, and I've tried to bind a (mousedown) event in the hope of firing it on first touch. But it seems that when you hold the button down longer on an iPad, it just starts to focus/select the text.
Any ideas on how to improve the UX in this case? A lot of older users get frustrated because, to them, it simply does not work.
When you hold your finger on the button, you may never reach the mouse events, especially if you move your finger a little. You can check the order of events here: https://patrickhlauke.github.io/touch/tests/event-listener_all-no-timings.html
To handle your issue, listen for either the pointerdown or the touchstart event.
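For example, in an Angular template you could bind touchstart (and keep mousedown for mouse users); the button label and handler name below are just placeholders:

<button (touchstart)="onPress($event)" (mousedown)="onPress($event)">Save</button>

onPress(event: Event): void {
    event.preventDefault(); // on touch, also stops the browser from synthesizing a later click
    this.save();            // placeholder for whatever the button actually does
}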

LibGDX Input from previous screen registering

I am using isTouched to setScreen from my menu screen to my main game screen. (Tap to continue).
In the constructor of the main game screen, I set the input processor. The input processor then immediately fires from the touch on the previous screen.
What is the proper way to handle this?
EDIT: If I tap my finger on an Android device, the tap triggers isTouched/justTouched. Then the next screen loads before I lift my finger, and the finger-up event triggers my input processor.
I don't think there is any built-in way to prevent this sort of event leakage. One way to avoid the problem is to trigger your transition on release, not press.
Switch your main menu to use an InputProcessor, and use the end-of-touch event to trigger your transition, so that event won't be around to pollute your new InputProcessor (see the sketch below). This also avoids mixing polling and event-based input, which seems cleaner, too.
Alternatively, set a flag when isTouched is true; then, in a later render iteration, when isTouched is false and the flag is true, you know it is safe to proceed (this is a hacky polling version of waiting for the touch-up event).
In many UIs, button events trigger on touch-up (or its equivalent). E.g., in the Stack Overflow UI, click down on the "Post Your Answer" button, then drag the mouse off the button and release. The button doesn't "click". (Similarly, if you click outside the button, drag into it, and then release, it still doesn't "click".)
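A rough sketch of the InputProcessor variant for the menu screen (game and GameScreen here stand in for your own classes):

// e.g. in the menu screen's show():
Gdx.input.setInputProcessor(new InputAdapter() {
    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        // Transitioning on touch-up means this event is consumed right here
        // and cannot leak into the game screen's freshly installed processor.
        game.setScreen(new GameScreen(game));
        return true;
    }
});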

How to tell if a control is no longer visible to the user?

I have a control in which I repeatedly run some animations (e.g. DoubleAnimation). Can I detect when my control is no longer visible to the user? E.g., it gets scrolled out of view, the user navigates forward to another page, or it gets obscured behind other controls.
I don't want to run those animations unless at least some part of my control is visible to the user.
You could analyze the visual tree, or get a transform from control coordinates to screen coordinates to see whether the control falls within the viewport, and also check things like the opacity and visibility of controls along the visual-tree path, but that is so processing-intensive that it is not worth doing all the time as a general solution.
The only thing that would make sense is to handle the ScrollViewer.ViewChanged event and check whether the offsets make the control visible, while limiting the TransformToVisual or VisualTreeHelper calls to times when the actual layout inside your ScrollViewer changes.
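A rough sketch of that (the control names are placeholders), checking on ViewChanged whether the control's bounds still intersect the ScrollViewer's viewport:

private void OnViewChanged(object sender, ScrollViewerViewChangedEventArgs e)
{
    // Project the control's bounds into the ScrollViewer's coordinate space.
    var bounds = myControl.TransformToVisual(myScrollViewer)
        .TransformBounds(new Rect(0, 0, myControl.ActualWidth, myControl.ActualHeight));
    var viewport = new Rect(0, 0, myScrollViewer.ViewportWidth, myScrollViewer.ViewportHeight);
    viewport.Intersect(bounds);              // becomes empty if the control is fully outside
    bool partlyVisible = !viewport.IsEmpty;
    // ...start or stop the animations based on partlyVisible.
}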

Why does adding a ManipulationDelta event handler to an Image prevent scrolling?

I'm encountering a problem while attempting to add functionality like pinch-zoom to an application that features an Image control inside a ScrollViewer, which is inside a FlipView. The Image control and ScrollViewer are in the ItemTemplate of the FlipView.
The idea is that if the user pinch-zooms on the Image, it will activate code which creates and displays an enlarged version of the image in the Image control. (The Image control in this case contains a PDF page, so we want a bigger rendering of the PDF page instead of just an enlarged, fuzzier view of it.)
If I attach a ManipulationDelta event handler to the Image, it will catch the ManipulationDelta events produced by the pinch-zoom gesture, which I can then use to create the PDF zoom effect. However, it now no longer catches scroll (drag?) gestures. Or rather, these too get caught by the ManipulationDelta event handler. I'd rather avoid having to implement code at this point to handle scrolling programmatically. Do I have any options for somehow bubbling the ManipulationDelta events up (or "over"?) to whatever would handle the scrolling? I would have thought this would happen already (the event bubbling up to the ScrollViewer, which would then handle scrolling), but it appears not to be happening that way.
I have e.Handled set to false in the ManipulationDelta event handler, and the ManipulationMode on the Image control is set to "All". I've tried "Scale", but this didn't help.
Thank you!
The ScrollViewer in WinRT is optimized for performance and uses DirectManipulation under the hood. That's why it's tricky to have both scrolling from the ScrollViewer and gestures inside it.
This blog post from Rob Caplan (MS employee) gives more information:
http://blogs.msdn.com/b/wsdevsol/archive/2013/02/16/where-did-all-my-gestures-go.aspx
Unfortunately, there is no good solution if the app needs both scrolling and gestures (for example, to detect CrossSlide gestures against the scrolling direction). In that case the only option for getting the Pointer messages everywhere is to disable DirectManipulation everywhere, but that disables scrolling as well. To get scrolling back, the app would need to detect the scrolling gestures itself and then move the ScrollViewer to the new location with ScrollToHorizontalOffset or ScrollToVerticalOffset, or by updating the SelectedIndex. This is tricky and will be noticeably slower than letting the ScrollViewer do its thing, so it should be avoided if at all possible.
Hope this helps

What is the best approach to building a popup inside of Flash AS3

I am trying to build an "imagemap" in Flash where you click on different areas of an image, and when you click, a popup (within Flash) comes up showing more information about the object that was clicked. The popup has a close button that will then close the popup.
My biggest trouble is that the way my code works right now, when you click on a region of the map it creates a popup on the fly, and then I use addChild(_myPopup) to add it to the display list. The problem with this approach is that the popup is now a child of the button I just pressed, and that object organization doesn't really make sense to me. I'd like the popup not to be a child of the button, but to be on its own layer or a child of the stage directly.
What is a good approach and code architecture for building such an organization of objects? I'm fairly new to AS3 and I've built some small applications but my knowledge is limited.
Thanks
UPDATE
OK, it looks like calling stage.addChild(myPopup) from inside the button works pretty well. Is this good practice?
Assuming you have a hierarchy that looks something like this:
stage
  Main class
    Image class
      Button
It's good practice never to call upwards in the display list; every object only deals with its own children. Events, however, are a nice way of communicating upwards. Have the Button dispatch an event, preferably a custom one, then handle it with a listener in the main class, which then deals with creating a popup on top of everything.
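A minimal sketch of that pattern (the event name, Popup class and popupContainer are illustrative):

// In the Button class, when it is clicked:
dispatchEvent(new Event("showPopup", true)); // bubbles = true, so it travels up the display list

// In the main class:
addEventListener("showPopup", onShowPopup);

private function onShowPopup(e:Event):void
{
    var popup:Popup = new Popup();    // your own popup DisplayObject
    popupContainer.addChild(popup);   // a container sitting above the main content
}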
An often-encountered practice for organizing the layers of the visible application is:
stage
  main class with all children
  popup container
  tooltip container
  mouse cursor container (apparently no longer necessary, since Flash Player 10 supports custom cursors)
So you always create your popups in the popup container, above the main class. If you have tooltips, they go into the tooltip container. This approach guarantees that popups are always visible above the main app and that tooltips are always visible on top of everything.