Clear undo/redo history from a callback in Plotly Dash - plotly-dash

Using Plotly Dash with show_undo_redo=True, the undo and redo buttons seem to work as expected most of the time.
However, after certain actions it becomes necessary to "clear history", i.e. to prevent users from going back past that specific action.
Can this be done?
One obvious way would be to reload the page, but I'm trying to avoid this.


Does preventing a modal from being hidden by clicking the background violate accessibility requirements?

I'm adding a blocking modal (i.e. one that covers the screen and prevents interaction while an API call is processing) to my company's design library.
As part of that, I modified our modal so that clicking on the grey backdrop will NOT hide the blocking modal, but I want to make sure that doesn't violate accessibility guidelines. I haven't been able to find anything online about this. Does anyone know if this violates accessibility requirements?
Short Answer
The answer is 'it depends'. Basically, if the modal is not dismissible in any way it becomes a 'keyboard trap' and so would violate WCAG.
However, if you structure it correctly, a modal that blocks the page while an API loads is perfectly valid (and can't be dismissed while the page is loading), but there are a few things you need to do to make sure this is accessible.
1. Make sure that when this modal loads, nothing else on the page is focusable.
The biggest issue I see on most modals is that they allow focus outside of them.
You can't just stop users from using the Tab key, as that is not how most screen reader users navigate the page (they use shortcuts for headings (h1-h6), hyperlinks etc.).
For this reason, make sure your modal sits outside of your <main>, then hide your <main> and other major landmarks that contain information with aria-hidden="true" and by adding tabindex="-1" to them so nothing is focusable.
Obviously this depends on your document structure so you would need to test it, but a properly structured HTML document will work with the above method.
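A minimal sketch of that idea in plain JavaScript, assuming the modal element sits outside <main> and that the selector and ids below match your own markup (they are placeholders, not part of the original answer):

function openBlockingModal() {
    // Hide every major landmark from assistive tech while the modal is up.
    var landmarks = document.querySelectorAll('main, header, footer, nav');
    Array.prototype.forEach.call(landmarks, function (el) {
        el.setAttribute('aria-hidden', 'true'); // hidden from screen readers
        el.setAttribute('tabindex', '-1');      // as described above
    });
    document.getElementById('blocking-modal').removeAttribute('hidden');
}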
2. Make sure that a screen reader user knows that the page is busy and something is loading.
There are a couple of ways to do this. The best is to use an aria-live region.
Adding aria-live="polite" and aria-busy="true" to the section you are updating is one way (if you are updating one part of the page).
However, in your circumstances I would make a section within the modal aria-live="assertive" and not use aria-busy (as you will be hiding all the content in step 1, so aria-busy would not be applicable).
I would then update the message every second or two for long loads (i.e. 'loading', 'still loading', 'nearly loaded' etc., or better yet a loading percentage if your script allows).
Once the page content has loaded, you do not need to say 'loaded'; instead, make sure you have a heading for the section or page, with tabindex="-1" added to it, that accurately describes the content that has just been loaded in.
Once the load completes, programmatically focus this heading and the user will know that the load is complete.
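Something along these lines, as a sketch only; loadData, closeBlockingModal, the element ids and the wording are all placeholders for your own code:

var status = document.getElementById('modal-status'); // element with aria-live="assertive"
status.textContent = 'Loading';
var reassure = setInterval(function () {
    status.textContent = 'Still loading, please wait'; // keep the user informed on long loads
}, 2000);

loadData(function () {                      // placeholder for your API call's success callback
    clearInterval(reassure);
    closeBlockingModal();                   // your own function to restore the page (see step 1)
    var heading = document.getElementById('results-heading'); // heading with tabindex="-1"
    heading.focus();                        // focusing it tells the user the new content is ready
});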
3. Make sure that if the API call fails you feed something meaningful back to screen readers.
When your API call fails (notice I said when, not if!) make sure your JavaScript can handle this in a graceful way.
Provide a meaningful message within your modal's aria-live region that explains the problem. Try to avoid stating error codes (or keep them short; there is nothing worse than hearing a 16-digit string read out for an error code), but instead keep it simple, such as 'resource busy, try again later' or 'no data received, please try again' etc.
Within that region I would also add one or two buttons that allow the user to retry / go back / navigate to a new page, depending on what is appropriate for your needs.
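The failure path might look something like this; again only a sketch, with an illustrative message and ids, and loadData standing in for your API call:

loadData(function onSuccess() {
    /* as in the step 2 sketch */
}, function onError() {
    var status = document.getElementById('modal-status');       // the aria-live region from step 2
    status.textContent = 'No data received, please try again.'; // short and human-readable, no raw error codes
    document.getElementById('retry-button').removeAttribute('hidden'); // retry / go back controls inside the region
});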
4. For long load times, let the user know what is happening.
I covered this in point 2, but just to emphasise it: make sure you feed back to users that things are still loading if there is a long load time, by updating your aria-live region.
There is nothing worse than wondering whether the page has loaded and the developers forgot to tell you.
5. Give the option to cancel an API call so it doesn't become a keyboard trap.
Obviously the big problem with a whole-page modal is that it is a 'keyboard trap'.
To ensure this isn't an issue make sure you provide a cancel button.
Make sure it is clear that this will cancel the loading of the page, but don't rely on JavaScript alone.
Instead, make this an <a> styled like a button that either points to the current page or the previous page (yet again depending on your needs) and add role="button".
Then intercept this click with JavaScript so that it can function like a button.
The reason for this is that when your JavaScript fails (yet again - when, not if) the user still has a way to get to a meaningful page, thus avoiding a keyboard trap.
This is one of the few times you should use an anchor as a button, as a fallback!
By doing this you ensure that the user always has a way to escape the modal.
You may also consider allowing a user to use the Esc key to close / cancel but that is yet again down to you and your circumstances.
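As a rough sketch of that fallback pattern (the markup in the comment, the ids and abortApiCall are assumptions for illustration, not part of the original answer):

// Markup (simplified): <a id="cancel-load" href="/previous-page" role="button" class="button">Cancel loading</a>
document.getElementById('cancel-load').addEventListener('click', function (e) {
    e.preventDefault();      // while JavaScript works, behave like a button...
    abortApiCall();          // ...and cancel the request however your code does that
    closeBlockingModal();
});
// If JavaScript fails, the plain href still takes the user to a meaningful page,
// so the modal never becomes a keyboard trap. An Esc keydown handler could call
// the same code if you decide to support it.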

Google Chrome follow developer console logging

I'm using console.log a lot in my JavaScript for debugging mouse move events. The problem I'm having is that the Chrome console doesn't follow the new entries.
It's best illustrated in these screenshots:
The first lot of logs is fine because the window is big enough to show all of them on the screen:
A few seconds later:
The log has grown past the size of the window, requiring me to scroll.
This makes it incredibly difficult to debug mouse move events because I have to move over to the console and scroll down, thus adding more entries to the log.
So my question is: how can I get Chrome to essentially tail the log instead of stopping and requiring me to scroll?
With the console open, drag the scroll bar down to the bottom of the window and release it. It should tail the output for you.
It took me quite a few tries to get it to work in Version 27.0.1438.7 dev-m. But in Version 27.0.1440.0 canary, not only did it happen automatically, I could reattach the auto-scroll each time I tried.
You can download Canary from here.
The default behavior is for the Console to follow (tail) logs as they come in.
However, we had a bug in the DevTools where if you changed the zoom factor (cmd++) it didn't always work.
We just fixed that: https://codereview.chromium.org/180733003/ You'll need canary for a little while (from the date of this post) but it'll work its way down to Stable in about 10 weeks.
There's a rather pernicious bug here (present in Chrome for as long as I can remember), where if you log any sort of expando-item like a DOM element or some such thing, it messes with the display of the log and causes the scroll to stop following.
I solved this by applying a little bit of ingenuity and finding the offending log. You don't even need to delete the log statement, you just have to make it "friendlier". What very often works is to take any such log statement, such as
console.log((mouse ? "mouse" : "touch") + " start on", jqtarg[0]);
and wrap it in an array:
console.log([(mouse ? "mouse" : "touch") + " start on", jqtarg[0]]);
You may try other things as well, in an attempt to make the log more readable, such as logging an object (I haven't tested any of this rigorously; it may still cause the annoying failure of scroll-follow):
console.log({"mouse/touch start on": jqtarg[0]});
Based on a very small amount of testing, it would appear that if an entry appears in the log as an item that can be hovered directly (as opposed to requiring you to expand it first) to make the inspector highlight the item in the DOM for you, then it may trigger "scroll lock syndrome".
BTW, a helpful thing to be aware of is that if you log the exact same stuff repeatedly, Chrome helpfully "stacks" them like so: (See? I fixed the autoscroll by shoving my log in an object! yay!)
If you don't really need to see values based on precise coordinates, printing coarser values more ... coarsely will lead to a more compact log (which will still give you sensible feedback with counts).
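For example, something like this (the 50px grid size is an arbitrary choice) produces identical lines that Chrome can stack with a repeat count instead of logging one entry per pixel:

document.addEventListener('mousemove', function (e) {
    // Round the coordinates to a coarse grid so moves within one cell log the same string.
    var x = Math.round(e.clientX / 50) * 50;
    var y = Math.round(e.clientY / 50) * 50;
    console.log('mouse near ' + x + ',' + y);
});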
Update: Sometimes none of this works. Sometimes you're just out of luck with this and you just have to clean up all the logs that you don't need and log the minimal amount of information to prevent overloading it and causing it to fail to scroll down.

Google Apps Script: formPanel and doPost

I have a quick question concerning GAS efficiency and best practices. I have a script that is embedded into a site. In an effort to try and make it quicker, I changed from using a doGet() with a server click handler attached to a submit button and another submit() function, to using doGet() with a submit button and doPost(). The initial version used a vertical panel, while the second version requires a form panel. My vertical panel has a grid set up on it, and I would like to keep as much existing code as possible. My questions are:
Can I put a grid directly into a formPanel without it slowing down the loading process? I tried it and it seemed slower, but maybe Google's server was having a bad day.
Can I add the verticalPanel to the formPanel without slowing things down? What would be the best practice in this situation?
The reason I want to switch to doPost() is that it shows another panel when you click the submit button, so the user knows that their submission went through. Previously I was clearing the GUI elements, which seems like a lot of extra code that could slow things down.
Thanks in advance!
Concerning the last point of your post: you don't have to clear everything, you can mask the whole panel with another empty (or not) one on top of it... quick and efficient ;-)
Depending on the way you created your UI, different approaches are possible: one of the easiest is to setVisible(false) the parent panel that holds all the widgets while you setVisible(true) a big label saying 'thanks for your answer... bla bla bla' (this one can be there from the beginning but invisible, and set to visible by a handler on the 'submit' button; client or server handlers can both do the job).
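If it helps, here is a rough sketch of that idea inside a server handler, assuming UiApp widgets that were given the ids below with setId() (the ids and the comment are placeholders, adapt them to your own code):

function submitHandler(e) {
  var app = UiApp.getActiveApplication();
  app.getElementById('mainPanel').setVisible(false);   // hide the form the user just submitted
  app.getElementById('thanksLabel').setVisible(true);  // reveal the pre-built, initially invisible 'thanks' label
  // ...process e.parameter values here...
  return app;
}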
Having panels inside other panels shouldn't slow down the loading of the UI.

Blank time between resource loading under network inspector

I've been working on a new website and practicing my JS/jQuery/AJaxy skills. Last night I wanted to take a look at how long the page was taking to render and see if there were any areas I could clean up to increase speed. While the page loads in about 200 - 300 ms every time, I'm seeing a large amount of blank space between resource loads under the network inspector.
http://i.imgur.com/7ng6m.jpg
Has anyone else seen this or know what I can do to minimize that time (I'm talking about the blank space between, for example, the HTML and the first CSS file)?
Quite possibly it is caused by the extensions you have installed. AdBlock, LastPass and Google quick scroll took altogether about 200 ms on my machine.
Unfortunately, these extensions are invoked on every site and block loading the additional resources.
Try it with an out-of-the-box browser setup; the loading time will improve tremendously.
You've got a bunch of images loaded just after the page has been loaded (the load and DOMContentLoaded events have fired - the blue and red vertical lines across the Timeline). I can see that the images are loaded by the jQuery library (the Initiator column), perhaps to build a gallery or something.
So, the case is that jQuery loads the images after the page load, presumably in the onload handler (this can look like $(document).ready(handler) in your code, but other options are possible, too).
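In other words, the requests probably come from something along these lines in your own code (the selector and data attribute here are made up for illustration):

$(document).ready(function () {
    // Runs once the DOM is ready, so these image requests show up in the
    // Network panel after the blue/red event lines.
    $('#gallery img').each(function () {
        this.src = $(this).data('src'); // swap in the real image URL, triggering the download
    });
});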
The delay between the initial page load and requesting the first resources is almost certainly caused by Chrome extensions. To find the culprit: record a timeline in the Timeline tab in Chrome Developer Tools, identify the scripts that are running during the Parse HTML phase, and work out which extensions they're from.
To record a timeline:
Open the timeline tab and click record.
Reload the page and then stop the recording. (A couple of seconds should be enough.)
To find the culprit:
Find the first main Parse HTML block on the timeline. On the row below you will probably see one or more Evaluate Script blocks. These are the culprits.
Click on one of the Evaluate Script blocks and find the script name in the bottom pane. Mouse-over the script name. The tooltip will have the URL of the script, which should be of the form chrome-extension://{long_identifier}/{path}
Memorise the first few letters of the identifier and search for it in the chrome://extensions/ page. This tells you which extension is causing the problem. Try disabling it - you should see a difference.
Repeat for the other Evaluate Script blocks.
In my case, I have 20 extensions installed but only two were causing a delay: LastPass and Fauxbar. I've chosen to leave them enabled because for me the productivity benefit of these extensions outweighs the downside of the added latency.

Chrome Extension - Chrome.windows.onFocusChanged Behavior

I'm trying to make an extension for Google Chrome which requires me to be able to identify the currently selected tab. I did this with the chrome.tabs.onSelectionChanged event; however, when I switch windows this isn't fired. I plan to use chrome.windows.onFocusChanged to detect when the window changes and then use the chrome.tabs.getSelected method. However, the problem is that chrome.windows.onFocusChanged seems to be fired more than once. If I'm not mistaken, it returns window -1, then the first window created (usually 1), then the current window. If the first window is selected then it fires -1, then 1.
Am I using the right method here? Is there a better way of doing this? If I stick with it I might need to keep track of how the window changes, which is a bit messy.
I kind of worked out my own solution for this. For anyone interested in doing something similar: what I did instead was to use onFocusChanged as an indicator that a window change is happening, which then starts a request listener. Using content scripts, I sent a request to the extension whenever there was a window.focus event, indicating that the focus is already on that window. The request listener then just removes itself. Unfortunately this approach requires all tabs to send requests every time they get focus. Some more tweaking is needed to fix that, I guess, but for the meantime I think that suffices, since sending requests every time there is a change of focus doesn't seem to eat up that many resources.
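For reference, here is a minimal background-script sketch of the event-based approach discussed in the question, simply ignoring the -1 notifications (chrome.windows.WINDOW_ID_NONE, i.e. focus leaving Chrome); it uses chrome.tabs.getSelected, which was the current API at the time, and logging stands in for whatever your extension actually does with the tab:

function reportSelectedTab(windowId) {
  chrome.tabs.getSelected(windowId, function (tab) {
    console.log('active tab is now', tab.id, tab.url);
  });
}

// Fires when the user switches tabs within a window.
chrome.tabs.onSelectionChanged.addListener(function (tabId, selectInfo) {
  reportSelectedTab(selectInfo.windowId);
});

// Fires when the user switches windows; filter out the -1 "no window" events.
chrome.windows.onFocusChanged.addListener(function (windowId) {
  if (windowId === chrome.windows.WINDOW_ID_NONE) {
    return;
  }
  reportSelectedTab(windowId);
});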