AngularJS form wizard save progress

I have a service in AngularJS that generates all the steps needed, the current state of each step (done, current, show, etc.) and an associated directive that actually implements the service and displays its data. However, two of the steps are divided into four and three sub-steps respectively:
Step one
- Discounts
- Activities
- Duration
- Payment Length

Step two
- Identification
- Personal data
- Payment
How can I "save" the state of my form in case the person leaves the site and comes back later? Is it safe to use localStorage? I'm no providing support for IE6 or 7. I thought of using cookies, but that can end up being weak (or not)

Either local storage or cookies should be fine. I doubt this will be an issue, but keep in mind that both have a size limit. Also, it goes without saying that the form state will only be restored if the user returns on the same browser, and without having deleted cookies / local storage.
Another option could be to save the information server side. If the user is signed in, you can make periodic AJAX calls with the data and store the state on the server. When the user finishes all steps, you can make an AJAX call telling the server to delete any saved data it might have. This allows you to restore state even if the user returns on a different browser, as long as he is signed in.
Regardless of what direction you go with this, you can use jQuery's serialize method to serialize the form into a string and save it using your choice of storage.
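A minimal sketch of the localStorage approach, assuming the wizard's answers live in a single model object (the storage key and the service shape below are illustrative, not the asker's actual service):

// Hypothetical persistence helpers; 'app' is assumed to be your
// angular.module(...) reference, and the saved-object shape is an
// assumption for illustration.
app.factory('wizardStorage', function () {
  var KEY = 'wizardState';
  return {
    save: function (formData, currentStep) {
      // localStorage only stores strings, so serialize to JSON
      localStorage.setItem(KEY, JSON.stringify({ data: formData, step: currentStep }));
    },
    restore: function () {
      var saved = localStorage.getItem(KEY);
      return saved ? JSON.parse(saved) : null;
    },
    clear: function () {
      // call this once the final step is submitted successfully
      localStorage.removeItem(KEY);
    }
  };
});

Calling save() whenever the step service changes state keeps the snapshot current, and restore() on startup lets you send the user back to where they left off.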

Populating password input fields on client

This question has been posed in many flavours, but none fits my needs.
I'm working on a partially complete Razor project; the original developer has left our office, and he wasn't much concerned about securing password fields, as he left all of them in clear text.
These password fields authorize several aspects (primary and secondary FTP access, FTP on AS400 and mail sending), so nothing related to login/submit forms. When I changed these fields from text to password, they revert to blank fields regardless of the content of the View Model, and this should be the correct behaviour, as per the numerous answers I've seen while googling around.
My problem is this: the user needs to know at least whether a password has been configured (by seeing a string of * or whatever other mask character the browser uses), so I need to show him that value to let him know the service is configured, and ideally also let him reveal the password to check that it's correct. Not updating the particular field in the DB when it's left blank is not an option.
This site works only on Intranet, so there is no concern about hackers monitoring the connection or similar.
I've tried all (I think) the possible combinations, including building the input element manually in HTML, using the @Html.TextBoxFor and @Html.PasswordFor helpers, and decorating the corresponding member in the view model with [DataType(DataType.Password)]. The data is bound when the page is loaded, so no AJAX calls help me retrieve the data.
I'm relatively new to Razor, as my last two projects are entirely in PHP.
Thanks for any suggestions.
OK, no other solution found than issuing an AJAX call to a dedicated HttpGet controller method to retrieve only the password fields, then populating the dedicated inputs when the controller returns the object containing all the passwords I need.
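For reference, a sketch of that AJAX call in jQuery (the endpoint and element IDs below are made up for illustration; substitute the real controller action and inputs):

// Hypothetical endpoint and element IDs.
$.getJSON('/Settings/GetPasswords', function (data) {
  // The inputs are type="password", so the browser masks the values
  // while still showing the user that something is configured.
  $('#FtpPrimaryPassword').val(data.FtpPrimaryPassword);
  $('#FtpSecondaryPassword').val(data.FtpSecondaryPassword);
  $('#As400FtpPassword').val(data.As400FtpPassword);
  $('#MailPassword').val(data.MailPassword);
});

A "reveal" control can then simply flip an input's type attribute between password and text to cover the "let him check it" requirement.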

Reject previous route's pending action upon page transition in Redux app

I have Redux app with React Router (based on https://github.com/este/este).
Inside one Route, there may be more than one AJAX call (fired by redux-promise-middleware & redux-thunk). When the page changes (via react-router) I wish to reject all remaining _SUCCESS or _FAILED callback actions fired by the previous route.
What is the best way to do this?
I'd suggest that you make the data you fetch page-aware. Meaning that in the action where the fetch is started, add a page-context. When the reducer gets the data it can either save it for that page-context or it can throw it away if the location is not the same as your browser (meaning that the user has navigated away). If you keep the data for the different pages/contexts you also have the bonus of these being ready if the user returns (if that is something that you'd want).
You are on url "/pageX". You start fetching data and the action makes sure that the page-context is remembered for when the SUCCESS action is to be dispatched. When the reducer handles the action it stores the data in store.context["/pageX"].data (or similar). Note: This is where you could also throw it away (reject) in case the current location is not the same as the received data.
The UI should know how to ask for/use data only from the context that matches its location.
You might also want to consider tracking the browser-location in the state for the app...
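A rough sketch of the page-context idea with redux-thunk (the action types, store layout, and reading window.location directly are all illustrative assumptions; tracking the location in the store, as suggested above, would keep the reducer pure):

// Hypothetical thunk: stamp the originating page onto the action.
function fetchPageData(url) {
  return function (dispatch) {
    var context = window.location.pathname; // e.g. "/pageX"
    dispatch({ type: 'FETCH_START', context: context });
    return fetch(url)
      .then(function (res) { return res.json(); })
      .then(function (data) {
        dispatch({ type: 'FETCH_SUCCESS', context: context, data: data });
      });
  };
}

// Hypothetical reducer: reject results for pages the user has left,
// or keep them per-context so they're warm if the user returns.
function contextReducer(state, action) {
  state = state || {};
  if (action.type === 'FETCH_SUCCESS') {
    if (action.context !== window.location.pathname) {
      return state; // stale result from a previous route: ignore it
    }
    var next = Object.assign({}, state);
    next[action.context] = { data: action.data };
    return next;
  }
  return state;
}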

Html - single page - staying logged in

I have an HTML page with a load of javascript that switches between views.
Some views require the person to be logged in, and consequently prompt for it.
How can I record, using the javascript, that the person has successfully logged in, in a way that is not a security issue but means the person does not have to log in again for each view? I do not want to keep going back to the server each time.
Edit:
To explain more. Here are the problems I see.
Lets say I have the following in my javascript:
var isLoggedIn = true;
var userEmail = "myemail#mysite.com";
Anyone can hack my code to change these values and then get another person's info. That is not good. So instead of isLoggedIn do I need something like a hashed password stored in the javascript:
var userHashedPassword = "shfasjfhajshfalshfla";
But everywhere I read, they say you should not keep any password material in memory for any length of time.
So what variables do I keep and where? The user will be constantly flicking between non-user specific divs and user-based divs, and I do not want them to have to constantly log in each time.
Edit 2:
This is what I am presently doing, but am not happy with.
There is a page/div with 3 radio buttons: Vacant Games (does not require user information), My Games (requires knowledge of the user, who must be signed in), and My Old Games (also requires logged-in status).
When first landing on the page it defaults to Vacant Games and gets the info from the server, which does not require login.
In two variables in the javascript I have
var g_Email = "";
var g_PasswordEncrypted = "";
Note these are both 0 length strings.
If the user wants to view their games, they click the My Games radio button. The code checks whether g_Email and g_PasswordEncrypted are 0-length strings; if they are, it goes to a div where they need to log in.
When the user submits their login info, it goes to the server, which checks their details and sends back an OK message along with all the info (My Games) that the user was requesting.
So if the login was a success, then
g_Email = "myemail#mysite.com";
g_PasswordEncrypted = "this is and encrypted version of the password";
If there is any failure in login, these two are instead set to "".
Then when the user navigates to any page that requires login, it checks to see if these two strings are filled. If they are, it will not go to a login page when you request information like My Games.
Instead it just sends the info in these strings to the server, along with the My Games request. The server still checks these Email and encrypted password are valid before sending back the info, but at the client side, the user has not had to repeatedly input this info each time.
If there is any failure in the server request, it just sends back an error message (I am using ajax) in the callback function, which knows to set the g_Email and g_PasswordEncrypted to "" if there is anything wrong. (In the latter case, the client side knows it has to re-request the login details because these two strings are "").
The thing I do not like is that I am keeping the encrypted password on the client machine. If the user walks away from their machine, someone can open the debugger in something like Chrome, extract these details, and reuse them from their own machine some time later.
If the javascript loads content for each view from the server, then it is for the server to know whether the current session belongs to a logged-in user or not. If the user is not logged in, the server responds with a prompt to log in; otherwise it sends the content of the view.
If the javascript builds content for the views from data that was already received from the server, then it should use some variable keeping the state of the user (logged/not_logged). Depending on that value, the javascript will either show a prompt to log in or display the required content of the view.
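A common middle ground is to never keep the password (even encrypted) in the page at all: on successful login the server issues an opaque, expirable session token, and the client sends only that token with each request. A sketch, with made-up endpoint and helper names:

// Hypothetical token-based flow; '/api/login', '/api/myGames',
// showLoginDiv() and renderMyGames() are illustrative names.
var g_SessionToken = "";

function login(email, password) {
  $.post('/api/login', { email: email, password: password }, function (resp) {
    // The server validates the credentials once and returns a random
    // token that it associates with this session.
    g_SessionToken = resp.token || "";
  });
}

function requestMyGames() {
  if (!g_SessionToken) {
    showLoginDiv();
    return;
  }
  $.post('/api/myGames', { token: g_SessionToken }, function (resp) {
    if (resp.error) {
      g_SessionToken = ""; // expired or invalid: force a fresh login
      showLoginDiv();
    } else {
      renderMyGames(resp.games);
    }
  });
}

A stolen token can be invalidated or timed out server side, which is a much smaller exposure than a reusable encrypted password sitting in a global variable.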

How can I configure Polymer's platinum-sw-* to NOT cache one URL path?

How can I configure Polymer's platinum-sw-cache or platinum-sw-fetch to cache all URL paths except for /_api, which is the URL for Hoodie's API? I've configured a platinum-sw-fetch element to handle the /_api path, then platinum-sw-cache to handle the rest of the paths, as follows:
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-import-script href="custom-fetch-handler.js"></platinum-sw-import-script>
  <platinum-sw-fetch handler="HoodieAPIFetchHandler"
                     path="/_api(.*)"></platinum-sw-fetch>
  <platinum-sw-cache default-cache-strategy="networkFirst"
                     precache-file="precache.json"></platinum-sw-cache>
</platinum-sw-register>
custom-fetch-handler.js contains the following. Its intent is simply to return the results of the request the way the browser would if the service worker was not handling the request.
var HoodieAPIFetchHandler = function(request, values, options) {
  return fetch(request);
};
What doesn't seem to be working correctly is this: after user 1 has signed in, signed out, and user 2 has signed in, Chrome Dev Tools' Network tab shows that Hoodie regularly continues to make requests to BOTH users' API endpoints, like the following:
http://localhost:3000/_api/?hoodieId=uw9rl3p
http://localhost:3000/_api/?hoodieId=noaothq
Instead, it should be making requests to only ONE of these API endpoints. In the Network tab, each of these URLs appears twice in a row, and in the "Size" column the first request says "(from ServiceWorker)," and the second request states the response size in bytes, in case that's relevant.
The other problem which seems related is that when I sign in as user 2 and submit a form, the app writes to user 1's database on the server side. This makes me think the problem is due to the app not being able to bypass the cache for the /_api route.
Should I not have used both platinum-sw-cache and platinum-sw-fetch within one platinum-sw-register element, since the docs state they are alternatives to each other?
In general, what you're doing should work, and it's a legitimate approach to take.
If there's an HTTP request made that matches a path defined in <platinum-sw-fetch>, then that custom handler will be used, and the default handler (in this case, the networkFirst implementation) won't run. The HTTP request can only be responded to once, so there's no chance of multiple handlers taking effect.
I ran some local samples and confirmed that my <platinum-sw-fetch> handler was properly intercepting requests. When debugging this locally, it's useful to either add in a console.log() within your custom handler and check for those logs via the chrome://serviceworker-internals Inspect interface, or to use the same interface to set some breakpoints within your handler.
What you're seeing in the Network tab of the controlled page is expected—the service worker's network interactions are logged there, whether they come from your custom HoodieAPIFetchHandler or the default networkFirst handler. The network interactions from the perspective of the controlled page are also logged—they don't always correspond one-to-one with the service worker's activity, so logging both does come in handy at times.
So I would recommend looking deeper into the reason why your application is making multiple requests. It's always tricky thinking about caching personalized resources, and there are several ways that you can get into trouble if you end up caching resources that are personalized for a different user. Take a look at the line of code that's firing off the second /_api/ request and see if it's coming from a cached resource that needs to be cleared when your users log out. <platinum-sw> uses the sw-toolbox library under the hood, and you can make use of its uncache() method directly within your custom handler scripts to perform cache maintenance.
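For instance, a sketch of such maintenance inside custom-fetch-handler.js (the sign-out URL check and the uncached path are assumptions about Hoodie's API, not confirmed behavior):

// Hypothetical sign-out-aware handler: when a sign-out request passes
// through the service worker, evict a previously cached API response.
var HoodieAPIFetchHandler = function(request, values, options) {
  if (request.method === 'DELETE' &&
      request.url.indexOf('/_api/_session') !== -1) {
    toolbox.uncache('/_api/'); // returns a Promise; fire-and-forget here
  }
  return fetch(request);
};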

Has form post behavior changed in modern browsers? (or How are double clicks handled by the browser)

Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client-side validation as a second step (unobtrusive jQuery).
We wanted to disable double clicks server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur.
I thought that (in older browsers anyway) double clicking a submit button worked as follows:
User double clicks submit button.
Browser sends a post on the first click
On the second click, browser cancels/ignores initial post, and initiates a second post (before the first post has returned with a response).
Browser waits for second post to return, ignoring initial post response.
I thought that from the server side it looked like this: Server gets two simultaneous post requests, executes and responds to them both (unaware that no one is listening to the first response).
From our testing (Firefox 3.0, IE 8.0) this is what actually happens:
User double clicks submit button
Browser sends a post for the first click
Browser queues up second click, but waits for the response from the first click.
Response returns from first click (response is ignored?).
Browser sends a post for the second click.
So from a server side: Server receives a single post which it executes and responds to. Then, server receives a second request which it executes and responds to.
My question is, has this always worked this way (and I'm losing my mind)? Or is this a new feature in modern browsers that prevents simultaneous posts to be sent to the server?
It seems that for server-side double-click prevention, we don't have to worry about simultaneous posts or race conditions; we only need to worry about queued-up posts.
A similar situation that you need to handle (that the javascript disable-submit-button solution doesn't cover) is the one where the user clicks Submit, the server processes the request, but while it's processing the user's internet connection goes down (perhaps they're on a train going into a tunnel).
When the train comes out of the tunnel, the user doesn't know whether their transaction succeeded or not - they pressed the button, but nothing changed on the page (or perhaps they got a "Try again" page). The natural thing for them to do is click Submit again (or the "Try again" button).
The best way to handle this situation is to include a unique transaction id in the form (in a hidden field). Generate this id randomly, and when a transaction is successfully processed, store it in the database in a list of completed transactions.
Then when you get a POST, check whether this transaction has already been seen - and if it has, skip straight to the status page. Roughly:
BEGIN TRANSACTION
  SELECT *
  FROM completedTransactions
  WHERE userId = ... AND transactionId = ...

  <if we got a result - display results of previous transaction>
  <otherwise - process the request as normal>

  INSERT INTO completedTransactions (userId, transactionId)
  VALUES (....)
END TRANSACTION
This has the advantage that (provided you have a database that properly supports transactions - and since you're processing payments I hope you do!) you don't need to do any sort of threading or locking - things "just work".
(though be careful - some database systems can arbitrarily abort your transactions if there is a concurrency problem - but this (rare) situation is easily dealt with using a retry loop...)
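Client side, stamping the form with the random id might look like this (the field name and id format are illustrative; the id could equally be generated server side when the form is rendered):

<!-- hypothetical hidden field inside the payment form -->
<input type="hidden" name="transactionId" id="transactionId">
<script>
  // Generate the id once, when the form is first rendered; a double click
  // (or a retry after a dropped connection, if the page wasn't reloaded)
  // then re-submits the same id, which the server recognizes in its
  // completedTransactions table.
  document.getElementById('transactionId').value =
      'txn-' + new Date().getTime() + '-' + Math.random().toString(36).slice(2);
</script>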
As to testing double clicks from browsers: does it make any difference if you press the "stop" button between the two "submit" clicks?
This may be a stupid response, but why don't you just disable the submit button with javascript on click, so you don't have to worry about multiple clicks? I usually do this on most forms I make and it seems to solve the problem.
You already said you are using javascript, so that's not an issue, right?
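For example, a minimal jQuery sketch (binding to the form's submit event rather than the button's click also covers Enter-key submissions):

// Disable the submit button once the form is actually being submitted.
$('form').on('submit', function () {
  $(this).find(':submit').prop('disabled', true);
});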
As long as the request is in its connecting or sending stage, clicking on submit during the first submission cancels the request, starting a new one without the server 'knowing'.