I know what the autocomplete attribute is and how to use it, but I don't get the point of it!
I mean, why should I set autocomplete='off' as a developer?
Are there any security benefits to it, or something?
Yes, it's a security feature. Imagine this scenario:
Your website fell victim to an XSS attack (e.g. somebody was able to implant a piece of JavaScript in your page without your knowledge).
Your website also has username/password fields, and when a user enters the page, the malicious script immediately takes the values and sends them off to the attacker's server.
Granted, this is not strictly mitigated by disabling autocomplete, but at least it prevents user data from being stolen merely because somebody with saved credentials loads the page. It forces them to at least type in their credentials.
Also there is the rather obvious component of somebody gaining physical access to your machine and then being able to auto-complete their way into your accounts.
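For reference, a minimal login form with autofill disabled might look like the sketch below. The field names are illustrative, and note that many modern browsers deliberately ignore autocomplete="off" on credential fields and offer to save or fill passwords anyway.

```html
<!-- Illustrative only: browsers may override autocomplete="off"
     for password fields and still offer to save/fill credentials. -->
<form method="POST" action="/login" autocomplete="off">
  <input type="text" name="username" autocomplete="off">
  <input type="password" name="password" autocomplete="off">
  <button type="submit">Log in</button>
</form>
```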
Although I know a lot of email clients will pre-fetch or otherwise cache images, I am unaware of any that pre-fetch regular links like some link.
Is this a practice done by some emails? If it is, is there a sort of no-follow type of rel attribute that can be added to the link to help prevent this?
As of Feb 2017, Outlook (https://outlook.live.com/) scans emails arriving in your inbox and sends all URLs it finds to Bing, to be indexed by the Bing crawler.
This effectively makes all one-time-use links (login, password reset, etc.) useless.
(Users of my service were complaining that one-time login links didn't work for some of them, and it turned out that BingPreview/1.0b was hitting the URL before the user even opened the inbox.)
Drupal seems to be experiencing the same problem: https://www.drupal.org/node/2828034
Although I know a lot of email clients will pre-fetch or otherwise cache images.
That is no longer even a given.
Many email clients – be they web-based, or standalone applications – have privacy controls that prevent images from being automatically loaded, to prevent tracking of who read a (specific) email.
On the other hand, there are clients like, for example, Gmail's web interface, which tries to establish the standard of downloading all referenced external images, presumably to mitigate or invalidate such attempts at user tracking: if a large majority of Gmail users have those images downloaded automatically, whether they actually opened the email or not, the data that can be gained for analytical purposes becomes watered down.
I am unaware of any that pre-fetch regular links like some link
Let’s stay with Gmail for example purposes, but others will behave similarly: since Google is always interested in “what’s out there on the web”, it is highly likely that their crawlers will follow that link to see what it contains or leads to, for their own indexing purposes.
If it is, is there a sort of no-follow type of rel attribute that can be added to the link to help prevent this?
rel="nofollow" concerns ranking rather than crawling, and a noindex (via meta element) or a robots.txt disallow also won’t keep nosy bots from at least requesting the URL.
Plus, other clients involved, such as a firewall, anti-virus, or anti-malware tool, might also request it for analytical purposes without any user actively triggering it.
If you want to be (relatively) sure that an action is triggered only by a (specific) human user, then use URLs in emails or other messages only to lead them to a website where they confirm the action via a form, submitted with method=POST. Whether some kind of authentication or CSRF protection is also needed goes a little beyond the scope of this question.
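The confirmation step described above can be as simple as a landing page like this; the action URL and token name are made up for illustration:

```html
<!-- Landing page for a one-time link: the emailed URL only GETs this
     page; the state-changing action happens via the POST below. -->
<form method="POST" action="/confirm-action">
  <input type="hidden" name="token" value="one-time-token-from-url">
  <button type="submit">Yes, confirm this action</button>
</form>
```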
Common email clients do not have crawlers that search or pre-build the documents behind <a> tags, if that is what you're asking; trying to pre-build and cache a web location could be an immense task if the page is dynamic or large enough.
Images are cached locally to reduce the email's load time, which is a convenience factor and reduces network load, but when you open an email hyperlink it loads in your web browser rather than in the email client.
I just ran a test using analytics to report any server traffic, and an email containing just
linktomysite
did not produce any resulting crawls to the site from Outlook 2007, Outlook 2010, Thunderbird, or Apple Mail (Yosemite). You could try a Wireshark capture to check for network traffic from the client to specific outgoing IPs if you're really interested.
You won't find any native email clients that do that, but you could come across some "web accelerators" that, when using a web-based email, could try to pre-fetch links. I've never seen anything to prevent it.
Links (GETs) aren't supposed to "do" anything; only a POST is. For example, the "unsubscribe me" link in your email should not directly unsubscribe the subscriber. It should GET a page from which the subscriber can then POST.
W3Schools gives a good overview of how you should expect a GET to behave (caching, etc.):
http://www.w3schools.com/tags/ref_httpmethods.asp
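The unsubscribe flow described above can be sketched as follows. The route names and the in-memory subscriber set are hypothetical; a real app would use a web framework, a database, and proper HTML escaping.

```javascript
// Sketch of "GET shows a page, POST changes state" for an unsubscribe
// flow. No escaping or framework here; illustration only.
const subscribers = new Set(["alice@example.com"]);

function handle(method, path, params) {
  if (method === "GET" && path === "/unsubscribe") {
    // Safe method: just render a confirmation form, change nothing.
    return {
      status: 200,
      body: `<form method="POST" action="/unsubscribe">
  <input type="hidden" name="email" value="${params.email}">
  <button>Confirm unsubscribe</button>
</form>`,
    };
  }
  if (method === "POST" && path === "/unsubscribe") {
    // Unsafe method: this is where the actual state change happens.
    subscribers.delete(params.email);
    return { status: 200, body: "You have been unsubscribed." };
  }
  return { status: 404, body: "Not found" };
}

// A crawler pre-fetching the emailed link issues a GET, which changes nothing:
handle("GET", "/unsubscribe", { email: "alice@example.com" });
console.log(subscribers.has("alice@example.com")); // true: still subscribed

// Only an explicit POST from the confirmation form unsubscribes:
handle("POST", "/unsubscribe", { email: "alice@example.com" });
console.log(subscribers.has("alice@example.com")); // false
```

This is exactly why a pre-fetching bot like BingPreview breaks one-time links that act on GET, but is harmless against a GET-then-POST design.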
I was wondering if it is (easily) possible to update a webpage when there is an action on another webpage.
Example: I check a checkbox, and on another webpage, which is already open, there needs to be a change. This must happen instantaneously, with as little delay as possible.
I do not have any code written yet, so I can't show anything.
My first thought would be to store the state of the checkbox in a database with JavaScript and poll the database with AJAX every 10 ms on the other webpage.
But I know this will be too slow for me.
Is there a better way to do this (relatively easily)?
No, that is usually not possible unless both pages come from the same domain, and that domain establishes communication between the two JavaScript sandboxes running in the two windows.
The point here is that what you describe would otherwise amount to a cross-site scripting attack (XSS attack for short), which is the security nightmare of every browser developer and website admin.
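Conceptually, the same-domain communication mentioned above is a push (publish/subscribe) model rather than polling. The real cross-page transport would be something like BroadcastChannel, the localStorage "storage" event, or a shared WebSocket server; the plain-JS channel below just models the idea under those assumptions:

```javascript
// Minimal publish/subscribe sketch: "page B" is notified the instant
// "page A" publishes, instead of polling a server every 10 ms.
class Channel {
  constructor() { this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  publish(msg) { this.listeners.forEach((fn) => fn(msg)); }
}

const channel = new Channel();

// "Page B" subscribes once and reacts immediately to changes.
let checkboxState = null;
channel.subscribe((msg) => { checkboxState = msg.checked; });

// "Page A" publishes when the checkbox is toggled.
channel.publish({ checked: true });
console.log(checkboxState); // true
```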
Does anyone know why Box.com makes it so hard to generate an authorization code programmatically? I wrote some code to do this through screen-scraping, and it recently broke because (as far as I can tell) one HTTP request parameter changed from [root_readwrite] to root_readwrite. I was able to fix it reasonably quickly (thank you, Fiddler), but why make developers go to this trouble?
Judging by the number of questions on this topic, many developers need to do this, presumably for good reason, and I don't think it can be prevented, so why not just embrace it?
Thanks for listening, Martin
The issue with doing OAuth programmatically is that it would effectively defeat the point of OAuth. Users are supposed to be presented with the Box login page so that they never have to give their username and password directly to your app. This allows users to see what permissions your app has over their account (the scope) and also allows them to revoke your app at any time.
Doing login programmatically means that at some point your app knows the user's password. This requires that the user trusts you to not do anything malicious, which usually isn't feasible unless you're a well-trusted name. The user also has to trust that you handle their credentials correctly and won't use them in an insecure way.
Box wants to encourage developers to do authentication the correct and secure way, and therefore isn't likely to support doing OAuth programmatically. You should really try to perform login the supported way by going through the Box login page.
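Starting the supported authorization-code flow is straightforward. The endpoint below is Box's documented OAuth 2.0 authorize URL at the time of writing; the client ID, redirect URI, and state value are placeholders you would take from your own Box app settings:

```javascript
// Build the URL that the user's browser should be sent to. After the
// user logs in on Box's own page, Box redirects back to redirect_uri
// with ?code=...&state=..., and your server exchanges the code for a
// token. Placeholder values throughout.
const authorizeUrl = new URL("https://account.box.com/api/oauth2/authorize");
authorizeUrl.search = new URLSearchParams({
  response_type: "code",      // ask for an authorization code, not a token
  client_id: "CLIENT_ID",
  redirect_uri: "https://yourapp.example.com/callback",
  state: "random-csrf-token", // verify this value when the user returns
}).toString();

console.log(authorizeUrl.toString());
```

This way your app never sees the user's password, which is the whole point of the flow.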
In many websites that do not use AJAX, registration forms usually keep all the input data filled in upon failed attempts; however, more often than not, the password field has to be refilled by the user.
My question is, why do web developers choose to do this? My first idea was that they are trying to prevent malicious scripts from stealing the password on page load; however, a script could just as easily do that with an onKeyUp handler.
Any thoughts?
Browsers have a feature specifically for passwords, and passwords are saved ONLY if the user explicitly allows it. Most browsers will ask, 'Do you want to store the password?'
You could further keep a master password to protect the saved passwords when a user is not on his/her machine.
IMO, I do not find it wrong to save passwords. Since I let the browser save my password, I tend to have much stronger passwords, because I use autogenerated ones that are usually difficult to remember. It also keeps me away from reusing a single variation of the same password across multiple websites.
There are various ways through which someone might be able to sniff the password, irrespective of whether you save it or not. Sites usually have a password-recovery feature, linked to, say, my mobile phone, in case of a breach.
So, the website should allow the user to save the password in the browser, and this puts the responsibility on the user, and it is HIS/HER decision how he/she wants to use the password.
Because to auto-fill the field, if you look at the markup, you'd see something like <input type="password" value="YOURPASSWORDHERE" />, which is not so great for security. Getting markup is easier than monitoring JS events in terms of XSS: I can request a page more easily than I can manipulate the DOM with JS.
This isn't about scripts; the website needs to trust scripts running in its context.
Browsers often cache the page markup. You probably don't want to have your password in the cache.
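The server-side choice described above can be sketched like this: when re-rendering a failed registration form, echo back every field except passwords, so the secret never lands in the (cacheable) markup. The field names and helper are hypothetical:

```javascript
// Re-render a form field from submitted values, blanking out any
// password field so it never appears in the generated HTML.
// (Real code would also HTML-escape the value; omitted for brevity.)
function renderField(name, type, submittedValues) {
  const value = type === "password" ? "" : (submittedValues[name] ?? "");
  return `<input type="${type}" name="${name}" value="${value}">`;
}

const submitted = { username: "alice", password: "hunter2" };
console.log(renderField("username", "text", submitted));
console.log(renderField("password", "password", submitted));
```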
I am relatively new to web development, and I was hoping I could get some pointers about the feasibility of a feature I would like to implement. Is it possible to have a URL that you can click on that contains login credentials for the website it links to, so as to bypass that website's login screen?
In other words, can I make a link from my website to Facebook that would log me right in to my Facebook account from any computer? Meaning, if I don't have cookies to store my login info, is it still possible to log in?
This is just a conceptual question, so any help would be appreciated! Thanks!
One reason why this is generally avoided, is because web servers often store the query string parameters in the access logs. And normally, you wouldn't want files on your server with a long list of usernames and passwords in clear text.
In addition, a query string containing a username and password could be used with a dictionary attack to guess valid login credentials.
Apart from those issues, as long as the request is made via HTTPS, it would at least be safe in transit.
It is possible to pass parameters in the URL through a GET request to the server, but one has to understand that the request would likely be made in clear text and thus isn't likely to be secure. There was a time when I did have to program a "silent" log-in using tokens, so it can be done in enterprise applications.
You used to be able to do this, but most browsers don't allow it anymore. You would never be able to do this with Facebook, only with something that uses browser auth (where the browser pops up a username/password dialog).
It was like this:
http://username:pass@myprotectedresource.com
What you might be able to do is whip up some JavaScript in a link that POSTs your username and password to Facebook's login page. I'm not sure it will work, because you might need to scrape the cookie/hidden fields from the login page itself.
It is possible for the site to block you on account of no cookies, or invalid nonce or wrong HTTP referrer, but it may work if their security is low.
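The user:pass@host form above embeds credentials in the URL's "userinfo" component, and the WHATWG URL parser (available in browsers and Node) exposes them directly, which is one reason such URLs are risky to share or log. The credentials here are made up:

```javascript
// Parse a URL carrying credentials in its userinfo component.
const url = new URL("http://username:pass@myprotectedresource.com/");
console.log(url.username); // "username"
console.log(url.password); // "pass"
console.log(url.host);     // "myprotectedresource.com"
```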
While it is possible, it is up to the site (in this case Facebook) to accept these values in the query string. There are certainly security issues to consider, and it isn't generally done.
That said, there are different options out there for single sign-on. This web site uses OpenID for that.