Login via local HTML form, security? - html

Recently I had problems with my email account (gmx.net). I have about 30 failed login attempts a day. But that is not the topic of the question (I already changed my password).
It got me thinking. Is this an automated attack? And if so, how is it done? I took a look at the HTML code of the page and found out that it is pretty easy to just copy the source of the form element and do a login attempt through a local HTML file (copy and paste, new HTML file, open in browser, enter your credentials, submit). That means it is an easy task to automate such things (write a little script that does a POST with various values, i.e. a brute-force attack). I was about to write an email to the mail host when I found out that the exact same process works on facebook.com...
I had the impression that, since we have all these fancy new web frameworks like Rails, Django and so on, we get automatic protection against such attacks (for example the protect_from_forgery mechanism that Rails includes: http://ruby.about.com/od/security/a/forgeryprotect.htm).
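To illustrate the kind of protection I mean (just a sketch, not GMX's or Facebook's actual markup): the framework embeds a per-session token as a hidden field and rejects any POST that does not send it back.

<form action="/login" method="post">
  <input type="hidden" name="authenticity_token" value="random-value-tied-to-the-session">
  <input type="text" name="email">
  <input type="password" name="password">
  <input type="submit" value="Log in">
</form>

A copy of that form in a local HTML file would not contain a valid token, so the server could refuse the submission.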
My question here is:
Is there any sane reason to allow a login attempt from another server?
Don't give me "API"; most APIs for web applications require a manual login process before authorization.
I know there are many more ways to brute-force any website login (using a framework that controls a browser, etc.) and there are many ways to protect against it (IP banning, etc.). But shouldn't disallowing remote logins be one of the first security measures you would take?

Related

HTML Form "Action Attribute"

I'm learning to code and encountered a problem while making a form using HTML.
In the book, it stated that "every <form> element requires an action attribute and its value is the URL for the page on the server that will receive the information in the form when it is submitted."
But I thought about it for a long time and couldn't figure it out: what is meant by "the URL for the page on the server"? If I get my site uploaded to a web hosting company, would I get it there? Or do I need to rent a server elsewhere to get one? Or is it fine to store it in a local file? I saw that the data needs to be processed by PHP, although I don't know what that is.
Can anyone help me with this? Really appreciated.
Regards,
Ace
An HTTP(S) URL will include a hostname (which identifies a computer (acting as a server) on a network) and a path (and possibly some other components which don't matter for this question).
When you type a URL into the address bar of a browser, the browser will make a request to the server and ask for whatever is at the path.
The server will respond (typically with some data like an HTML document).
The server has to perform some logic to decide what to respond with. Typically this will either be:
Reading a file from its hard disk and returning the contents or
Executing a program and generating some content programmatically
When you submit a form, you are making a request to a URL with some data attached to it. Almost all of the time you will want the server to execute a program and do something with that data (such as put it in a database).
The program that gets executed can be written in any programming language you like (such as Perl, PHP, JavaScript, Java, or whatever).
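As a rough sketch (the path /contact-handler here is made up), the form only names a URL; the server decides what program handles requests to that path:

<form action="/contact-handler" method="post">
  <input type="text" name="message">
  <input type="submit" value="Send">
</form>

When this is submitted, the browser requests /contact-handler on whatever host served the page, and the server runs whichever program it has mapped to that path (a PHP script, a Node.js route, or anything else).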
If I get my site uploaded to a web hosting company, would I get it there?
Typically, if you have web hosting already then you will use that web hosting for any server-side programming you need to do.
If the hosting service doesn't provide you with any server-side hosting options (i.e. if it is just static hosting such as you might find on GitHub Pages), then that isn't an option. Likewise, if the server-side options they provide aren't suitable for you (e.g. they only support PHP but you want to run something written in Node.js), then you'll have to find an alternative.
The two alternatives you have are:
Move everything to hosting that provides the features you want
Host something elsewhere and keep the majority of your site in the original hosting
(There is nothing wrong with the latter option; I have one site which uses Amazon S3 static hosting for most of it but has a couple of web services running on Heroku.)
Or do I need to rent a server elsewhere to get one?
Dedicated hosting is almost certainly very expensive overkill for your purposes.
Or is it fine to store it in a local file?
It isn't possible to do server-side programming with a file: scheme URL. There's no server to execute the program.
If you are only working locally then you can install a web server on your computer. This is normal for development purposes.
It is probably worth mentioning that there are a few common server-side programs which are available prewritten with hosting services (e.g. contact forms which email you when someone fills them in). These typically come with advertising and require that the <form> and its contents are constructed with the specific fields the service expects. If you look for one of these be careful to follow their instructions precisely.
Aside: The statement that the action attribute is required is flat out wrong. It is an optional attribute and in its absence the form will be submitted to the URL of the current page.
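For example, a form like this (assuming the page it lives on was served from /signup) posts straight back to /signup:

<form method="post">
  <input type="text" name="name">
  <input type="submit" value="Go">
</form>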
HTML is a front-end tool that lets you take input from the user using a form. Once the user clicks the submit button, a request (GET or POST, depending on the form's method attribute) is sent to the back-end tool (PHP in this case; PHP is a back-end programming language). The back-end tool handles the submitted information and deals with it to your liking. The action attribute basically tells the front end (HTML) where to deliver the information to the back end (PHP), in the form of a URL.
I highly suggest trying out this example on W3Schools:
https://www.w3schools.com/tags/att_form_action.asp
I see you want to send your HTML form data to a PHP script. You can use a WAMP or XAMPP server to do this locally. You should use the PHP script's file path as the value of the form's action attribute.
For instance,
Home.html
<html>
....
<form action="process.php">
...
</form>
</html>
So, when you submit the form, the form data is passed to the PHP file, where you can access it using the $_GET superglobal array (or $_POST, if you add method="post" to the form) in your PHP script.
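For instance, a minimal process.php to go with the form above might look like this (a sketch; the field name "username" is just an example):

<?php
// process.php -- runs on the server when Home.html's form is submitted.
// With no method attribute the form defaults to GET, so the data arrives in $_GET.
$username = isset($_GET['username']) ? $_GET['username'] : '';
echo 'Hello, ' . htmlspecialchars($username);
?>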
Check GitHub for projects involving HTML, JavaScript, and PHP.
Here's one to start: https://github.com/kristej/Uniform-Database-Management.git
If you are sending the data to an online server, you need control of that server to process it, so try it out locally first.

(NodeJS, MySQL, AngularJS, Express 4.0) Risks of not blocking my api/routes for users?

At the moment I am working on a CRUD app that I am going to deploy (someday) and use for my own startup company. However I am nowhere near finishing this product and I stumbled upon a question that I can't seem to figure out.
I am using Express to serve Angular the data from my MySQL database. To do this I had to create '/api/' routes. However, if I go (for example) to '/api/clients' I will be able to see the entire list of clients as an ugly array. In this case that does not really matter, since it's just the data they were able to see anyway.
However, my question is: is it important to block these kinds of routes from users? Will problems arise when a user goes to '/api/createClient'? Could this result in a database injection that could ruin my DB?
My project can be found here: https://github.com/mickvanhulst/BeheerdersOmgevingSA
The server-side routing code can be found: server > Dao > clientDao.js
Controllers, HTML & client-side routing can be found in the 'public' folder.
I hope my question is clear enough and someone will be able to answer my question. If not, please state why the question is not clear and I will try to clarify.
Thanks!
Looking at the code, it looks like your URLs can be accessed directly from a browser, and if so, this does pose a security concern.
Doing database transactions with user-provided fields or values is a major security concern if the data is not validated and sanitised before making the database call.
I would recommend the following minimum steps before crafting APIs that are internal but can still be reached from a browser (a rough sketch follows these points):
1. If this is internal, then do not send the Access-Control-Allow-Origin header from the server, or confine it to your own domain name. This prevents AJAX calls being made to your APIs from other domains.
2. Sanitise and validate all the data thoroughly before doing any kind of database transaction. There is plenty of material everywhere on how to do this.
3. If these APIs are meant for internal use, then add some kind of authentication to your APIs before doing the logical work in your routes, with the help of middleware. You can leverage cookie authentication for very simple API authentication management, or use JSON Web Tokens if you want more security.
If you are manipulating your databases, then I would highly recommend using some kind of authentication in your APIs. Of course, point number 2 is a must.
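A rough sketch of point 3 in Express 4 (the token check here is deliberately simplistic and purely illustrative; use real session or JWT validation in practice):

var express = require('express');
var app = express();

// Hypothetical authentication middleware: reject any /api request
// that does not carry the expected token.
function requireAuth(req, res, next) {
  if (req.headers['x-api-token'] === process.env.API_TOKEN) {
    return next();
  }
  res.status(401).json({ error: 'Not authorised' });
}

app.use('/api', requireAuth);

app.get('/api/clients', function (req, res) {
  // Only reached once the middleware has let the request through;
  // validate and sanitise any parameters before they touch the database.
  res.json([]);
});

app.listen(3000);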

best way to switch between secure and insecure connections without bugging the user

The problem I am trying to tackle is simple. I have two pages: the first is a registration page where I take in a few fields from the user; once they submit, it takes them to another page that processes the data, stores it in a database and, if successful, gives a confirmation message.
Here is my issue: the data from the user is sensitive, so I'm using an HTTPS connection to prevent eavesdropping. After that is sent to the database, I'd like the confirmation page to do some nifty things like Google Maps navigation (this is for a time reservation application). The problem is that by using the Google Maps API, I'd be linking to items over an insecure source, which in turn prompts the user with a nasty warning message.
I've browsed around; Google has an alternative for enterprise clients, but it costs $10,000 a year. What I am hoping is to find a workaround: use a secure connection to take in the data and, after it is processed, bring the user to a page that isn't secure and allows me to utilize the Google Maps API. If any of you have a Netflix account you can see exactly what I would like to do: when you sign in it is a secure page, which then takes you to your account / queue on an insecure page. Any suggestions? Thanks!
I generally advise never to skip security features, because they are there for a reason, but I found this for you to check out.
Perhaps it is time to consider retiring support for IE6?

Iframe Injections in Websites

My website has been compromised. Someone has injected some iframe markup into my website.
How have they done this? It is only on my index.html and index.php pages. But I have blocked write permissions for these pages, so how were they able to write to them?
Will it affect other pages on my server?
Are there any other solutions to block this?
Thank you
<?php
include_once("commonfiles/user_functions.php");
include_once("user_redirect.php");
include_once("designs/index.html");
?>
<iframe src='url' width='1' height='1' style='visibility: hidden;'></iframe>
This is my index.php code; the <iframe> is injected after the PHP script.
Someone with FTP access to your site (you or your developers) has a virus on their workstations. This virus has installed a keylogger that is stealing credentials from your FTP client and sending this information back to the hacker.
The hacker collects hundreds of such credentials and then uses a program to log into each server, download a file, modify it to append an iframe or a block of obfuscated JavaScript or PHP, upload the file, download the next file, modify, upload, next, etc. The files downloaded may either match a set of names (such as only index.*, default.*, home.*, etc.) or just be any HTML or PHP file.
The appended code is often either an iframe that is visibility: hidden or of 1x1px size, a <script> sourcing a remote JavaScript file on a dubious domain, a block of JavaScript obfuscated by some clever String.fromCharCode'ing, or a block of base64_encode'd, eval()'d code. Deobfuscating the code, the result is often an iframe. More recently, some clever attackers are inserting remote shells, granting them backdoor access to your server.
Once all the files have been modified, the attacker logs out. Visitors to your site will be subject to malicious code from the domain linked in the iframe with the intention of installing viruses and rootkits. Among other functions, these viruses will install a keylogger to sniff FTP credentials... and the virus continues spreading.
The attacker is using your credentials, so they can only access files that you have access to. Sometimes they will upload an additional file in certain directories with an encoded shell, allowing them return access to the server (common ones are _captcha.php in /forums directories and img.php or gifimg.php in /gallery directories). If you host other domains on your server, as long as the user for the affected domain has no access beyond their own domain, the others will not be affected.
There are two ways to stop this sort of attack -- prevention and proper antivirus. The attacks can be easily deflected by use of a firewall and limiting FTP access to only a few select IPs. The attackers are not attacking from your own workstation (yet), but rather a server elsewhere in the world. Using proper antivirus on all workstations with access to your FTP account -- or, better yet, not using Windows XP -- will help prevent the original infection from occurring.
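For example, limiting FTP to a couple of trusted addresses with iptables might look roughly like this (203.0.113.10 is a placeholder address; adapt to your own firewall setup):

iptables -A INPUT -p tcp --dport 21 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 21 -j DROP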
If you are infected, it's fairly easy to clean the messes up using a bit of clever sed, depending how good you are at spotting the injection and making effective regexes. Otherwise, backups backups backups -- always have backups! ...Oh, and change your FTP password or they'll be back tomorrow.
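As a rough sketch, assuming the injected markup is exactly the one-line hidden iframe shown in the question (always work on a backup copy first, and adjust the pattern to what was actually injected):

grep -rl "<iframe src='url'" . | xargs sed -i "s#<iframe src='url'[^>]*></iframe>##g"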
If the php file itself has been edited to include this iframe and if there truly is no way for another script you are running to write to the file then a user account with access to the file might have been compromised. If there is a user account with access to the file that has a weak password this would be my candidate as the most likely culprit.
They may have used some form of injection on your site to acquire usernames and password hashes and bruteforced those, they might have installed a keylogger on someone's machine who has access, or they may have just brute forced your login directly (assuming you don't have some sort of mechanism in place to prevent this).
First thing I would do is ensure there are no viruses running on anyone's computer who has access to the machine. Then go about changing passwords. And finally review the php scripts of the site for possible points of injection. Trouble spots are pretty much anywhere you're taking in some kind of user input and processing it without first checking to make sure it is safe to process (i.e. failure to strip dangerous characters from a user login form).
From the comments that have been posted so far it seems almost certain that someone has gained access to a user account with write permissions on the files that are having code injected into them. It sounds like some individual has discovered one or more account passwords and has made it their pastime to occasionally log into your FTP and make some changes. Have you tried changing your passwords? I recommend using a fairly secure password, of at least 15 characters and using a variety of character types including unprintable characters if you are able (use alt/meta keys to enter UTF codepoints on the number pad).
If, after changing your password, you still observe the same problems, then there could be another issue. I would first seriously scrutinize your PHP scripts. Anywhere your scripts accept user input from a form, data stored in a cookie, or other data originating from outside the script itself (and therefore potentially "dirty" data), go over the operations of the script with this data very carefully. If you are using any such potentially dirty data to run an OS command, open/read/write a file, or query a database, then it is possible that the data contain escape characters that will escape your code, allowing an attacker to execute any code they wish within your script.
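For example, in PHP the difference between unsafe and safer handling of such "dirty" data in a database query looks roughly like this (a sketch using PDO; $pdo is assumed to be an existing PDO connection, and the table and field names are made up):

// Unsafe: $_POST['user'] is concatenated straight into the SQL string.
$rows = $pdo->query("SELECT * FROM users WHERE name = '" . $_POST['user'] . "'")->fetchAll();

// Safer: a prepared statement keeps the data out of the SQL itself.
$stmt = $pdo->prepare('SELECT * FROM users WHERE name = ?');
$stmt->execute(array($_POST['user']));
$rows = $stmt->fetchAll();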
Keep an eye on your access logs. You mentioned that you remove the injected iframe code from your scripts and it keeps being re-injected, so if you can catch when it happens you can probably find a clue as to the source of the changes in your access logs.
See this thread for more on trying to block iframes.

Linux web front-end best practices

I want to build a web-based front-end to manage/administer my Linux box. E.g. I want to be able to add users, manage the file system and all those sorts of things. Think of it as a cPanel clone, but more for system admin rather than web admin.
I was thinking about creating a service that runs on my box and performs all the system-level tasks. This way I can have a clear separation between my web-based front-end and the actual logic. The server pages can then make calls to my specialized server or queue tasks that way. However, I'm not sure if this would be the best way to go about it.
I guess another important question would be, how I would deal with security when building something like this?
PS: This just as a pet project and learning experience so I'm not interested in existing solutions that do a similar thing.
Have the specialized service daemon running as a distinct user -- let's call it 'managerd'. Set up your /etc/sudoers file so that 'managerd' can execute the various commands you want it to be able to run, as root, without a password.
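For example, the /etc/sudoers entries for that could look roughly like this (the command list is purely illustrative; keep it as short and specific as possible):

managerd ALL=(root) NOPASSWD: /usr/sbin/useradd, /usr/sbin/usermod, /usr/sbin/userdel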
Have the web server drop "trigger" files containing the commands to run in a directory that is mode '770' with a group that only the web server user and 'managerd' are members of. Make sure that 'managerd' verifies that the files have the correct ownership before executing the command.
Make sure that the web interface side is locked down: run it over HTTPS only, require authentication, and if at all possible, put in IP-specific ACLs so that you can only access it from locations that are known in advance.
Your approach seems like a very sensible solution to the 'root' issue.
Couple of suggestions:
Binding the 'specialised service' to localhost as well would help to guarantee that requests can't be made externally.
Have requests call functions that perform specific actions rather than giving the service full, unrestricted access; so call a function like addToGroup(user, group) instead of a generic performAction(command).