Restricting WebSockets to a LAN network - html

Is it possible to restrict HTML5 WebSocket to a LAN network?
For example, I would have a live website at http://example.com, and all users on the same local network would need to be treated as a group. Likewise, these same users would NOT be able to see or affect any actions of users outside of the LAN.
I have looked into wrappers such as NowJS, which has built-in support for "groups", but I'm not sure if this is what I'm after.
Any ideas?

If you have access to the user's IP address, you could check whether the first three octets are the same and use that as a grouping criterion. I suspect that would only work reliably with office LANs that aren't behind NAT, though.
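To make that concrete, here is a minimal sketch in Python, assuming a recent version of the third-party websockets package (any server that exposes the peer address would work the same way); the grouping key is simply the first three octets of the client's address:

```python
# Minimal sketch: group WebSocket clients by the /24 their IP belongs to
# and relay messages only within that group. Assumes the third-party
# "websockets" package (handler signature as in recent versions).
import asyncio
from collections import defaultdict

import websockets

# Map "10.0.5" -> set of connections that arrived from 10.0.5.x
groups = defaultdict(set)

def group_key(ip: str) -> str:
    # First three octets; note that behind a NAT gateway every client
    # presents the same public IP, so they still land in one group.
    return ".".join(ip.split(".")[:3])

async def handler(ws):
    key = group_key(ws.remote_address[0])
    groups[key].add(ws)
    try:
        async for message in ws:
            # Relay only to peers in the same /24 "LAN" group.
            for peer in groups[key]:
                if peer is not ws:
                    await peer.send(message)
    finally:
        groups[key].discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```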

Related

Zabbix: filter discovery action by IP address

I'm currently monitoring several routers in my network with Zabbix 3.4.4. Right now I add them manually, but I'd like to use the discovery feature to do this automatically. The problem is that I need to monitor only the routers, not all the other hosts on the net.
For example: I have a discovery rule for 10.0.0.0/16, I add a new network 10.0.10.0/24 which has several hosts, but I want to monitor only 10.0.10.1. Sadly being routers and from different manufacturers I cannot test services or responses, I can rely on ping only.
From what I see in the Action options, there's no way to filter on such a condition, am I right? Is there any other way to filter host IPs so that Zabbix monitoring is added only to the routers' IPs?
The benefit of repeatedly scanning the whole subnet just to find a small number of hosts doesn't seem to be there. I'd suggest looking into creating those hosts via the API instead.
Having said that, a discovery range of 10.0.0-255.1 might work, and it would also reduce your network traffic significantly.
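If you go the API route, a rough sketch could look like the following (Python with the third-party requests library; the endpoint URL, credentials, group ID and template ID are placeholders for your installation):

```python
# Hedged sketch: create one router host per /24 through the Zabbix
# JSON-RPC API instead of running network discovery.
import requests

ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder URL

def rpc(method, params, auth=None):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    if auth:
        payload["auth"] = auth
    r = requests.post(ZABBIX_URL, json=payload)
    r.raise_for_status()
    return r.json()["result"]

# Zabbix 3.x: log in once, then pass the auth token on every call.
token = rpc("user.login", {"user": "Admin", "password": "zabbix"})

# One host per router: x.y.z.1 for each /24 you route.
for third_octet in range(256):
    ip = f"10.0.{third_octet}.1"
    rpc("host.create", {
        "host": f"router-{ip}",
        "interfaces": [{"type": 1, "main": 1, "useip": 1,
                        "ip": ip, "dns": "", "port": "10050"}],
        "groups": [{"groupid": "2"}],           # placeholder group ID
        "templates": [{"templateid": "10001"}]  # placeholder ICMP ping template
    }, auth=token)
```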

Allowing multiple logins from one account with ejabberd

I have just started getting my hands dirty building IM applications with the ejabberd XMPP server, and I have a requirement to allow one user account to log in simultaneously from multiple devices and follow conversations on all of its logged-in devices, much like Skype and Facebook offer.
Is this possible with ejabberd out of the box or are there any further customizations one has to do?
Any pointers I can get would be helpful. The body of knowledge out there is quite huge, and knowing where to start looking has been quite daunting.
Yes, connecting from multiple devices at once is part of the XMPP standard. In a JID, the "resource" portion (e.g. the part after the slash in jome@stackoverflow.com/desktop) is unique to a single connection, and users may have many resources. So the resource could be your MAC address or some unique device ID.
Vanilla XMPP lets users specify a priority for each resource, and messages are routed to the highest-priority resource present. To follow a conversation across all resources at once, you need to enable XEP-0280 (Message Carbons).
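On the server side, carbons are provided by mod_carboncopy, which ships with ejabberd and is enabled in ejabberd.yml; each client then has to opt in per XEP-0280. As an illustration of the client side, here is a minimal sketch using the third-party slixmpp library (the JID and password are placeholders):

```python
# Minimal sketch: each device logs in with its own resource and asks the
# server to copy it on messages sent to/from the account's other resources
# (XEP-0280 Message Carbons). Uses the third-party slixmpp library.
import slixmpp

class MultiDeviceClient(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.register_plugin('xep_0280')  # Message Carbons
        self.add_event_handler('session_start', self.on_start)
        self.add_event_handler('carbon_received', self.on_carbon)

    async def on_start(self, event):
        self.send_presence()
        await self.get_roster()
        await self['xep_0280'].enable()  # ask the server to send us carbons

    def on_carbon(self, msg):
        # A copy of a message that was delivered to another of our resources.
        print('Carbon:', msg)

# The resource after the slash identifies this device's connection.
client = MultiDeviceClient('user@example.com/phone', 'secret')
client.connect()
client.process(forever=True)
```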

Business website hosted publicly (for APIs, etc) needs to be accessible ONLY from inside office?

Would appreciate your patience with this question; still learning a lot of things.
My taxi booking start-up has a website (CakePHP) hosted on EC2 (for reliability), which is an ERP of sorts used only by internal employees. The tool also interacts with the cabs' GPS receivers: the GPS units send data to the public server through some APIs, and that data feeds the logic of the booking process. And since we don't have a very strong internet connection on the premises, we've kept it all on EC2.
Now, we are increasingly concerned about leaving information like this (customer data, vehicle info) on the public internet, accessible from outside the premises by a rogue employee. For our implementation, MySQL replication has already been considered, with us reading from a local slave and writing to the master. The only issue is that there's no way non-technical employees would know whether the data is fresh or whether replication is broken. Also, we'd prefer to keep our servers online in the cloud, as we don't want to invest in physical security for this hardware.
We are thinking of the following:
IP-address-based auth; anyone behind the office NAT would be allowed. The problem is that we have a dynamic IP.
Computer-name/MAC-address-based auth; almost no security once a user figures it out. Also, can we even read these parameters from Chrome?
Storing a list of IP addresses that log in; as there are just 6 employees, we'd be able to monitor it for weird IPs. Not scalable or even particularly secure.
A hosts-file entry on each employee PC, with that hostname configured as a virtual host in apache2, so hitting the IP address directly would do no good. Again, it only takes one smart employee to defeat.
Do help us out!
I think you should look at VPC, Amazon's Virtual Private Cloud. It's the better option for hosting solutions on EC2 that are private to your enterprise.
It would allow you to create a private network that is only accessible from your computers, with internet-facing servers in a separate subnet.
You have a number of ways of connecting the private subnet to your office; a VPN seems like the right option for you here (low cost, no special hardware required). See http://aws.amazon.com/vpc/ and http://d36cz9buwru1tt.cloudfront.net/Extend_your_IT_infrastructure_with_Amazon_VPC.pdf
I considered writing this as a comment but don't have enough rep...
I'm not sure where you are from, but in my region the cost of a static IP is negligible ($10-$50 a month), which is a drop in the bucket considering the liability risk you are facing. Then you can secure the server with usernames and passwords, and also check the originating IP.
You may also be able to set up a machine that polls a service like whatismyip every hour and updates the allowed IP whenever it changes.
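A rough sketch of that watcher in Python (api.ipify.org is one such lookup service, and update_firewall is a placeholder for however you actually whitelist the address, e.g. an EC2 security group change):

```python
# Poll a what's-my-IP service hourly and react when the office's
# public IP changes. Uses the third-party requests library.
import time
import requests

def current_public_ip() -> str:
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

def update_firewall(ip: str):
    # Placeholder: call your cloud provider's API or notify an admin here.
    print(f"Office IP changed, now allow: {ip}")

last_ip = None
while True:
    try:
        ip = current_public_ip()
        if ip != last_ip:
            update_firewall(ip)
            last_ip = ip
    except requests.RequestException:
        pass  # transient lookup failure; try again next cycle
    time.sleep(3600)  # once an hour, as suggested above
```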

Should I use SSL on all web pages or just some account pages?

My user account and login pages are SSL, but the rest of my site is not. What benefit is there to switching between the two as I am doing, versus making the whole site SSL?
There is an overhead to using SSL, although in reality it may not cause a concern - as pointed out in this SO question.
You can minimise what overhead there is by only using SSL for the transactions where it adds value - i.e. where you want to ensure the confidentiality and integrity of the data in transit. Often that's just the username and password, but there may be other transactions where you want these guarantees as well.
In general, once logged on, a session ID is passed between client and server. If this cookie is sent in clear text (as with non-SSL requests/responses), it can be sniffed and used to enter the user's account without logging on (a session hijacking attack). This is why Google recently enabled 'always on HTTPS' for Gmail.
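If you do move the logged-in area to SSL, also mark the session cookie Secure so the browser never sends it over plain HTTP, which is what the sniffing attack above relies on. A small illustration in Python/Flask (the framework choice is just for illustration):

```python
# Mark the session cookie so it is only ever sent over HTTPS and is not
# readable from JavaScript. Only helps if the logged-in area is all HTTPS.
from flask import Flask

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder signing key
app.config.update(
    SESSION_COOKIE_SECURE=True,    # cookie sent over HTTPS only
    SESSION_COOKIE_HTTPONLY=True,  # not readable from client-side scripts
)
```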
Use SSL on pages where you ask the user to submit a credit card number, for example. Don't overuse it without good reason.

Simple, secure API authentication system

I have a simple REST JSON API for other websites/apps to access some of my website's database (through a PHP gateway). Basically the service works like this: call example.com/fruit/orange, and the server returns JSON information about the orange. Here is the problem: I only want websites I permit to access this service. With a simple API key system, any website could quickly obtain a key by copying it from an authorized website's (potentially) client-side code. I have looked at OAuth, but it seems a little complicated for what I am doing. Solutions?
You should use OAuth.
There are actually two OAuth specifications, the 3-legged version and the 2-legged version. The 3-legged version is the one that gets most of the attention, and it's not the one you want to use.
The good news is that the 2-legged version does exactly what you want: it allows an application to grant access to another via either a shared secret key (very similar to Amazon's Web Service model; you would use the HMAC-SHA1 signing method) or via a public/private key system (signing method: RSA-SHA1). The bad news is that it's not nearly as well supported as the 3-legged version yet, so you may have to do a bit more work right now than you otherwise might.
Basically, 2-legged OAuth just specifies a way to "sign" (compute a hash over) several fields, which include the current date, a random number called a "nonce", and the parameters of your request. This makes it very hard to impersonate requests to your web service.
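To make the idea concrete, here is a simplified Python sketch of that kind of signing. It is not a spec-compliant OAuth implementation (the real base-string and signature encoding rules differ in details), just the shape of it:

```python
# Client-side request signing: build a canonical string from the method,
# URL, timestamp, nonce and parameters, then HMAC-SHA1 it with the shared
# secret. The key names and secret below are illustrative placeholders.
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote, urlencode

CONSUMER_KEY = "my-app"      # identifies the calling application
CONSUMER_SECRET = b"s3cr3t"  # shared secret, never sent over the wire

def sign_request(method: str, url: str, params: dict) -> dict:
    params = dict(params,
                  oauth_consumer_key=CONSUMER_KEY,
                  oauth_timestamp=str(int(time.time())),
                  oauth_nonce=uuid.uuid4().hex)
    # Canonical form: sorted, percent-encoded parameters.
    base = "&".join([method.upper(), quote(url, safe=""),
                     quote(urlencode(sorted(params.items())), safe="")])
    digest = hmac.new(CONSUMER_SECRET, base.encode(), hashlib.sha1)
    params["oauth_signature"] = digest.hexdigest()
    return params  # send these as the request parameters
```

The timestamp and nonce are what make replaying a captured request hard: the server rejects stale timestamps and remembers recently seen nonces.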
OAuth is slowly but surely becoming an accepted standard for this kind of thing -- you'll be best off in the long run if you embrace it because people can then leverage the various libraries available for doing that.
It's more elaborate than you would initially want to get into, but the good news is that a lot of people have spent a lot of time on it, so you know you haven't forgotten anything. A great example is that very recently Twitter found a gap in the OAuth security which the community is currently working on closing. If you'd invented your own system, you'd have to figure out all of this on your own.
Good luck!
Chris
OAuth is not the solution here.
OAuth is for when you have end users and want third-party apps not to handle end-user passwords. When to use OAuth:
http://blog.apigee.com/detail/when_to_use_oauth/
Go with a simple API key.
And take additional measures if there is a need for a more secure solution.
Here is some more info, http://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/
If someone's client side code is compromised, they should get a new key. There's not much you can do if their code is exposed.
You can however, be more strict by requiring IP addresses of authorized servers to be registered in your system for the given key. This adds an extra step and may be overkill.
I'm not sure what you mean by a "simple API key", but you should be using some kind of authentication that relies on private keys (known only to client and server), and then perform some kind of checksum over the data to ensure that the client is indeed who you think it is and that the data has not been modified in transit. Amazon AWS is a great example of how to do this.
I think it may be a little strict to guarantee that code has not been compromised on your clients' side. I think it is reasonable to place responsibility on your clients for the security of their own data. Of course this assumes that an attacker can only mess up that client's account.
Perhaps you could keep a log of which IPs requests come from for a particular account, and if a new IP comes along, flag the account, send an email to the client, and ask them to authorize that IP. Maybe something like that could work.
Basically you have two options: either restrict access by IP or use an API key. Both options have their positive and negative sides.
Restriction by IP
This can be a handy way to restrict access to your service. You can define exactly which 3rd-party services will be allowed to access your service without forcing them to implement any special authentication features. The problem with this method, however, is that if the 3rd-party service is written, for example, entirely in JavaScript, then the IP of the incoming request won't be the 3rd-party service's server IP but the user's IP, as the request is made by the user's browser and not the server. Using IP restriction hence makes it impossible to write client-driven applications and forces all requests to go through a server with proper access rights. Remember that IP addresses can also be spoofed. The check itself is trivial, as sketched below.
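A tiny Python sketch of that check, with placeholder partner IPs, assuming the peer address is available as in any WSGI-style app:

```python
# Allow only registered partner servers. Trust the socket peer address,
# not X-Forwarded-For headers, which a caller can forge.
ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # placeholder partner IPs

def is_allowed(remote_addr: str) -> bool:
    return remote_addr in ALLOWED_IPS
```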
API key
The advantage of API keys is that you do not have to maintain a list of known IPs; you do have to maintain a list of API keys, but it's easier to automate their maintenance. Basically how this works is that you have two keys, for example a user id and a secret password. Each request to your service should provide an authentication hash consisting of the request parameters and the user id, hashed together with the secret password as the key. This way you can both authenticate and restrict access. The problem is that, once again, if the 3rd-party service is written as client-driven (for example JavaScript or ActionScript), then anyone can parse the user id and secret out of the code.
Basically, if you want to be sure that only the few services you've specifically approved will be allowed to access your service, then your only option is IP restriction, which forces them to route all requests via their servers. If you use an API key, you have no way to enforce this.
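For illustration, here is what the verification side of the API-key scheme might look like in Python. It uses HMAC rather than a plain salted hash (a more robust variant of the same idea), and the key store and field names are placeholders:

```python
# Server-side verification: look up the caller's secret by user id and
# recompute the hash over the request parameters. compare_digest avoids
# leaking information through timing differences.
import hashlib
import hmac

SECRETS = {"client-42": b"per-client-secret"}  # hypothetical key store

def verify(user_id: str, params: dict, provided_hash: str) -> bool:
    secret = SECRETS.get(user_id)
    if secret is None:
        return False
    canonical = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    expected = hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, provided_hash)
```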