Determining a user's general employment status

I'm developing a web application for people who manage nonprofit organizations. This will require me to verify that certain users belong to a nonprofit. Here are a few verification methods I've considered:
Verifying the user's location through JavaScript's Geolocation API.
Maintaining a list of nonprofit domains and requiring that the user's email matches one of the domains.
Sending a physical letter to a business address.
Soliciting a phone number and calling the users to confirm their employment status.
None of these methods seem to be efficient or scalable. The first method may provide an extra little layer of security, but users can easily spoof geolocation. Is there a better way to accomplish this type of verification?
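The email-domain approach from the list above could be sketched like this (a minimal illustration; the domain list and function name are hypothetical, and a real list would need to be sourced from an authoritative registry of nonprofits):

```python
# Hypothetical allowlist; a real one might be built from published
# nonprofit registries (e.g. national charity databases).
NONPROFIT_DOMAINS = {"redcross.org", "unicef.org", "example-charity.org"}

def email_domain_is_nonprofit(email: str) -> bool:
    """Return True if the email's domain is on the nonprofit allowlist."""
    _, _, domain = email.rpartition("@")
    return domain.lower() in NONPROFIT_DOMAINS

print(email_domain_is_nonprofit("jane@redcross.org"))  # True
print(email_domain_is_nonprofit("jane@gmail.com"))     # False
```

Note that a domain match alone only shows the address looks plausible; you would still send a confirmation link to verify the user actually controls that mailbox.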

Related

Using Azure Service Bus to communicate between 2 organizations

We wish to decouple the systems between 2 separate organizations (as an example: one could be a set of in-house applications and the other a set of 3rd-party applications). Although we could do this using REST-based APIs, we wish to achieve things like temporal decoupling, scalability, reliable and durable communication, workload decoupling (through fan-out), etc. It is for these reasons that we wish to use a message bus.
Now one could use Amazon's SNS and SQS as the message bus infrastructure, where our org would have an SNS instance which would publish to the 3rd party SQS instance. Similarly, for messages the 3rd party wished to send to us, they would post to their SNS instance, which would then publish to our SQS instance. See: Cross Account Integration with Amazon SNS
I was thinking of how one would achieve this sort of cross-organization integration using Azure Service Bus (ASB), as we are heavily invested in Azure. But ASB doesn't have the ability to publish from one instance to another instance belonging to a different organization (or even to another instance in the same organization, at least not yet). Given this limitation, the plan is that we would give the 3rd-party vendor one set of connection strings that would allow them to listen to and process messages that we posted, and a separate set of connection strings that would let them post messages to a topic which we could then subscribe to and process.
My question is: Is this a good idea? Or would this be considered an anti-pattern? My biggest concern is that while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled, to the point that the 2 organizations need to agree not just on the endpoints, but also on how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session ID, message ID, etc.).
Is this a valid concern?
Have you done this?
What other issues might I run into?
Using Azure Service Bus connection strings with different Shared Access Policies for senders and receivers (Send and Listen) is exactly how senders and receivers are meant to be given limited permissions, just as you intend to use it.
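As a concrete illustration (the namespace, policy names, entity names, and keys below are all placeholders), the two connection strings you hand the vendor would differ only in the Shared Access Policy they reference:

```text
# Listen-only: the vendor consumes messages you post
Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=VendorListen;SharedAccessKey=<listen-key>;EntityPath=outbound-topic

# Send-only: the vendor posts to a topic you subscribe to
Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=VendorSend;SharedAccessKey=<send-key>;EntityPath=inbound-topic
```

Scoping each policy to a single entity keeps the vendor from touching anything else in your namespace, and either key can be regenerated independently if it leaks.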
My biggest concern is that while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled, to the point that the 2 organizations need to agree not just on the endpoints, but also on how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session ID, message ID, etc.).
The coupling always exists. You're coupled to the language you're using, to the datastore technology used to persist your data, to the cloud vendor you're using. This is not the type of coupling I'd be worried about, unless you're planning to change those on a monthly basis.
The same goes for the communication patterns. Sessions would be a business requirement, not a coupling: if you require ordered message delivery, then what else would you do? On Amazon you'd also be "coupling" to FIFO queues to achieve ordering. Message ID is by no means coupling either; it's an attribute on a message, and if the receiver chooses to ignore it, they can. Yes, you're coupling to the BrokeredMessage/Message envelope and serialization, but how else would you send and receive messages? This is more of a contract for the parties to agree upon.
One name for the pattern of connecting service buses between organizations is "Shovel" (that's what they are called in RabbitMQ):
Sometimes it is necessary to reliably and continually move messages
from a source (e.g. a queue) in one broker to a destination in another
broker (e.g. an exchange). The Shovel plugin allows you to configure a
number of shovels, which do just that and start automatically when the
broker starts.
In the case of Azure, one way to achieve "shovels" is by using Logic Apps, as they provide the ability to connect to ASB entities in different namespaces.
See:
What are Logic Apps
Service Bus Connectors
Video: Use Azure Enterprise Integration Services to run cloud apps at scale

How is location determined from the internet?

I was just installing Ubuntu and noticed it was downloading updates from ca.archive.ubuntu.com. How did it know I was in Canada? As far as I'm aware, an IP packet carries no information regarding physical (geographical) location, and there is no stipulation in the Ethernet standard saying anything about location.
So how do things such as geolocation work? For example, this website tells you which country your IP address belongs to. Is it just a matter of looking up an IP address in a table? If so, where does the data come from? It's not as if people actively sign up to have their IP address associated with their building address.
How does IP address geolocation work? Does it just look up the IP from a table?
Yes, that's exactly how it works.
IP geolocation is nothing more complicated than a database lookup. IP addresses are assigned by IANA to regional governing entities who then assign (sell) them to ISPs, governments and corporations (IBM for example has a dedicated block of IP addresses for themselves because they got into the internet game very early on).
Based on this fact we can sort of figure out where an IP address is located. IANA themselves publish the block level allocations on their site: https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xml which is rendered beautifully in this XKCD comic: http://xkcd.com/195/.
As for more detailed info, like which city an IP address comes from, getting that information requires more data gathering. Some ISPs may tell you their assignment schemes; most don't. So most databases, like whatismyipaddress.com, painfully build theirs up via surveys (simply asking people where they are, or via smartphone apps tapping into GPS), looking up whois databases (which may or may not lie), and careful guessing.
Yes, your IP address is associated with a geolocation as well, although "carries" isn't quite the right word: the packet itself doesn't really contain that information; it is looked up. This link gives a pretty good idea of the kind of details they can get from your ISP, though:
http://whatismyipaddress.com/geolocation-accuracy
Of course all of that revealing information can be partially negated by using a proxy.

Device identification authentication like in google / facebook

How do you identify a user's device on authentication, such that second-factor auth (such as SMS) can be enforced if the user is logging in from an unknown device? I've seen Google and Facebook have this feature, and they're not using a simple IP check: if I have two devices on the same network, it can still detect when I try logging in from an unknown device.
Especially on websites: as far as I know we can only get the user's IP and other information from HTTP headers, so how do we identify the user's device securely?
You could use cookies: if the user doesn't have the appropriate cookie, then require a successful two-factor authentication.
You could also consider browser fingerprinting in combination with IP address / geo-ip information for a more heuristic approach, though that would be a lot more complex to implement and likely more fragile.

How do people handle authentication for RESTful api's (technology agnostic)

I'm looking at building some mobile applications. These apps will 'talk' to my server via JSON over REST (e.g. PUT, POST, etc.).
If I want to make sure a client phone app is trying to do something that requires some 'permission', how do people handle this?
For example:
Our website sells things: TVs, cars, dresses, etc. The API will
allow people to browse the shop and purchase items. To buy, you need
to be 'logged in'. I need to make sure that the person who is using
their mobile phone is really them.
How can this be done?
I've had a look at how Twitter does it with their OAuth, and it looks like they have a number of values in a request header. If so (and I sorta like this approach), is it possible for me to use another 3rd party as the website that stores the username/password (e.g. Twitter or Facebook as the OAuth providers), and all I do is somehow retrieve the custom header data and make sure it exists in my DB, else get them to authenticate with their OAuth provider?
Or is there another way?
P.S. I really don't like the idea of having an API key; I feel it can too easily be handed to another person to use (a risk we can't take).
Our website sells things: TVs, cars, dresses, etc. The API will
allow people to browse the shop and purchase items. To buy, you need
to be 'logged in'. I need to make sure that the person who is using
their mobile phone is really them.
If this really is a requirement then you need to store user identities in your system. The most popular form of identity tracking is via username and password.
I've had a look at how Twitter does it with their OAuth, and it
looks like they have a number of values in a request header. If so
(and I sorta like this approach), is it possible for me to use
another 3rd party as the website that stores the username/password
(e.g. Twitter or Facebook as the OAuth providers), and all I do is
somehow retrieve the custom header data and make sure it exists in
my DB, else get them to authenticate with their OAuth provider?
You are confusing two different technologies here, OpenID and OAuth (don't feel bad, many people get tripped up on this). OpenID allows you to defer identity tracking and authentication to a provider, and then accept these identities in your application as the acceptor or relying party. OAuth, on the other hand, allows an application (consumer) to access user data that belongs to another application or system without compromising that other application's core security. You would stand up OAuth if you wanted third-party developers to access your API on behalf of your users (which is not something you have stated you want to do).
For your stated requirements you can definitely take a look at integrating OpenID into your application. There are many libraries available for integration, but since you asked for an agnostic answer I will not list any of them.
Or is there another way?
Of course. You can store user id's in your system and use basic or digest authentication to secure your API. Basic authentication requires only one (easily computed) additional header on your requests:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
If you use either basic or digest authentication, then make sure that your API endpoints are protected with SSL, as otherwise user credentials can easily be sniffed over the air. You could also forgo user identification and instead effectively authenticate the user at checkout via credit card information, but that's a judgement call.
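The header shown above is just the base64 encoding of `username:password`; the example value decodes to the classic `Aladdin:open sesame` pair from the HTTP spec. A minimal sketch of building it:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_header("Aladdin", "open sesame"))
# Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

Since base64 is trivially reversible, this is only obfuscation, not encryption, which is exactly why the SSL requirement above matters.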
As RESTful services use HTTP calls, you could rely on HTTP Basic Authentication for security purposes. It's simple, direct and already supported by the protocol; and if you want additional security in transport you can use SSL. Well-established products like IBM WebSphere Process Server use this approach.
The other way is to build your own security framework according to your application's needs. For example, if you want your service to be consumed only by certain devices, you'll maybe need to send an encoded token as a header over the wire to verify that the request comes from an authorized source. Amazon has an interesting way to do this; you can check it here.
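The Amazon-style approach mentioned above boils down to signing each request with a shared secret. A simplified sketch (real AWS request signing also canonicalizes headers, includes a timestamp, and derives per-day keys; the function names and message layout here are illustrative):

```python
import hashlib
import hmac

def sign_request(secret_key: bytes, method: str, path: str, body: str) -> str:
    """Compute an HMAC-SHA256 signature over the salient parts of a request."""
    message = f"{method}\n{path}\n{body}".encode("utf-8")
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify_request(secret_key: bytes, method: str, path: str, body: str,
                   signature: str) -> bool:
    """Recompute the signature server-side and compare in constant time."""
    expected = sign_request(secret_key, method, path, body)
    return hmac.compare_digest(expected, signature)

sig = sign_request(b"shared-secret", "POST", "/orders", '{"item": 42}')
print(verify_request(b"shared-secret", "POST", "/orders", '{"item": 42}', sig))  # True
```

Any tampering with the method, path, or body invalidates the signature, and `compare_digest` avoids timing side channels during verification.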

How to monitor a POP, SMTP and Exchange Server for mail activity

We need to write a .Net (C#) application that monitors all mail activity through a POP, SMTP and Exchange Server (2007 and later) and essentially grab the mail for archiving into a document management system. I realise that the way to monitor each type of server would probably be different so I'd like to know what the best (most elegant and reliable) way is to achieve this.
Thanks.
Many countries have rather narrow regulations for what such a system must do and what it must not do in order to be in compliance with the law. If you are developing a product for a company in SA that wants to sell it internationally, I would suggest that you need a more targeted approach.
Depending on the legal framework, your solution will have to intercept and archive all emails, or just a subset.
For instance, some countries do not allow the company to store private emails of employees, in which case the archival process needs to be configurable with rules that the employee can control.
If the intent is to archive each and every email, then the network-level approach that Jimmy Chandra suggested is better, because it is easier to deploy.
I don't think you need to worry about POP, right? It is not used for sending mail (unless you need to monitor access to emails too).
Regarding Exchange, versions from 2000 onwards have journaling support (I don't know about previous ones), so a mail is copied to a mailbox as it is sent/received (there are several different options depending on the Exchange version; check it out). Then you can read that mailbox, or set a rule to forward it to an external SMTP server and have your app listen to it.
For other SMTP servers, it would be possible to get a similar approach via forwarding rules, etc., and some might have custom support as Exchange does.