Is a software token a valid second factor in multi-factor security? - language-agnostic

We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"?
Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough.

I would say "no". I don't think you can really get the "something you have" part of multi-factor authentication without issuing something the end user can carry with them. If you "have" something, it implies it can be lost - not many users lose their entire desktop machines. The security of "something you have", after all, comes from the following:
you would notice when you don't have it - a clear indication security has been compromised
only one person can have it, so if you do, someone else doesn't
Software tokens do not offer the same guarantees, and I would not in good conscience class them as something the user "has".

While I am not sure it is a "valid" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing.
As for "something the user has" vs. "something the user knows": if it is something resident on the user's PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under "things the user has". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is "something the user knows".
With regards to commercial solutions already in existence: we use a product for Windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations.

A software token is a second factor, but it probably isn't as good a choice as an RSA fob. If the user's computer is compromised, the attacker could silently copy the software token without leaving any trace it's been stolen (unlike an RSA fob, where they'd have to take the fob itself, so the user has a chance to notice it's missing).
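If you did go down the software-token road, the usual shape is a TOTP/HOTP-style generator (RFC 6238/4226): a shared secret plus the current time step fed through an HMAC. A minimal Python sketch, with the secret and parameters purely illustrative:

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, time_step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style code: HMAC-SHA1 over the current time-step counter,
    then dynamic truncation down to a short decimal code."""
    counter = int(time.time()) // time_step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical secret; in practice it is provisioned per user and stored on the PC,
# which is exactly the "copy it silently" weakness described above.
print(totp(b"12345678901234567890"))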

I agree with freespace that the image is not part of the multi-factor authentication for the user. As you state, the image is part of the anti-phishing scheme. I think the image is actually a weak authentication of the system to the user: it assures the user that the website is the real one and not a fake phishing site.
Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system?
For a software-based token system, you may want to investigate the Kerberos protocol, http://en.wikipedia.org/wiki/Kerberos_(protocol). I am not sure whether this would count as multi-factor authentication, though.

What you're describing is something the computer has, not the user.
So you can supposedly (depending on implementation) be assured that it is the computer, but you get no assurance regarding the user...
Now, since we're talking about remote login, perhaps the situation is personal laptops? In that case, the laptop is the something you have, and of course the password to it is the something you know... Then all that remains is a secure implementation, and that can work fine.

Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offers mutual HTTPS authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks.
You can find out more here:
http://www.wikidsystems.com/learn-more/technology/mutual_authentication
and
http://en.wikipedia.org/wiki/Mutual_authentication
and here is a tutorial on setting up mutual authentication to prevent phishing:
http://www.howtoforge.net/prevent_phishing_with_mutual_authentication.
The image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principles, it's pretty weak. What's to stop a MITM from presenting the image too? It's also less than user-friendly, IMO.
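For what it's worth, the client side of mutual TLS is not much work once certificates are issued. A sketch using Python's requests library, with the URL and file names as placeholders only:

import requests

# Hypothetical file names: client.crt/client.key would be issued to the user,
# and ca.pem is the CA that signed the server's certificate.
resp = requests.get(
    "https://remote.example.com/login",
    cert=("client.crt", "client.key"),  # client proves its identity to the server
    verify="ca.pem",                    # client verifies the server's identity
)
resp.raise_for_status()
print(resp.status_code)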


Using an SSL certificate [closed]

So, I'm brand new to creating an HTTPS-compatible site.
I'm currently working with a client for whom I developed a custom Facebook tab; the files are currently hosted on my server, for which I have not purchased a security certificate.
My client has a security certificate for one of their websites, which I do not have access to. My client sent me a text file with a combination of letters and numbers, and I have absolutely no idea what I'm supposed to do with it.
Anybody have any clue how I'm supposed to use it?
In short, you (probably) can't use it for that. But we need to check to be sure.
Background
As you know, SSL is used to secure the communication between two systems, one a server and the other a client (well, for the purposes of this communication link anyway). For the code that sits on the initiating, client end of the communication channel to know that it's talking to the right server, it needs not just secure communication but also the identity of the server. (Without that, DNS spoofing or any number of IP-level tricks would be utterly massive problems.) This is where certificates come in.
Servers have a cryptographic identity (a public/private key-pair) that they use as part of the bootstrapping of the SSL connection, which proves who they are. The public part of that is told to anyone who asks, and the server proves that it has the private part through the fact that it can do the key-pair-based cryptography (basically, that's mathematical magic, a.k.a. number theory). Then, all the client has to do to know whether to trust the connection is to work out whether it trusts the identity stamped into the public key. This can either be by having been previously told directly “trust this certificate” or by the fact that it was digitally signed by someone it trusts (which is how the Certificate Authority system works).
A certificate is basically the public key of a key-pair, at least one digital signature, plus additional information. Examples of the additional information that could be there are the name of the host for which this is a certificate, the period of time for which the certificate is valid, who the administrative contact is, or where to go to find out whether the certificate has been withdrawn early. There are many other options.
What to do with a bare certificate?
With a bare certificate (in PEM format, as you say) all you can do is add it to your collection of trusted certificates or look at the information encoded within the certificate. So we'll start by looking at the information. For that, we use the openssl program (which has a horrible command line interface):
openssl x509 -in thecert.pem -text -noout
That will splurge a whole bunch of information out. The most important part is the “Subject” field; what or who is this certificate talking about? Since this is about HTTPS (which imposes a few extra constraints of its own) we should check whether that contains a hostname of some kind, and what host it is talking about.
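If you would rather read those fields programmatically than eyeball openssl's output, the same information is available via the third-party cryptography package; a sketch assuming the file is called thecert.pem:

from cryptography import x509

with open("thecert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())   # who/what the certificate is about
print("Issuer: ", cert.issuer.rfc4514_string())    # who signed it
print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)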
Now you have the information to be able to figure out what's going on.
If the whole certificate matches up (especially the digital signature) with what you've already got deployed on your own HTTPS-enabled server, then your customer has just sent you back something you already have. Ho hum.
If the hostname is for a machine that you control and your customer doesn't (e.g., your development server) then your customer has just tried to get a certificate on your behalf. That's a bit of a no-no, but I advise taking it well — especially if you've not yet set up HTTPS. For the purposes of testing, you can get your own single-host certificate (that signs a public key where you've generated the private key yourself) for next to nothing. It's also a reasonable expense to bill your customer.
If the hostname is for the machine where the customer has told you they want to deploy your code in production, then they've just given you something you don't really need. I suppose it might be relevant for client code that wants to connect to the deployment server, but that's not as useful as all that; certificates expire, stuff moves round, and all sorts of things happen in production that can mean that it is useful to issue a new server certificate. Having to push updates to all the deployed clients just because someone accidentally deleted the server certificate without keeping a backup (a more common thing than you might wish) would Truly Suck. Thus, the deployment host certificate is not something you should need.
If it's none of these, and it's a long-lived certificate (check the Validity field from the information you printed out before), then it might actually be the certificate of a back-end service that you're supposed to talk to. Or the certificate of a private CA that signs all the certificates of the back-end services that you talk to. (Are you doing this? I don't know, and I don't know your app, but it's quite possible.) In this case you would add the certificate to the list of trusted certificates in your code (the exact way depends on how your code handles SSL), and this is the only use I can think of for a certificate at the stage you're at.
Trouble is, I don't think (on the basis of what you write) that it's all that likely. Talk to your customer; security is something where you want to get it right, and use and trust of certificates is key to that.
If it's truly none of the above, talk to the customer and say you're a bit confused. I know I am in this case!

Demystifying Web Authentication

I'm currently researching user authentication protocols for a website I'm developing. I would like to create an authentication cookie so users can stay logged in between pages.
Here is my first bash:
cookie = user_id|expiry_date|HMAC(user_id|expiry_date, k)
Where k is HMAC(user_id|expiry_date, sk) and sk is a 256 bit key only known to the server. HMAC is a SHA-256 hash. Note that '|' is a separator, not just concatenation.
This looks like this in PHP:
$key = hash_hmac('sha256', $user_id . '|' . $expiry_time, SECRET_KEY);
$digest = hash_hmac('sha256', $user_id . '|' . $expiry_time, $key);
$cookie = $user_id . '|' . $expiry_time . '|' . $digest;
I can see that it's vulnerable to Replay Attacks as stated in A Secure Cookie Protocol, but should be resistant to Volume Attacks, and Cryptographic Splicing.
THE QUESTION: Am I on the right lines here, or is there a massive vulnerability that I've missed? Is there a way to defend against Replay Attacks that works with dynamically assigned IP addresses and doesn't use sessions?
NOTES
The most recent material I have read:
Dos and Don'ts of Client Authentication on the Web
aka Fu et al.
(https://pdos.csail.mit.edu/papers/webauth:sec10.pdf)
A Secure Cookie Protocol
aka Liu et al.
(http://www.cse.msu.edu/~alexliu/publications/Cookie/cookie.pdf)
which expands on the previous method
Hardened Stateless Session Cookies
(http://www.lightbluetouchpaper.org/2008/05/16/hardened-stateless-session-cookies/)
which also expands on the previous method.
As the subject is extremely complicated, I'm only looking for answers from security experts with real-world experience in creating and breaking authentication schemes.
This is fine in general; I've done something similar in multiple apps. It is no more susceptible to replay attacks than session IDs already were. You can protect the tokens from leakage and replay by using SSL, same as you would for session IDs.
Minor suggestions (pulled together in the sketch after this list):
Put a field in your user data that gets updated on change-password (maybe a password-generation counter, or even just the random salt), and include that field in the token and the signed part. Then when the user changes their password they also invalidate any other stolen tokens. Without this you are limited in how long you can reasonably allow a token to live before expiry.
Put a scheme identifier in the token and signed-part, so that (a) you can have different types of token for different purposes (eg one for auth and one for XSRF protection), and (b) you can update the mechanism with a new version without having to invalidate all the old tokens.
Ensure user_id is never re-used, to prevent a token being used to gain access to a different resource with the same ID.
Pipe-delimiting assumes | can never appear in any of the field values. This probably works for the numeric values you are (presumably) dealing with, but you might at some point need a more involved format, eg URL-encoded name/value pairs.
The double-HMAC doesn't seem to really get you much. Both brute force and cryptanalysis against HMAC-SHA256 are already implausibly hard by current understanding.
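Here is a minimal Python sketch combining the version field, the password-generation counter, and a constant-time comparison (field names, the separator, and key handling are illustrative, not a drop-in replacement for the PHP above):

import hashlib
import hmac

SECRET_KEY = b"32+ random bytes known only to the server"   # placeholder

def make_token(user_id, expiry, pw_generation, version=1):
    """Token layout: v<version>|user_id|expiry|pw_generation|HMAC(payload)."""
    payload = f"v{version}|{user_id}|{expiry}|{pw_generation}"
    mac = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{mac}"

def check_token(token, now, current_pw_generation):
    """Return the user_id if the token verifies and is still live, else None."""
    try:
        version, user_id, expiry, pw_gen, mac = token.split("|")
    except ValueError:
        return None
    payload = f"{version}|{user_id}|{expiry}|{pw_gen}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):        # constant-time comparison
        return None
    if int(expiry) < now or int(pw_gen) != current_pw_generation:
        return None                                   # expired, or password changed since issue
    return int(user_id)

The pw_generation field is the change-password counter from the first suggestion: bump it and every outstanding token for that user dies.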
Unless your transactions/second will tax your hardware, I would only pass a hash in the cookie (i.e. leave out the user_id and expiry_date -- no sense giving the bad people any more information than you absolutely have to).
You could make some assumptions about what the next dynamic IP address should be, given the previous dynamic IP address (I don't have the details handy, alas). Hashing only the unchanging part of the dynamic IP address would help in verifying the user even when their IP address changes. This may or may not work, given the varieties of IP address allocation schemes.
You could get information about the system and hash that also -- in Linux, you could uname -a (but there are similar capabilities available for other OSes). Enough system information, and you might be able to skip using the (partial) IP address entirely. This technique will require some experimentation. Using only normally-browser-supplied system information would make it easier.
You need to think about how long your cookies should remain fresh. If you can live with people having to authenticate once daily, that would be easier on your system authentication coding than allowing people to authenticate only once a month (and so on).
I would consider this protocol very weak!
Your session cookie is not a random source with high entropy.
The server must redo the HMAC computation on every request to verify a user.
The security of ANY user relies only on the security of the server key sk.
The server key sk is the most vulnerable part here.
If anyone can guess it or steal it, they can log in as a specific user.
So if sk is generated for each session and user, then why the HMAC?
I think you will use TLS anyway; if not, consider your protocol broken because of replay attacks and eavesdropping in general!
If sk is generated for each user, but not for each session, it is similar to a 256-bit password.
If sk is identical for all users, someone just has to crack 256 bits and he can log in as any user he wants. He only has to guess the id and the expiration date.
Have a look at digest authentication.
It's per-request authentication, specified by RFC 2617.
It is secure against replay attacks, using nonces sent with each request.
It is secure against eavesdropping, using hashing.
It is integrated into HTTP.
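For reference, the response a client sends under RFC 2617 Digest authentication is just a couple of nested MD5 hashes over the credentials, the server's nonce, and the request. A sketch of the qop="auth" case, with all values as placeholders:

from hashlib import md5

def h(s):
    return md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    ha1 = h(f"{user}:{realm}:{password}")    # the password itself never travels on the wire
    ha2 = h(f"{method}:{uri}")
    return h(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Hypothetical values as they might appear in a WWW-Authenticate challenge.
print(digest_response("alice", "example.com", "s3cret",
                      "GET", "/account", "dcd98b7102dd2f0e", "00000001", "0a4f113b"))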

How to defend against users with Multiple Accounts?

We have a service where we literally give away free money.
Naturally said service is ripe for abuse. To defend against this we do the following:
log ip address
use unique email addresses (only 1 acct/email addy)
collect more info like st. address, phone number, etc.
use signup captcha
BHOs (I've seen poker rooms use these)
Now, let's get real here -- NONE of this will stop a determined user.
Obviously IP addresses can be changed via a proxy (which could be blacklisted via Akismet), but they change anyway if the user has a dynamic IP, and more than one user may sit behind a NAT'd network (can we say almost everyone?)
I can sign up for thousands of unique email addresses each hour -- this is no defense.
I can put in fake information taken from lists for street addresses and phone numbers.
I can buy captchas from captcha solving services (1k for $5).
BHOs seem only effective for downloadable software -- this is a website
What are some other ways to prevent multiple users from abusing the service? How do all the PPC people control click fraud?
I know we could actually call the person but I don't think we are trying to do that anytime soon.
Thanks,
It's pretty difficult to generate lots of fake phone numbers that can send and receive SMS messages. SMS verification could go a long way towards cutting down on fraud. Of course, it also limits you to giving away free money to cell phone owners.
I think the only way is to bind your users' accounts to 'real world' information, like his/her passport number, for instance. Of course, you'll need to make sure that information is securely stored and to find some way to validate it.
Re: signing up for new email accounts...
A user doesn't even need to do that. Please feel free to send your mail to brian_s@mailinator.com, or feydr.asks.a.question@spamherelots.com, or stackoverflow@safetymail.info, or my_arbitrary_username@zippymail.info. I haven't registered any of those email addresses, but all of them will work.
Those domains are owned by ManyBrain, and they (and probably others as well) set the domain to accept any email user. ManyBrain in particular then makes the inboxes for those emails publicly accessible without any registration (stripping everything but text from the email and deleting old mail). Check it out: admin@mailinator.com's email inbox!
Others have mentioned ways to try and keep user identities unique. This is just one more reason to not trust email addresses.
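If you do keep email as a weak signal, rejecting the well-known disposable domains is at least cheap. A sketch with a deliberately tiny, hypothetical blocklist (real lists are community-maintained and change constantly):

DISPOSABLE_DOMAINS = {
    "mailinator.com", "spamherelots.com", "safetymail.info", "zippymail.info",
}

def looks_disposable(email):
    """True if the address uses a domain from the (incomplete) blocklist."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

print(looks_disposable("brian_s@mailinator.com"))   # True
print(looks_disposable("someone@example.org"))      # False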
First, I suppose (hope) that you don't literally give away free money but rather give it to be used with your service or something like that.
That matters, as there is a big difference between users trying to get free money from you that they can spend on expensive cars vs. money that can only be spent on your service, which would be much more limited.
Obviously many more users will try to fool the system in the former case than in the latter.
Why does it matter? Because it is all about the balance between your control and your users' annoyance. I see many answers concentrating on the control part, so let's go through annoyance, shall we?
Log IP address. What if I am the next guy on a computer in, say, an internet shop and the guy before me already used that IP? The other guy left your hot page that I now see, but I am screwed because the IP is blocked. Yes, I can go to another computer, but it is an annoyance and I may have other things to do.
Collecting physical addresses. For what??? Are you going to visit me? Or start sending me spam letters? Let me guess: more often than not you get addresses with misprints at best and fake ones at worst. In fact, it is much less hassle for me to give you a fake address than to deal with whatever spam letters I'd have to recycle in an environment-friendly way. :)
Collecting phone numbers. Again, why should I trust your site? This is a real story: I gave my phone number to an obscure site, then later I started receiving occasional messages full of nonsense like "hit the fly". I simply deleted them, only to discover later, and by accident, that I was actually charged 2 euros to receive each of those messages!!! Do I want those hassles? Obviously not! So no, buddy, sorry to disappoint, but I will not give your site my phone number unless your company is called Facebook or Google. :)
Use signup captcha. I love that :). So what are we trying to achieve here? Will the user who is determined to abuse your service have problems typing in a couple of captchas? I doubt it. But what about the "good user"? Are you aware of how annoying captchas are for many users??? What about users with impaired vision? Even without that, most captchas are so bad that they make you feel like you have impaired vision! The best advice I can give: if you care about user experience, avoid captchas like the plague! If you have any doubts, do your online research first!
See here for more discussion about control vs. annoyance, and here for some more thoughts about being user-friendly.
You have to bind their information to something that is 'real world', as Rubens says. Of course, you also need to be able to verify this information (I can just make up passport numbers all day if you don't check to make sure they're correct).
How do you deliver the money? Perhaps you can index this off the paypal account, mailing address, or whatever you're sending the money to?
Sometimes the only way to prevent people abusing a system is to not have the system in the first place.
If you're doing what you say you're doing, "giving away money to people", then surprise surprise, there will be tons of people with more time available to try to find ways to game the system than you will have to fix it.
I guess it will never be possible to have an identification system which identifies fake identities that is:
cheap to run (I think it's called "operational cost"?)
cheap to implement (ideally a one-time cost - what do you call that?)
has no Type-I/Type-II errors
is scalable
But I think you could prevent users from having too many (to pick a fairly arbitrary number: more than 50) accounts.
You might combine the following approaches:
IP address: can be bypassed with VPN
CAPTCHA: can be bypassed with human farms (see this article, for example - although they claim that their test can't be that easily passed on to other humans, I doubt this is true)
Ability-based identification: can be faked, when you know what is stored and how exactly the identification works, by acting randomly but with a given distribution (example: brainauth.com)
Real-world interaction: this might be the best one, but I guess it is expensive and not many users will accept it. Also, for some users/countries it might not be possible. (Example: Postident in Germany, where the Post wants to see your identity card. I guess this can only be done at massive scale by the government.)
Other sites/resources: this basically transfers the problem to other sites. You can use services where it is not allowed/uncommon/expensive to have much more than 1 account
Email
Phone number: e.g. by using SMS, see Multi-factor authentication
Bank account: PayPal; transfer a small amount of money, or ask them to transfer a random (small) amount to you (which you will send back).
Social based
When you take the social graph (vertices are people, edges are connections), you will expect a certain distribution. You know that you are a single human and you know some other people. So you have a "network of trust" (in quotes, because I think this term might be used in other contexts as well). Now you might not trust people / networks that interact heavily with your service but are either isolated (no connections) or connect one large group with another large group ("articulation points"). You also might not trust fast-growing, heavily interacting, new, isolated graphs.
When a user provides content that is liked by many other users (who you trust), this might be an indicator that there is a real human creating it.
We had a similar issue recently on our website; it is really a hassle to solve if you are providing a business over a one-time or monthly recurring free-credits system.
We have been using a fraud detection solution, https://fraudradar.io, for a while and it has helped us a lot in cleaning out most of the spam activity. It is pretty customizable, with:
IP checks
Email domain validity
Regex rules
Whitelisting options per IP, email domain etc.
Simple API to communicate through
I would suggest checking that out.

Do we really need email confirmation?

I've gotten into a habit of using the standard register->send activation email->activate account process for every site that supports user authentication and free registration without questioning if I really need this.
What are your thoughts on this? If I have captcha on the registration form is the email confirmation process really necessary?
EDIT:
OK, so the general consensus seems to be that by getting users to confirm the email they entered, I'll keep them from putting someone else's email in there.
What about when I let users edit their profile/settings and they enter another email?
If I need to keep them from entering other people's addresses, then I'd need to confirm that email address (by temporarily deactivating their account) every time they change it.
Captcha+activation prevents bots AND spoofed people
Well, basically it is, since each part prevents one problematic scenario:
Captcha prevents (if you use strong captcha like reCaptcha) bots from registering new users
Email activation prevents people from registering other people (by their email address)
I guess this is a valid everyday pattern for registration that's widely acknowledged by the IT community.
EDIT
Yes. When you let users change their email address, you'd have to repeat the email activation procedure to keep it robust.
But you don't have to deactivate their account while doing it. All you have to do is keep a pending email-change activation open. If it gets activated, you change the email address at that point (not when they request the change); otherwise the old one is still used.
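One way to implement that pending-change step, sketched in Python with an in-memory dict standing in for whatever database you use (names and the 24-hour TTL are assumptions):

import secrets
import time

PENDING = {}   # user_id -> (new_email, token, expires_at); a DB table in real life

def request_email_change(user_id, new_email, ttl=86400):
    """Record the requested address and return a token to email to it."""
    token = secrets.token_urlsafe(32)
    PENDING[user_id] = (new_email, token, time.time() + ttl)
    return token

def confirm_email_change(user_id, token):
    """Apply the change only when the emailed link is used; until then the old address stays active."""
    new_email, expected, expires_at = PENDING.get(user_id, (None, None, 0))
    if expected is None or time.time() > expires_at:
        return None
    if not secrets.compare_digest(token, expected):
        return None
    del PENDING[user_id]
    return new_email   # the caller updates the account to this address now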
If you don't confirm an e-mail, you're assuming that the user registering for the service owns that email account. How can you start sending a lot of system e-mails, password resets, etc. to a person who has nothing to do with your system? I would be really pissed off if it were my e-mail.
Another scenario: what if the person misspelled his e-mail when registering? Suppose he doesn't check his "account settings" in your application, doesn't change his email, and needs to reset his password. If the e-mail was registered wrongly, it's your fault for not checking it beforehand.
Of course, I'm just saying this to services that would REALLY demand an account to be created. Avoid the login barrier when possible, or use openid when your service isn't so critical.
You should give serious consideration to supporting OpenID. http://openid.net/get-an-openid/what-is-openid/
The key benefit for OpenID is that it reduces the complexity for your user. There is no reason to force people to remember login credentials for hundreds of sites when a viable alternative exists. There is no worldwide netizen database - and there likely never will be - but OpenID simplifies the situation greatly.
I know that as a user I found the registration process for Stack Overflow to be painless and easy. I wish more sites used OpenID.
It's the lowest-level attempt at identity validation. It encourages users to re-use the same account when they return (by having a common, shared identifier you and they can use to reconnect), and it prevents impersonation, because it requires access to the claimed identity as proof.
It's not perfect, but something by definition works infinitely better than nothing.
If identity doesn't matter on your site (e.g. your service is throwaway after each use) then you don't need email activation. Otherwise, you probably want it.
On my site, I let users sign up and do everything non-public until they confirm their email address. Because I run a gaming website, that means users can earn medals and post scores, but they can't post in the forum or comment on the blog until they verify their email address.
I find it works pretty well. I have 16,000 registered users.
I find it both unnecessary and annoying. If I can, I avoid doing this.
However, I do do this if 1) email will be sent by the program, so I can test if the email address is valid, or 2) this is a very large, public-facing website, in which case I want to filter out as many potential problems as possible.
For most basic sites, I don't bother with either. Both email activation and captcha are relatively easy for dedicated spammers to bypass and overcome and do little but cause an annoyance to most of the users, driving away at least a certain percentage who might have otherwise signed up. I've found in my experience, focusing more on spam filters for member posted content has a better ROI overall.
For sites with more serious content, you'll typically have more serious users. In cases like that, I'll throw everything I've reasonably got available at it to counter the spam.
I find it useful when an email is sent for confirmation. This makes sure that I am the one who has registered with that email address.
Even with a captcha, you can register someone else's email address, although they may or may not approve that confirmation.
You only seem to need e-mail confirmation to confirm identity, not to send useful content by e-mail. But e-mail confirmation is only one means to that end. You may consider others, preferably less intrusive ones.
Generally you can check something that
you are (e.g. fingerprint, iris scan)
you have (e.g. token, creditcard, key, access to an e-mail account)
you know (e.g. PIN, password, your mom's weight, name of your favorite deceased pet, the optimistic length of your most private bodyparts measured in inches)
Also, you can delegate the check to others; the creditcard company, phone company, someone's friends.
Example: GoogleMail could not ask for a confirmation e-mail address upon creation of your GMail account. Instead, the early adopters had a limited supply of "invites" to share with friends.
So - unless you actually need me to receive information you'd e-mail, which I generally hate anyway - you might be inclined to resort to more creative/fun means.

How can I convince IT that F/OSS software isn't evil? [closed]

When trying to link some well-established tools to my company's Active Directory, I hit a roadblock. I was told that:
"Sorry, I cannot trust our domain admin password to [F/OSS] software...".
This question deals specifically with how to convince IT that F/OSS software isn't (automatically) less trustworthy than any other software just because it's free/oss.
I'm doing fine with adopting OSS software (I'm a linux ninja at heart) so to put it another way: How can I promote the acceptance of OSS at my company?
The technical issue of tying into AD without an admin account is for another post.
EDIT:
I got some clarification on these issues. This really has little to do with Active Directory and everything to do with trust of F/OSS in general. So I think my original bolded questions are still valid; just ignore the part about the "admin password".
Any IT person worth their salt will be well aware of the benefits of open source software.
The answer that has been given sounds to me like a palm-off; some possible reasons why they don't want to implement it could be:
Possible lack of enterprise-level support for that specific open source software
Not wanting non-IT department employees to be modifying the active directory (you)
The software you have found doesn't have the industry recognition that other similar products have
There is no perceived benefit for the IT department for the work it would require them to do (both in the initial setup and ongoing maintenance)
I work as a sysadmin. From my perspective this question isn't about trusting open source software specifically. Your IT person mentioned a specific case, saying he didn't trust it with the domain admin username and password. I think he may be concerned with the software storing that username and password. If that is in fact how it works, I would deny the request whether the software were open source or commercial. No properly set up system should need to store the domain admin username and password; it should use an account with lower privileges, or, depending on the tool, if it is interactive, be set up to ask for credentials at runtime and authenticate against the domain.
Bottom line: you need to work with IT to come to a better understanding of your needs and theirs. Things need not always be a yes-or-no issue.
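As a concrete illustration of that last point, a tool rarely needs the domain admin account at all; it can bind to AD with whatever low-privilege credentials the operator supplies at runtime. A sketch assuming the third-party ldap3 package, with the server and account names as placeholders:

from getpass import getpass

from ldap3 import NTLM, Connection, Server

# Hypothetical domain controller and low-privilege service account;
# the password is prompted for at runtime and never written anywhere.
server = Server("dc01.example.corp")
conn = Connection(
    server,
    user="EXAMPLE\\svc_readonly",
    password=getpass("AD password: "),
    authentication=NTLM,
)
if conn.bind():
    print("Authenticated against the domain")
else:
    print("Bind failed:", conn.result)
conn.unbind()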
I would try it this way:
Why would open-source software be less trustworthy than its closed-source equivalent? If anything, the transparency of its code would require that it be even more trustworthy, in terms of private data storage such as passwords, since any attempt to subvert it would be discoverable by examining the source code.
This, of course, is only valid if the company compiles the source themselves, and does not trust a binary distribution.
Ask them if they have read the license, since that is what they object to. Ask them specifically what in the license is an issue for them. If what they are really resisting is open source software, then that is a separate issue from resisting the GPL.
Why not run as a non domain admin? I can understand why they don't want to give a domain admin password to any software. Especially if there is only one "Domain Admin" account.
How about you determine exactly the permissions needed to run the software and request a new account with only those permissions. You could convince them to put this account in a different OU, with additional auditing. If the software provides value, you are creating a process for them to "audit" and decide to trust OSS.
Identify exactly what he cannot trust about F/OSS software and then you can tailor your explanation to address his concerns.
Is it concern about backdoors being coded in?
Is it concern about code quality that leads to security risks?
Is it concern about how soon security risks will be fixed?
"how to convince IT that F/OSS software isn't (automatically) less trustworthy than any other software just because it's free/oss."
"How can I promote the acceptance of OSS at my company?"
You can't.
All you can do is the following.
Find the F/OSS they currently use. This can be hard. In some cases, it's trivial because many folks use Apache and Java without thinking about it.
Ask how what you're going to use is different from what they're already using.
That will make the case for exactly one new piece of F/OSS. Or, they'll go crazy and banish stuff they've been using.
You can't make a general understanding happen. You can only make the case one specific detailed case at a time until someone else starts to piece the big picture together on their own.
Sometimes they are not, sometimes they are. You need evidence to back up your thoughts.
CVE numbers don't lie. Go to http://cve.mitre.org/ , http://www.securityfocus.com/bid/, http://www.secunia.com and compare the commercial and OSS versions of the same line of products that you'd choose.
See which one is better: sometimes the OSS product is really rubbish, such as PHPNuke, but sometimes it's darn good when it comes to security, such as qmail.
Also don't forget you need to choose an OSS solution with a good community; otherwise you might find the project is dead after a year. This is possible in the commercial world too, but let's face it, less likely.
I would put the onus on IT to prove their case. Simply ask "why not?", or possibly "what evidence do you have that this is any less secure than non-GPL software?". If they attempt to give some explanation, you can take some of the other suggestions to explain their misconceptions to them. If they just stubbornly stand their ground, they are standing in the way of you doing your job - and for no good reason. Gently explain to them how you have found incredible value (ie free) software that adds value to the company, and that you're sure the higher levels of management would want you to take advantage of it. Hopefully this will remind them they have no evidence. If even this fails and it's important, you could then take it to higher levels of management, but proceed with caution as it's a sure fire way to make enemies.
What tools do you want to use? Make the business case about how much time/$$ will be saved by using these tools. Give examples of other, highly-successful companies (Google comes to mind) that use these tools.
First and most importantly, make sure these decisions by IT are being recorded somewhere. Email or whatever. If you can't do your job effectively because of them, make sure you have enough documentation to redirect the blame where it belongs.
Look beyond IT. Your sysadmin may be following rules set down somewhere else in the company, typically a legal department. If that's the case, you may have a company lawyer who doesn't know about software or FOSS reacting with a corporate lawyer's typical reaction to the unknown - forbid it. After you've demonstrated cost and security benefits, you may need to ask the company to reach out to a legal expert in the area of FOSS.
You're talking about Windows admins. Just point out how MSFT has handled recent security issues (like the recent IE holes that have mainstream media telling people to use alternate browsers) and ask how OSS can be any worse.