Local Machine Admin rights?

What is the opinion of everyone out there about developers having local admin rights on their own machines? Or at least the ability to elevate (such as through runas) without having to rely on someone else?

It depends on what you mean by "Developer".
DO NOT Grant Local Admin if...
Your "Developers" take business requirements and translate them verbatim into program code, in a well-developed, proven environment.
DO Grant Local Admin if...
Your "Developers" are Software Engineers that have the freedom to be creative, find new solutions, challenge the status quo of the software development process.

I think it is a good idea. I'm for it. For all the development teams I have managed, I have insisted on it from IT.
Also, I usually push for rights to allow programmers to temporarily disable on-access virus scanning on their workstations.

Absolutely necessary. And regular users should never have admin privileges.

Depends on what you want your developers to write. For example, as a proof of concept, I fired up Eclipse + Apache + MySQL + PHP (XAMPP) on a machine where I didn't have admin rights and I was able to do a lot. On the other hand, there is no way I'd be able to do effective ASP.NET + SQL development on a locked-down/no-admin-rights machine.
Also, if the code under development has to operate without admin rights, it can be useful to develop that way, e.g. WinForms apps.
Otherwise, as a practical matter, if the network admins cripple the corporate machines enough, developers will stop using them and use their personal machines, and the suits will stop asking internal developers for work and start hiring outside contractors (who do own machines they have rights to). I've seen these patterns, too.

I've found that most development tools require the user to be an admin to work correctly in all cases. It's a little annoying since, as developers, we should switch to a non-admin account for testing, but it's doable.
However, if your question is "should we have to be an admin user to use our tools," then my answer would be no. Unless you're writing a driver, there's really no good reason (that I can think of) where an admin account should be required.

Regardless of what the opinions of people here are -- and I think it's a pretty safe bet that most developers are going to think they need admin rights -- the question of whether your developers actually get admin rights is in the end going to come down to a complex interplay of politics, security policy, and Clue distribution at different levels of your organization.
In other words, if you're trying to poll the crowd here to get ammo for an argument, you've probably already lost...

This may depend on what you are trying to achieve and whether any COTS products you are using require administrative privileges, either to use the product or to get it to run correctly in your environment.
For the developers in my area, they have to be approved for administrative rights on their local box. It has been helpful in troubleshooting issues, but has not always been necessary.
One reason you may not want to grant those rights is that the end user might not have them. Developing without privileges can help surface errors or issues, such as writing data to directories where the user has no access. Depending upon where the software is being installed, there may be restrictions on the user's privileges, so not having them during development may help in making sure it will work in the intended environment.

Is it bad to have your test and production environments on the same machine?

Would it be bad to have things set up so that MySite.com is production and test.MySite.com is test? Both running off the same machine. The site doesn't get a lot of traffic.
UPDATE
I am talking about an ASP.NET web application running on a Windows server.
Yes, it is a bad idea.
Suppose your test code has a bug that consumes all memory/cpu/disk space? Then your production site goes down.
Have separate machines for production and test and use DNS to point the URLs to each.
Edit (more points):
If the sites share a machine, they share an IP address, so when using an IP address to access a site, you will not know whether you are on production or test.
When sharing the same machine, deployment can be tricky, you have to be extra careful not to deploy untested code to production (easier to do, since both live on the same machine).
The security considerations for production and test should be separate - this kind of setup makes it more difficult.
It'd be really hard to test environment updates (a new version of PHP/Perl/Python/Apache/kernel/whatever) with test and production on the same machine.
It is a bad idea. When you have a new untested feature it may kill the production site.
Is compliance with any kind of standard an issue? Generally you want developers to have lots of access to test environments so they can resolve issues. However, it's not always a good idea (or even allowed) for developers to have the same level of access to production systems.
In theory, yes. When developing, there are a lot of things that could go awry, as @Oded mentioned. By having a dedicated webserver run your main site, you avoid the complexity of duplicated databases, virtual hosts, etc. You could certainly make test.mysite.com publicly available, though.
As a customer, often times, the first thing I do is visit a company's website. If the site is inaccessible, even briefly, it looks unprofessional and I quickly lose interest. You do not want to lose business because you were too cheap to buy one extra computer!
Edit: I see from your comments above that this is indeed a business server. Answer updated.
"Good, bad, I'm the guy with the gun." - Ash
Bad is really a range. It can be anywhere between replacing motherboards with the power plugged in and wet hands, to using excessively short variable names. What you really want to know is what are the tradeoffs. You obviously know some of the benefits or you wouldn't be thinking about using the production server for testing.
The big con is that the test code is running in a shared environment with production. If there is no sandbox (process limits, memory limits, disk limits, chroot'd file system, etc.) you risk impacting the production server if something goes awry in the testing. You may accidentally DoS yourself by consuming all of a particular resource. You may accidentally remove the production site. Someone may think it's okay to do a load test. If you are fine with taking those risks, then you can go ahead and run your test app on the production server.
BTW: It is bad.
As your question is not platform specific, I'll try to answer in a general form. I'll also refer only to the "same machine" part of your question, since the "domain name" part should be very easy to change ... if all common precautions have been taken.
What you really need is to isolate environments. Depending on the technology used, that may mean "separate machines" or not.
As an example, a lot of small to medium banks in the world run their critical systems on one mainframe. It's not unusual for one of those beasts to cost (peripherals and all) six figures. Some of them opted to have separate, smaller machines for development and testing, while others run hundreds of environments (sometimes as VMs) on the same machine. The tricky detail is that mainframe hardware and OS do provide real and consistent isolation between those environments, assigning disks, CPUs, comm channels, credentials, libraries, OS modules, DBs, etc., based on a strict policy that can be as granular as you want.
The problem with many other platforms is that finding a way to isolate the environments is up to you, while on a dinosaur platform it's provided by the grace of HAL.
HTH!

MS Access as Enterprise Software?

Something that I often run into with my users is that their desire to acquire solutions quickly means they sometimes say, "Heck, I'll just roll up my sleeves and do it in Access - it's installed on my desktop."
Sometimes, we're lucky and the person that creates the Access database back-ends it to a SQL Server, so at least the mdb file issues that often come up aren't an issue.
However, it is my opinion that rolling out an Access front-end to a SQL Server database as an enterprise solution with thousands of users, and hundreds of thousands of rows is still problematic.
What are your opinions on this? What are some of the potential pitfalls?
OR
Is this a perfectly acceptable, stable, maintainable, and robust solution?
I've worked with this scenario a great deal. In fact as a consultant/developer Access front end SQL Server back end has been a significant part of my bread and butter work over the past 10 years. Which doesn't mean I like Access ;-)
Up until the common adoption of AJAX it was a perfectly reasonable solution. And there are still vast numbers of small to medium sized applications put together in Access out there that run bespoke business systems perfectly happily, and I doubt it's going to go away for the next 10 or more years - indeed Access/SQL is probably going to be the Cobol of the 21st century. If you're working on a 'green field' site then there is now virtually no excuse for deploying Access when building from scratch - but if you do inherit an existing application, the cost of a rewrite may not be worthwhile and may be difficult to sell to the users.
Access does have some advantages that are still significant - and can present problems if proposing to convert to a web app
It's quick. For simple CRUD work it's as fast to write and deploy as any other realistic solution.
Built-in reporting is easy to get running and remarkably powerful given the system. It's usually pretty easy to create and deploy new reports for users on demand.
It integrates well with Office. This one tends to be the show-stopper when looking to move Access apps to web-apps. It's extremely common for a 'department-size' Access application to tightly integrate with Outlook, Word or Excel - and often all three.
This is the major problem when dealing with real-world situations. It's very easy for coders to underestimate the importance of this for everyday usage of such systems and the imposition of even a small degree of additional hassle for the users will generally be met with much resistance - often enough to completely scupper the project.
If you're working with a reasonably sized department - a dozen people or so - it's quite common for there to be someone in the office who fancies themselves as a bit of a computer wizard. These people can be a major pain if handled incorrectly, but equally can be a major asset. If I have such a person I will try to get management to send them on an Access course or two so they can write simple queries and reports, and set up a separate Access application for them which they own and which has appropriate (restricted) access to the SQL database. You can then trust this person to handle producing simple reports and the like for their colleagues. This can be a real win-win - you gain someone who is on your side and will use you as a mentor - a ready-made advocate for you in the department - and they keep the grunt report work out of your hair. They gain a lot of kudos and job satisfaction - and even a potential career path. It's far harder, well-nigh impossible, to do this kind of thing with any system but Access.
Main practical disadvantages
Deployment can be a nightmare. Generally if you have a very tightly defined environment - a small company, a single department, Citrix-based, or distributed with an IT department that closely controls its PCs - then you're fine. Deployment as a commercial app across multiple companies - well, only if you can charge significant maintenance (been there).
Code does not scale. Access VBA code, even when written by a professional, has a strong tendency to rot into rancid spaghetti. It's quite common to end up with an Access application that was easy enough to maintain, but gradually becomes unmaintainable as dependencies multiply.
So I'd say Access still has a place, and its use is defensible in many real world situations, but increasingly it's better to choose a more modern solution if circumstances permit.
We have built such a solution (Access front-end, SQL back end), with now something like 80 users, millions of rows replicated between different countries, more than 100 000 updates a month. It works fine. I think the main mistake about Access is to consider it as a tool made for amateurs to develop applications. It can work this way, but keep in mind that amateur development will give you amateur applications, while professional development will give you professional results.
A quick list of its advantages, problems and limits:
It's free for the final user, thanks to the MS Access runtime
It works with the free SQL Server Express, and the not-so-expensive SQL Server Enterprise.
It's quick, especially when dealing with forms
It communicates very easily with other Office apps, which are still enterprise standards
You can manage its interface to be so close to Office standards that using it can be very intuitive, making people happy (I talked a little bit about that on my blog, which needs updating!)
On a large scale, you have to think about the best way to distribute it to your users. This issue can turn into a nightmare, as noted by @Cruachan, but it can be solved by building and distributing MSI packs, for example. Such MSI packs can also contain all your external references such as 'added' dll, ocx, tlb files (report dlls, ActiveX scanner controls, etc). We had a few words on this here.
When distributing an updated version of the mdb file, you can have a common network folder holding the new mdb/zipped file that clients will check/update from at startup. Your clients should have the possibility to reinstall a previous version of the mdb file. Upgrading then becomes easier than installing a new .exe file.
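To illustrate that startup check, here is a hypothetical sketch in Python rather than the launcher script you would actually deploy; the share and file names are invented:
    import shutil
    from pathlib import Path

    SHARE = Path(r"\\fileserver\accessapp")   # hypothetical network folder
    LOCAL = Path(r"C:\AccessApp")             # hypothetical local install folder

    def update_front_end():
        """Fetch a newer front-end mdb from the share, keeping the old one for rollback."""
        remote = SHARE / "frontend.mdb"
        local = LOCAL / "frontend.mdb"
        LOCAL.mkdir(parents=True, exist_ok=True)
        if not local.exists() or remote.stat().st_mtime > local.stat().st_mtime:
            if local.exists():
                # Keep the previous version so users can reinstall it if needed.
                shutil.copy2(local, LOCAL / "frontend_previous.mdb")
            shutil.copy2(remote, local)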
You have to set up a version control system. Please check here for details.
You must be very strict about your code organisation. One of our basic rules, for example, is not to have any specific code at the form level. Please check here on this subject.
I didn't find any problem with VBA code scaling, as noted by @Cruachan. If professional coding rules are followed, there will not be any unusual code-scaling issues. As an example, our application is now working really fine with more than 180 different forms, and still growing without any problem.
As a conclusion, our main problem with Access is an image problem: Microsoft still lets people think that Access is there to give them the possibility to develop real software in 10 lessons ... while professionals, who know that is not possible, view it as an amateur tool for amateur development, looking down on MS Access users as boring, low-IQ rednecks.
I know quite a few professional Access developers who have developed and maintained Enterprise-level apps using Access as the front end (either MDB or ADP) and supporting user populations in the 100s (and even in a few cases, thousands).
Like any Enterprise-level application development, it requires a higher level of programming skill than building a little Access database for your 5-person department.
Oddly enough, the design principles that make for an efficient Enterprise-level app also make for a more efficient workgroup-level Access app.
I think the reason most of the people posting in this thread can't conceive of it as a good solution is simply because they've never seen it done properly, or were themselves not sympathetic to the development model that Access uses.
Yes, it's hard to do properly.
But at that level, so is every other development platform -- all of them require planning, experience and a high-level skillset.
And you can rag on Access apps developed by people without all of that (Enterprise or not), but frankly, I've encountered a boatload of non-Access database apps of all kinds that are incredibly badly implemented.
Sturgeon's law applies everywhere, and there's no reason to assume that Access development would be any different.
I started out doing desktop applications in Access with JET back-ends. I moved up to using SQL Server/MSDE with Access as the front-end and then VB6 and a smattering of classic ASP.
There are many "enterprisey" reasons to go with a "real" development tool like Visual Studio. For the scale you are inquiring about, thousands of users, I think those reasons may apply.
That said, I think there are scenarios where it still works to use Access. In my own experience, I fell back to Access with a SQL database when given a mandate to come up with an enterprise solution, albeit for a much smaller enterprise, in a very short period of time. The main reason driving my decision was time. I can put together a database UI in Access much, much faster than I can in any other tool. Some of that is familiarity with the tool, but a lot of it is that Access just gives you more database purpose-specific bits to work with out of the box. The Access UI can also be tweaked to look and operate very much like a standard WinForms app.
The hitch that many run into in an enterprise scenario is rolling Access and the application MDB/MDE out to the masses. This is easily resolved by setting it up on a Windows Terminal Server, which can also be rigged to operate almost like another app window on the client machine with the right RDP file parameters. But even that approach has its limits. I don't think it would scale into the thousands very well, but for several dozen users, I found that it worked just fine and bought enough time to meet the time constraints I had to work with so a web interface could be implemented when time allowed.
For a professional who knows what they're doing in a SQL database, an Access front end is not necessarily an unpardonable sin, especially when the mandate is cheap and fast and there isn't a religious purism involved.
If you have the choice, no.
That being said, there are situations where it may be alright. One situation is if you never, ever plan on updating the Access application. If it is installed for thousands of users, you may run into problems getting all of the client apps updated.
You are much better off making a web front end... although Access makes multiple master-detail forms easier than anything else I have seen. Even Oracle Application Express, intended to compete with Access, cannot do everything that Access does.
My advice is that if you are a programmer, you can make an ASP.NET app that will do the same job in a much more scalable, maintainable manner.
For a lot of CRUD (Create Read Update Delete) work, MS Access is OK. I'm more confident in it if the data is in another engine (MSSQL/Oracle/MySQL). However, most of the time I have problems with an MS Access database it's because:
It was home-grown by a desktop user (not a programmer/IT professional) who hadn't planned ahead for future development (so additions are often more painful than if a pro had been involved)
It's full of unnormalized tables, inconsistencies, and key-less tables.
My solution: limit MS Access to the pros and deploy the runtime version to the users' desktops.
For the multiple user/high data volume situation, I use Access front-ends with a MySQL back end. I must say that in the client-server situation, especially on a LAN, MS Access is as good as they come. Personally, I find development in MS Access much faster than, say, Visual Studio, especially when it comes to database-driven apps. And Access reports are as good as, if not better than, the industry standard, Crystal Reports.
The only shortcoming I see with Access is in non-LAN situations where you have to distribute the application to users spanning a wide geographical area. But again, web apps themselves have a major shortcoming - handling of one-to-many relationships, something Access handles superbly with its sub-form and sub-report features.
And more importantly, Access has a very powerful event model that most applications cannot match.
Personally, I can literally do anything on Access! So, my conclusion is that MS Access has many advantages that make it a competent development tool especially in LAN environments.
Sadly, I have quite a bit of experience with this. We built an entire product around Access Forms tying into SQL database. Honestly, the performance wasn't an issue - it really is the normal db connection type scenarios that you'd have to be concerned about with any client/server app. In our case, the original developer knew tons of "tricks" in Access, and did things like databinding drop downs to stored procedures. Oh, and the awful triggers. Awful. As in, 45 triggers firing per update awful.
The tables we worked with did indeed have millions of rows of data, however typically the roll-out was to tens or hundreds of users. I'd imagine that any effort going out to thousands of users would benefit more from a custom development so that you can do things like build the software correctly, support it from a performance and development perspective, and build automated deployment options (MSIs or ClickOnce, for example).
So, I would not say it is a perfectly acceptable, stable, maintainable or robust solution. It worked for us because we were there to support it (and eventually rewrite it in .NET), but I wouldn't recommend it for anyone. I have, however, worked in government where trying to get anything done from "IT" (which I was part of) was so filled with red-tape and paperwork that departments would oftentimes just do the Access solutions.
Ultimately if that's the case you are in - where the departments simply can't get access to IT resources - then showing them at least some best practices for how to eventually scale the app would be helpful. As long as right after you show them, you put your resume out to find a better job.
12-15 years ago this might have been an acceptable practice (not really advisable, but acceptable) but nowadays it's unforgivable. There are so many more scalable and distributable solutions that Access should be the last thing to cross somebody's mind.
When you say Access as a solution, what comes to my mind is a simple, 2-3 table application that some marketing employee put together, not a real developer. If the marketing guy had a really good idea then perhaps the development team should look at it (I'm assuming there is one, since you indicated there may be thousands of users), refactor it to a better platform (intranet or WinForms distributed via ClickOnce, etc.), and then deploy it.
Back in the early 90s I was an Access developer - I even had an MS certification. I built dozens of "Enterprise" apps (meaning 10-15 people used them). Those days are gone, IMO. There are easier solutions to build, deploy, and maintain nowadays.
I've had the misfortune to work on Access front ends like you describe, here are some non-Enterpise arguments.
Programming is easy! Creating forms in Access is geared toward non-developers. Case in point: if you have multiple columns in a drop-down, do you have list fields and data fields? No way! You just set the width of the things you don't want to see to 0". So you're looking at forms either thrown together by non-developers, or built in ways that will irk most people who have to work on them.
Versioning? Who needs versioning? Just send out an attachment. If changes need to be made to the front-end, re-deployment is time consuming and fault prone.
This form, I'm thinking magenta. The front end doesn't lock down well, so end-users can get creative.
With Microsoft "giving away" free versions (MSDE, or SQL Express from 2005 onwards) of the SQL Server engine with each release, there is really no need to use Access any more. Although these free versions don't have a visual front end, which can make development harder, good knowledge of SQL is all you need.

Which authentication mechanism to choose?

Well, in my free time I'm making this small web site. The site will not require authentication; only some actions (like leaving a comment) will require it.
I would expect to have up to 100 (probably less) unique visitors a day. I don't really expect more than 50% to (bother to) register.
Right now, I'm thinking of three possible authentication mechanisms (but I'm open to suggestions):
OpenID authentication;
HTTP Digest or at least HTTP Basic authentication;
My own (form based) authentication.
OpenID seems to me a little bit of an overkill for a small site like this. Also, a buzzword like "OpenID" on the login page of my site might scare away the less tech-savvy people.
HTTP Digest (or Basic) authentication provides a low security level (or none at all), because the site will not be under HTTPS.
My own implementation would, most likely, suffer the same security problems as HTTP Digest would, although I could implement some more protection against brute-force attacks (display a captcha after three failures, etc.).
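A rough sketch of that failure-counting logic (Python; the threshold and in-memory store are just for illustration, a real site would persist the counts):
    FAILURE_THRESHOLD = 3   # show a captcha after this many failed attempts

    failed_logins = {}      # username -> consecutive failure count

    def record_attempt(username, success):
        """Update the failure count and report whether a captcha is now required."""
        if success:
            failed_logins.pop(username, None)
            return False
        failed_logins[username] = failed_logins.get(username, 0) + 1
        return failed_logins[username] >= FAILURE_THRESHOLD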
What other mechanisms would you suggest? What are the pros and cons that I'm not seeing? What would you choose?
Well, if you want your visitors to leave comments, I really think you're better off with something like OpenID. Because if you provide your own form-based authentication, who will really bother registering yet another account with some password, wondering if they can trust you?
I think it's safe to say that people who like the internet own a gmail account, and all those people have an OpenID (Google account).
I suggest you use that... that's what I would do.
You haven't said what language/technology you're using. It could affect things. But I'd be inclined to just roll your own form-based authentication. It's not terribly difficult. Just remember a few basics:
Always sanitize user input. It can't be trusted;
Never store a username or password in a cookie (believe me, people do);
Only store hashed passwords, using a well-tested hash function such as MD5 or SHA1 (these are hashes, not encryption - never store the plain text);
Use a non-predictable salt;
Require cookies to be enabled. Don't try and do URL rewriting.
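A minimal sketch of the salted-hash scheme these points describe (Python; note that purpose-built password hashes like bcrypt have since become the preferred choice over MD5/SHA1):
    import hashlib
    import hmac
    import os

    def hash_password(password):
        """Return (salt, digest) for storage; the salt must be random per user."""
        salt = os.urandom(16)                       # non-predictable salt
        digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Recompute the digest and compare in constant time."""
        digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
        return hmac.compare_digest(digest, stored_digest)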
Why not just have a name field when they post a comment, and perhaps remember it in a cookie if you want? Most users just want to identify themselves, not have an account.
Just make sure that you have some spam blocking in place, as forms attract spam bots. Even if that is just a captcha with the form every time.
OpenID is the best, I think. Also, if you give proper help about OpenID (like SO does), people will understand it. Once less tech-savvy people understand the point of OpenID (no new username and password), they will start liking it.
Definitely go with OpenID - the more people we get onboard, the more familiar people will become with it, and it's not really that strange to use the first time. If you are a microsoft dev, the dotNetOpenID library makes implementation pretty straightforward - I have done this for both ASP.NET and ASP.NET MVC sites with no problems.
EDIT:
With regard to supporting non tech-savvy users, some links / explanation on the login page would go a long way to alleviating concerns. The redirect they will see is quite similar to experiences that they are more familiar with, like credit card or paypal authorization, so should be easy to explain in these terms.
It depends in part who your target audience is. If they're all computer geeks, go with OpenID. They're either familiar with it, or will understand what you're doing. If they're not necessarily computer geeks, they may not have been exposed to OpenID authentication yet, so OpenID could present a barrier to entry. In that case, you might want to go a more traditional route, such as register/validate email/login approach, whether roll-your-own or off-the-shelf.
You could distribute some RSA SecurID to your visitors ;-)
Seriously, the main question to ask is: are the total hours of work needed to implement a decent login system for your users worth the content that may be accessed if the website's security is broken?
You should look into RPX (https://rpxnow.com/); it's a layer on top of OpenID and a few other schemes that for most languages is really easy to implement (there is a gem for Ruby, and I know a friend of mine got it into his PHP application in less than a couple of hours).
OpenID rules! As an informed user I'm not sure it's been looked at to the point where it's "bulletproof" for security, so I probably wouldn't use it for financial / medical websites, but for the 95% of other websites, it would save me from having to write down my cheat-sheet of 137 different usernames and passwords. I've used it in a (nonpublic) site I developed and it was a bit of a hassle to get the authentication working properly, but if you can use one of the libraries out there, go for it!
HTTP authentication is standardized but something about it disturbs me. I dunno what. Something about a separate dialog box popping out of the browser makes me suspicious.
p.s. BBC's Digital Planet had a radio program my local radio station aired yesterday (17 Feb 2009) that talked about OpenID. So I guess when the radio talks about it, it must be starting to go mainstream.
My advice: do not reinvent the wheel. Web authentication is a wheel if I ever saw one, and it's remarkably difficult to get all the subtle pitfalls handled correctly. Chances are you'd miss something and end up with effectively no security.
Either go with an OpenID solution, or look into the many auth libraries out there, and pick a thoroughly-tested one.
See also: The Definitive Guide To Website Authentication

Internet facing Windows Server 2008 -- is it secure?

I really know nothing about securing or configuring a "live" internet facing web server, and that's exactly what I have been assigned to do by management. Aside from the operating system being installed (and Windows Update), I haven't done a thing. I have read some guides from Microsoft and on the web, but none of them seem to be very comprehensive/up to date. Google has failed me.
We will be deploying an ASP.NET MVC site.
What is your personal checklist when you are getting ready to deploy an application on a new Windows server?
This is all we do:
Make sure Windows Firewall is enabled. It has an "off by default" policy, so the out of box rule setup is fairly safe. But it never hurts to turn additional rules off, if you know you're never going to need them. We disable almost everything except for HTTP on the public internet interface, but we like Ping (who doesn't love Ping?) so we enable it manually, like so:
netsh firewall set icmpsetting 8
Disable the Administrator account. Once you're set up and going, give your own named account admin rights. Disabling the default Administrator account helps reduce the chance (however slight) of someone hacking it. (The other common default account, Guest, is already disabled by default.)
Avoid running services under accounts with administrator rights. Most reputable software is pretty good about this nowadays, but it never hurts to check. For example, in our original server setup the Cruise Control service had admin rights. When we rebuilt on the new servers, we used a regular account. It's a bit more work (you have to grant just the rights necessary to do the work, instead of everything at once) but much more secure.
I had to lock down one a few years ago...
As a sysadmin, get involved with the devs early in the project: testing, deployment, operation, and maintenance of web apps are part of the SDLC.
These guidelines apply in general to any DMZ host, whatever the OS, Linux or Windows.
There are a few books dedicated to IIS7 admin and hardening, but it boils down to the following.
Decide on your firewall architecture and configuration and review it for appropriateness. Remember to defend your server against internal scanning from infected hosts.
Depending on the level of risk, consider a transparent application-layer gateway to clean the traffic and make the webserver easier to monitor.
Treat the system as a bastion host: lock down the OS, reducing the attack surface (services, ports, installed apps - i.e., no interactive users or mixed workloads; configure firewalls and RPC to respond only to specified management, DMZ, or internal hosts).
Consider SSH, OOB and/or management LAN access, and host IDS/integrity verifiers like AIDE, Tripwire, or Osiris.
If the webserver is sensitive, consider using Argus to monitor and record traffic patterns in addition to the IIS/FW logs.
Baseline the system configuration and then regularly audit against the baseline, minimizing or controlling changes to keep it accurate. Automate it - PowerShell is your friend here.
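The poster suggests PowerShell; purely to illustrate the baseline-and-audit idea, here is a rough equivalent in Python (the watched files and baseline location are examples, not a recommended set):
    import hashlib
    import json
    from pathlib import Path

    BASELINE = Path("baseline.json")                      # example location
    WATCHED = [r"C:\Windows\System32\drivers\etc\hosts",  # example files to watch
               r"C:\inetpub\wwwroot\web.config"]

    def snapshot():
        """Hash every watched file so later runs can detect changes."""
        return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in WATCHED}

    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(snapshot(), indent=2))  # first run: record the baseline
    else:
        old = json.loads(BASELINE.read_text())
        for path, digest in snapshot().items():
            if old.get(path) != digest:
                print("CHANGED:", path)                         # audit run: report drift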
The US NIST maintains a national checklist program repository. NIST, NSA, and CIS have OS and webserver checklists worth investigating even though they are for earlier versions. Look at the Apache checklists as well for configuration suggestions, and review the Addison-Wesley and O'Reilly Apache security books to get a grasp of the issues.
http://checklists.nist.gov/ncp.cfm
http://www.nsa.gov/ia/guidance/security_configuration_guides/web_server_and_browser_guides.shtml
www.cisecurity.org offers checklists and benchmarking tools for subscribers. Aim for a score of 7 or 8 at a minimum.
Learn from others' mistakes (and share your own if you make them):
Inventory your public-facing application products and monitor them in NIST's NVD (the vulnerability database; they aggregate CERT and OVAL as well).
Subscribe to and read microsoft.public.inetserver.iis.security and Microsoft security alerts. (NIST's NVD already watches CERT.)
Michael Howard is MS's code security guru; read his blog (and make sure your devs read it too): http://blogs.msdn.com/michael_howard/default.aspx
http://blogs.iis.net/ is the IIS team's blog. As a side note, if you're a Windows guy, always read the team blog for the MS product groups you work with.
David Litchfield has written several books on DB and web app hardening. He is a man to listen to; read his blog.
If your devs need a gentle introduction to (or reminder about) web security - and sysadmins too! - I recommend "Innocent Code" by Sverre Huseby. I haven't enjoyed a security book like that since The Cuckoo's Egg. It lays down useful rules and principles and explains things from the ground up. It's a great, strong, accessible read.
Have you baselined and audited again yet? (You make a change, you make a new baseline.)
Remember, IIS is a meta-service (FTP, SMTP, and other services run under it). Make your life easier and run one service at a time per box. Back up your IIS metabase.
If you install app servers like Tomcat or JBoss on the same box, ensure that they are secured and locked down too.
Secure the web management consoles to these applications, IIS included.
If you have to have the DB on the box too, this post can be leveraged in a similar way.
Logging. An unwatched public-facing server (be it HTTP, IMAP, or SMTP) is a professional failure. Check your logs, pump them into an RDBMS, and look for the quick, the slow, and the pesky. Almost invariably your threats will be automated and boneheaded. Stop them at the firewall level where you can.
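As a trivial example of fishing the automated and boneheaded out of a log before it ever reaches the RDBMS (Python; IIS W3C log field positions depend on your #Fields line, so the index below is an assumption):
    from collections import Counter

    def noisy_clients(logfile, threshold=1000):
        """Count requests per client IP in a W3C-format log and flag the loudest."""
        hits = Counter()
        with open(logfile) as f:
            for line in f:
                if line.startswith("#"):   # skip W3C header/comment lines
                    continue
                fields = line.split()
                if len(fields) > 8:
                    hits[fields[8]] += 1   # c-ip position varies with your #Fields line
        return [(ip, n) for ip, n in hits.most_common() if n >= threshold]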
With permission, scan and fingerprint your box using p0f and Nikto. Test the app with Selenium.
Ensure webserver errors are handled discreetly and in a controlled manner by IIS and any applications. Set up error documents for 3xx, 4xx, and 5xx response codes.
Now that you've done all that, you've covered your butt and you can look at application/website vulnerabilities.
Be gentle with the developers; most only worry about this after a breach, once reputation/trust damage is done and the horse has bolted and is long gone. Address this now - it's cheaper. Talk to your devs about threat trees.
Consider your response to DoS and DDoS attacks.
On the plus side, consider good traffic/slashdotting and capacity issues.
Liaise with the devs and marketing to handle capacity issues and server/bandwidth provisioning in response to campaigns, sales, and new services. Ask them what sort of campaign response they're expecting.
Plan ahead with sufficient lead time to allow provisioning. Make friends with your network guys so you can discuss bandwidth provisioning at short notice.
Unavailability due to misconfiguration, poor performance, or under-provisioning is also an issue. Monitor the system for performance: disk, RAM, HTTP, and DB requests. Know the metrics of normal and expected performance. (Please God, is there an apachetop for IIS? ;)) Plan for appropriate capacity.
During all this you may ask yourself: "Am I too paranoid?" Wrong question - it's "Am I paranoid enough?" Remember and accept that you will always be behind the security curve, and that while this list might seem exhaustive, it is but a beginning. All of the above is prudent and diligent and should in no way be considered excessive.
Webservers getting hacked are a bit like wildfires (or bushfires here): you can prepare, and it'll take care of almost everything except the blue-moon event. Plan for how you'll monitor and respond to defacement, etc.
Avoid being a security curmudgeon or a security dalek/Chicken Little. Work quietly and work with your stakeholders and project colleagues. Security is a process, not an event, and keeping them in the loop and gently educating people is the best way to get incremental payoffs in terms of security improvements and acceptance of what you need to do. Avoid being condescending, but remember: if you DO have to draw a line in the sand, pick your battles - you only get to do it a few times.
Profit!
Your biggest problem will likely be application security. Don't believe the developer when he tells you the app pool identity needs to be a member of the local administrator's group. This is a subtle twist on the 'don't run services as admin' tip above.
Two other notable items:
1) Make sure you have a way to back up this system (and periodically test said backups).
2) Make sure you have a way to patch this system and, ideally, test those patches before rolling them into production. Try not to depend upon your own good memory. I'd rather have you set the box to use Windows Update than have it disabled, though.
Good luck. The firewall tip is invaluable; leave it enabled and only allow tcp/80 and tcp/3389 inbound.
Use the roles accordingly; the fewer privileges you use for your service accounts, the better. Try not to run everything as an administrator.
If you are trying to secure a web application, you should keep current with information on OWASP. Here's a blurb;
The Open Web Application Security Project (OWASP) is a 501c3 not-for-profit worldwide charitable organization focused on improving the security of application software. Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. You'll find everything about OWASP here on our wiki and current information on our OWASP Blog. Please feel free to make changes and improve our site. There are hundreds of people around the globe who review the changes to the site to help ensure quality. If you're new, you may want to check out our getting started page. Questions or comments should be sent to one of our many mailing lists. If you like what you see here and want to support our efforts, please consider becoming a member.
For your deployment (server configuration, roles, etc...), there have been a lot of good suggestions, especially from Bob and Jeff. For some time attackers have been using backdoors and trojans that are entirely memory based. We've recently developed a new type of security product which validates server memory (using similar techniques to how Tripwire (see Bob's answer) validates files).
It's called BlockWatch, primarily designed for use in cloud/hypervisor/VM type deployments, but it can also validate physical memory if you can extract it.
For instance, you can use BlockWatch to verify that your kernel and process address-space code sections are what you expect (the legitimate files you installed to your disk).
Block incoming ports 135, 137, 138, 139, and 445 with a firewall. The built-in one will do. Windows Server 2008 is the first one for which using RDP directly is as secure as SSH.
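If you script the server build, that block rule can be added through netsh; a sketch driven from Python (the rule name is made up, and note that ports 137-138 also listen on UDP, which would need a second rule):
    import subprocess

    # Block the classic RPC/NetBIOS/SMB ports on inbound TCP.
    subprocess.run([
        "netsh", "advfirewall", "firewall", "add", "rule",
        "name=BlockNetBIOS",                  # hypothetical rule name
        "dir=in", "action=block",
        "protocol=TCP", "localport=135,137-139,445",
    ], check=True)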

How would I get started writing my own firewall?

There is precious little on Google on this subject other than people asking this very same question.
How would I get started writing my own firewall?
I'm looking to write one for the windows platform but I would also be interested in this information for other operating systems too.
For Windows 2000/XP there is an article with examples on CodeProject: Developing Firewalls for Windows 2000/XP. For Vista, I think you will need to use the Windows Filtering Platform.
This question is alarmingly similar to those asking how to write an encryption algorithm. The answers to both should end in gentle reminders about industry standard solutions that already:
embody years of experience and constant improvement,
are probably far more secure than any home-grown solution, and
account for ancillary requirements, such as efficiency.
A firewall must inspect every packet efficiently and accurately, and it therefore runs within the OS kernel or network stacks. Errors or inefficiencies jeopardize the security and performance of the entire machine and those downstream.
Building your own low-level firewall is an excellent exercise that will provide an education across many technologies. But for any real application, it's much safer and smarter to build a shell around the existing firewall API. Under Windows, the netsh command will do this; Linux uses netfilter and iptables. Googling any of these will point you to lots of theory, examples, and other helpful information.
So, to get started, I'd brush up on TCP/IP (specifically, the header information: ports and protocols), then learn about the various types of attacks and how to detect them. Learn about each operating system of interest and how it interacts with the network stacks. Finally, think about administration and logging: how will you configure your firewall and trace packets through it to ensure it's doing what you want it to do?
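To make the ports-and-protocols point concrete, here is a toy sketch (Python) of the per-packet decision a firewall makes: parse the IPv4/TCP headers and apply a port rule. Where the raw bytes come from depends on the capture hook your platform provides:
    import struct

    BLOCKED_PORTS = {23, 135, 445}    # example rule set

    def allow(packet):
        """Decide whether to pass a raw IPv4 packet, based on TCP destination port."""
        if packet[0] >> 4 != 4:       # not IPv4: pass it through
            return True
        ihl = (packet[0] & 0x0F) * 4  # IP header length in bytes
        if packet[9] != 6:            # protocol 6 = TCP; pass everything else
            return True
        src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
        return dst_port not in BLOCKED_PORTS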
Good luck!
The usual approach is to use API hooking. Google can teach you that. Just hook all the important networking calls, like connect and listen, and refuse what you want.