Building a manual sandbox for malware analysis - reverse-engineering

I want to build a manual sandbox to analyze malware on Windows systems. By manual I mean a hands-on environment, not something automated like Cuckoo Sandbox.
There are many tools out there and I have selected some of them, but I can't really tell whether each of these tools is worth it or not. Can you tell me what you think and whether these tools are useful for my sandbox?
First, I consider some of them essential, like IDA, WinDbg, Wireshark, Npcap, an HTTP proxy like Fiddler, the Sysinternals suite, Volatility, and maybe Foremost.
Then there are other tools I have never really tried but which seem interesting. For static analysis, I have spotted the following tools and would like some feedback on them: Log-MD (a tool that examines the system using advanced Windows audit policies), Cerbero Profiler, Pestudio, Unpacker (it seems to be an automated tool to unpack binaries; it looks faster, but I am a bit skeptical and I'm not an RE specialist, so if you know this tool...), and oledump.py by Didier Stevens (to identify various elements like heuristic patterns, IPs, and strings).
For dynamic analysis, I noted Hook Analyzer (statically analyzes elements with heuristic patterns and lets you hook applications), Malheur (detects "malicious behavior"), and ViperMonkey (detects VBA macros in Microsoft Office documents and emulates their behavior).
Do you have any recommendations about my setup and tools I could have forgotten? I want to analyze classic malicious elements (PE, PDF, various scripts, Office documents, ...).
Regarding malware evasion, is there a risk that a sample will refuse to run if it detects RE and analysis tools?
Finally, should I allow Internet access in the sandbox? Most malware today uses C&C servers, and I see that some sandboxes are built with simulators like iNetSim, but since the connection is not real, will I lose some information?
Thanks!

You might want to consider the SEE framework to build your analysis platform.
Its plugin-based design will let you integrate scanning tools in a fairly flexible manner.
Bear in mind that a lot of malware inspects the execution environment and will refuse to run if any RE tool is spotted.
As for the Internet connection, it depends on how much information you want to gather. It is true that a lot of malware communicates with a C&C nowadays, yet it must still ensure its persistence on the target machine.
Therefore, the injection mechanism will still execute even if the Internet connection is absent. My two cents on the matter: run without Internet by default and activate it only when necessary.
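To make the evasion point concrete, here is a minimal Python sketch (using psutil; the tool names are just common examples, not an exhaustive or authoritative list) of the kind of process-name check many samples perform before deciding whether to run:

# Minimal sketch of the kind of environment check many samples perform,
# written here with psutil. The tool names below are common examples only.
import psutil

ANALYSIS_TOOLS = {"wireshark.exe", "procmon.exe", "procexp.exe",
                  "idaq.exe", "idaq64.exe", "windbg.exe", "fiddler.exe"}

def analysis_tools_running():
    """Return the names of known analysis tools currently visible as processes."""
    found = set()
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name in ANALYSIS_TOOLS:
            found.add(name)
    return found

if __name__ == "__main__":
    print("Analysis tools visible to a guest process:",
          analysis_tools_running() or "none")

If a check like this succeeds inside the guest, the sample may simply exit, so consider renaming your tools or monitoring from the host/hypervisor side where possible.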


Interactive 3D visualization on browser

I am trying to create a website where users can view and interact with room-furnishings in a 3D environment in a browser. Now, I do not wish to create anything from scratch, if it is possible to build upon existing open source efforts. So far, my research shows that:
The most established open source project I could build upon, that allows me to show 3D scenes in the browser and have users interact with them, uses Java3D for the browser view, encapsulated in a Java applet (sweethome3Dviewer).
Java 3D itself seems to be out of vogue, with most people recommending HTML5+WebGL (where unfortunately, I can't find any solutions that are as developed).
So here are my questions for this forum:
1) Are there any serious drawbacks of using a Java3D based approach?
I am talking of ANY drawbacks here, for example: "it is too slow"; "it is not stable"; "is limited by the number of concurrent users", etc.
2) What would you suggest I start with and build upon, if not the one based on Java3D?
Please note my preference for not re-inventing the wheel!
Yes, there is a serious drawback to using Java applets today: they are likely to simply not work at all.
The biggest problem is that the Java security system, which is intended to prevent programs like applets from accessing other parts of your computer (modifying files, running additional unsandboxed programs, etc.), has a history of security holes. Because of this history, there is a general consensus that permitting Java applets is simply not an acceptable security policy for the current day. Therefore, many browsers omit the Java plug-in or disable it by default.
And there are also browsers which simply have never had a Java browser plug-in at all, such as those on Android and iOS devices. Besides the security risk, there is also the issue that Java is “heavyweight” as web content goes — it can be seen as a waste of limited resources, for portable devices.
Thus, using Java applets is not a good choice: your applet will never work for many users, and those it does work for are taking an unnecessary security risk.
WebGL, on the other hand, is “just” another JavaScript-based API, which only does graphics, not lots of other things that have to be turned off by a “security manager” element. There are risks inherent to WebGL (GPU drivers are not the most security-minded thing out there) but in the current state of things it's unlikely that WebGL will be simply shut off rather than being fixed, if a problem is found.

What does "headless" mean?

While reading the QTKit Application Programming Guide I came across the term 'headless environments' - what does this mean? Here is the passage:
...including applications with a GUI and tools intended to run in a “headless” environment. For example, you can use the framework to write command-line tools that manipulate QuickTime movie files.
"Headless" in this context simply means without a graphical display. (i.e.: Console based.)
Many servers are "headless" and are administered over SSH for example.
Headless means that the application is running without a graphical user interface (GUI) and sometimes without user interface at all.
There are similar terms for this, used in slightly different contexts. Here are some examples.
Headless / Ghost / Phantom
This term is mostly used for heavyweight clients. The idea is to run a client in a non-graphical mode, with a command line for example. The client then runs until its task is finished, or interacts with the user through a prompt.
Eclipse, for instance, can be run in headless mode. This mode comes in handy for running jobs in the background, or in a build factory.
For example, you can run Eclipse in graphical mode to install plugins. This is OK if you just do it for yourself. However, if you're packaging Eclipse to be used by the devs of a large company and want to keep up with all the updates, you probably want a more reproducible, automatic, and easier way.
That's when headless mode comes in: you can run Eclipse on the command line with parameters that indicate which plugins to install.
The nice thing about this method is that it can be integrated into a build factory!
Faceless
This term is mostly used for larger-scale applications. It was coined by UX designers. A faceless app interacts with users through channels that are traditionally dedicated to human users, like email, SMS, or phone, but NOT a GUI.
For example, some companies use SMS as an entry point for a dialog with users: the user sends an SMS containing a request to a certain number. This triggers automated services that run and reply to the user.
It's a nice user experience, because one can run some errands from one's telephone. You don't necessarily need an Internet connection, and the interaction with the app is asynchronous.
On the back-end side, the service can decide that it does not understand the user's request and drop out of automated mode. The user then enters an interactive mode with a human operator without changing their communication tool.
You most likely know what a browser is. Now take away the GUI, and you have what’s called a headless browser. Headless browsers can do all of the same things that normal browsers do, but faster. They’re great for automating and testing web pages programmatically.
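For illustration, here is a minimal Python sketch driving a headless browser with Selenium (this assumes Chrome and a matching driver are available; the URL is just an example):

# Minimal sketch: load a page in headless Chrome and read its title.
# Assumes the selenium package, Chrome, and a matching driver are installed.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")        # run without any visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")     # example URL
    print(driver.title)                   # same DOM access as a normal browser
finally:
    driver.quit()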
Headless can refer to a browser or program that doesn't require a GUI. It is not really meant for a person to view; it passes information, in the form of code, to another program.
So why would one use a headless program?
Simply because it improves speed and performance and is available to all users, including those without access to a graphics card. It allows testing browserless setups and helps you multitask.
Guide to Headless Browser
Headless architecture
In software development, headless can also describe an architectural design that completely separates the back end from the front end. The front end (GUI or UI) is a standalone piece and communicates with the back end through an API. This allows for a multi-server architecture, flexibility in the software stack, and performance optimization.

What is the experience with Google 'Omaha' (their auto-update engine for Chrome)?

Google has open-sourced the auto update mechanism used in Google Chrome as Omaha.
It seems quite complicated and difficult to configure for anybody who isn't Google. What is the experience using Omaha in projects? Can it be recommended?
We use Omaha for our products. Initially there was quite a bit of work to change hardcoded URLs and strings. We also had to implement the server ourselves, because there was not yet an open source implementation. Today, I would use omaha-server.
There are no regrets with ditching our old client update solution and going with Omaha.
Perhaps you can leverage the Courgette algorithm, which is the update mechanism used in Google Chrome. It is really easy to use and apply to your infrastructure. Currently, it only works for Windows operating systems. Windows users of Chrome receive updates as small diffs, unlike Mac and Linux users, who still receive the full download.
You can find the source code here in the Chromium SVN repository. It is a differencing algorithm used to apply small updates to Google Chrome instead of sending the whole distribution every time. Rather than push the whole 10 MB to the user, you can push just the diff of the changes.
More information on how Courgette works can be found here and the official blog post about it here.
It works like this:
server:
hint = make_hint(original, update)
guess = make_guess(original, hint)
diff = bsdiff(concat(original, guess), update)
transmit hint, diff
client:
receive hint, diff
guess = make_guess(original, hint)
update = bspatch(concat(original, guess), diff)
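For intuition, here is a small Python sketch of the plain bsdiff/bspatch step that Courgette builds on, using the third-party bsdiff4 package; the hint/guess disassembly stage shown above is what Courgette adds on top and is not reproduced here:

# Illustration of the plain bsdiff/bspatch step, using the third-party
# bsdiff4 package (pip install bsdiff4). Courgette's extra value is the
# hint/guess disassembly stage above, which is not reproduced here.
import bsdiff4

original = b"example binary contents, version 1" * 100
update   = b"example binary contents, version 2" * 100

patch = bsdiff4.diff(original, update)    # server side: produce a small delta
print("patch size:", len(patch), "bytes; full size:", len(update), "bytes")

rebuilt = bsdiff4.patch(original, patch)  # client side: reconstruct the update
assert rebuilt == update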
When you check out the source, you can compile it as an executable (right-click and compile in Visual Studio) and use the application in that form for testing:
Usage:
courgette -dis <executable_file> <binary_assembly_file>
courgette -asm <binary_assembly_file> <executable_file>
courgette -disadj <executable_file> <reference> <binary_assembly_file>
courgette -gen <v1> <v2> <patch>
courgette -apply <v1> <patch> <v2>
Or, you can include it within your application and do the updates from there. You can imitate the Omaha auto-update environment by creating your own service that periodically checks for updates and runs Courgette.
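As a rough sketch of such a homegrown service, the following Python script downloads a patch and applies it with the courgette -apply usage shown above; the update URL and file paths are hypothetical placeholders:

# Rough sketch of a homegrown "check and apply" updater around the courgette
# CLI (usage shown above). The update URL and paths are hypothetical
# placeholders, not part of Omaha or Courgette.
import subprocess
import urllib.request

UPDATE_URL  = "https://updates.example.com/myapp/latest.patch"  # hypothetical
CURRENT_EXE = r"C:\MyApp\app.exe"
PATCH_FILE  = r"C:\MyApp\app.patch"
NEW_EXE     = r"C:\MyApp\app_new.exe"

def check_and_apply():
    # Download a patch generated on the server with: courgette -gen v1 v2 patch
    urllib.request.urlretrieve(UPDATE_URL, PATCH_FILE)
    # Apply it locally: courgette -apply <v1> <patch> <v2>
    subprocess.run(["courgette", "-apply", CURRENT_EXE, PATCH_FILE, NEW_EXE],
                   check=True)
    # Verifying a signature and swapping NEW_EXE into place is left out here.

if __name__ == "__main__":
    check_and_apply()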
I've been using Omaha in various projects since 2016. The projects had between a handful and millions of update clients. Target operating systems were mostly Windows, but also some Linux devices and (via Sparkle) macOS.
Omaha is difficult to set up because it requires you to edit Google's C++ implementation. You also need a corresponding server. The standard implementation is omaha-server and does not come from Google. However, in return it also supports Sparkle for automatic updates on Mac (hence why I mentioned Sparkle above).
While setting up the above components is difficult, once they are configured they work extremely well. This is perhaps not surprising given that Google uses Omaha to update millions (billions?) of devices.
To help others get started with Omaha, I wrote a tutorial that gives a quick overview of how it works.
UPDATE
Customizing Google Omaha isn't that easy, especially if you have no knowledge of C++, Python, or COM.
Updates aren't published that frequently.
crystalnix/omaha is managed by the community; they try to merge the main repo into theirs, additional features are implemented, and basic things are fixed.
google/omaha is more active, and changes from Google are added, but not frequently.
To implement manual updates in any language, you can use the COM classes.
Summary
Google Omaha is still alive, but moving slowly.
Bugs are fixed, but do not expect hotfixes.
Google Omaha fits Windows client apps supporting Windows Vista and upwards.
The server side I'm using also supports Sparkle for cross-platform support.
Feedback and crash reports are also supported on the server.
Feedback is sent with Google protocol buffers.
Crash handling is done with Breakpad.
I personally would go for Google Omaha instead of implementing my own solution. However, we will discuss this internally.
In the .NET world you might want to take a look at ClickOnce deployment.
An auto-update mechanism is something I'd personally code myself, and always have in the past. Unless you have a multi-gigabyte application and want to upload bits and pieces only, just rely on your own code/installer. That said, I've not looked at Google's open source library at all.. and didn't even know it existed. I can't imagine it offering anything superior to what you could code yourself, and with your own code you aren't bound by any licensing restrictions.

Pros and cons of building apps with proprietary database systems

I've been interested in 4D SAS' database product for a long time, though have barely touched it in eons.
In considering what tools to use for application development, especially one that will require a database component, what should be looked for when considering open-source tools like MySQL and PostgreSQL vs proprietary solutions like 4D or Pervasive SQL?
What good (and bad!) experiences has the SO community had with various DB tools like 4D, Pervasive, FilemakerPro, etc?
Any bad experiences?
It is difficult to make a relevant list of pros and cons without context.
My advice would be the following: when making the decision of using a proprietary database, make sure that this decision is based on strong facts and not merely a technical interest for an exotic tool. Put into the balance the benefits for using the proprietary database and the advantages of a non-proprietary solution.
The answer is different from system to system.
A prerequisite is that your system is well identified, with a clear scope, a quite predictable evolution, so that the results of your analysis will be robust. Then, if your proprietary solution brings a real benefit for your system, that you are comfortable with the support and that you can afford the overall cost, you should be a good candidate for the proprietary solution.
4D is a proprietary, cross-platform (macOS and Windows only) database system with both stand-alone and client-server varieties. You would do well to compare it to Alphafive.com software, which is Windows only. I've worked with it for 17 years and it has served me and my department very well. Off the top of my head ...
Pros:
Interface & code are closely tied to the data engine which makes development of rich, cross-platform user interfaces very fast and easy.
Proprietary relational data engine runs natively on both platforms, along with native client interfaces (but requires licenses for multi-users). Auto-relations are helpful (but sometimes get in the way).
Can access external systems via SOAP and ODBC and SQL drivers (limited).
Can access 4D from external systems via SOAP or http requests & web pages.
Native procedural programming language based on Pascal and is EASY to learn.
Excellent tool for small to mid-sized departments.
Latest version accepts subset of SQL commands AND original data access, so it's backward compatibility record has been very good.
Security is EASY in 4D.
You can build solutions to deploy through a variety of means, and are not limited by whether or not MS Access is installed.
Cons:
Interface & code are closely tied to the data engine which can lead to limited use of abstraction and "black-box" coding unless you make it a goal of your development.
Compiles to one monolithic structure file, forcing a restart for single fixes.
Language is still only procedural, making it harder for object-oriented programmers to accept. Every method requires a separate "file" in 4D, so you can't include more than one function or procedure in a single routine; it will take some getting used to.
While company appears to be in good shape, growing and developing, you simply never know as they keep their condition to themselves.
Company has never really marketed itself--trusting in its developer base to spread the word and grow the product through site deployments and product upgrades. Web site is clearly useful only to developers who already use the product -- it simply fails to attract new users.
Product upgrades have always seemed to focus on how the tool is better for the DEVELOPERS rather than for the CUSTOMERS of those developers.
SQL lacks views, compound indexes, and other common SQL features.
When a user requests a report of specific columns of data, I often have to write yet another program just to provide that specific data -- I can't always just query the data and generate a text file.
Does not handle new OS versions with nearly the ease of web browser based applications. Older version is broken on Mac OS 10.6, and newest version requires the latest Mac OS 10.6. No version is certified yet on Windows 7.
I've spent nearly a year learning ASP.NET and a few weeks on Ruby on Rails. While SQL data stores are EASY, user interfaces are HARD -- but worth it when your application still functions through OS upgrades. You can always use an older browser if the latest version breaks something.
I'd recommend you consider either of those, depending on how much funding you have available to implement the project -- Rails being the cheaper of the two. Then, ANY system with a web browser can access the data, and you can fix interface pages on the fly as needed rather than taking the whole system down for a few minutes for a single, simple update. Those skills might be more marketable in the future.
I will only say one thing: watch the "actual" cost of your decision. Most proprietary database systems are Windows only, or sometimes Mac/Windows only.
This means that along with paying quite a bit of money for the database system, you must also pay a good amount of money for a server operating system to run it.
Also, compare the database system with current open source solutions. Is it really worth it? After moving from Microsoft SQL Server (which has a free edition, but anyway) to PostgreSQL, I was blown away that people pay so much for SQL Server. I mean, Postgres to me is a lot cleaner, most of it works exactly how you'd expect (unlike certain SQL Server syntax), and it has more features built in (programming stored procs in Ruby, anyone?).
So basically, compare the proprietary software with the open source software and decide which one to take based on total pricing (including the OS) and feature set.
Pro of zeroing in on any DB: it's got good non-portable features that help you get things done
Con of zeroing in on any DB: sometimes a different DB is appropriate (for example running your tests with in-memory SQLite instances, as sketched after this list), but that option is now closed
Con of a proprietary commercial DB: if you need many instances, licensing costs can kill you
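As a quick illustration of the in-memory SQLite option mentioned in the list above (standard-library sqlite3; the table and data are made up):

# Quick illustration: an in-memory SQLite database for fast, throwaway tests
# (standard-library sqlite3; the table and data are made up).
import sqlite3

conn = sqlite3.connect(":memory:")        # database lives only in RAM
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
print(conn.execute("SELECT id, name FROM users").fetchall())   # [(1, 'alice')]
conn.close()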
Consider the following questions:
How easy (or difficult) is it to make changes in maintenance? Applications are likely to spend far more time in maintenance than they do in development, so if changes are hard, long-term pain is guaranteed.
What is the quality of support? A system that is well-documented, proprietary or otherwise, is going to be easier to work with.
How large (or small) is the user community? Systems with larger user communities mean more people to ask for assistance if and when things go wrong.
How robust are the import/export capabilities of this proprietary database system?
I found the last point particularly useful at my first full-time job. Our client was using CA-Ingres, and no one at the company knew it well enough to write queries to validate the data. So I came up with the idea of exporting the data from Ingres and importing it into MS SQL Server (which I knew from a brief stint at Sybase Professional Services) so we could write our validation queries there. If it had been really hard to export data from Ingres, my idea wouldn't have been an option at all.
From 4D's webpage, I gather that we are looking at a complete development+deployment environment, not a standalone database as such. So the alternatives you could be looking at include stuff like django, ruby-on-rails, hibernate and others. The real question, of course, is if the proprietary system can save you enough money doing the product lifetime to justify the costs of the product. And that would depend on the type of human resources you have available.
4D is a good option for vertical applications. I have worked for a company which used 4D to build a medical records and billing application for general practitioners and specialists. The rapid design and deployment features of 4D enabled the application to move quickly with market desires and legislated changes to medical record storage. The environment itself was not cutting edge, but it was integrated, cross-platform, and very productive.
If you are entering a market with high vendor lock-in and a high barrier to entry, then I think proprietary integrated development environments are a good option.
At various points in my career, I've used and gotten very good at FileMaker Pro, FoxPro, 4D, and a few other commercial products. Now I mainly use PHP/MySQL, and haven't used the latest versions of any of the products.
I've always liked FileMaker because most people who can use a computer can pick up FileMaker and design their own systems. They don't have to know programming or database design. But you can "program" FileMaker, put a web front end on it, or do other more sophisticated setups if you need to. Many times I was "handed" a system created in FileMaker by a non-technical person that needed to be made into a full-fledged data management system. The good part was that all the "specs" and data flow were already designed into a system. The prototype was already created!
4D and FoxPro I always found required a certain amount of extra programming and/or database knowledge to really do anything with. 4D and FileMaker are really complete, self-contained systems, not just database systems. Although they all have the ability to hook into other backend database systems (e.g. MySQL, Oracle), that is not their strong point.
On the downside, doing more complex, dynamic systems can be difficult in 4D and Filemaker due to everything being tightly coupled. Because of their cost, you really would want to create multiple systems with them. Which means you need to really "buy into them" to get your money's worth.
The key concept is always adherence to standards: if you plan to use 4D's custom and/or specially designed functions (but the discussion could be far more general, and cover any other free or commercial tool in the wild), well, just use them and take advantage of them.
Not surprisingly, that's why huge DB systems like Oracle or IBM's DB2 were widely accepted in the past for specific business areas, such as commercial transactions.
The other main reason to adopt a very closed solution is legacy support. One of the products you cited (Pervasive SQL) acted as a no-effort port for BTrieve-based applications in the late 90s, and it gained popularity thanks to the huge BTrieve community all over the planet.
Finally, last but not least, you should evaluate the TCO (total cost of ownership) not only in terms of license price (single seat, network environment, site licenses, and so on), but also with respect to tech support, updates, and availability for your platform. Many business units I know have been obliged to change their base OS because of DB-related problems.
Tip: add a bonus for custom solutions that are proven or supported for use in virtualized environments, if you aren't seeking extreme performance. It will save your DB manager more than a headache.
In all other cases, rely on open source/free software DBs: MySQL and Postgres for big projects, SQLite for a single app's persistence layer. Fairly standard and very good (community) support. Good value for no price.
I don't have any experiences with the proprietary database products you listed: 4D, Pervasive, FilemakerPro.
I'd be interested in knowing what those products offer that make them more attractive to you than the open source alternatives, you listed: MySQL and PostgreSQL.
I'd be interested in what makes those more attractive to you than the much more popular proprietary alternatives: Oracle, SQL Server, DB2, etc.
Without you providing more specifics, it's hard to advise you.
I personally feel safer using a widely used open source solution than a narrowly used closed source solution. The more widely used, the more battle-tested it's likely to be. The more open, the more control over my own destiny I have in case I do encounter some bug.
I have reported bugs to open source projects and gotten a quick fix. I have reported bugs to companies that make for-profit proprietary software and have gotten nothing.

Internet facing Windows Server 2008 -- is it secure?

I really know nothing about securing or configuring a "live" internet-facing web server, and that's exactly what I have been assigned to do by management. Aside from installing the operating system (and running Windows Update), I haven't done a thing. I have read some guides from Microsoft and on the web, but none of them seem to be very comprehensive or up to date. Google has failed me.
We will be deploying an MVC ASP.NET site.
What is your personal checklist when you are getting ready to deploy an application on a new Windows server?
This is all we do:
Make sure Windows Firewall is enabled. It has an "off by default" policy, so the out of box rule setup is fairly safe. But it never hurts to turn additional rules off, if you know you're never going to need them. We disable almost everything except for HTTP on the public internet interface, but we like Ping (who doesn't love Ping?) so we enable it manually, like so:
netsh firewall set icmpsetting 8
Disable the Administrator account. Once you're set up and going, give your own named account admin rights. Disabling the default Administrator account helps reduce the chance (however slight) of someone hacking it. (The other common default account, Guest, is already disabled by default.)
Avoid running services under accounts with administrator rights. Most reputable software is pretty good about this nowadays, but it never hurts to check. For example, in our original server setup the Cruise Control service had admin rights. When we rebuilt on the new servers, we used a regular account. It's a bit more work (you have to grant just the rights necessary to do the work, instead of everything at once) but much more secure.
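As a quick way to audit this, here is a small Python sketch (Windows only, using psutil; the account names are common defaults, so adjust them for your environment) that lists services running under highly privileged built-in accounts:

# Small sketch (Windows only): list services running under built-in privileged
# accounts, using psutil. The account names are common defaults; adjust them
# for your environment.
import psutil

PRIVILEGED = {"localsystem", "nt authority\\system"}

for svc in psutil.win_service_iter():
    user = (svc.username() or "").lower()
    if user in PRIVILEGED:
        print(f"{svc.name():30} runs as {svc.username()}")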
I had to lock down one a few years ago...
As a sysadmin, get involved with the devs early in the project; testing, deployment, operation, and maintenance of web apps are part of the SDLC.
These guidelines apply in general to any DMZ host, whatever the OS, Linux or Windows.
There are a few books dedicated to IIS 7 admin and hardening, but it boils down to the following.
Decide on your firewall architecture and configuration and review it for appropriateness. Remember to defend your server against internal scanning from infected hosts.
Depending on the level of risk, consider a transparent application-layer gateway to clean the traffic and make the web server easier to monitor.
Treat the system as a bastion host: lock down the OS and reduce the attack surface (services, ports, installed apps, i.e. NO interactive users or mixed workloads; configure firewalls and RPC to respond only to specified management DMZ or internal hosts).
Consider SSH, OOB and/or management LAN access, and host IDS/integrity verifiers like AIDE, Tripwire, or Osiris.
If the web server is sensitive, consider using Argus to monitor and record traffic patterns in addition to IIS/firewall logs.
Baseline the system configuration and then regularly audit against the baseline, minimizing or controlling changes to keep it accurate. Automate it; PowerShell is your friend here.
The US NIST maintains a national checklist program repository. NIST, NSA, and CIS have OS and web server checklists worth investigating even though they are for earlier versions. Look at the Apache checklists as well for configuration suggestions, and review the Addison-Wesley and O'Reilly Apache security books to get a grasp of the issues.
http://checklists.nist.gov/ncp.cfm
http://www.nsa.gov/ia/guidance/security_configuration_guides/web_server_and_browser_guides.shtml
www.cisecurity.org offers checklists and benchmarking tools for subscribers. Aim for a 7 or 8 at a minimum.
Learn from other's mistakes (and share your own if you make them):
Inventory your public-facing application products and monitor them in NIST's NVD (vulnerability database); they aggregate CERT and OVAL as well.
Subscribe to and read microsoft.public.inetserver.iis.security and Microsoft security alerts. (NIST NVD already watches CERT.)
Michael Howard is MS's code security guru; read his blog (and make sure your devs read it too): http://blogs.msdn.com/michael_howard/default.aspx
http://blogs.iis.net/ is the IIS team's blog. As a side note, if you're a Windows guy, always read the team blog for the MS product groups you work with.
David Litchfield has written several books on DB and web app hardening. He is a man to listen to; read his blog.
If your devs need a gentle introduction to (or reminder about) web security, and sysadmins too, I recommend "Innocent Code" by Sverre Huseby. I haven't enjoyed a security book like that since The Cuckoo's Egg. It lays down useful rules and principles and explains things from the ground up. It's a great, accessible read.
Have you baselined and audited again yet? (You make a change, you make a new baseline.)
Remember, IIS is a meta-service (FTP, SMTP, and other services run under it). Make your life easier and run one service at a time per box. Back up your IIS metabase.
If you install app servers like Tomcat or JBoss on the same box, ensure that they are secured and locked down too.
Secure the web management consoles of these applications, IIS included.
If you have to have a DB on the box too, this post can be leveraged in a similar way.
Logging: an unwatched public-facing server (be it HTTP, IMAP, or SMTP) is a professional failure. Check your logs, pump them into an RDBMS, and look for the quick, the slow, and the pesky. Almost invariably your threats will be automated and boneheaded. Stop them at the firewall level where you can.
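As a starting point for that kind of automated log review, here is a minimal Python sketch that tallies status codes and client IPs from an IIS W3C log; the path is hypothetical and the field layout depends on what your site is configured to log:

# Minimal sketch of automated log review: tally status codes and client IPs
# from an IIS W3C log. The path is hypothetical, and the field positions
# depend on which fields your site actually logs (see the #Fields line).
from collections import Counter

LOG_FILE = r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex250101.log"  # hypothetical

statuses, clients = Counter(), Counter()
fields = []
with open(LOG_FILE, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]     # column names for this log file
            continue
        if line.startswith("#") or not line.strip():
            continue
        row = dict(zip(fields, line.split()))
        statuses[row.get("sc-status", "?")] += 1
        clients[row.get("c-ip", "?")] += 1

print("Status codes:", statuses.most_common())
print("Top clients:", clients.most_common(10))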
With permission, scan and fingerprint your box using p0f and Nikto. Test the app with Selenium.
Ensure web server errors are handled discreetly and in a controlled manner by IIS AND any applications. Set up error documents for 3xx, 4xx, and 5xx response codes.
Now you've done all that, you've covered your butt, and you can look at application/website vulnerabilities.
Be gentle with the developers; most only worry about this after a breach, when reputation/trust damage is done and the horse has bolted and is long gone. Address this now; it's cheaper. Talk to your devs about threat trees.
Consider your response to DoS and DDoS attacks.
On the plus side, consider GOOD traffic/slashdotting and capacity issues.
Liaise with the devs and marketing to handle capacity issues and server/bandwidth provisioning in response to campaigns, sales, and new services. Ask them what sort of campaign response they're expecting.
Plan ahead with sufficient lead time to allow provisioning. Make friends with your network guys to discuss bandwidth provisioning at short notice.
Unavailability due to misconfiguration, poor performance, or under-provisioning is also an issue. Monitor the system for performance: disk, RAM, HTTP, and DB requests. Know the metrics of normal and expected performance (please God, is there an apachetop for IIS? ;) ). Plan for appropriate capacity.
During all this you may ask yourself: "Am I too paranoid?" Wrong question; it's "Am I paranoid enough?" Remember and accept that you will always be behind the security curve, and that while this list might seem exhaustive, it is but a beginning. All of the above is prudent and diligent and should in no way be considered excessive.
Web servers getting hacked are a bit like wildfires (or bushfires here): you can prepare, and it'll take care of almost everything except the blue-moon event. Plan for how you'll monitor and respond to defacement, etc.
Avoid being a security curmudgeon or a security Dalek/Chicken Little. Work quietly and work with your stakeholders and project colleagues. Security is a process, not an event, and keeping them in the loop and gently educating people is the best way to get incremental payoffs in terms of security improvements and acceptance of what you need to do. Avoid being condescending, but remember: if you DO have to draw a line in the sand, pick your battles; you only get to do it a few times.
Profit!
Your biggest problem will likely be application security. Don't believe the developer when he tells you the app pool identity needs to be a member of the local Administrators group. This is a subtle twist on the 'don't run services as admin' tip above.
Two other notable items:
1) Make sure you have a way to back up this system (and periodically test said backups).
2) Make sure you have a way to patch this system and, ideally, test those patches before rolling them into production. Try not to depend upon your own good memory. I'd rather have you set the box to use Windows Update than have it disabled, though.
Good luck. The firewall tip is invaluable; leave it enabled and only allow tcp/80 and tcp/3389 inbound.
Use the roles accordingly; the fewer privileges you use for your service accounts, the better.
Try not to run everything as an administrator.
If you are trying to secure a web application, you should keep current with information from OWASP. Here's a blurb:
The Open Web Application Security Project (OWASP) is a 501c3 not-for-profit worldwide charitable organization focused on improving the security of application software. Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. You'll find everything about OWASP here on our wiki and current information on our OWASP Blog. Please feel free to make changes and improve our site. There are hundreds of people around the globe who review the changes to the site to help ensure quality. If you're new, you may want to check out our getting started page. Questions or comments should be sent to one of our many mailing lists. If you like what you see here and want to support our efforts, please consider becoming a member.
For your deployment (server configuration, roles, etc.), there have been a lot of good suggestions, especially from Bob and Jeff. For some time attackers have been using backdoors and trojans that are entirely memory-based. We've recently developed a new type of security product which validates server memory (using techniques similar to how Tripwire validates files; see Bob's answer).
It's called BlockWatch, and it is primarily designed for use in cloud/hypervisor/VM deployments, but it can also validate physical memory if you can extract it.
For instance, you can use BlockWatch to verify your kernel and process address space code sections are what you expect (the legitimate files you installed to your disk).
Block incoming ports 135, 137, 138, 139, and 445 with a firewall. The built-in one will do. Windows Server 2008 is the first version for which using RDP directly is as secure as SSH.