I'm developing on a super-fast fibre-optic connection.
I want a tool that allows me to test web sites at certain preset speeds. For example, I want to feel the experience of my site loading at modem speeds, then perhaps 1 Mbps, 2 Mbps, etc.
Basically, I want to be able to set the speed of the connection so that I get the real feel of the site loading remotely from other countries and connections.
Anyone know of such a tool?
WANEM is a nice open source solution that can simulate network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
It also supports a mode of operation that only uses one network-interface, which makes it super quick to set up a test environment.
EDIT
Although WANEM is a Linux application, you only need to burn the bootable CD and start a machine with that CD; there is no need to sacrifice a machine to run WANEM. If even that's not an option, you can also download it as a virtual appliance that runs in VMware Workstation ($$), VMware Player (free) or VMware Server (free).
However, in my opinion (based on real usage of such products) it's really easier to have the "network simulator" on a separate machine instead of loading it on either the server or the client under test. And as explained above, thanks to the bootable CD option that can be any machine you have lying around - we typically use decommissioned desktops and notebooks for this purpose.
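If you would rather not boot a separate appliance at all and your site (or a router in front of it) runs Linux, the same kind of shaping can be done directly with tc/netem from the iproute2 tools. A minimal sketch, assuming root privileges and an interface named eth0 (adjust the numbers to taste):

# add 200 ms of latency and 0.5% loss on eth0 (placeholder interface name)
tc qdisc add dev eth0 root handle 1:0 netem delay 200ms loss 0.5%
# cap the rate at roughly 1 Mbps underneath the netem qdisc
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 1mbit burst 32kbit latency 400ms
# inspect, and remove the shaping when finished
tc qdisc show dev eth0
tc qdisc del dev eth0 root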
There are a lot of tools out there, like:
http://www.netlimiter.com/
http://www.antamediabandwidth.com/
...
Basically, most of them work like proxies.
I have to develop a plugin for a program that uses a dongle for activation. Just wondering, can I crack the key of the USB dongle, or something else?
I'm sure you can, but you might be running afoul of the various legislation regarding the act of reverse engineering content-protection systems. I am, of course, referring to the American DMCA statutes.
In any event, as pure thought experiment, I might try the following:
Clone the USB firmware image, and load it into a virtual USB port
As you say, crack the key and the USB interface, and short-circuit the check in a virtual USB device.
Locate the part of the code in the program that is doing the security check, and edit the bytecode / machine code to return successful without actually looking for the device.
NOTE: Do not contact me for services related to defeating security systems. I won't do it, and I'll probably lecture you.
Would it be bad to have things set up so that MySite.com is production and test.MySite.com is test? Both running off the same machine. The site doesn't get a lot of traffic.
UPDATE
I am talking about an ASP.NET web application running on a Windows server.
Yes, it is a bad idea.
Suppose your test code has a bug that consumes all memory/cpu/disk space? Then your production site goes down.
Have separate machines for production and test and use DNS to point the URLs to each.
Edit (more points):
If the sites share a machine, they share an IP address, so when using an IP address to access a site, you will not know whether you are on production or test.
When sharing the same machine, deployment can be tricky, you have to be extra careful not to deploy untested code to production (easier to do, since both live on the same machine).
The security considerations for production and test should be separate - this kind of setup makes it more difficult.
It'd be really hard to test environment updates (a new version of PHP/Perl/Python/Apache/the kernel/whatever) with test and production on the same machine.
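To make the DNS point concrete: the split is just two records pointing at two boxes. A hypothetical zone fragment (names and addresses invented for illustration):

; MySite.com zone fragment - production and test on separate machines
www    IN  A    203.0.113.10   ; production box
test   IN  A    203.0.113.20   ; test box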
It is a bad idea. When you have a new untested feature it may kill the production site.
Is compliance with any kind of standard an issue? Generally you want developers to have lots of access to test environments so they can resolve issues. However, it's not always a good idea (or even allowed) for developers to have the same level of access to production systems.
In theory, yes. When developing, there are a lot of things that could go awry, like @Oded mentioned. By having a dedicated webserver run your main site, you avoid the complexity of having duplicated databases, virtual hosts, etc. You could certainly make test.mysite.com publicly available, though.
As a customer, often times, the first thing I do is visit a company's website. If the site is inaccessible, even briefly, it looks unprofessional and I quickly lose interest. You do not want to lose business because you were too cheap to buy one extra computer!
Edit: I see from your comments above that this is indeed a business server. Answer updated.
"Good, bad, I'm the guy with the gun." - Ash
Bad is really a range. It can be anywhere from replacing motherboards with the power plugged in and wet hands to using excessively short variable names. What you really want to know is what the tradeoffs are. You obviously know some of the benefits or you wouldn't be thinking about using the production server for testing.
The big con is that the test code is running in a shared environment with production. If there is no sandbox (process limits, memory limits, disk limits, chroot file system, etc.) you risk impacting the production server if something goes awry in the testing. You may accidentally DoS yourself by consuming all of a particular resource. You may accidentally remove the production site. Someone may think it's okay to do a load test. If you are fine with taking those risks, then you can go ahead and run your test app on the production server.
BTW: It is bad.
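If you do take the risk anyway, at least give the test site its own application pool and cap it so a runaway test can't starve production. A rough sketch for IIS 7+, with the pool name invented and the attribute names written from memory, so verify them against your IIS version before relying on this:

%windir%\system32\inetsrv\appcmd set apppool "TestPool" /cpu.limit:30000 /cpu.action:KillW3wp
%windir%\system32\inetsrv\appcmd set apppool "TestPool" /recycling.periodicRestart.privateMemory:524288

The CPU limit is expressed in 1/1000ths of a percent (30000 = 30%) and the memory threshold in KB, so the second line recycles the pool once it passes roughly 512 MB of private memory.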
As your question is not platform specific, I'll try to answer in a general form. I'll also refer only to the "same machine" part of your question, since the "domain name" part should be very easy to change ... if all common precautions have been taken.
What you really need is to isolate environments. Depending on the technology used, that may mean "separate machines" or not.
As an example, a lot of small to medium banks in the world run their critical systems on one mainframe. It's not unusual for one of those beasts to cost (peripherals and all) six figures. Some of them opted to have separate, smaller machines for development and testing, while others run hundreds of environments (sometimes as VMs) on the same machine. The tricky detail is that the mainframe hardware and OS do provide real and consistent isolation between those environments, assigning disks, CPUs, comm channels, credentials, libraries, OS modules, DBs, etc., etc. based on a strict policy that can be as granular as you want.
The problem with many other platforms is that finding a way to isolate the environments is up to you, while on a dinosaur platform it is provided by the grace of HAL.
HTH!
I have been curious about better ways to cross-browser test than those screenshot services or maintaining my own array of VMs to VNC into. Then today I found crossbrowsertesting.com (which seems to allow you to connect from your browser via VNC to one of their machines running virtually any browser). This is really similar to a solution I had been thinking about, but veered away from for a few reasons. I have two questions about this service:
if you have used the service, what are its pros/cons?
how do they get around people doing all kinds of nasty things on their VMs, since they give you a full desktop to play around in?
Bonus: how do they get around the legal issues regarding people VNC'ing into Windows and using IE when the connecting clients clearly do not own the software?
Haven't used it, sorry, but the standard con for a remote service is that your test site has to be accessible on the web.
You secure the desktops with the tools you get with Windows Server; the ability to lock down a user has been around for a while, although it still needs work.
Bonus: You can licence Terminal Services for multiple users, we frequently use it on our "management" servers that allow all the technical staff to log onto one server in the environment and then connect from there to the production servers. We licence it for everyone to log into at once.
Apart from not being able to access local web sites (i.e. sites on your company's intranet or your hard drive), crossbrowsertesting.com may also have response-time issues. VNC is not a very efficient protocol, and working over VNC can be a pain.
I prefer tools that allow me to install all relevant browsers on my PC, such as BrowserSeal.
I'm getting pretty tired of my development box dying and then I end up having to reinstall a laundry list of tools that I use in development.
This time I think I'm going to set the development environment up on a Virtual Box VM and save it to an external HDD so that way I can bring the development environment back up quickly after I fix the real computer.
It seems like a good way to make a "hardware agnostic backup" and be able to get back up to speed quickly after a disaster.
Has anybody tried this? How well did it work? Did it save you time?
I used to virtualize all my development environments using VirtualBox.
Basically, I have a Debian VBox image file stamped on a DVD. When I have a new project I copy it to one of my external HDDs and customize it for the project.
Once the project is delivered, I copy the image from my external HDD to a blank DVD and file it.
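If you want that archived copy to be portable across VirtualBox hosts, exporting to an appliance is one option. A minimal sketch (the VM and file names are just placeholders); the .ova bundles the disk image and settings into one file you can burn or copy to the external drive:

VBoxManage export "DebianDev" -o debian-dev.ova
VBoxManage import debian-dev.ova --vsys 0 --vmname "ProjectX-Dev"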
I've done this with good success; we had this in our QA environment even, and we'd also make use of Undo disks, so that if we wanted to test, for example, Microsoft patches we could roll the box back to its previous state.
The only case where we had issues was with SQL Servers, particularly if you do a lot of disk activity. We had two VMs replicating gigs of data between each other, hosted on the same physical box. The disks just couldn't keep up; however, for all the other tiers it worked like a breeze.
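In current VirtualBox versions the same roll-back workflow can also be scripted with snapshots instead of Undo disks. A small sketch (the VM name is a placeholder): take the snapshot before applying patches, run your tests, then restore to get back to the clean state.

VBoxManage snapshot "QA-Win2008" take "pre-patch"
VBoxManage snapshot "QA-Win2008" restore "pre-patch"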
One cool idea I just saw a presentation on is using VirtualBox with your host running OpenSolaris with ZFS. That makes it easy to take a snapshot of your image(s), and roll back to the snapshot when things go wrong or when you want to restore to a known state for QA purposes.
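For example, with the VirtualBox images stored on a ZFS dataset (the pool/dataset name below is hypothetical), the snapshot and rollback are one-liners: snapshot while the state is known-good, then roll back after QA has broken things.

zfs snapshot tank/vbox@known-good
zfs rollback tank/vbox@known-good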
I keep all development on virtual machines. In a multi-developer shop this allows for rapid deployment of a new development environment if someone fries their VM (via service pack or whatever) and allows a new developer to join the project almost immediately.
K
I'm reading the question much differently than the rest of you guys. I read it as the OP asking about keeping an image of a fresh install as a VM, then, when a server needs to be redeployed, you can restore from a backup of the VM.
In this case, the VM is nothing more than a different way of maintaining an image of an OS install, and if it works, it's not a half bad idea, IMO.
In the companies I work with, I encourage the use of network installable operating systems. With the right up-front work you can configure a boot server on your office network which will install your base operating system, all the drivers you need for your hardware, and all the software you'll use. Not only will this bail you out in a disaster scenario where you lose a machine, but it makes deploying hardware for new employees trivial.
This is easier with Linux than it is with Windows or Mac, but the latter two can work in this manner too.
I use the same network install methods for deploying servers in a live environment too.
The Virtualisation approach isn't a bad answer to the same problem, but to me it doesn't seem quite as clean.
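For the curious, one common way to build such a boot server (a general illustration, not necessarily what the poster uses) is dnsmasq handing out PXE boot images; the TFTP root and address range below are placeholders:

dnsmasq --enable-tftp --tftp-root=/srv/tftp --dhcp-range=192.168.1.100,192.168.1.200,12h --dhcp-boot=pxelinux.0

From there the installer can pull a preseed/kickstart file that lays down your base OS, drivers and tool set unattended.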
That's not the way to go.
When you are developing you want to have many tools, some of which require a lot of computing power. Keep in mind that VirtualBox (IIRC, I couldn't find it on the VBox website) only emulates a PIV.
At the moment only one VM product simulates a dual-core CPU, and that's very new. This is important because there are race conditions that can only be seen on multi-CPU machines, so you want to test your code on multiple CPUs/cores.
I think a simpler and better thing to do is make a disk image of your system and configuration partitions, restore it once a month to keep a clean system, and restore it whenever your system gets messed up.
Now a quick word about Windows, since the other systems where I have done this are no problem: the partitions that you image should not be changed in between. That's not a problem for other OSes, but some brilliant person decided to put Profiles on Windows smack dab in the system files. I simply make it a point not to put anything in my Profile (or on my Desktop, which is in my Profile) that I'm not willing to lose.
I really know nothing about securing or configuring a "live" internet-facing web server and that's exactly what I have been assigned to do by management. Aside from the operating system being installed (and Windows Update), I haven't done a thing. I have read some guides from Microsoft and on the web, but none of them seem to be very comprehensive/up to date. Google has failed me.
We will be deploying a MVC ASP.NET site.
What is your personal checklist when you are getting ready to deploy an application on a new Windows server?
This is all we do:
Make sure Windows Firewall is enabled. It has an "off by default" policy, so the out of box rule setup is fairly safe. But it never hurts to turn additional rules off, if you know you're never going to need them. We disable almost everything except for HTTP on the public internet interface, but we like Ping (who doesn't love Ping?) so we enable it manually, like so:
netsh firewall set icmpsetting 8
Disable the Administrator account. Once you're set up and going, give your own named account admin rights. Disabling the default Administrator account helps reduce the chance (however slight) of someone hacking it. (The other common default account, Guest, is already disabled by default.)
Avoid running services under accounts with administrator rights. Most reputable software is pretty good about this nowadays, but it never hurts to check. For example, in our original server setup the Cruise Control service had admin rights. When we rebuilt on the new servers, we used a regular account. It's a bit more work (you have to grant just the rights necessary to do the work, instead of everything at once) but much more secure.
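For reference, the second and third points can be done from an elevated prompt roughly like this (the service and account names are placeholders, and note that sc really does want the space after obj= and password=):

net user Administrator /active:no
sc config CCService obj= ".\svc_cruise" password= "********"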
I had to lock down one a few years ago...
As a sysadmin, get involved with the devs early in the project; testing, deployment, operation and maintenance of web apps are all part of the SDLC.
These guidelines apply in general to any DMZ host, whatever the OS, Linux or Windows.
There are a few books dedicated to IIS 7 admin and hardening, but it boils down to:
Decide on your firewall architecture and configuration and review it for appropriateness. Remember to defend your server against internal scanning from infected hosts.
Depending on the level of risk, consider a transparent application-layer gateway to clean the traffic and make the web server easier to monitor.
Treat the system as a bastion host: lock down the OS, reduce the attack surface (services, ports, installed apps, i.e. NO interactive users or mixed workloads; configure firewalls/RPC to respond only to specified management, DMZ or internal hosts).
Consider SSH, OOB and/or management LAN access, and host IDS/integrity verifiers like AIDE, Tripwire or Osiris.
If the web server is sensitive, consider using Argus to monitor and record traffic patterns in addition to the IIS/firewall logs.
Baseline the system configuration and then regularly audit against the baseline, minimizing or controlling changes to keep it accurate. Automate it; PowerShell is your friend here.
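A minimal sketch of what that baseline/audit automation might look like in PowerShell (the paths are placeholders, and in practice you would baseline far more than services and hotfixes):

# capture the baseline once
Get-Service | Select-Object Name, Status | Export-Clixml C:\baseline\services.xml
Get-HotFix | Select-Object HotFixID | Export-Clixml C:\baseline\hotfixes.xml
# later, audit the live system against it
Compare-Object (Import-Clixml C:\baseline\services.xml) (Get-Service | Select-Object Name, Status) -Property Name, Status
Compare-Object (Import-Clixml C:\baseline\hotfixes.xml) (Get-HotFix | Select-Object HotFixID) -Property HotFixID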
The US NIST maintains a national checklist program repository. NIST, NSA and CIS have OS and web server checklists worth investigating even though they are for earlier versions. Look at the Apache checklists as well for configuration suggestions, and review the Addison-Wesley and O'Reilly Apache security books to get a grasp of the issues.
http://checklists.nist.gov/ncp.cfm
http://www.nsa.gov/ia/guidance/security_configuration_guides/web_server_and_browser_guides.shtml
www.cisecurity.org offers checklists and benchmarking tools for subscribers. Aim for a score of 7 or 8 at a minimum.
Learn from others' mistakes (and share your own if you make them):
Inventory your public-facing application products and monitor them in NIST's NVD (vulnerability database); they aggregate CERT and OVAL as well.
Subscribe to and read microsoft.public.inetserver.iis.security and Microsoft security alerts. (NIST NVD already watches CERT.)
Michael Howard is MS's code-security guru; read his blog (and make sure your devs read it too). It's at: http://blogs.msdn.com/michael_howard/default.aspx
http://blogs.iis.net/ is the IIS team's blog. As a side note, if you're a Windows guy, always read the team blog for the MS product groups you work with.
David Litchfield has written several books on DB and web app hardening. He is a man to listen to; read his blog.
If your devs (and sysadmins too!) need a gentle introduction to (or reminder about) web security, I recommend "Innocent Code" by Sverre Huseby. I haven't enjoyed a security book like that since The Cuckoo's Egg. It lays down useful rules and principles and explains things from the ground up. It's a great, strong, accessible read.
Have you baselined and audited again yet? (You make a change, you make a new baseline.)
Remember, IIS is a meta-service (FTP, SMTP and other services run under it). Make your life easier and run one service at a time on one box. Back up your IIS metabase.
If you install app servers like Tomcat or JBoss on the same box, ensure that they are secured and locked down too.
Secure the web management consoles to these applications, IIS included.
If you have to have a DB on the box too, this post can be leveraged in a similar way.
Logging: an unwatched public-facing server (be it HTTP, IMAP or SMTP) is a professional failure. Check your logs, pump them into an RDBMS and look for the quick, the slow and the pesky. Almost invariably your threats will be automated and boneheaded. Stop them at the firewall level where you can.
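As one way of mining IIS logs, Microsoft's free Log Parser tool speaks SQL against W3C log files and can also push the results into a database; the log path and query below are only illustrative:

logparser -i:IISW3C "SELECT TOP 20 c-ip, COUNT(*) AS Hits FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log GROUP BY c-ip ORDER BY Hits DESC"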
With permission, scan and fingerprint your box using p0f and Nikto. Test the app with Selenium.
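The scans themselves are one-liners (the host name and interface are placeholders, and again: only with permission):

nikto -h www.example.com -p 80,443
p0f -i eth0 -o /var/log/p0f.log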
Ensure web server errors are handled discreetly and in a controlled manner by IIS AND any applications. Set up error documents for 3xx, 4xx and 5xx response codes.
Now you've done all that, you've covered your butt and you can look at application/website vulnerabilities.
Be gentle with the developers; most only worry about this after a breach, when the reputation/trust damage is done and the horse has bolted and is long gone. Address it now; it's cheaper. Talk to your devs about threat trees.
Consider your response to DoS and DDoS attacks.
On the plus side, consider GOOD traffic/slashdotting and capacity issues.
Liaise with the devs and Marketing to handle capacity issues and server/bandwidth provisioning in response to campaigns, sales and new services. Ask them what sort of campaign response they're expecting.
Plan ahead with sufficient lead time to allow provisioning. Make friends with your network guys so you can discuss bandwidth provisioning at short notice.
Unavailability due to misconfiguration, poor performance or under-provisioning is also an issue. Monitor the system for performance, disk, RAM, HTTP and DB requests. Know the metrics of normal and expected performance (please God, is there an apachetop for IIS? ;)) and plan for appropriate capacity.
During all this you may ask yourself: "Am I too paranoid?" Wrong question; it's "Am I paranoid enough?" Remember and accept that you will always be behind the security curve, and that while this list might seem exhaustive, it is but a beginning. All of the above is prudent and diligent and should in no way be considered excessive.
Web servers getting hacked are a bit like wildfires (or bushfires here): you can prepare, and that will take care of almost everything except the blue-moon event. Plan for how you'll monitor and respond to defacement etc.
Avoid being a security curmudgeon or a security Dalek/Chicken Little. Work quietly with your stakeholders and project colleagues. Security is a process, not an event, and keeping them in the loop and gently educating people is the best way to get incremental payoffs in terms of security improvements and acceptance of what you need to do. Avoid being condescending, but remember: if you DO have to draw a line in the sand, pick your battles; you only get to do it a few times.
Profit!
Your biggest problem will likely be application security. Don't believe the developer when he tells you the app pool identity needs to be a member of the local administrator's group. This is a subtle twist on the 'don't run services as admin' tip above.
Two other notable items:
1) Make sure you have a way to backup this system (and periodically, test said backups).
2) Make sure you have a way to patch this system and ideally, test those patches before rolling them into production. Try not to depend upon your own good memory. I'd rather have you set the box to use windowsupdate than to have it disabled, though.
Good luck. The firewall tip is invaluable; leave it enabled and only allow tcp/80 and tcp/3389 inbound.
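With the newer netsh advfirewall syntax, that minimal inbound policy looks roughly like this (run from an elevated prompt; the rule names are arbitrary):

netsh advfirewall set allprofiles state on
netsh advfirewall firewall add rule name="Allow HTTP" dir=in action=allow protocol=TCP localport=80
netsh advfirewall firewall add rule name="Allow RDP" dir=in action=allow protocol=TCP localport=3389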
Use the roles accordingly; the fewer privileges you use for your service accounts, the better.
Try not to run everything as an administrator.
If you are trying to secure a web application, you should keep current with information on OWASP. Here's a blurb;
The Open Web Application Security Project (OWASP) is a 501c3 not-for-profit worldwide charitable organization focused on improving the security of application software. Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. You'll find everything about OWASP here on our wiki and current information on our OWASP Blog. Please feel free to make changes and improve our site. There are hundreds of people around the globe who review the changes to the site to help ensure quality. If you're new, you may want to check out our getting started page. Questions or comments should be sent to one of our many mailing lists. If you like what you see here and want to support our efforts, please consider becoming a member.
For your deployment (server configuration, roles, etc.), there have been a lot of good suggestions, especially from Bob and Jeff. For some time attackers have been using backdoors and trojans that are entirely memory based. We've recently developed a new type of security product which validates server memory (using techniques similar to how Tripwire (see Bob's answer) validates files).
It's called BlockWatch, primarily designed for use in cloud/hypervisor/VM type deployments, but it can also validate physical memory if you can extract it.
For instance, you can use BlockWatch to verify that your kernel and process address space code sections are what you expect (the legitimate files you installed to your disk).
Block incoming ports 135, 137, 138, 139 and 445 with a firewall. The built-in one will do. Windows Server 2008 is the first one for which using RDP directly is as secure as SSH.
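With the built-in firewall, a rough equivalent of that rule set is (the NetBIOS name/datagram services are UDP, the RPC/session/SMB ports are TCP):

netsh advfirewall firewall add rule name="Block RPC/SMB" dir=in action=block protocol=TCP localport=135,139,445
netsh advfirewall firewall add rule name="Block NetBIOS UDP" dir=in action=block protocol=UDP localport=137,138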