AWS's Gateway Load Balancer "helps you easily deploy, scale, and manage your third-party virtual appliances". Reading up on virtual appliances, they sound an awful lot like containers. How are they different?
I'm looking for a scaling mechanism on an OpenStack cloud, and in my search I found OpenShift. My scenario is something like this: we have a distributed system with many agents running on many nodes. One node contains a Message Broker that directs the traffic. We want to monitor the Message Broker node, and if a queue is full, scale out the agent nodes that handle that queue. In brief, we monitor one node in order to scale other nodes.
We are using an OpenStack cloud now. In OpenStack, I found Heat and Ceilometer, which are able to create alarms and scale out nodes. However, the alarms are based only on general metrics like CPU, RAM, and network usage (not on information from inside the VM).
Then I searched for a layer above: PaaS. I found that OpenShift can handle scaling apps, but as far as I know, its scaling mechanism is to duplicate the app based on network traffic and put an HAProxy in front.
Am I right that OpenShift can't monitor software-specific data? Is there any other tool that suits our scenario?
You can try using this script (https://github.com/openshift/origin-server/blob/master/cartridges/openshift-origin-cartridge-haproxy/usr/bin/haproxy_ctld.rb) to control how your gears are scaled, but I believe that it is still experimental. Make sure that you read through all of the comments and understand what you are doing before making any changes. You might also consider spinning up a second scaled application to test this on before messing with your production application.
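If that script turns out to be too limited, the core of a roll-your-own controller for the scenario in the question is small enough to sketch. The example below is just an illustration, not OpenShift-specific: it assumes the broker exposes a RabbitMQ-style management API, and the scale-agents.sh hook is a hypothetical stand-in for whatever actually adds or removes agent nodes (a Heat stack update, an OpenShift scaling call, etc.).

```python
# Rough sketch of the "monitor one node, scale others" idea from the question.
# Assumptions (not from the original post): the broker exposes a RabbitMQ-style
# management API on port 15672, and scale_out()/scale_in() wrap whatever
# mechanism you actually use -- the script they call is a hypothetical placeholder.
import time
import subprocess
import requests

BROKER_API = "http://broker.example.com:15672/api/queues/%2F/work-queue"
HIGH_WATER = 1000   # messages queued before we add an agent
LOW_WATER = 50      # messages queued before we remove one

def queue_depth():
    resp = requests.get(BROKER_API, auth=("monitor", "secret"), timeout=5)
    resp.raise_for_status()
    return resp.json()["messages"]

def scale_out():
    # Placeholder: replace with your real scale-out hook
    # (e.g. a Heat stack update or an OpenShift scaling call).
    subprocess.run(["./scale-agents.sh", "up"], check=True)

def scale_in():
    subprocess.run(["./scale-agents.sh", "down"], check=True)

if __name__ == "__main__":
    while True:
        depth = queue_depth()
        if depth > HIGH_WATER:
            scale_out()
        elif depth < LOW_WATER:
            scale_in()
        time.sleep(30)
```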
I've got a few HTML pages with the requisite images, CSS and other bits and pieces; it's all static content, no CGI required. I currently host it on an Amazon EC2 image that I need to have up and running for a different application. Ideally I'd like to move the hosting of the static content off the EC2 image so that it's independent of any single EC2 instance. I'd like to host it on one of the free, or at least pay-as-you-go, cloud options.
The options I've come across are:
Windows Azure: in this case I haven't been able to get .html pages working, and even if it is possible, would it mean I'd have to update the whole Windows Azure app every time I needed to update an image? Or is there an easy way static web content could be served up from Azure blobs?
Amazon S3: I think I'd have to put fully qualified URLs into each HTML page for each image, CSS file, etc., but that wouldn't be too bad. This seems like a reasonable option.
Google App Engine: I've only spent 10 minutes looking at it, but it seems like it would work as well.
WordPress: I could just incorporate the HTML into a WordPress blog, but I find the themes a little too restrictive; pages can only be so wide, etc.
Is there an easier way?
Update:
After some further investigation the two best ways I found are the S3 approach as described by Sug and Windows Azure Blob storage (rather than a Windows Azure service).
The difference between S3 and Azure Blobs is how the CNAME can be managed:
For S3 you'll end up with a CNAME like mybucket.mydomain.com
For Azure you'll end up with a CNAME like *.mydomain.com where * represents whatever you like. To access blobs the path is then *.mydomain.com/container/.
So S3 dictates the CNAME host but gives full flexibility on the resource path. Azure gives full flexibility on the CNAME host but dictates the first part of the resource path.
For serving only static files, using services like App Engine or Azure would be overkill.
The simplest solution would be to use AWS S3:
1) No coding required.
2) Pay-as-you-go pricing.
3) You can easily map a bucket to your own domain or subdomain.
4) There are free client tools to manage your buckets as if they were a dead-simple filesystem.
I personally use S3Fox, but there are many others (BucketExplorer is another example).
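If you'd rather script the upload than use a GUI client, a few lines of boto3 will do it. This is only a sketch: the bucket name and directory are made up, the bucket is assumed to already exist, and newer buckets may block public ACLs by default, in which case you'd use a bucket policy instead.

```python
# Minimal sketch of pushing a folder of static files to an existing S3 bucket.
# For CNAME-style hosting the bucket name has to match the hostname
# (e.g. "static.mydomain.com").
import mimetypes
from pathlib import Path
import boto3

BUCKET = "static.mydomain.com"  # hypothetical bucket/host name
SITE_DIR = Path("site")         # hypothetical local folder of static files

s3 = boto3.client("s3")

# Serve index.html for the root URL.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

for path in SITE_DIR.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(SITE_DIR))
        content_type = mimetypes.guess_type(str(path))[0] or "application/octet-stream"
        # Note: buckets created recently may block public ACLs; use a bucket
        # policy instead if this call is rejected.
        s3.upload_file(
            str(path), BUCKET, key,
            ExtraArgs={"ContentType": content_type, "ACL": "public-read"},
        )
```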
“S3 dictates the CNAME host”
Amazon has a CDN service called CloudFront, that uses an S3 bucket for storage. You only pay for S3 data transfer (I think).
Your bucket contents are copied to Amazon's CDN, meaning super-fast access from around the world. However, because it's a CDN, files are automatically cached for a long time (so there's a delay when renaming or deleting files).
Just using an S3 bucket, and setting up another domain to point to the bucket via a CNAME, might be the best idea.
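If you do use CloudFront and need a renamed or deleted file to drop out of the cache sooner, you can send an invalidation request. A minimal boto3 sketch, with a placeholder distribution ID and path:

```python
# Sketch of forcing CloudFront to drop its cached copy of a file after you
# rename or delete it in the bucket.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # hypothetical distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/logo.png"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```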
For simple sites like this, I've had good experiences with NearlyFreeSpeech.NET.
GitHub Pages. You just need to know Git basics: check out the gh-pages branch and put the static content there. It will be available at http://your-name.github.io/your-project/
For example, this is my project's file.
What's quicker: serving a static HTML file from the filesystem or from MemCache?
Also, is there scaling and/or other concerns I should be aware of?
It depends on the site. I'm sure that if you benchmarked a small static web page served straight from disk against a database-powered, memcached site, the former would be "quicker", but this can totally differ depending on the variables at hand; there are just too many factors to take into account to give you a simple yes or no answer.
Like any performance-related issue: benchmark. It's highly dependent on architecture, server setup, network, disk, etc. This question sounds simple enough to benchmark in a few minutes with a load-testing tool.
It depends on whether the filesystem is local or over the network. It also depends on what your network connection speeds are.
The results will also change based on how the file is used and whether or not the web servers are in a cluster (and whether the individual web servers need to generate the file once and then cache it).
I'd be willing to bet that serving a local file from the filesystem is going to be faster than using Memcache to serve the file (especially if it's a fast SATA drive) -- simply because you're cutting out the network layer of the equation.
Even when installed locally, your app would need to use the network stack to access Memcache, and that's going to involve some overhead.
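If you want a number for your own setup rather than an argument, the comparison is easy to benchmark directly. The sketch below assumes a local memcached on the default port, the pymemcache client, and a placeholder index.html; a real test would go through your web server, so treat this as a rough lower bound on the difference.

```python
# Quick-and-dirty benchmark: read a static file from disk vs. fetch the same
# bytes from a local memcached instance.
import time
from pymemcache.client.base import Client

FILE = "index.html"   # placeholder static file
ITERATIONS = 10_000

with open(FILE, "rb") as f:
    body = f.read()

mc = Client(("localhost", 11211))
mc.set("index", body)

start = time.perf_counter()
for _ in range(ITERATIONS):
    with open(FILE, "rb") as f:
        f.read()
disk = time.perf_counter() - start

start = time.perf_counter()
for _ in range(ITERATIONS):
    mc.get("index")
cache = time.perf_counter() - start

print(f"disk:     {disk:.3f}s for {ITERATIONS} reads")
print(f"memcache: {cache:.3f}s for {ITERATIONS} reads")
```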
I'm developing on a super-fast fibre-optic connection.
I want a tool that allows me to test web sites at certain preset speeds. For example, I want to feel the experience of my site loading at modem speeds, then perhaps 1 Mbps, 2 Mbps, etc.
Basically, I want to be able to set the speed of the connection so that I get a real feel for the site loading remotely from other countries and connections.
Anyone know of such a tool?
WANEM is a nice open-source solution that can simulate network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
It also supports a mode of operation that only uses one network-interface, which makes it super quick to set up a test environment.
EDIT
Although WANEM is a Linux application, you only need to burn the bootable CD and start a machine with that CD; there's no need to sacrifice a machine to run WANEM. If even that's not an option, you can also download it as a virtual appliance that runs in VMware Workstation ($$), VMware Player (free) or VMware Server (free).
However, in my opinion (based on real usage of such products), it's really easier to have the "network simulator" on a separate machine instead of loading it on either the server or the client under test. And as explained above, thanks to the bootable CD option, that can be any machine you have lying around; we typically use decommissioned desktops and notebooks for this purpose.
There are a lot of tools out there, like:
http://www.netlimiter.com/
http://www.antamediabandwidth.com/
...
Basically, most of them work like proxies.
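To make the "works like a proxy" point concrete, here is a toy throttling TCP proxy. The addresses and the 56k figure are placeholders, only the downstream direction is throttled, and real tools do this far more carefully.

```python
# Toy illustration: a TCP proxy that relays traffic to an upstream host but
# caps the downstream rate. Not production quality -- no TLS, crude throttling.
import socket
import threading
import time

LISTEN = ("127.0.0.1", 8080)        # where the browser connects
UPSTREAM = ("example.com", 80)      # placeholder upstream site
BYTES_PER_SEC = 56_000 // 8         # pretend to be a 56k modem

def pipe(src, dst, limit=None):
    """Copy bytes from src to dst, optionally sleeping to cap throughput."""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
        if limit:
            time.sleep(len(chunk) / limit)  # crude rate limiting
    dst.close()

def handle(client):
    upstream = socket.create_connection(UPSTREAM)
    # client -> upstream at full speed, upstream -> client throttled
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client, BYTES_PER_SEC)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```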
I'm getting pretty tired of my development box dying and then I end up having to reinstall a laundry list of tools that I use in development.
This time I think I'm going to set the development environment up on a VirtualBox VM and save it to an external HDD, so that I can bring the development environment back up quickly after I fix the real computer.
It seems to be like a good way to make a "hardware agnostic backup" and be able to get back up to speed quickly after a disaster.
Has anybody tried this? How well did it work? Did it save you time?
I used to virtualize all my development environments using VirtualBox.
Basically, I have a Debian VirtualBox image file burned to a DVD. When I have a new project, I copy it to one of my external HDDs and customize it for that project.
Once the project is delivered, I copy the image from my external HDD to a blank DVD and file it away.
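If you want to script the copy-and-archive step instead of doing it by hand, VBoxManage can drive it. A rough sketch, with made-up VM names and paths:

```python
# Sketch of scripting the clone-and-archive workflow with VBoxManage.
# Exporting to an .ova gives you a single portable file you can park on an
# external HDD or burn to a DVD.
import subprocess

BASE_VM = "debian-dev-base"          # hypothetical base VM name
PROJECT_VM = "debian-dev-projectX"   # hypothetical clone name

# Clone the base image for a new project.
subprocess.run(
    ["VBoxManage", "clonevm", BASE_VM, "--name", PROJECT_VM, "--register"],
    check=True,
)

# ...work on the project inside PROJECT_VM...

# When the project ships, export it as a single appliance file and archive it.
subprocess.run(
    ["VBoxManage", "export", PROJECT_VM, "-o", f"/mnt/external/{PROJECT_VM}.ova"],
    check=True,
)
```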
I've done this with good success; we even had this in our QA environment, and we'd also make use of undo disks, so that if we wanted to test, for example, Microsoft patches, we could roll the box back to its previous state.
The only case where we had issues was with SQL Server, particularly if you do a lot of disk activity. We had two VMs replicating gigabytes of data between each other, hosted on the same physical box, and the disks just couldn't keep up; however, for all the other tiers it worked like a breeze.
One cool idea I just saw a presentation on is using VirtualBox with your host running OpenSolaris with ZFS. That makes it easy to take a snapshot of your image(s) and roll back to the snapshot when things go wrong, or when you want to restore to a known state for QA purposes.
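The ZFS part of that workflow is only a couple of commands; here is a small sketch of wrapping them, with a made-up dataset name:

```python
# Sketch of the ZFS snapshot/rollback workflow described above, driven from
# Python. The dataset "tank/vms" and snapshot name are placeholders; the zfs
# commands themselves are standard.
import subprocess

DATASET = "tank/vms"        # hypothetical dataset holding the VirtualBox images
SNAPSHOT = f"{DATASET}@clean"

def snapshot():
    subprocess.run(["zfs", "snapshot", SNAPSHOT], check=True)

def rollback():
    # Reverts the dataset to the snapshot; -r also removes any newer snapshots.
    subprocess.run(["zfs", "rollback", "-r", SNAPSHOT], check=True)
```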
I keep all development on virtual machines. In a multi-developer shop this allows for rapid deployment of a new development environment if someone fries their VM (via service pack or whatever) and allows a new developer to join the project almost immediately.
I'm reading the question much differently than the rest of you guys. I read it as the OP asking about keeping an image of a fresh install as a VM; then, when a server needs to be redeployed, you can restore from a backup of the VM.
In this case, the VM is nothing more than a different way of maintaining an image of an OS install, and if it works, it's not a half bad idea, IMO.
In the companies I work with, I encourage the use of network installable operating systems. With the right up-front work you can configure a boot server on your office network which will install your base operating system, all the drivers you need for your hardware, and all the software you'll use. Not only will this bail you out in a disaster scenario where you lose a machine, but it makes deploying hardware for new employees trivial.
This is easier with Linux than it is with Windows or Mac, but the latter two can work in this manner too.
I use the same network install methods for deploying servers in a live environment too.
The Virtualisation approach isn't a bad answer to the same problem, but to me it doesn't seem quite as clean.
That's not the way to go.
When you are developing you want to have many tools, some of which require a lot of computing power. Keep in mind that VirtualBox (IIRC; I couldn't find it on the VirtualBox website) only emulates a Pentium 4 (PIV).
At the moment only one VM product simulates a dual-core CPU, and that's very new. This is important because there are race conditions that can only be seen on multi-CPU machines, so you want to test your code on multiple CPUs/cores.
I think a simpler and better thing to do is to make a disk image of your system and configuration partitions, restore it once a month to keep a clean system, and restore it whenever your system gets messed up.
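For what it's worth, the image/restore cycle can be scripted with plain dd. This is only a sketch with placeholder device and image paths; double-check them, since dd will happily overwrite the wrong disk, and restore a running system partition only from rescue media.

```python
# Rough sketch of the image/restore idea with plain dd.
import subprocess

SYSTEM_PART = "/dev/sda1"                 # hypothetical system partition
IMAGE = "/mnt/backup/system-clean.img"    # hypothetical image location

def make_image():
    subprocess.run(["dd", f"if={SYSTEM_PART}", f"of={IMAGE}", "bs=4M"], check=True)

def restore_image():
    # Boot from rescue media before restoring the partition you normally run on.
    subprocess.run(["dd", f"if={IMAGE}", f"of={SYSTEM_PART}", "bs=4M"], check=True)
```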
Now, a quick word about Windows, since the other systems where I have done this are no problem: the partitions that you image should not be changed in between. That's not a problem for other OSes, but some brilliant person decided to put Profiles on Windows smack dab in the system files. I simply make it a point not to put anything in my Profile (or on my Desktop, which is in my Profile) that I'm not willing to lose.