Monitoring system for build, source control and integration servers - language-agnostic

Do you know of any lightweight system that will monitor servers for disk space, CPU usage, and uptime/availability?
I'm talking mainly about DB, Subversion, Hudson, integration, QA, and build servers. The advanced server monitoring tools are all very hard to configure and use, so I'm looking for something simple.
Open source tools are preferable.

Nagios is very good. Super flexible and can monitor just about anything. It can also execute workflows when certain events/alerts happen. And it's free.
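To give a sense of how little is involved in extending it: Nagios treats any executable that prints a status line and exits with code 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN) as a check plugin. A minimal sketch of a disk-space check in Python (the mount point and thresholds are made-up defaults, not anything Nagios ships with):

```python
#!/usr/bin/env python3
"""Minimal Nagios-style disk-space check (illustrative sketch only).

Nagios interprets the exit code: 0 = OK, 1 = WARNING, 2 = CRITICAL,
3 = UNKNOWN. The mount point and thresholds are placeholder defaults."""
import shutil
import sys

MOUNT = "/"      # path to check; adjust per server
WARN_PCT = 80    # warn above 80% used
CRIT_PCT = 90    # critical above 90% used

try:
    usage = shutil.disk_usage(MOUNT)
    used_pct = 100.0 * usage.used / usage.total
except OSError as exc:
    print(f"DISK UNKNOWN - {exc}")
    sys.exit(3)

if used_pct >= CRIT_PCT:
    print(f"DISK CRITICAL - {used_pct:.1f}% used on {MOUNT}")
    sys.exit(2)
if used_pct >= WARN_PCT:
    print(f"DISK WARNING - {used_pct:.1f}% used on {MOUNT}")
    sys.exit(1)
print(f"DISK OK - {used_pct:.1f}% used on {MOUNT}")
sys.exit(0)
```

Wire it up with a command and service definition and Nagios will schedule it like any built-in check.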

We are using HostMonitor.
Simple to set up and use, and very reasonably priced.
HostMonitor can check any TCP service, ping a host, check a route, monitor Web, FTP, Mail, DNS servers. It can check the available disk space, monitor size of a file or folder, check integrity of your files and web site; it tests your SQL servers, monitors network traffic and much, much more.

Related

Simple ping from remote agent

I have been looking around to see if there is some simple, stand-alone(ish) agent/server setup that would allow a ping to be launched from a host with an agent on it. When I say "ping," I mean via ICMP echo and/or a TCP port check. I have Windows, Linux, and AIX systems that would get such an agent.
I would like to set up a central server with authentication that can issue pings from any device that has one of these agents. The primary use would be VPN testing, so that traffic can be initiated from a device that I don't necessarily have access to.
It seems that some monitoring software has this (e.g., Zabbix) but I don't want to go through the pain of installing a whole big piece of software like that just to get this functionality.
Almost all our AIX and Linux systems have Perl installed, so that could be a nice option if I had to write my own. I would rather find something "tried-and-true", though...
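For reference, the agent-side checks themselves are tiny. A rough sketch in Python (Perl would look much the same) of the two probe types described above, ICMP echo via the system ping binary and a plain TCP port check; the target address and port are placeholders:

```python
#!/usr/bin/env python3
"""Sketch of the two probe types described above: ICMP echo (delegating to
the system ping binary, since raw ICMP sockets need root) and a TCP port
check. The target host and port below are placeholders."""
import socket
import subprocess

def icmp_ping(host: str, count: int = 3) -> bool:
    """True if the host answers ICMP echo. Note that ping's flags vary by
    platform; -c (count) works on Linux/AIX, while Windows uses -n."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def tcp_check(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "10.0.0.1"  # placeholder VPN peer
    print("icmp:", icmp_ping(target))
    print("tcp/443:", tcp_check(target, 443))
```

The hard part, as the question implies, is not the probe itself but the central server, authentication, and agent distribution around it.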
I didn't realize that we already had SaltStack installed on almost all our servers (I'm a network guy, not a server guy). Once I talked to one of the server administrators, he showed me how this could be done using Salt.
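For reference, the Salt equivalent can be driven from the master either on the command line or through its Python API. A sketch, assuming a hypothetical minion ID of branch-vpn-host and using the network execution module (network.ping and network.connect), which wraps ping and a TCP connectivity test on the minion:

```python
#!/usr/bin/env python3
"""Sketch: issue probes from a Salt minion via the master's LocalClient.
Run on the Salt master; 'branch-vpn-host' and 10.0.0.1 are placeholders."""
import salt.client

local = salt.client.LocalClient()

# ICMP echo from the minion to a remote address
# (network.ping wraps the system ping command on the minion).
print(local.cmd("branch-vpn-host", "network.ping", ["10.0.0.1"]))

# TCP connectivity test from the minion (network.connect).
print(local.cmd("branch-vpn-host", "network.connect", ["10.0.0.1", 443]))
```

The same calls are available from the shell, e.g. `salt 'branch-vpn-host' network.ping 10.0.0.1`.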

Can I install MySQL on the VMs provided in Azure Cloud Services?

From what I gather, the only way to use a MySQL database with Azure Websites is to use ClearDB, but can I install MySQL on the VMs provided in Azure Cloud Services? And if so, how?
This question might get closed and moved to ServerFault (where it really belongs). That said: ClearDB provides MySQL-as-a-Service in Azure. It has nothing to do with what you can install in your own Virtual Machines. You can absolutely do a VM-based MySQL install (or any other database engine that you can install on Linux or Windows). In fact, the Azure portal even has a tutorial for a MySQL installation on OpenSUSE.
If you're referring to installing in web/worker roles: This simply isn't a good fit for database engines, due to:
the need to completely script/automate the install with zero interaction (which might take a long time). This includes all necessary software being downloaded/installed to the VM images every time a new instance is spun up.
the likely inability for a database cluster to cope with arbitrary scale-out (the typical use case for web/worker roles). Database clusters may or may not work well when a scale-out occurs (adding an additional VM). Same thing when scaling in (removing a VM).
less-optimal attached-storage configuration
inability to use Linux VMs
So, assuming you're still OK with Virtual Machines (vs stateless Cloud Service VMs): You'll need to carefully plan your deployment, with decisions such as:
Distro (Ubuntu, CentOS, etc.); see Azure's list of supported Linux distros
Selecting proper VM size (the DS series provide SSD attached disk support; the G series scale to 448GB RAM)
Azure Storage attached disks being non-Premium or Premium (premium disks are SSD-backed, durable disks scaling to 1TB/5000 IOPS per disk, up to 32 disks per VM depending on VM size)
Virtual network configuration (for multi-node cluster)
Accessibility of database cluster (whether your app is in the vnet or accesses it through a public endpoint; and if the latter, setting up ACLs)
Backup / HA / DR planning
Someone else mentioned using a pre-built VM image from VM Depot. Just realize that, if you go that route, you're relying on someone else to configure the database engine install for you. This may or may not be optimal for what you're trying to achieve. And the images may or may not be up-to-date with the latest versions, patches, etc.
Of course, what I wrote applies to any database engine you install in your own virtual machines, where a service provider (such as ClearDB) tends to take care of most of these things for you.
If you are talking about standard VMs, then you can use a pre-built image from VM Depot for that.
If you are talking about web or worker roles (PaaS), I wouldn't recommend it, but if you really want to, you could. You would need to fully script the install of the solution on the host. The only downside (and it's a big one) is that the instance will be moved to a new host at some point, which would mean your MySQL data files would be lost - if you backed up frequently and were happy to lose some data then this option may work for you.
I think the main question is "what do you want to achieve?". As I see it, you want to use a PaaS solution with Web Apps or Cloud Services and you need a MySQL database. If so, you have two options (both technically feasible, as David Makogon said). The first is to deploy your own (single) server with MySQL and connect to it from the outside (internet side). The second is to create a MySQL server or cluster and connect your application internally within an Azure virtual network. With Cloud Services this is simple, but with Web Apps it is not: you must create a VPN gateway in the Azure virtual network and connect your Web App to that gateway. This way you will have an internal connection from your application to your own MySQL cluster.
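To illustrate the first option (connecting over the public endpoint), the application side is just a normal MySQL connection to the VM's public address. A sketch using the pymysql driver; the hostname, port, and credentials are placeholders, and in practice you would lock the endpoint down with ACLs (or prefer the virtual-network option) and enable SSL:

```python
#!/usr/bin/env python3
"""Sketch: connect to MySQL running on an Azure VM via its public endpoint.
Hostname, port, and credentials are placeholders; restrict the endpoint
with ACLs (or use the vnet-internal option) before doing this in production."""
import pymysql

connection = pymysql.connect(
    host="mydbserver.cloudapp.net",  # placeholder public DNS name of the VM
    port=3306,                       # or whatever public port maps to 3306
    user="appuser",
    password="change-me",
    database="appdb",
    connect_timeout=10,
)
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())
finally:
    connection.close()
```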

Highly available LAMP-based web service to tolerate link failure

Recently I started a project aimed at decentralizing a Moodle e-learning web server to tolerate link failures. Here's a detailed description:
The connection here (a rural area in Africa) is fragile (60-70% uptime), which is the main problem in this project. Our goal is to enable students to access course content as much as possible.
Thus, I'm thinking of having a local server constantly caching web content to provide accessibility during downtime. However, due to the interactive nature of online learning (discussion boards, quizzes, etc.), the synchronization should be bi-directional between master and slaves. In addition, the slave server should provide transparency to end users, record all interactions locally, and update the master server once the link is recovered (race conditions and conflicts need to be resolved intelligently). These slave servers will be deployed on Raspberry Pis or other low-power platforms running on solar power. Load balancing would be a bonus.
In short, the system should share characteristics of a web cluster and database replication, but with an emphasis on disconnected operation. Weak consistency is acceptable.
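One way the "record locally, replay when the link recovers" part might look is a simple store-and-forward journal on the slave. A rough sketch only: it deliberately ignores conflict resolution, and push_to_master() is a placeholder for whatever sync API the master ends up exposing:

```python
#!/usr/bin/env python3
"""Rough sketch of a store-and-forward write journal for the slave server:
interactions are recorded locally in SQLite and replayed to the master when
the link comes back. Conflict resolution is deliberately left out, and
push_to_master() is a placeholder for the real sync mechanism."""
import json
import sqlite3
import time

JOURNAL = sqlite3.connect("journal.db")  # placeholder path on the slave
JOURNAL.execute(
    "CREATE TABLE IF NOT EXISTS pending "
    "(id INTEGER PRIMARY KEY, created REAL, payload TEXT)"
)

def record_locally(interaction: dict) -> None:
    """Called for every student interaction served during downtime."""
    JOURNAL.execute(
        "INSERT INTO pending (created, payload) VALUES (?, ?)",
        (time.time(), json.dumps(interaction)),
    )
    JOURNAL.commit()

def push_to_master(interaction: dict) -> bool:
    """Placeholder: send one interaction to the master, True on success."""
    return False  # replace with the real master-side API call

def replay() -> None:
    """Run periodically (e.g. from cron); drains the journal in order and
    stops at the first failure so nothing is lost while the link is down."""
    rows = JOURNAL.execute("SELECT id, payload FROM pending ORDER BY id")
    for row_id, payload in rows.fetchall():
        if not push_to_master(json.loads(payload)):
            break
        JOURNAL.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        JOURNAL.commit()
```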
I've been looking into these areas:
CODA file system
Content Delivery Network
Apache2 web server cluster
MySQL cluster
Most of them, however, focus mainly on scalability and increasing throughput, which are the current trends in networking but not the main concerns in my project.
I'm still struggling to find a suitable mechanism/schema and would appreciate any advice!
Thanks in advance!

Migrate from cPanel/WHM to Heroku or AWS

I have a dedicated server with WHM and cPanel installed on it.
Recently I decided to move to cloud services, since the dedicated server is costly and I'm not actually using any of its power, freedom, and functionality.
I was considering moving to AWS or Heroku since they are less expensive, scalable and I don't need to manage the server myself.
I only have a few websites on my server and I'm managing them via cPanel and WHM.
I'm only using a MySQL database.
I also have some cron jobs set up.
I use FTP to upload and maintain my websites (no git).
I was wondering if anyone could explain how I can transfer my files, databases, and domains to either AWS or Heroku.
I prefer the one that is easier and faster to migrate to.
Thanks.
If server/network management is not your strength, I would strongly advise against using AWS (even as big a proponent of AWS as I am). You absolutely must manage the servers yourself, at least the configuration aspect (not the hardware aspect). In fact, you will find that you have to do things like set up security policies, identity and access management, IP addresses, etc., which are not always intuitive to someone who isn't used to working in an operations capacity.
You will also likely have to consider application architecture changes to work best with AWS services. Additionally, you will have to become accustomed to the AWS way of doing things (for example, that starting and stopping server instances may make all your data go away, and so on).
If you are looking for a hands-off server approach, you might be better served looking at something like Slicehost/Rackspace.
I can't talk much to Heroku as I have only minimal experience prototyping on it. You can think of it more as an application platform. For simple applications that don't have unique traffic demands or architectural requirements, it seems a good solution for getting an application up and running with minimal server-related configuration. Again a legacy app will probably require some re-architecting to do things the Heroku way.
AWS is good, but the support at Rackspace is far better and much more suited for someone like you. Rackspace's support is 24/7, and even on their online chat system you don't need to wait more than a few minutes to speak to someone who actually knows what they are doing.

Is there an online web interface to manage Mercurial repositories?

My problem is quite simple:
I'm behind an aggressive proxy, a firewall, and every other humanly possible way to make a developer's life miserable, and I cannot clone a repository from Google Code or any other sort of online repository for that matter.
Question: is there an online tool that at least allows me to clone a Mercurial repository without having to use non-HTTP protocols?
I doubt you'll be able to get around your network's restrictions with just a tool on your university machine.
I asked a sysadmin friend about this, and together we came up with a few ideas. These are all rather vague because there really isn't enough information about the university network to give a clear-cut solution. However, they all require the help of another machine outside of the university network. Well, almost all of them.
Fork it
It may be possible to set up a repository and server on a computer outside that network that allows HTTP for pulling, especially if you already know which projects you want to clone. You can set up a scheduled job to pull from the original repositories to keep the forks up to date (a sketch of such a job is below).
If the university network is only blocking port 443 rather than the HTTPS protocol itself, and you can only set up the forking server for HTTPS, you can configure it for a port other than 443, such as 8080; and since this web server is special-purpose, you could even make it port 80.
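The scheduled job that keeps such forks current can be very small. A sketch in Python intended to be run from cron on the outside machine; the local paths and upstream URLs are made up, and hg serve or hgweb would then expose the same directories over plain HTTP:

```python
#!/usr/bin/env python3
"""Sketch of a cron-driven job that keeps the HTTP-accessible forks current.
The repository paths and upstream URLs below are placeholders."""
import subprocess

# local fork -> upstream repository it mirrors (placeholders)
FORKS = {
    "/srv/hg/project-a": "https://code.google.com/p/project-a/",
    "/srv/hg/project-b": "https://example.org/hg/project-b/",
}

for local_path, upstream in FORKS.items():
    # 'hg pull -R <repo> <source>' updates the fork in place; a failure
    # is reported but doesn't stop the remaining pulls.
    result = subprocess.run(
        ["hg", "pull", "-R", local_path, upstream],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"pull failed for {local_path}: {result.stderr.strip()}")
```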
Tunnel around
SSH, Telnet, Remote Desktop. Some repositories allow connections in addition to HTTPS, such as SSH. Not many that I've seen, though. But if the university network is not blocking certain remote-connection protocols, you may be able to use one of those to connect to a computer outside the network, clone/pull to that machine, and then to yours at the university. Or at the very least, copy it once you've cloned it.
Air beats fire
AKA Sneakernet. Clone them to portable storage outside of the university and carry it with you. Then plug it into the university computer and clone from there. There is a noticeable lag time, mind you.
Other storage variations probably exist as well, such as if the university gives you network storage space you can access outside of that network. You could zip the repository and upload it to that.
Machiavelli
Orchestrate events and manipulate people so that the sysadmin is replaced by a competent sysadmin who will lift the draconian, asinine measures that are currently in place. The other options are probably much easier. And safer.