Nginx vs Cherokee - configuration

Which is best for 5-10K concurrent connections?
Is anyone using Cherokee for huge web applications? (I mean giants like Google, IBM, etc.)

I personally tested 4 web servers (Apache 2.2, Cherokee 1.0.15, Lighttpd 1.4.26 and Nginx 0.7.65) and summarized results in this picture.
Cherokee vs other famous web servers
For the simulation, I used two machines connected by an Ethernet cable. The server machine had a Pentium dual-core CPU T4300 2.10GHz with 4GB RAM, while the client machine used for stressing web servers bore a Pentium M processor 2GHz with 1GB RAM. Both stations had Gigabit Ethernet interface.
The command used to stress-test the web servers was ab, and I created a small static file (100 bytes) to prevent a network bandwidth bottleneck and to show the performance of the web server software rather than the kernel.
ab [-k] -n 10000 -c <concurrency_level> http://<server_IP>/100.html
Here, the -k option turns on keep-alive, -n 10000 generates 10,000 HTTP requests, and -c sets how many requests are issued concurrently against the target web server.
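For example, a concrete run against the 100-byte file at a concurrency level of 500 could look like this (the document-root path, server IP, and concurrency level are placeholders, not values from the original test):

# create the 100-byte static file in the web server's document root
head -c 100 /dev/zero | tr '\0' 'x' > /var/www/html/100.html
# 10,000 keep-alive requests, 500 at a time
ab -k -n 10000 -c 500 http://192.168.0.2/100.html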

Related

Why is GCE VM with COS image faster?

I have two GCE instances: one with a COS image running CentOS 7 in a container (let's call it VM1), and another with a CentOS 7 image directly on it (let's call it VM2). Both of them run the same PHP app (Laravel).
VM1
Image: COS container with CentOS 7
Type: n1-standard-1 (1 vCPU, 3.75 GB)
Disk: persistent disk 10 GB
VM2
Image: CentOS 7
Type: n1-standard-1 (2 vCPUs, 3.75 GB)
Disk: persistent disk 20 GB
As you can see, VM2 has a slightly better spec than VM1. So it should perform better, right?
That said, when I request a specific endpoint, VM1 responds in ~1.6s, while VM2 responds in ~10s, more than six times slower. The endpoint does exactly the same thing on both VMs: it queries the database on a GCP Cloud SQL instance and returns the results. Nothing abnormal.
So it's almost the same hardware, the same guest OS, and the same app. The only difference is that VM1 is running the app via Docker.
I searched and tried to debug many things but have no idea what is going on. Maybe I'm misunderstanding something.
My best guess is that the COS image has some optimization that makes the app run faster, but I don't know what exactly. At first I thought it could be a disk I/O problem, but disk utilization looks fine on VM2. Then I thought it could be some OS configuration, so I compared the sysctl settings of both VMs, and there are a lot of differences there as well, but I'm not sure which of them could be the key to the optimization.
My questions are: why is there this difference, and what can I change to make VM2 as fast as VM1?
First of all, Container-Optimized OS is based on the open-source Chromium OS. It is not CentOS; it is essentially a different Linux distribution.
Having said that, you need to understand that this OS is optimized for running Docker containers.
Container-Optimized OS instances come pre-installed with the Docker runtime and cloud-init, and that is basically all this OS contains, because it is a minimal, container-optimized operating system.
So this OS does not waste resources on all the applications, libraries, and packages that CentOS ships with and that can consume extra resources.
I installed both OSs in my own project to check the disk usage of each one: Container-Optimized OS from Google uses only 740MB, while CentOS consumes 2.1GB.
$ hostnamectl | grep Operating
Operating System: Container-Optimized OS from Google
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.2G 740M 482M 61% /
$ hostnamectl | grep Operating
Operating System: CentOS Linux 7 (Core)
$ df -h
/dev/sda2 20G 2.1G 18G 11% /
I wasn't able to use a small persistent disk with CentOS; the minimum is 20 GB.
On the other hand, containers let your apps run with fewer dependencies on the host virtual machine (VM), run independently from other containerized apps that you deploy to the same VM instance, and make more efficient use of resources.
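If you want to rule out the guest OS itself, one quick sanity check is to run the same container image on VM2 (assuming Docker is installed there) and hit the same endpoint; the image name, port, and endpoint below are placeholders:

# run the same Laravel image on VM2 and compare response times
docker run -d --name laravel-test -p 80:80 my-laravel-image
curl -o /dev/null -s -w "total: %{time_total}s\n" http://localhost/endpoint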
I don't have much experience with GCP (just Azure and AWS), but this problem may be about latency, so you need to confirm that all your assets are in the same region.
You can try to measure the response time from each VM to your database; with that information you will be able to tell whether this is a latency issue or not.
https://cloud.google.com/compute/docs/regions-zones/
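One way to check is to time a trivial query against the database from both VMs and compare; a rough sketch, assuming the mysql client is installed and with the Cloud SQL IP, credentials, and endpoint as placeholders:

# run on VM1 and VM2; a large gap points at network latency rather than the OS
time mysql -h 10.20.0.3 -u appuser -p -e "SELECT 1;"
# or time the whole endpoint from a third machine
curl -o /dev/null -s -w "total: %{time_total}s\n" http://<vm_external_ip>/endpoint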

Data transfer between local server and AWS server

We have an ASP.NET MVC 5 web application that reads data locally from within the same server. This server is in Europe. However, when trying to read the same data from an AWS server based in Sydney, the lag is many times greater. A ping from our local server to the AWS server in Australia takes 5 seconds. The data needs to be located in Australia because of data protection laws issued by the Australian Government. The database is MySQL. We have created a VPN between both servers and it made no difference.
What are our options in order to improve the speed between these two servers?
If it is a web application serving content to users over the internet, you can use a CloudFront distribution to reduce your latency issues.
https://aws.amazon.com/cloudfront/
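As a starting point, a distribution can be created from the AWS CLI; this is only a minimal sketch (a production setup usually needs a full --distribution-config JSON), and the origin domain name is a placeholder for your Sydney-hosted application:

aws cloudfront create-distribution --origin-domain-name app.sydney.example.com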
If you are trying to connect servers in your data center to AWS, use AWS Direct Connect. This provides a dedicated link between your on-premises data center and the AWS servers, decreasing your latency considerably.
https://aws.amazon.com/directconnect/
AWS runs your application regardless of which platform (ASP.NET, Java, C, ...) it is built on; AWS only provisions infrastructure. You don't need to worry about the platform your application runs on or the database it connects to. You just need to ensure that all the network connections are properly open so that your servers can communicate with the AWS servers.

WordPress setup latency on Azure

I have a WordPress environment set up on Azure.
The front end is on a WebApp (size S2 - 2 cores & 3.5 GB RAM) whilst the DB is on 2 replicated Classic Virtual Machines (size F2 - 2 cores / 4 GB memory).
We also tried connecting the web app to the VMs over a point-to-site VPN, which in a nutshell is a VPN from one Azure service (the WebApp) to another (the VMs), so ultimately the connection is still being made over the internet.
I'm looking for ways to improve network latency between Azure's WebApp and Virtual Machines.
Firstly, if you're trying to "improve" the network latency then you have an issue somewhere else; please provide more details on your latency issue.
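To put a number on that latency, one option is the tcpping tool available in the WebApp's Kudu (SCM) debug console; the VM IP and MySQL port below are placeholders:

tcpping 10.0.0.4:3306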
You should be moving towards the ARM (Azure Resource Manager) deployment model now. If you want to improve performance, you can also try Azure Service Fabric.

Load testing an EC2 Node.js machine - now, how do I remotely load test 6,500 QPS?

OK, I have my server built on EC2. My stack is Nginx as a load balancer, supervisord managing the Node.js processes (one process per CPU), and Redis with master and slave on separate boxes. I have stress tested it by testing failover and taking services offline. Using Apache ab on the server itself, I can get up to 6,500 QPS.
Now I need to load test remotely. What are the best open source tools to accomplish this, or even the most cost-effective SaaS method to do it? I expect 6,500 QPS per server in production and need to extend the isolation of Apache ab to remote testing. For example, I will have servers in Singapore and I need to test 6,500 QPS from Japan, including the effect of latency. I am aware of Apache JMeter but am looking for a best-practice solution.
Thanks
I have successfully used JMeter for load testing at significant scale.
If a single load generation client cannot output enough load, you can configure JMeter with multiple load generation clients, with the load coordinated by a master instance.
Using "open source tools" implies that you have the ability to spin up servers in the zones you're interested in (e.g. Japan). If you locate a cloud provider in that region, you can spin up as many load generation instances as needed. You may, however, need quite a few instances depending on the network connectivity offered to individual instances. The nice thing about JMeter is that it can coordinate many load generation instances.
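As a rough sketch of that distributed mode (hostnames and file names are placeholders): start jmeter-server on each load generation instance, then drive them from the master in non-GUI mode:

# on each load generator (e.g. instances in Japan)
jmeter-server
# on the master: run the test plan against the remote generators and collect results
jmeter -n -t loadtest.jmx -R jp-gen-1,jp-gen-2 -l results.jtl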
You can use BlazeMeter as a SaaS solution. It is 100% JMeter compatible, and there is a Japan (Tokyo) load origin location, which is what you need.

Development Environment for Testing MySQL Replication

Is there an easy way to setup an environment on one machine (or a VM) with MySQL replication? I would like to put together a proof of concept of MySQL replication with one Master write instance and two slave instances for reads.
I can see doing it across 2 or 3 VMs running on my computer, but that would really bog down my system. I'd rather have everything running on the same VM. What's the best way to prove out scalability solutions like this in a local dev environment?
Thanks for your help,
Dave
I think that to truly test MySQL replication it is important to do so under realistic constraints.
If you put all the replica nodes under one operating system, then you no longer have the bandwidth constraint; the data transfer speed will be much higher than what you would get if those replica DBs were on different sites.
Putting everything under one VM is also a configuration shortcut; for instance, it does not make you go through the networking configuration.
I suggest you use multiple VMs, even if you have to put them on one physical machine; you can always configure the hypervisor to make the packets go through a router, in which case the I/O will be bound by whatever throughput the network interface has.
I can see doing it across 2 or 3 VMs running on my computer, but that would really bog down my system.
You can try making a few VMs with JeOS (Just Enough OS) versions of the operating system you want. I know Ubuntu has one, and it can boot on 128 MB of RAM, which makes it convenient to deploy lots of cloned VMs on one physical machine without monster RAM.
The next step would be doing the same thing on a cloud (Infrastructure as a Service, IaaS) provider and trying your setup across different geographical sites.
If what you're testing is machine-to-machine replication, then setting up multiple VMs on a virtual private network would be the correct environment to test it. If you use Ubuntu Server, you don't have to install more than you actually need -- just give the VMs enough space for a base install + MySQL + your data. Memory usage can be as little as 256MB per VM. All you have to do is suspend or shut down the VMs when you're not running a full-up test.
I've had situations where I was running 4 or more VMs simultaneously on my workstation, either for development or testing purposes -- it's not that taxing unless you're trying to do video rendering in each VM.
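For the replication setup itself, a minimal sketch (host names, credentials, and binlog coordinates are placeholders): give the master server-id=1 and log-bin=mysql-bin in its my.cnf, give each slave a unique server-id, then:

# on the master: create a replication user and note the binlog file/position
mysql -u root -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'secret'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'; SHOW MASTER STATUS;"
# on each slave: point it at the master and start replicating
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master-vm', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=154; START SLAVE;"

The same commands work whether the slaves live in separate VMs or as separate mysqld instances on different ports within one VM (adding MASTER_PORT as needed).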