For logistical reasons, I've started using a database server that is physically located far from the main Apache server (not in the same farm, no internal network, not even in the same country).
I understand this isn't recommended but, as I said, logistical reasons.
The question is: how can I measure how problematic this really is for my specific architecture? Is there a way to break a query request down into network time and actual processing time? Also, if you had to estimate, how much of a delay in page load times would you expect such a distant database server to cause?
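One way to get that breakdown without special tooling is to time a trivial query (whose cost is almost pure network round trip) against a real one. A minimal sketch, assuming Python with the pymysql driver and placeholder host, credentials, and table names:

import time
import pymysql

# Placeholders: swap in your real host, credentials, and one of your real queries.
conn = pymysql.connect(host="db.example.com", user="app", password="secret", database="appdb")

def timed_ms(cursor, sql):
    start = time.perf_counter()
    cursor.execute(sql)
    cursor.fetchall()
    return (time.perf_counter() - start) * 1000.0

with conn.cursor() as cur:
    # A trivial query: its cost is almost entirely the network round trip.
    rtt = min(timed_ms(cur, "SELECT 1") for _ in range(10))
    # One of the queries your pages actually run (hypothetical table here).
    full = timed_ms(cur, "SELECT id, title FROM posts ORDER BY created DESC LIMIT 20")

print("round trip ~%.1f ms, full query %.1f ms, server work + transfer ~%.1f ms"
      % (rtt, full, full - rtt))

Multiply the round-trip figure by the number of queries a page runs and you have a rough lower bound for the damage: thirty queries over a 100 ms link is already three seconds of network time before the database does any work.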
Related
I am curious to know the overall effect on performance of having my database server separate from my Tomcat server (spinning up a MySQL server on Amazon). I am actually having some performance issues and am not sure whether this might be the cause.
Yes, absolutely. I have found that separating the DB and application can actually uncover performance issues that are not evident in a co-located situation, for the network latency reasons mentioned by ck1. In fact, if you capture stack traces by sampling during the slow operations, they will point to the database/application code that is sensitive to network latency. The use cases with performance issues (in non-co-located apps) generally make a lot of round trips to the database. Instead, try offloading the processing into the DB with a more complex query and reducing the number of rows returned.
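As a concrete illustration of the round-trip point, a sketch (pymysql driver, hypothetical orders table): the first version drags every row across the slow link, the second makes the database do the work and ships back a single row.

import pymysql

conn = pymysql.connect(host="db.example.com", user="app", password="secret", database="appdb")

with conn.cursor() as cur:
    # Chatty: every matching row crosses the network before the app sums it.
    cur.execute("SELECT amount FROM orders WHERE customer_id = %s", (42,))
    total = sum(amount for (amount,) in cur.fetchall())

    # Offloaded: the database aggregates and only one row comes back.
    cur.execute("SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = %s", (42,))
    (total,) = cur.fetchone()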
Pros of having database and app servers co-located:
Network latency will be minimized
You only need to maintain a single server
Cons of co-location:
The app and database servers will contend for a common set of CPU, memory, and disk I/O resources. For example, queries causing a spike in CPU usage will affect the app server's performance
It's difficult to scale horizontally
Apologies for the fairly generic nature of the question - I'm simply hoping someone can contribute some suggestions and/or ideas as I'm out of both!
The background:
We run a fairly large (35M hits/month, peaking around 170 connections/sec) site which offers free software downloads (strictly legal) and which is written in ASP.NET 2 (VB.NET :( ). We have 2 web servers sat behind a dedicated hardware load balancer; both are fairly chunky machines running Windows Server 2012 Pro 64-bit and IIS 8. We serve extensionless URLs by using a custom 404 page which parses out the requested URL and Server.Transfers appropriately. Because of this particular component, we have to run in classic pipeline mode.
DB-wise we use MySQL and have two replicated DBs; reads are mainly done from the slave. DB access is via a DevArt library and is extensively cached.
The Problem:
We recently (in the past few months) moved from older servers running Windows Server 2003 and IIS 6. In the process, we also upgraded the DevArt component and MySQL (to 5.1). Since then, we have suffered intermittent scalability issues, which have become significantly worse as we have added more content. We recently increased the number of programs from 2000 to 4000, and this caused response times to increase from under 300ms to over 3000ms (measured with New Relic). To my mind this points to either a bottleneck in the DB (relatively unlikely, given the extensive caching and what DB monitoring shows) or a badly written query or code problem.
We also regularly see spikes which seem to coincide with cache refreshes, which could support the badly written query argument; unfortunately, all caching expires x minutes from retrieval, so the refreshes can't always be pinpointed accurately.
All our caching uses locks (as in What is the best way to lock cache in asp.net?), so it could be that one specific operation is taking a while and backing up requests behind it.
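For reference, the pattern behind that linked answer is double-checked locking around the cache entry; here is the shape of it as a language-neutral sketch in Python (not your ASP.NET code, and the fetch callback and TTL are placeholders):

import threading
import time

_cache = {}                 # key -> (value, expires_at)
_lock = threading.Lock()

def get_cached(key, ttl_seconds, fetch):
    now = time.time()
    entry = _cache.get(key)
    if entry is not None and entry[1] > now:   # fast path, no lock taken
        return entry[0]
    with _lock:                                # only one caller rebuilds the entry
        entry = _cache.get(key)                # re-check after acquiring the lock
        if entry is not None and entry[1] > now:
            return entry[0]
        value = fetch()                        # the slow part, e.g. a DB query
        _cache[key] = (value, now + ttl_seconds)
        return value

The relevant point for your symptom: while fetch() is running, every other request for that key (and, with a single global lock like this, for any key) queues behind it, which is exactly the "one slow operation backing up requests" behaviour you describe. Per-key locks reduce that contention.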
The problem is... I can't find it!! Can anyone suggest, from experience, some tools or methods? I've tried load testing, I've profiled the code, I've read through it line by line... New Relic Pro was doing a good job for us, but the trial expired and for political reasons we haven't purchased a full licence yet. Maybe WinDbg is the way forward?
Looking forward to any insight anyone can add :)
It is not a good idea to guess at a solution; things could get painful or expensive quickly. You really should start with some standard/common triage techniques and make an educated decision.
The standard process for troubleshooting performance problems on a data-driven app goes like this:
Review DB indexes (unlikely) and tune as needed.
Check resource utilization: CPU, RAM. If your CPU is maxed out, then consider adding/upgrading the CPU, optimizing code, or splitting your tiers. If your RAM is maxed out, then consider adding RAM or splitting your tiers. I realize that you just bought new hardware, but you also changed the OS and IIS, so all bets are off. Take the 10 minutes to confirm that you have enough CPU and RAM so you can confidently eliminate those from the list (a quick sampling script is sketched after this list).
Check HDD usage: if your disk queue length goes above 1 very often (more than once per 10 seconds), upgrade disk bandwidth or scale out your disks (RAID, multiple MDF/LDFs, DB partitioning). Check this on each MySQL box.
Check network bandwidth (very unlikely, but check it anyway)
Code: a) Consider upgrading to .NET 3.5 (or above); it was designed for better scalability and has much better options for caching. b) Use newer/improved caching. c) Pick through the code for expensive queries and heavy DB usage. I have had really good experiences with RedGate ANTS, but equivalent products work well too.
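For the CPU/RAM and disk checks above, a quick way to sample the counters while the site is under load is a small script like this (a sketch; it assumes the psutil package and reports raw I/O counts rather than the perfmon Avg. Disk Queue Length counter):

import time
import psutil

prev = psutil.disk_io_counters()
while True:
    cpu = psutil.cpu_percent(interval=1)       # blocks for the one-second sample
    ram = psutil.virtual_memory().percent
    cur = psutil.disk_io_counters()
    reads = cur.read_count - prev.read_count
    writes = cur.write_count - prev.write_count
    prev = cur
    print("cpu %5.1f%%  ram %5.1f%%  disk %d reads/s %d writes/s" % (cpu, ram, reads, writes))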
And then things get more specific to your architecture, code and platform.
There are also some locking mechanisms for the Application variable, but they are rarely the cause of lockups.
You might want to keep an eye on your pool recycle statistics. If you have a memory leak (or connection leak, etc) IIS might seem to freeze when the pool tops-out and restarts.
The networking team has flagged our Ruby on Rails application as one of the top producers of network traffic on our network, specifically from packet traffic between the app server and the database server (mysql).
What are the recommended best practices to reduce traffic between a Rails app and the database? Persistent database connections?
Is it an actual problem, or do they ding the top 3 db consumers no matter what? Check your logs or have them supply you with a log of queries that they think are problematic.
Beyond that, check to see if you're doing bad things like making model calls from your views in loops. Your logs should tell you what's going on here: if you see each partial paired with a query every time it's rendered, that's a big sign that your logic should be pulled back into the models and controllers.
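In Rails the usual fix is eager loading (includes) instead of touching associations inside the view loop; the shape of the problem is the same in any stack. A sketch with a plain DB-API cursor (pymysql here, hypothetical posts/comments tables):

import pymysql

conn = pymysql.connect(host="db.example.com", user="app", password="secret", database="appdb")

with conn.cursor() as cur:
    # N+1 pattern: one query for the list, then one query per row from inside the view loop.
    cur.execute("SELECT id, title FROM posts WHERE published = 1")
    posts = cur.fetchall()
    for post_id, _title in posts:
        cur.execute("SELECT body FROM comments WHERE post_id = %s", (post_id,))
        cur.fetchall()

    # Batched: one extra query in total, fetched up front and grouped in the application.
    ids = [post_id for post_id, _title in posts]
    if ids:
        placeholders = ", ".join(["%s"] * len(ids))
        cur.execute("SELECT post_id, body FROM comments WHERE post_id IN (%s)" % placeholders, ids)
        comments = {}
        for post_id, body in cur.fetchall():
            comments.setdefault(post_id, []).append(body)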
Fire up Wireshark or another network scanner and look for the biggest packets or small packets that are too frequent - to identify the specific, troublesome queries.
Then, before even considering caching, check if that query can really be cached or if it just pulls too much data you are not using.
At this point, there are too many different possible causes, each with its own recommended practices.
I was benchmarking my production server (it's in beta) and the results were poor, to say the least. On pages without any dynamic content, 1000 requests with a concurrency of 1 returned 73 requests/sec.
When I start to add MySQL queries to the equation, things quickly spiral out of control. The same 1000 requests on my homepage produce the following results:
CPU spikes to 50%
Load spikes to 3.7 (though that doesn't always happen)
Complete requests: 1000
Failed requests: 0
Write errors: 0
Requests per second: 2.44
Transfer rate: 113.26 [Kbytes/sec]
90% of requests are served within 142ms.
95% of requests are served within 3531ms (it just keeps getting worse after that).
Taking a look at top while I run the benchmark:
The mysqld process consumes roughly 7% of memory and 2.5% of CPU
Apache seems to spawn up to 7 concurrent processes at times
At other points, Apache does not show up in top at all
I'm running prefork Apache on a micro AWS instance (Ubuntu). I'll upgrade to a larger instance, but I worry that there is an underlying problem here with the code or my Apache setup.
I am deploying Django with mod_wsgi, and I set KeepAliveTimeout to 3 just in case a couple of slow processes were screwing me up.
My code for the homepage is seemingly straightforward, though it requires joins.
def index(request):
    # Latest posts that have a photo attached (the photo filter forces a join)
    posts = Post.objects.filter(photo__isnull=False).order_by('date').distinct()[0:7]
    # Open houses whose post has a photo, again joining through the related tables
    ohouses = Open_House.objects.filter(post__photo__isnull=False).order_by('day').distinct()[0:4]
    return render_to_response("index.html", {'posts': posts, 'ohouses': ohouses},
                              context_instance=RequestContext(request))
I have left the default configuration in place for MySQL.
Could this all be attributable to running a Micro Instance? Could my instance be somewhat corrupted? Any other plausible explanations?
There's a ton that goes into quick response times. Django is pretty optimized for what it is, but relying on a framework alone will never get you where you want to be.
If you're going to use Apache, use the prefork MPM, and even then disable all modules you don't absolutely need. Apache can be made to run fast, but it's not the fastest horse out there. You'll do better with something like Nginx or (cringe) Cherokee. Cherokee is a good web server, but its usability is close to zero.
Any static resources should be served directly by your webserver or better yet, off a CDN.
Assuming you've optimized your own code so it doesn't make inefficient use of queries, Django's built-in QuerySet result caching (a queryset only hits the database once per request after it has been evaluated) will help reduce the overall number of queries to the database. After that, you need to employ something like memcached.
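As a sketch of that combination (Django's cache framework backed by memcached; the key name, timeout, and app/module names are illustrative):

# settings.py
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",
    }
}

# views.py (or a small helper module)
from django.core.cache import cache
from myapp.models import Post   # "myapp" is a placeholder for your app

def homepage_posts():
    posts = cache.get("homepage_posts")
    if posts is None:
        posts = list(Post.objects.filter(photo__isnull=False)
                                 .order_by('date').distinct()[0:7])
        cache.set("homepage_posts", posts, 300)   # cache for five minutes
    return posts

With something like that in place, the homepage hits MySQL once every few minutes instead of on every request, which matters a lot more on a micro instance.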
Then, there's the server itself. Depending on the size of your site, you may not need much RAM and CPU, but it's always better to have too much than not enough. It might be beneficial to put some artificial load on your server (automated testing, spidering your site, etc), and see how your system resources hold up. If you get anywhere near capping out (I'd say over 50% with simple tests like that), you need to add some more into your instance's pool.
Search online for articles on how to optimize MySQL. Out of the box, it tends to use a lot more resources than it actually needs; there's lots of room for improvement there. And if it's not already on its own server, strongly consider offloading it to its own server. If you're anticipating a lot of traffic, the same server responding to web requests and fetching data from a database will become a bottleneck quickly.
Could this all be attributable to running a Micro Instance?
Micro instances burst to 2 ECUs (EC2 Compute Units) for a short period of time, after which they are severely capped for several minutes. I wouldn't trust any benchmarks done on a micro EC2 instance for that reason.
We are a Web 2.0 company that built a hosted content management solution from the ground up using LAMP. In short, people log into our backend to manage their website content and then use our API to extract that content. This API gets plugged into templates that can be hosted anywhere on the interwebs.
Scaling for us has progressed as follows:
Shared hosting (1and1)
Dedicated single server hosting (Rackspace)
1 Web Server, 1 DB Server (Rackspace)
1 Backend Web Server, 1 API Web Server, 1 DB Server
Memcache, caching, caching, caching.
The question is, what's next for us? Every time one of our sites is dugg or mentioned on a popular website, our API server gets crushed with too many connections. And every time our DB server gets overrun with queries, our web server's requests back up.
This is obviously the 'next problem' for any company like ours and I was wondering if you could point me in some directions.
I am currently attracted to the virtualization solutions (like EC2) but need some pointers on what to consider.
What/where/how to scale is dependent on what your issues are. Since you've been hit a few times, and you know it's the API server, you need to identify what's actually causing the issue.
Is it DB lookup times?
A volume of requests that the web server just can't handle even though they're shortlived?
Do API requests take too long to process (independent of DB lookups, i.e., does the code itself take a while to run)?
Once you identify WHAT the problem is, you should have a pretty clear picture of what you need to do. If it's just volume of requests, and it's the API server, you just need more web servers (and code changes to allow horizontal scaling) or a beefier web server. If it's API requests taking too long, you're looking at code optimizations. There's never a 1-shot fix when it comes to scalability.
The most common scaling issues have to do with slow (2-3 second) execution of the actual code for each request, which in turn leads to more web servers, which leads to more database interactions (for cross-server sessions, etc.), which leads to database performance issues. The answer is high-performance, server-independent code plus memcache (I actually prefer a wrapper around memcache so the application doesn't know or care where it gets the data from, just that it gets it; the translation layer handles DB/memcache lookups as well as populating memcache).
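A sketch of that wrapper idea (cache-aside) using the python-memcached client; the key scheme, TTL, and loader function are placeholders:

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def get_or_load(key, loader, ttl=300):
    # The application asks this layer for data and never knows whether it came
    # from memcache or from the database.
    value = mc.get(key)
    if value is None:
        value = loader()                # fall through to the real DB call
        mc.set(key, value, time=ttl)    # populate the cache for the next caller
    return value

# Usage: profile = get_or_load("user:42:profile", lambda: load_profile_from_db(42))

One caveat with this simple version: it can't tell a cache miss apart from a legitimately cached None, so wrap negative results in a sentinel if that matters for your data.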
Depends really if your bottleneck is reads or writes. Scaling writes is much harder than reads.
It also depends on how much data you have in the database.
If your database is small but cannot cope with the read load, you can add enough RAM that it fits entirely in memory. If it still cannot cope, you can add read replicas, possibly on the same boxes as your web servers; this will give you good read scalability. The number of slaves you can hang off one MySQL master is quite high and will depend chiefly on the write workload.
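The read-replica routing itself can be as small as this (a sketch with pymysql and made-up hostnames; it ignores replication lag, so reads that must see their own writes should still go to the master):

import random
import pymysql

MASTER = dict(host="db-master.example.com", user="app", password="secret", database="appdb")
SLAVES = [
    dict(host="db-slave-1.example.com", user="app", password="secret", database="appdb"),
    dict(host="db-slave-2.example.com", user="app", password="secret", database="appdb"),
]

def connect(for_write=False):
    # Writes go to the master; reads are spread across the replicas.
    return pymysql.connect(**(MASTER if for_write else random.choice(SLAVES)))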
If you need to scale writes, that's a totally different game. To do that you'll need to split your data out, either horizontally (partitioning / sharding) or vertically (functional partitioning etc) so that you can spread the workload over several write servers which do not need to do each others' work.
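For the horizontal split, the core is a deterministic routing function on the sharding key, e.g. customer or user id (a sketch; the shard names are made up, and moving data when you add shards is the genuinely hard part):

import zlib

SHARDS = ["shard_0", "shard_1", "shard_2", "shard_3"]   # e.g. four separate MySQL instances

def shard_for(user_id):
    # A stable hash of the key decides which database holds this user's rows,
    # so every app server routes the same user to the same shard.
    return SHARDS[zlib.crc32(str(user_id).encode("utf-8")) % len(SHARDS)]

# All of a user's queries then go to shard_for(user_id); anything that spans
# shards has to be assembled in the application or avoided by design.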
I'm not sure what EC2 can do for you; it essentially offers slow, high-latency machines with non-persistent discs and low I/O performance, on the end of a more-or-less nonexistent SLA. I guess it might be useful in your case, as you can provision them relatively quickly, provided you're just using them as read replicas and you don't have too much data (remember, they have non-persistent discs and sucky I/O).
What level of scaling are you looking for? Is it a stop-gap solution, e.g. scaling vertically? If it is a more strategic scaling project, does your current architecture support scaling horizontally?