LoadRunner vUser limit - stress-testing

So I've read elsewhere that LoadRunner is well known to support 2-4k users easily enough, but what that didn't tell me was what sort of environment LoadRunner needed to do that. Is there any sort of guidance available on what the environment needs to be for various loads?
For example, would a single machine with a dual-core 2.4GHz CPU and 4GB RAM support 1,000 concurrent vUsers easily? What about testing something at a larger scale (say 10,000 users), where I assume we'd need a small server farm to generate the load? What would be the effect of fewer machines but with more network cards?

There have been tests run with LoadRunner well into the several-hundred-thousand-user range. You can imagine the logistical effort and infrastructure required to run such tests.
Your question of how many users a server can support is actually quite complex. Just like any other piece of engineered software, each virtual user takes a slice of resources from the finite pool of CPU, disk, network and RAM. So simply adding more network cards doesn't buy you anything if the limiting factor for your virtual users is CPU. Each virtual user type has a base weight, and your own development and deployment choices alter that weight.
I have observed a single load generator that could take 1,000 Winsock users easily at less than 50% of all resources, and then drop to 25 users for a web application that had significantly higher network data flows, lots of state-management variables, and some disk activity related to loading files as part of the business process. You also don't want to max out your load-generator hosts, in order to limit the possibility of the test bed influencing your test results.
If you have immature LoadRunner users, then you can virtually guarantee you will be running less-than-optimal virtual user code in terms of resource utilization, which could mean producing as little as 10% of the load you should expect from a given host, because of choices made in virtual user type, development, and deployment run-time settings.
I know this is probably not the answer you wanted to hear, i.e., "for your hosts you can get 5,732 of virtual user type xfoo," but there is no definitive answer without holding both the application and the skills of the tool's user constant. Only then can you move from protocol to protocol and from host to host and find out how many users you can get per box.
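As an illustration of that per-box calibration approach (this is not a LoadRunner feature, just a back-of-envelope sketch; all of the numbers are hypothetical), you might extrapolate from a small measured run like this:
# Rough sketch: extrapolate vUser capacity for one load generator from a small
# calibration run. All figures here are hypothetical; measure your own.
def estimate_capacity(calibration_users, peak_cpu_pct, peak_mem_pct,
                      headroom_pct=70.0):
    """Scale a calibration run up to a target utilisation ceiling.

    calibration_users: vUsers run during the calibration test
    peak_cpu_pct / peak_mem_pct: peak utilisation observed during that run
    headroom_pct: ceiling you are willing to load the generator to
                  (kept well below 100% so the test bed doesn't skew results)
    """
    limiting = max(peak_cpu_pct, peak_mem_pct)   # whichever resource binds first
    return int(calibration_users * headroom_pct / limiting)

# Example: 100 vUsers of our scripted protocol drove the generator to
# 35% CPU and 20% RAM, and we only want to load it to 70%.
print(estimate_capacity(100, peak_cpu_pct=35, peak_mem_pct=20))  # ~200 vUsers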

As a rough rule of thumb, each virtual user needs around 4 MB of RAM, so you can estimate how many virtual users your existing machine can support.
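For example, taking the 4 MB rule of thumb and the 4 GB machine from the question (the OS/agent reservation below is a guess, and real per-user memory varies a lot by protocol and script):
# Back-of-the-envelope check based on the ~4 MB-per-vUser rule of thumb above.
total_ram_mb = 4 * 1024          # 4 GB machine from the question
reserved_mb = 1024               # hypothetical allowance for OS + load generator agent
mb_per_vuser = 4

max_vusers = (total_ram_mb - reserved_mb) // mb_per_vuser
print(max_vusers)                # -> 768, i.e. RAM alone caps you below 1,000 vUsers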

Related

Max number of connections using WebSockets

I am developing a web application using WebSockets which needs real-time data.
The number of clients using the web application will be over 100,000.
The server-side WebSocket code is written in Java. Can a single WebSocket server handle this number of connections?
If not, how can I achieve this? I have to use WebSockets only.
WebSocket servers, like any other TCP-based servers, can open huge numbers of connections; each connection is backed by a file descriptor. You can find the maximum (system-wide) number of FDs easily enough on Linux:
% cat /proc/sys/fs/file-max
165038
There are system-wide limits, and there are kernel parameters for per-user limits (and shell-level settings like "ulimit"). By the way, you'll need to edit /etc/sysctl.conf for your FD modifications to survive a reboot.
And of course you can increase this number to whatever you want (with the proportional impact on kernel memory).
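To make those two limits concrete, here is a minimal sketch (Python, Linux-only; the paths and sysctl key are the standard ones) that reads the per-process and system-wide file-descriptor ceilings:
# Read the per-process and system-wide file-descriptor limits that bound how
# many sockets (and hence WebSocket connections) one server process can hold
# open on Linux.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"per-process FD limit: soft={soft} hard={hard}")    # what 'ulimit -n' reports

with open("/proc/sys/fs/file-max") as f:                   # system-wide ceiling
    print("system-wide FD limit:", f.read().strip())

# To raise the system-wide ceiling persistently, set fs.file-max in
# /etc/sysctl.conf and reload with 'sysctl -p'.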
Or servers can do tricks to multiplex a single connection.
But the real question is: what is the profile of the data that will flow over those connections? Will you have 100K users getting one 64-byte message per day? Or will those 100K users be getting 50 1-KB messages per second? Can the WebSocket server shard its connections over multiple NICs (i.e., spread the I/O load)? Are the messages all encrypted and therefore CPU-heavy? How easily can you cluster your WebSocket server so failover is easy for you and painless for your users? Is your server mission/business critical, i.e., can you afford to have 100K users disappear if a disaster occurs? There are many questions to consider when you're thinking about the scalability of a WebSocket server.
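To see why the traffic profile, not the connection count, drives the sizing, here is a quick back-of-envelope using the two example profiles above (payload bytes only, ignoring WebSocket framing and TLS overhead):
# Same number of connections, wildly different server load.
connections = 100_000

# Profile A: one 64-byte message per user per day
a_bytes_per_sec = connections * 64 / 86_400
print(f"A: {a_bytes_per_sec:,.0f} B/s")            # ~74 B/s -- negligible

# Profile B: 50 messages of 1 KB per user per second
b_bytes_per_sec = connections * 50 * 1024
print(f"B: {b_bytes_per_sec / 1e9:.1f} GB/s")      # ~5.1 GB/s, i.e. ~41 Gbps of payload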
In our labs, we can create millions of connections on a single server (and many more in a cluster). In the real world, there are other 'scale' factors to consider in a production deployment besides file descriptors. Hope this helps.
Full disclosure: I work for Kaazing, a WS vendor.
As FrankG explained above, the number of WebSocket connections depends on the use case.
Here are two benchmarks of the MigratoryData WebSocket Server for two very different use cases, both of which also detail the system configuration used (note, however, that the system configuration is only a detail: the high scalability comes from the architecture of MigratoryData, which has been designed for real-time websites with millions of users).
In one use case MigratoryData scaled up to 10 million concurrent connections (while delivering ~1 Gbps messaging):
https://mrotaru.wordpress.com/2016/01/20/migratorydata-makes-its-c10m-scalability-record-more-robust-with-zing-jvm-achieve-near-1-gbps-messaging-to-10-million-concurrent-users-with-only-15-milliseconds-consistent-latency/
In another use case MigratoryData scaled up to 192,000 concurrent connections (while delivering ~9 Gbps):
https://mrotaru.wordpress.com/2013/03/27/migratorydata-demonstrates-record-breaking-8x-higher-websocket-scalability-than-competition/
These numbers were achieved on a single instance of the MigratoryData WebSocket Server. MigratoryData can be clustered, so you can also scale horizontally to any number of subscribers in an effective way.
Full disclosure: I work for MigratoryData.

Database querying performance when using shared and dedicated hosting servers

I would like to know how many database requests per page view (that is, every page a user browses will make multiple requests to retrieve data from the database) can be made while still getting "optimum" performance, when I am using shared or dedicated hosting servers with the most "commonly" provided hardware (for example, what providers like HostMonster or Bluehost offer). For both cases, I would like to know what happens when:
I use MySQL or another database system
The database size is 1, 10, 100, or 1000 megabytes
I do or do not use cache optimization
Users browsing pages number 10, 100, 1000, or 10000 per second
In a few words: under what conditions (considering the cases above) will the server begin to slow down, such that the user experience is negatively affected? I would appreciate some statistics...
P.S.: At this time I am using Ruby on Rails 3, so it is "easy" to increase requests!
I've had Facebook apps hosted on a shared host that did about a million pages per month without too many issues. I generally did 5-8 queries per page request. The number of queries isn't usually the issue; it's how long each query takes. You can have a small data set that is poorly indexed and you'll start having issues. The hosting provider usually kills your query after a certain length of time.
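If you want to find which of those per-page queries is the slow one, a small timing wrapper is often enough. This is only a sketch: the 100 ms threshold is arbitrary, and 'cursor' is assumed to be whatever DB-API cursor your MySQL driver (MySQLdb, PyMySQL, ...) provides.
import time

def timed_query(cursor, sql, params=()):
    # Run one query and flag it if it takes longer than the threshold,
    # so a poorly indexed query stands out in the logs.
    start = time.perf_counter()
    cursor.execute(sql, params)
    rows = cursor.fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > 100:                      # arbitrary slow-query threshold
        print(f"SLOW ({elapsed_ms:.0f} ms): {sql}")
    return rows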
If you are causing the CPU on the server to spike, for whatever reason, then they may start killing processes on you. That is usually the issue.

Multithreaded Delphi database application failing with large amounts of data

Overview of the application:
I have a Delphi application that allows a user to define a number of queries, and run them concurrently over multiple MySQL databases. There is a limit on the number of threads that can be run at once (which the user can set). The user selects the queries to run, and the systems to run the queries on. Each thread runs the specified query on the specified system using a TADOQuery component.
Description of the problem:
When the queries retrieve a low number of records, the application works fine, even when lots of threads (up to about 100) are submitted. The application can also handle larger numbers of records (150,000+) as long as only a few threads (up to about 8) are running at once. However, when the user runs more than around 10 queries at once (i.e. 10+ threads), and each thread is retrieving around 150,000+ records, we start getting errors. Here are the specific error messages we have encountered so far:
a: Not enough storage is available to complete this operation
b: OLE error 80040E05
c: Unspecified error
d: Thread creation error: Not enough storage is available to process this command
e: Object was open
f: ODBC Driver does not support the requested properties
Evidently, the errors are due to a combination of factors: number of threads, amount of data retrieved per thread, and possibly the MySQL server configuration.
The main question really is: why are the errors occurring? I appreciate that it appears to be related to resources in some way, but given the different errors being returned, I'd like to get my head around exactly why they are cropping up. Is it down to resources on the PC, or something to do with the configuration of the server, for example?
The follow-up question is: what can we do to avoid these problems? We're currently throttling the application by lowering the number of threads that can run concurrently. We can't force users to retrieve fewer records, as the queries are entirely user-defined; if they want to retrieve 200,000 records, that's up to them, so there's not much we can do about that side of things. Realistically, we don't want to throttle the speed of the application, because most users will be retrieving small amounts of data and we don't want to make the application too slow for them. And although the number of threads can be changed by the user, we'd rather get to the root of the problem and fix it without having to rely on tweaking the configuration all the time.
It looks like you're loading a lot of data client-side. The rows may need to be cached in client memory (especially if you use bidirectional cursors), and in a 32-bit application that memory may not be enough, depending on the average row size and how efficiently the library stores rows.
Usually the best way to accomplish database work is to perform it on the server directly, without retrieving the data to the client. Databases usually have an efficient cache system and can write data out to disk when it doesn't fit in memory.
Why do you retrieve 150,000 rows at once? You could use a mechanism that transfers data only when the user actually accesses it (a sort of paging through the data), to avoid large chunks of "wasted" memory.
This makes perfect sense (the fact you're having problems, not the specific errors). Think it through - you have the equivalent of 10 database connections (1 per thread) each receiving 150,000 rows of data (1,500,000 rows total) across a single network connection. Even if you're not using client-side cursors and the rows are small (just a few small columns), this is a HUGE flow of data across a single network interface, and a big hit on memory on the client computer.
I'd suspect the error messages are incorrect, in the same way that sometimes you have an access violation caused by a memory overwrite in another code location.
Depending on your DBMS, you could help with the problem by using the LIMIT/TOP SQL clauses to limit the amount of data returned.
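As a rough illustration of that paging idea (shown in Python for brevity, but the same SQL pattern applies to whatever the TADOQuery executes; the table, column names and page size are hypothetical, and keyset pagination is used because LIMIT ... OFFSET gets slow on deep pages):
PAGE_SIZE = 1_000

def fetch_page(cursor, last_id=0):
    # Keyset pagination: fetch the next PAGE_SIZE rows after the last id seen,
    # instead of pulling 150,000+ rows into client memory in one go.
    cursor.execute(
        "SELECT id, payload FROM results WHERE id > %s ORDER BY id LIMIT %s",
        (last_id, PAGE_SIZE),
    )
    return cursor.fetchall()

# Fetch pages only as the user actually pages through them:
# rows = fetch_page(cursor)               # first page
# rows = fetch_page(cursor, rows[-1][0])  # next page, and so on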
Things I would do:
write a very simple test application which only uses the necessary parts of the connection / query creation (with threads); this would eliminate all side effects caused by other parts of your software
use a different database access layer instead of ODBC, to find out if the ODBC driver is the root cause of the problem
it looks like memory usage is no problem when the number of threads is low - to verify this, I would also measure / calculate the memory requirement of the records and compare it with the memory usage of the application in the operating system (see the rough sketch after this list). For example, if tests show that four threads can safely query 1.5 GB of total data without problems, but ten threads fail with under 0.5 GB of total data, I would say it is a threading problem
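To make that measure/calculate step concrete, here is a rough client-side memory estimate for the failing scenario; the average row size and overhead factor are guesses that you would replace with measured values:
# Rough client-side memory estimate for the scenario described above.
threads = 10
rows_per_thread = 150_000
avg_row_bytes = 200                     # hypothetical; measure your real average
overhead_factor = 3                     # buffers, cursors, per-row bookkeeping (guess)

estimated_mb = threads * rows_per_thread * avg_row_bytes * overhead_factor / 2**20
print(f"~{estimated_mb:,.0f} MB")       # ~858 MB -- uncomfortable in a 32-bit process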

How To Interpret Siege and/or Apache Bench Results

We have a MySQL driven site that will occasionally get 100K users in the space of 48 hours, all logging into the site and making purchases.
We are attempting to simulate this kind of load using tools like Apache Bench and Siege.
While the key metric seems to me to be the number of concurrent users, and we've got our report results, we still feel like we're in the dark.
What I want to ask is: What kinds of things should we be testing to anticipate this kind of traffic?
50 concurrent users 1,000 times? 500 concurrent users 10 times?
We're looking at DB errors, apache timeouts, and response times. What else should we be looking at?
This is a vague question and I know there is no "right" answer, we're just looking for some general thoughts on how to determine what our infrastructure can realistically handle.
Thanks in advance!
Simultaneous users is certainly one of the key factors - especially as it applies to DB connection pools, etc. But you will also want to verify that the page rate (pages/sec) of your tests is in the range you expect. If the think time in your test cases is off by much, you can accidentally simulate a much higher (or lower) page rate than your real-world traffic. Think time is the amount of time the user spends between page requests - reading the page, filling out a form, etc.
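One way to tie page rate, think time and simultaneous users together is Little's Law: concurrent users ≈ page rate × (response time + think time). A quick sketch with made-up figures:
# Little's Law sizing sketch. All figures are hypothetical; plug in your own
# target page rate, measured response time and realistic think time.
pages_per_sec = 40          # target page rate from your traffic estimate
response_time_s = 0.5       # average server response time
think_time_s = 10.0         # time a user spends reading / typing between pages

concurrent_users = pages_per_sec * (response_time_s + think_time_s)
print(concurrent_users)     # -> 420 simulated users needed to sustain 40 pages/s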
Depending on what other information you have on hand, this might help you calculate the number of simultaneous users to simulate:
Virtual User Calculators
The complete page load time seen by the end-user is usually the most important metric to evaluate system performance. You'll also want to look for failure rates on all transactions. You should also be on the lookout for transactions that never complete. Some testing tools do not report these very well, allowing simulated users to hang indefinitely when the server doesn't respond...and not reporting this condition. Look for tools that report the number of users waiting on a given page or transaction and the average amount of time those users are waiting.
As for the server-side metrics to look for, what other technologies is your app built on? You'll want to look at different things for a .NET app vs. a PHP app.
Lastly, we have found it very valuable to look at how the system responds to increasing load, rather than looking at just a single level of load. This article goes into more detail.
Ideally you want to model your load on real user behaviour, but creating simulated concurrent sessions for 100k users is usually not easily accomplished.
The best source is your logs: find the busiest hour and try to figure out a way to model that load level.
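As a rough illustration of that busiest-hour approach, here is a small sketch that counts requests per hour in an Apache/Nginx-style access log; the log path and timestamp format are assumptions:
# Find the busiest hour in an access log (common/combined log format, where the
# timestamp looks like [10/Oct/2023:13:55:36 +0000]).
from collections import Counter
import re

hour_re = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2})")   # day/Mon/year:HH

hits = Counter()
with open("access.log") as log:                        # hypothetical path
    for line in log:
        m = hour_re.search(line)
        if m:
            hits[m.group(1)] += 1

for hour, count in hits.most_common(3):
    print(f"{hour}:00  {count} requests  (~{count / 3600:.1f} req/s)")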
The database is usually a critical piece of infrastructure, so I would look at recording the number and length of lock waits as well as the number and duration of db statements.
Another key item to look at is disk queue lengths.
Mostly the process is to look for slow responses, either across the whole site or for specific pages, and then home in on the cause.
The biggest problem with load testing is that it is quite hard to test your network: if you have (as most public sites do) limited bandwidth through your ISP, that may create a performance issue that is not reflected in the load tests.

Can a shared host/MySQL application support a load of 500 users accessing the site at the same time?

I have a new project and need some advice about managing the application’s load.
The application will have over 500 users, with about 70% of them logging on between 5 and 7 pm each night. The users will pull down about 50 records, update the records, and then save the changes back to the database. I estimate each record will be updated in about a minute and a half. The data is all text, no images.
The application will be built in PHP on top of a MySQL Community Edition database, hosted at Dreamhost.
My question: can a site on a shared hosting plan support this kind of traffic? How about MySQL? I know this is a big question, but is there any advice about the app's architecture?
Thanks for your advice.
MySQL should handle it just fine. I might be a bit worried about the shared hosting setup, though. Most shared hosting setups have very restrictive policies on CPU utilization that you might run into.
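A quick back-of-envelope with the figures from the question shows why the database itself is unlikely to be the bottleneck (the arithmetic below is an estimate, not a benchmark):
# Peak load estimate using the numbers from the question.
users_at_peak = 500 * 0.70        # ~70% of users log in between 5 and 7 pm
seconds_per_record = 90           # "about a minute and a half" per record

# Each active user generates roughly one update every 90 s while working:
peak_updates_per_sec = users_at_peak / seconds_per_record
print(f"~{peak_updates_per_sec:.1f} updates/s at peak")   # ~3.9 updates/s

# Even with a read per update and some page views on top, this is a very
# light load for MySQL; the shared host's CPU policy is the bigger risk.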
Rather than letting your app drag down the performance of the server, they will often shut down your app for a few minutes if you exceed a certain CPU utilization threshold over some period of time. Of course they don't advertise what those limits are, but I've often run into this issue with various shared hosting accounts.
You might be better off going with a cheap VPS type setup. That will (generally) get you a dedicated CPU and no artificial limits. Your server might slow down a bit under heavy load, but at least they won't cut your site off entirely.