Multithreaded Delphi database application failing with large amounts of data - mysql

Overview of the application:
I have a Delphi application that allows a user to define a number of queries, and run them concurrently over multiple MySQL databases. There is a limit on the number of threads that can be run at once (which the user can set). The user selects the queries to run, and the systems to run the queries on. Each thread runs the specified query on the specified system using a TADOQuery component.
Description of the problem:
When the queries retrieve a low number of records, the application works fine, even when lots of threads (up to about 100) are submitted. The application can also handle larger numbers of records (150,000+) as long as only a few threads (up to about 8) are running at once. However, when the user is running more than around 10 queries at once (i.e. 10+ threads), and each thread is retrieving around 150,000+ records, we start getting errors. Here are the specific error messages that we have encountered so far:
a: Not enough storage is available to complete this operation
b: OLE error 80040E05
c: Unspecified error
d: Thread creation error: Not enough storage is available to process this command
e: Object was open
f: ODBC Driver does not support the requested properties
Evidently, the errors are due to a combination of factors: number of threads, amount of data retrieved per thread, and possibly the MySQL server configuration.
The main question really is why are the errors occurring? I appreciate that it appears to be in some way related to resources, but given the different errors that are being returned, I'd like to get my head around exactly why the errors are cropping up. Is it down to resources on the PC, or something to do with the configuration of the server, for example.
The follow-up question is what can we do to avoid these problems? We're currently throttling down the application by lowering the number of threads that can be run concurrently. We can't force the user to retrieve fewer records: the queries are entirely user defined, and if they want to retrieve 200,000 records, that's up to them, so there's not much we can do about that side of things. Realistically, we don't want to throttle down the speed of the application, because most users will be retrieving small amounts of data and we don't want to make the application too slow for them to use. Although the number of threads can be changed by the user, we'd rather get to the root of the problem and fix it without having to rely on tweaking the configuration all the time.

It looks like you're loading a lot of data client-side. The rows may need to be cached in client memory (especially if you use bidirectional cursors), and in a 32-bit application that memory may not be enough, depending on the average row size and how efficiently the library stores rows.
Usually the best way to accomplish database work is to perform it on the server directly, without retrieving the data to the client. Databases typically have an efficient cache system and can spill data to disk when it doesn't fit in memory.
Why do you retrieve 150,000 rows at once? You could use a mechanism that transfers data only when the user actually accesses it (a sort of paging through the data), to avoid large chunks of "wasted" memory.
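A minimal sketch of that paging idea, in Python with mysql-connector-python rather than Delphi/ADO (the table name, column, credentials and page size below are just placeholders):

    # Hypothetical paging sketch: fetch one page at a time with LIMIT/OFFSET
    # instead of pulling 150,000+ rows into client memory at once.
    # Table name ("orders"), column ("id") and credentials are placeholders.
    import mysql.connector

    PAGE_SIZE = 500

    def fetch_page(conn, page_number):
        """Return a single page of rows; the rest stay on the server."""
        offset = page_number * PAGE_SIZE
        cursor = conn.cursor()
        cursor.execute(
            "SELECT * FROM orders ORDER BY id LIMIT %s OFFSET %s",
            (PAGE_SIZE, offset),
        )
        rows = cursor.fetchall()
        cursor.close()
        return rows

    conn = mysql.connector.connect(host="db-host", user="user",
                                   password="secret", database="reports")
    first_page = fetch_page(conn, 0)   # fetched only when the user asks for it
    conn.close()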

This makes perfect sense (the fact you're having problems, not the specific errors). Think it through - you have the equivalent of 10 database connections (1 per thread) each receiving 150,000 rows of data (1,500,000 rows total) across a single network connection. Even if you're not using client-side cursors and the rows are small (just a few small columns), this is a HUGE flow of data across a single network interface, and a big hit on memory on the client computer.
I'd suspect the error messages are misleading, in much the same way that an access violation can be reported far from the memory overwrite that actually caused it.

Depending on your DBMS, you could use the LIMIT/TOP SQL clauses to limit the amount of data returned and help with the problem.

Things I would do:
write a very simple test application which only uses the necessary parts of the connection / query creation (with threads); this would eliminate any side effects caused by other parts of your software (see the sketch after this list)
use a different database access layer instead of ODBC, to find out if the ODBC driver is the root cause of the problem
it looks like memory usage is not a problem when the number of threads is low. To verify this, I would also measure or calculate the memory requirement of the records and compare it with the application's memory usage as reported by the operating system. For example, if tests show that four threads can safely query 1.5 GB of total data without problems, but ten threads fail with under 0.5 GB of total data, I would say it is a threading problem rather than a data-volume problem
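A rough idea of such a stripped-down test harness, sketched in Python (threading plus mysql-connector-python) rather than Delphi, with a placeholder query and connection details:

    # Hypothetical stripped-down harness: N threads, each with its own
    # connection, running the same query and reporting a row count or error.
    # The query and connection settings are placeholders.
    import threading
    import mysql.connector

    QUERY = "SELECT * FROM big_table"   # stand-in for a user-defined query
    THREAD_COUNT = 10

    def worker(thread_id, results):
        try:
            conn = mysql.connector.connect(host="db-host", user="user",
                                           password="secret", database="test")
            cursor = conn.cursor()
            cursor.execute(QUERY)
            rows = cursor.fetchall()
            results[thread_id] = "ok, %d rows" % len(rows)
            cursor.close()
            conn.close()
        except Exception as exc:        # record which thread failed and why
            results[thread_id] = "error: %s" % exc

    results = {}
    threads = [threading.Thread(target=worker, args=(i, results))
               for i in range(THREAD_COUNT)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for thread_id in sorted(results):
        print(thread_id, results[thread_id])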

Related

Performance issue (Nginx, NodeJs, Mysql)

I have the following problem.
Using REST, I am getting binary content (BLOBs) from a MySql database via a NodeJS Express app.
All works fine, but I am having issues scaling the solution.
I increased the number of NodeJS instances to 3: they are running on ports 4000, 4001 and 4002.
On the same machine I have Nginx installed and configured to do a load balancing between my 3 instances.
I am using Apache Bench to do some perf testing.
Please see attached pic.
Assuming I have a dummy GET REST endpoint that goes to the DB, reads the blob (roughly 600 KB in size) and returns it (all over HTTP), I am making 300 simultaneous calls. I would have thought that using nginx to distribute the requests would make it faster, but it does not.
Why is this happening?
I am assuming it has to do with MySql?
My NodeJs app is using a connection pool with a limit set to 100 connections. What should be the relation between this value and the max_connections value in MySQL? If I increase the connection pool to a higher number of connections, I get worse results.
Any suggestion on how to scale?
Thanks!
"300 simultaneous" is folly. No one (today) has the resources to effectively do more than a few dozen of anything.
4 CPU cores -- If you go much beyond 4 threads, they will be stumbling over each other trying to get CPU time.
1 network -- Have you checked whether your big blobs are using all the bandwidth, thereby being the bottleneck?
1 I/O channel -- Again, lots of data could be filling up the pathway to disk.
(This math is not quite right, but it makes a point...) You cannot effectively run any faster than what you can get from 4+1+1 "simultaneous" connections. (In reality, you may be able to, but not 300!)
The typical benchmarks try to find how many "connections" (or whatever) leads to the system keeling over. Those hard-to-read screenshots say about 7 per second is the limit.
I also quibble with the word "simultaneous". The only thing close to "simultaneous" (in your system) is the ability to use 4 cores "simultaneously". Every other metric involves sharing of resources. Based on what you say, ...
If you start about 7 each second, some resource will be topped out, but each request will be fast (perhaps less than a second)
If you start 300 all at once, they will stumble over each other, some of them taking perhaps minutes to finish.
There are two interesting metrics:
How many per second you can sustain. (Perhaps 7/sec)
How long the average (and, perhaps, the 95th percentile) takes.
Try 10 "simultaneous" connections and report back. Try 7. Try some other small numbers like those.
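If it helps, here is a hedged sketch in Python (standard library only) of that experiment: a small, fixed number of concurrent requests against a placeholder URL, reporting the average and a rough 95th-percentile latency.

    # Hedged sketch of the suggested experiment: a small, fixed number of
    # concurrent requests instead of 300 at once. The URL is a placeholder.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost/blob/1"   # stand-in for the REST endpoint
    CONCURRENCY = 10                  # try 7, 10, 20, ... and compare

    def fetch(_):
        start = time.monotonic()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(fetch, range(CONCURRENCY)))

    print("avg: %.3fs" % (sum(latencies) / len(latencies)))
    print("p95 (rough): %.3fs" % latencies[int(len(latencies) * 0.95) - 1])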

simultaneous connections to a mysql database

I made a program that receives user input and stores it in a MySQL database. I want to implement this program on several computers so users can upload information to the same database simultaneously. The database is very simple; it has just seven columns and the user will only enter four of them.
There would be around two to three hundred computers uploading information (not always at the same time, but it can happen). How reliable is this? Is that even possible?
It's my first script ever so I appreciate if you could point me in the right direction. Thanks in advance.
Having simultaneous connections from the same script depends on how you're processing the requests. The typical choices are forking a new Python process per request (usually handled by a web server), or handling all the requests with a single process.
If you're forking processes (new process each request):
A single MySQL connection should be perfectly fine (since the total number of active connections will be equal to the number of requests you're handling).
You typically shouldn't worry about multiple connections, since a single MySQL connection (and the server) can handle loads much higher than that (completely dependent upon the hardware, of course). In which case, as #GeorgeDaniel said, it's more important that you focus on controlling how many active processes you have and making sure they don't strain your computer.
If you're running a single process:
Yet again, a single MySQL connection should be fast enough for all of those requests. If you want, you can look into grouping the inserts together (see the sketch below), as well as using multiple connections.
MySQL is fast and should be able to easily handle 200+ simultaneous connections that are writing/reading, regardless of how many active connections you have open. And yet again, the performance you get from MySQL is completely dependent upon your hardware.
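As an illustration of grouping the inserts, here is a small Python sketch using mysql-connector-python over a single connection; the table and column names are invented, standing in for the seven-column table described in the question.

    # Illustrative batch insert over one connection using executemany.
    # Table and column names are invented stand-ins.
    import mysql.connector

    conn = mysql.connector.connect(host="db-host", user="user",
                                   password="secret", database="entries")

    def save_batch(rows):
        """rows is a list of (f1, f2, f3, f4) tuples collected from users."""
        cursor = conn.cursor()
        cursor.executemany(
            "INSERT INTO user_input (f1, f2, f3, f4) VALUES (%s, %s, %s, %s)",
            rows,
        )
        conn.commit()                  # one round trip for the whole batch
        cursor.close()

    save_batch([("a", "b", "c", "d"), ("e", "f", "g", "h")])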
Yes, it is possible to have up to that many MySQL connections. It depends on a few variables. The maximum number of connections MySQL can support depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM is used for each connection, the workload from each connection, and the desired response time.
The number of connections permitted is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server.
The important part is to properly handle the connections and closing them appropriately. You do not want redundant connections occurring, as it can cause slow-down issues in the long run. Make sure when coding that you properly close connections.
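If it helps, a quick way to check the server-side limit and the current load is to query the relevant variables; a small Python sketch (connection details are placeholders):

    # Inspect the server-side connection limit and current usage, so the
    # client-side limit can be kept safely below max_connections.
    import mysql.connector

    conn = mysql.connector.connect(host="db-host", user="user", password="secret")
    cursor = conn.cursor()

    cursor.execute("SHOW VARIABLES LIKE 'max_connections'")
    print(cursor.fetchone())      # e.g. ('max_connections', '151')

    cursor.execute("SHOW STATUS LIKE 'Threads_connected'")
    print(cursor.fetchone())      # connections currently open

    cursor.close()
    conn.close()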

How to avoid DB requery on pagination using ASP Classic and MySQL?

I have a page that queries products from the database and displays them in pages of 30 items. When I navigate to the next page, the application re-queries the DB and displays page no. 2, and so on.
How can I avoid this database re-query? Can I store the results somewhere? We are talking about 1500-2000 rows/query and when we have 400-450 users online, our dedicated server runs at 100% CPU capacity.
Do you have enough memory to pre-load your entire "catalog" into Application-level storage, and then have SQL return all results but store only the indices (in each Session)?
Something like this:
On Application Start: create my read-only Application-level cache
On Search: SQL returns all results (I assume you have to do SQL so you can check business conditions)
On Results: build a list of indices that map into the Application cache
On Display Page: read and display the appropriate range from the Application cache
If you don't have enough memory, then a "Result" table might provide some optimization: on a per-session basis, cache the entire query result into a "flattened" table, to avoid a potentially expensive (business-logic-heavy) products query. You have to be careful to detect when the query changes, so you can discard the cache, and also have some server-side logic to clean up old, expired searches.
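The original question is ASP Classic, but the pattern is language-agnostic; here is a rough Python sketch of the Application cache plus per-session index list described above (all names are illustrative only):

    # Illustrative Python version of the pattern: a read-only application
    # cache of products, a per-session list of matching IDs, and page
    # display that only slices that list.
    PAGE_SIZE = 30

    app_cache = {}                     # product_id -> product row (dict)

    def load_catalog(rows):
        """'Application Start': load the whole catalog once into memory."""
        for row in rows:
            app_cache[row["id"]] = row

    def run_search(session, matching_ids):
        """'Search': run the real query once, keep only the IDs in the session."""
        session["result_ids"] = list(matching_ids)

    def get_page(session, page_number):
        """'Display Page': no re-query, just map IDs back to cached rows."""
        start = page_number * PAGE_SIZE
        ids = session["result_ids"][start:start + PAGE_SIZE]
        return [app_cache[i] for i in ids]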
As I stated, the main reason I was asking for a solution was to avoid CPU overload. It seemed unnatural for the server to be clogged up at 100% with only 500-600 users online. I discovered the OPTIMIZE TABLE MySQL command, which works on MyISAM tables, and it totally solved the problem. Immediately after executing the command, the CPU usage went down to 10-12%.
So, if there is anyone else out there running MySQL applications that overload the CPU, you should first try the OPTIMIZE TABLE command and the other maintenance tasks described here: http://dev.mysql.com/doc/refman/5.5/en/optimize-table.html
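For reference, a minimal way to run that command from code; this is just a Python sketch with a placeholder table name and credentials, not the poster's actual setup:

    # One-off maintenance call of the kind described above; the table name
    # and connection details are placeholders.
    import mysql.connector

    conn = mysql.connector.connect(host="db-host", user="user",
                                   password="secret", database="shop")
    cursor = conn.cursor()
    cursor.execute("OPTIMIZE TABLE products")
    for row in cursor.fetchall():      # OPTIMIZE TABLE returns a status row per table
        print(row)
    cursor.close()
    conn.close()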

MYSQL concatenating large string

I have a web crawler that saves information to a database as it crawls the web. While it does this, it also saves a log of its actions, and any errors it encounters, to a log field in a MySQL database (the field grows to anywhere from 64 KB to 100 KB). It accomplishes this by concatenating new text onto the field using the MySQL CONCAT function.
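Presumably the append looks something like the following (a Python sketch with assumed table and column names, not taken from the question):

    # Assumed shape of the log append (table/column names are guesses).
    # Note: CONCAT(NULL, x) is NULL, so this assumes log_text defaults to ''.
    import mysql.connector

    conn = mysql.connector.connect(host="db-host", user="user",
                                   password="secret", database="crawler")

    def append_log(crawl_id, new_text):
        cursor = conn.cursor()
        cursor.execute(
            "UPDATE crawl_log SET log_text = CONCAT(log_text, %s) WHERE id = %s",
            (new_text, crawl_id),
        )
        conn.commit()
        cursor.close()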
This seems to work fine, but I am concerned about the CPU usage and the impact it has on the MySQL database. I've noticed that the web crawling is performing slower than before I implemented saving the log to the database.
I view this log file from a management webpage, and the current implementation seems to work fine other than the slow loading. Any recommendations for speeding this up, or implementation recommendations?
Reading 100 KB strings into memory numerous times and then writing them to disk via a DB: of course you're going to experience slowdown! Every part of what you are doing is going to tax memory, disk, and CPU (especially if memory usage hits the system max and you start swapping to disk). Let me count some of the ways you're going to possibly decrease overall site performance:
SQL connections max out and back up, because the time to store 100 KB records increases the time a single process holds a connection.
Web server processes eat up the free process pool and max out, and take longer to free up because they have to wait on DB connections to be released.
Web server processes begin to bloat and take more memory each, possibly more than the system can handle without swapping. This is compounded by using the maximum number of processes due to #2.
... A book could be written on your situation.

Necessity of static cache for mysql queries?

This seems like it should be a clear-cut issue, but I was unable to find an explicit answer. Consider a simple MySQL database with an indexed ID and no complicated processing: just reading a row with a WHERE clause. Does it really need to be cached? Reducing MySQL queries apparently satisfies everyone. But I tested reading a text value from a flat cache file versus via a MySQL query, in a loop of 1 to 100,000 cycles. Reading from the flat file was only 1-2 times faster (but needed double the memory). The CPU usage (roughly estimated from top over SSH) was almost the same.
Now I do not see any reason for using a flat-file cache. Am I right, or is the case different in the long term? What might make queries slow in such a simple system? Is it still useful to reduce MySQL queries?
P.S. I am not discussing the internal query cache (QC) or systems like memcached.
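A rough reconstruction of that test in Python (file name, table, query and connection details are placeholders; the original loop may well have been in another language):

    # Rough reconstruction of the test: time N reads of the same value from a
    # flat file versus from MySQL by indexed ID. All names are placeholders.
    import time
    import mysql.connector

    N = 100000

    start = time.monotonic()
    for _ in range(N):
        with open("cache.txt", "r", encoding="utf-8") as f:
            text = f.read()
    file_elapsed = time.monotonic() - start

    conn = mysql.connector.connect(host="localhost", user="user",
                                   password="secret", database="site")
    cursor = conn.cursor()
    start = time.monotonic()
    for _ in range(N):
        cursor.execute("SELECT body FROM pages WHERE id = %s", (1,))
        text = cursor.fetchone()[0]
    db_elapsed = time.monotonic() - start
    cursor.close()
    conn.close()

    print("flat file: %.2fs, mysql: %.2fs" % (file_elapsed, db_elapsed))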
It depends on how you look at the problem.
There is a limit on the number of MySQL connections that can be established at any one time.
Holding on to MySQL connection resources on a busy site could lead to a max-connections error.
Establishing a connection to MySQL via TCP is a resource eater (if your database is sitting on a different server). In that case, accessing a local disk file will be much faster.
If your database server is located outside the local network, the cost of physical distance will be even heavier.
If records are updated only once daily, storing them in a cache really does mean requesting once and reusing the result for the rest of the day (a sketch of this follows below).
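A small Python sketch of that last point, refreshing a flat-file cache at most once per day (paths, query and connection details are placeholders):

    # Refresh a flat-file cache at most once per day; otherwise reuse it and
    # skip the database entirely.
    import json
    import os
    import time
    import mysql.connector

    CACHE_PATH = "daily_cache.json"
    ONE_DAY = 24 * 60 * 60

    def get_records():
        if (os.path.exists(CACHE_PATH)
                and time.time() - os.path.getmtime(CACHE_PATH) < ONE_DAY):
            with open(CACHE_PATH, "r", encoding="utf-8") as f:
                return json.load(f)          # reuse today's copy, no DB hit

        conn = mysql.connector.connect(host="db-host", user="user",
                                       password="secret", database="site")
        cursor = conn.cursor()
        cursor.execute("SELECT id, name FROM items")
        records = cursor.fetchall()
        cursor.close()
        conn.close()

        with open(CACHE_PATH, "w", encoding="utf-8") as f:
            json.dump(records, f)            # refresh the cache for the day
        return records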