Resources consumed by a simple SELECT query in MySQL

There are a few large tables in one of a customer's databases (each is ~50M rows and not too wide). The intent is to read these tables completely, but infrequently. As there are no reasonable CDC indices present, the plan is to read each table by querying it:
SELECT * from large_table;
The reads will be performed through a JDBC driver. With the following fetch configuration, the intent is to read the data roughly one record at a time (even if that takes a long time) so that the client code is never overwhelmed.
PreparedStatement stmt = connection.prepareStatement(queryString, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
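For reference, here is a minimal sketch of what the full streaming read could look like end to end, assuming MySQL Connector/J (where setFetchSize(Integer.MIN_VALUE) on a TYPE_FORWARD_ONLY, CONCUR_READ_ONLY statement switches the driver to row-by-row streaming); the connection details and per-row handling are placeholders:
import java.sql.*;

public class StreamLargeTable {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/customer_db"; // hypothetical connection details
        String queryString = "SELECT * FROM large_table";
        try (Connection connection = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = connection.prepareStatement(
                     queryString, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            // Connector/J treats this as "stream rows instead of buffering the whole result set".
            stmt.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    processRow(rs); // handle one row at a time; the client never holds the full table
                }
            }
        }
    }

    private static void processRow(ResultSet rs) throws SQLException {
        // placeholder for the per-row work
    }
}
Note that while a streaming result set is open, that connection cannot be used for other statements until the result set is fully read or closed.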
I was going through the execution path of a query in High Performance MySQL, but some questions seemed unanswered:
With no temporary tables explicitly created and no use of the query cache, how are the streamed reads tracked on the server?
Is any temporary data created (in main memory or files on disk) whatsoever? If so, where is it created and how much?
If temporary data is not created, how are the rows to be returned tracked? Does the query engine keep track of all the page files to be read for this query on this connection? If several such queries are running on the server, are the earliest tracked files purged in favor of more recently submitted queries?
PS: I want to understand the effect of this approach on the MySQL server (not saying that there aren't better ways of reading the tables).

That simple query will not use a temp table. It will simply fetch the rows and transfer them to the client until it finishes. Nor would any possible index be useful. (If the real query is more complex, let's see it.)
The client may wait for all the rows (faster, but memory intensive) before it hands any to the user code, or it may hand them off one at a time (much slower).
I don't know the details in JDBC on specifying it.
You may want to page through the table. If so, don't use OFFSET, but use the PRIMARY KEY and "remember where you left off". More discussion: http://mysql.rjweb.org/doc.php/pagination
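To illustrate the "remember where you left off" idea in the same JDBC setting, here is a rough sketch of keyset pagination; the auto-increment primary key column name (id) and the page size are assumptions, not something from the question:
import java.sql.*;

public class KeysetPager {
    // Page through large_table by PRIMARY KEY instead of OFFSET.
    static void pageThrough(Connection connection) throws SQLException {
        String sql = "SELECT * FROM large_table WHERE id > ? ORDER BY id LIMIT 1000";
        long lastId = 0;          // assumes ids start above 0
        boolean more = true;
        try (PreparedStatement page = connection.prepareStatement(sql)) {
            while (more) {
                page.setLong(1, lastId);
                more = false;
                try (ResultSet rs = page.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");   // remember where you left off
                        more = true;
                        // process the row here
                    }
                }
            }
        }
    }
}
Each page starts right after the last key seen, so the cost per page stays flat instead of growing the way OFFSET does.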
Your Question #3 leads to a complex answer...
Every query brings all the relevant data (and index entries) into RAM. The data/index is read in chunks ("blocks") of 16KB from the BTree structure that is persisted on disk. For a simple select like that, it will read the blocks 'sequentially' until finished.
But, be aware of "caching":
If a block is already in RAM, no I/O is needed.
If a block is not in the cache ("buffer_pool"), it will, if necessary, bump some block out and read the desired block in. This is very normal, and very common. Do not fear it.
Because of the simplicity of the query, only a few blocks ever need to be in RAM at any moment. Hence, if your buffer pool were only a few megabytes, it could still handle, say, a 1TB table. There would be a lot of I/O, and that would impact other operations.
As for "tracking", let me use the analogy of reading a long book in a single sitting. There is nothing to track, you are simply turning pages ('blocks'). You don't even need a 'bookmark' for tracking, it is next-next-next...
Another note: InnoDB uses "B+Tree", which includes a link from one block to the "next", thereby making the page turning efficient.
Another interpretation of tracking... "Transactions" and "ACID". When any query (read or write) touches a table, there is some form of lock applied to each row touched. For SELECT the lock is rather light-weight. For writes it can cause delays or even a "deadlock". The locks are unavoidable, but sometimes actions can be taken to minimize their impact.
Logically (but not actually), a "snapshot" of all rows in all tables is taken at the instant you start a transaction. This allows you to see a consistent view of everything, even if other connections are changing rows. The underlying mechanism is very lightweight on reading, but heavier for writes. Writes will make a copy of the row so that each connection sees the snapshot that it 'should' see. Also, the copy allows for ROLLBACK and recovery from a crash (eg power failure).
(Transaction "isolation" mode allows some control over the snapshot.) To get the optimal performance for your case, do nothing special.
Here's a way to conceptualize the handling of transactions: Each row has a timestamp associated with it. Each query saves the start time of the query. The query can "see" only rows that are older than that start time. A subsequent write in another connection will be creating copies of rows with a later timestamp, hence not visible to the SELECT. Hence, the onus is on writes to do extra work; reads are cheap.
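As a toy illustration of that timestamp rule (purely conceptual; these classes are invented and are not InnoDB's actual structures):
final class RowVersion {
    final long value;
    final long commitTimestamp;   // when the writing transaction committed
    RowVersion(long value, long commitTimestamp) {
        this.value = value;
        this.commitTimestamp = commitTimestamp;
    }
}

final class Snapshot {
    private final long startTimestamp;   // taken when the query/transaction starts
    Snapshot(long startTimestamp) { this.startTimestamp = startTimestamp; }

    // A row version is visible only if it was committed before the snapshot started;
    // reads never block, they simply skip newer versions.
    boolean canSee(RowVersion v) {
        return v.commitTimestamp <= startTimestamp;
    }
}
Writers create new RowVersion copies with later timestamps, which is why the extra work lands on writes while reads stay cheap.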

Related

Long time open database connection and SQL join query memory

I'm currently working on a Java application which performs the following in a background thread:
opens a database connection
selects some rows (100,000+ rows)
performs a long-running task for each row by calling ResultSet.next(), with a buffer size defined by resultSet.setFetchSize()
finally, after everything is done, closes the connection
If the query does some sorting or joining, it will create a temp table and use some additional memory. My question is: if my database connection is kept open for a long time (let's say a few hours) while I fetch batch by batch slowly, will that cause performance trouble in the database due to memory usage (given that the database is concurrently used by other threads as well), or are databases designed to handle this effectively?
(In the context of both MySQL and Oracle)
From an Oracle perspective, opening a cursor and fetching from it periodically doesn't have much of an impact if it's left open... unless the underlying data that the cursor is querying against has changed since the query was first started.
If so, the Oracle database now has to do additional work to find the data as it was at the start of the query (to preserve read consistency), so it needs to query the data blocks (either on disk or from the buffer cache) and, in the event the data has changed, the undo tablespace.
If the undo tablespace is not sized appropriately and enough data has changed, you may find that your cursor fetches fail with an "ORA-01555: snapshot too old" exception.
In terms of memory usage, a cursor doesn't open a result set and store it somewhere for you; it's simply a set of instructions to the database on how to get the next row that gets executed when you do a fetch. What gets stored in memory is that set of instructions, which is relatively small when compared to the amount of data it can return!
This mechanism doesn't seem ideal.
Although both MySQL (InnoDB engine) and Oracle provide consistent reads for SELECT, such a long-running SELECT may lead to performance degradation due to building consistent-read blocks and other work, and even ORA-01555 in Oracle.
I think you should query/export all of the data first, then process the actual business logic record by record.
Finally, querying all of the data first will not reduce memory usage, but it will reduce how long memory and temporary sort segments/files stay in use.
Alternatively, you should consider splitting the whole job into small pieces; that is better.

How to count page views in MySQL without performance hit

I want to count the number of visitors to a page, similar to what Stack Overflow does with the "views" of each question.
The current solution just increments a field of an InnoDB table:
UPDATE data SET readers = readers + 1, date_edited = date_edited WHERE ID = '881529' LIMIT 1
This is the most expensive query on the page since it is performing a write operation.
Is there a better solution to the problem? How do high-traffic sites like Stack Overflow handle this?
I am thinking of instead writing to a table that uses the MEMORY engine and flushing that content to an InnoDB table every minute or so.
e.g.:
INSERT INTO mem_table (id,views_new)
VALUES (881525,1)
ON DUPLICATE KEY UPDATE views_new = views_new+1
Then I would run a cron job every minute to update the InnoDB table:
UPDATE data d, mem_table m
SET d.readers = d.readers + m.readers_new
WHERE d.ID = m.ID;
DELETE FROM mem_table;
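For illustration, the flush might also be driven from application code rather than cron; a rough JDBC sketch using the column names from the INSERT example above (note that rows added to the MEMORY table between the two statements could be deleted without ever being counted, since MEMORY tables are not transactional):
import java.sql.*;

public class ViewCountFlusher {
    // Fold the buffered deltas into the InnoDB table, then clear the staging table.
    static void flush(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(
                "UPDATE data d, mem_table m " +
                "SET d.readers = d.readers + m.views_new " +
                "WHERE d.ID = m.id");
            st.executeUpdate("DELETE FROM mem_table");
        }
    }
}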
Unfortunately this does not play well with replication, and the application is using a MySQL Galera Cluster.
Thank you in advance for any suggestions.
There are ways to reduce the immediate performance hit by starting a separate thread to update your counters. When you have a high number of parallel users (and therefore many parallel updates of your hit counters), it is advisable to use a queuing mechanism to prevent locking (like your in-memory table). Your queue will have both writes and reads, so you have to take the table and data design into account.
An alternative is keeping a counter related to each article in a separate file. This prevents congestion on the single table with hit counters, or, if you keep the counter in the table serving the articles, high lock wait timeouts on that article table (resulting in all kinds of front-end errors). Keeping the data in separate files does not give you insight into the overall hits on your site, but for that you could just use a log-graphing tool like awstats.
If you can batch 100 INSERTs/UPDATEs together in a single statement, you can run it 10 times as fast. (There is a risk of lock_wait_timeout and/or deadlock.)
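As a sketch of what that batching could look like with the mem_table/views_new names from the question (the Map of queued counts is hypothetical; VALUES(views_new) refers to the value being inserted for that row):
import java.sql.*;
import java.util.Map;
import java.util.StringJoiner;

public class BatchedCounterWriter {
    // Fold ~100 queued page views into a single INSERT ... ON DUPLICATE KEY UPDATE.
    static void writeBatch(Connection conn, Map<Long, Integer> queuedViews) throws SQLException {
        StringJoiner rows = new StringJoiner(", ");
        queuedViews.forEach((id, views) -> rows.add("(" + id + ", " + views + ")"));
        String sql = "INSERT INTO mem_table (id, views_new) VALUES " + rows
                   + " ON DUPLICATE KEY UPDATE views_new = views_new + VALUES(views_new)";
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(sql);   // one round trip instead of one per counter
        }
    }
}
Concatenating values is safe here only because the keys and counts are numeric; anything user-supplied should go through placeholders instead.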
What if you build a MEMORY table and lose the queued data in a power failure? I assume that is OK for this application? (If not, you have a much bigger problem.)
What are your client(s)? Can they queue up things before even touching the database?
I like ping-ponging a pair of tables for staging data into the database. Clients write to one table; a continuously running job (not a cron job) is working with the other table. When the latter finishes with inserts/updates, it swaps the tables with a single, atomic, RENAME TABLE so that the clients are oblivious. My Staging Table blog discusses this in further detail. It explains how to avoid the replication problems you encountered.
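A bare-bones sketch of that swap, with placeholder table names; RENAME TABLE renames every listed table in one atomic operation, so clients always see exactly one live staging table:
import java.sql.*;

public class StagingSwapper {
    // Clients insert into `stage`; the background job drains `stage_old`.
    // When the job finishes a pass, swap the two tables atomically.
    static void swap(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("RENAME TABLE stage TO stage_tmp, stage_old TO stage, stage_tmp TO stage_old");
        }
    }
}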
Another tip. Do not put the count and date in the main table. Put them in a 'parallel table' ('vertical partitioning'). This cuts down on the bulkiness in replication and decreases the interference with other processing.
For Galera, use a pair of non-replicated tables (I suggest MyISAM with no indexes). Have the continually running job run in one place, cycling through the 3 nodes. If you had 3 jobs, there would be several ways in which they would be more likely to stumble over each other.
If this won't keep up, you need to Shard your data. (That's what the big folks do, sooner or later.)

When does a slow MySQL query on a given connection affect other connections?

I think I have a basic understanding of this, but am hoping that someone can give me more details as I am interested in learning more about database performance.
Let's say I have a very large database with many millions of entries, and the database supports many connections. Doing simple queries on the database will be slow, as there's so much data. I'm trying to understand exactly when a query on a given connection starts to have a direct effect on the performance of queries running on other connections.
If one connection locks some elements, I understand that this will hold up queries running on other connections that need those elements. For example, doing:
SELECT FOR UPDATE
will lock what you are selecting.
What happens when you do something simple like:
SELECT COUNT(*) FROM myTable
Let's say we have a table with a billion rows, so running the count is going to take some time (running on InnoDB). Will it affect queries running on other connections?
What if you select a large amount of data using SELECT and JOIN, like:
SELECT * FROM myTable1 JOIN myTable2 ON myTable1.id = myTable2.id;
Does having a join lock anything for other queries?
I'm finding it hard to know which queries will have a direct effect on the performance of queries running on other connections.
Thanks
There are different angles:
Row locking: this shouldn't happen if you tune your architecture, so you should forget about it
Real performance issues and bottlenecks; in this case, collateral effects.
Regarding this second point, the problem is mainly divided into 3 areas:
Disk reads
Memory usage (buffer)
CPU usage.
About disk reads: the more data (in bytes) you retrieve, the busier the hard drive gets, slowing down any other activity that uses it. Reduce the size of the selected rows to avoid disk overhead.
About memory usage: MySQL manages an internal buffer that can get stuck in some situations. I don't know enough about it to give you a proper answer, but I know this is definitely something you should keep an eye on.
About CPU usage: basically, the CPU gets busy when it
has to calculate (joins, preparing statements, arithmetic...)
has to do all the peripheral work: moving bytes from disk to memory, for instance.
Optimize your queries to reduce CPU overhead. (Sounds silly, but it always turns out to be the problem anyway...)
So, how do you know when there's a collateral effect? By profiling your hardware...
How to profile?
absolute profiling: use SHOW INNODB STATUS or SHOW PROFILE to get useful information about MySQL's main disk, CPU, and memory counters (see the sketch after this list).
relative profiling: use your favorite OS profiler. Under Windows XP, for instance, you can use the great perfmon.exe and watch the PRIVATE BYTES and VIRTUAL BYTES of the MySQL process. I say relative because, after all, a query that is time-consuming on your computer might not be on a NASA system...
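For the SHOW PROFILE route, here is a small sketch of how a session profile could be collected over JDBC (the connection details are placeholders; profiling is a per-session setting and is deprecated in recent MySQL versions, but still available):
import java.sql.*;

public class QueryProfiler {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // hypothetical
        try (Connection c = DriverManager.getConnection(url, "user", "password");
             Statement st = c.createStatement()) {
            st.execute("SET profiling = 1");                     // enable profiling for this session
            st.executeQuery("SELECT COUNT(*) FROM myTable").close();
            try (ResultSet rs = st.executeQuery("SHOW PROFILES")) {
                while (rs.next()) {                              // one row per profiled statement
                    System.out.printf("#%d %.6fs %s%n",
                            rs.getInt("Query_ID"), rs.getDouble("Duration"), rs.getString("Query"));
                }
            }
        }
    }
}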
Hope it helps, regards.
This is a very general question, so giving a precise answer is difficult.
You can think of the database as a pool of shared resources, especially because the underlying hardware your database runs on has physical limits. Most often, when you see something like a SELECT query causing a performance impact on other queries, it's because they're all competing for those underlying physical resources (disk I/O, RAM, CPU time) and there isn't enough to go around.
So the actual results you will see depend heavily on your database's physical hardware and configuration settings.
For instance, in your SELECT examples the variables might be: Is the data the query needs already in RAM? Can it look up the rows efficiently by an index? If it does have to do I/O, how many other queries are asking to read data from disk? Are you using a secondary index and having to do multiple reads? Is the database doing read-ahead to buffer other pages? Is the query causing sequential or random I/O? Are any updates holding locks on the data? How much read I/O can the physical hardware support?
You would have to answer all of those questions for all of the queries currently executing to know whether they're going to affect the performance of other queries.
This is why DBAs exist. Busy databases are complex systems, and it's all about the interaction of a great many different operations, all with thousands of possible variables affecting them.
So what you generally do is optimize the things you can control as well as you know how (hardware, mysql configuration, schema and indexes) then start measuring the system as it runs to understand what is actually going on.
So in your case, I would say that it's infinitely more helpful to focus on simply optimizing your queries individually. The faster they execute, the fewer resources they are probably using and the less chance they have of impacting others. Then you learn to analyze the system: look at one thing that's slow, ask "why is this slow?", and fix it. That's the optimization process.
However, in the first case you wrote, with SELECT ... FOR UPDATE, explicit locks can and will be big performance issues. Be careful with those.
Read queries are only affected by the isolation levels of other queries; they themselves never block the table.
Isolation levels are designated transactional safety modes. If another query that uses locking does not allow dirty reads, your reads will be held until the other query finishes writing or unlocks.
MVCC is a mechanism that allows databases to create a new version of the data when they need to update or delete it. This means that when you start a read on the current version of the data, that data won't get tainted by future updates or deletes.
When you start a write on current data even though that data is being read by another process, you're in fact writing the new rows somewhere else and marking them as the newest version. In the end, that means no blocking for the writing process (at least not because of the reading process).

Is there ever a reason to use a database transaction for read only sql statements?

As the question says, is there ever a reason to wrap read-only sql statements in a transaction? Obviously updates require transactions.
You still need a read-lock on the objects you operate on. You want to have consistent reads, so writing the same records shouldn't be possible while you're reading them...
If you issue several SELECT statements in a single transaction, you will also produce several read-locks.
SQL Server has some good documentation on this (the "read-lock" is called shared lock, there):
http://msdn.microsoft.com/en-us/library/aa213039%28v=sql.80%29.aspx
I'm sure MySQL works in similar ways
Yes, if it's important that the data is consistent across the select statements run. For instance if you were getting the balance of several bank accounts for a user, you wouldn't want the balance values read to be inconsistent. Eg if this happened:
With balance values B1=10 and B2=20
Your code reads B1= 10.
Transaction TA1 starts on another DB client
TA1 writes B1 to 20, B2 to 10
TA1 commits
Your code reads B2 = 10
So you now think that B1 is 10 and B2 is 10, which could be displayed to the user and would suggest that $10 has disappeared!
Transactions for reading will prevent this, since we would read B2 as 20 in step 5 (assuming a multiversion concurrency control DB, which MySQL with InnoDB is).
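A rough JDBC sketch of that, with an invented accounts table: both balances come from the same snapshot because the two SELECTs run inside one transaction (InnoDB's default REPEATABLE READ isolation):
import java.sql.*;

public class ConsistentBalanceReader {
    static long[] readBalances(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
        conn.setAutoCommit(false);      // start an explicit transaction
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT balance FROM accounts WHERE id = ?")) {
            long b1 = readOne(ps, 1);   // B1
            long b2 = readOne(ps, 2);   // B2 is consistent with B1, even if another
            conn.commit();              // client updated both rows in between
            return new long[] { b1, b2 };
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(true);
        }
    }

    private static long readOne(PreparedStatement ps, long id) throws SQLException {
        ps.setLong(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }
}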
MySQL 5.1 with the InnoDB engine has a default transaction isolation level of REPEATABLE READ. So if you perform your SELECTs inside a transaction, no dirty reads or non-repeatable reads can happen. That means that even with transactions committing between two of your queries, you'll always see a consistent database. In theory, under REPEATABLE READ you would only have to fear phantom reads, but with InnoDB this cannot even occur. So by simply opening a transaction you can assume database consistency (coherence) and perform as many SELECTs as you want without fearing concurrently running and committing write transactions.
Do you have any interest in such a strong consistency constraint? Well, it depends on what you're doing with your queries. Having inconsistent reads means that if one of your queries is based on a result from a previous one, you may have problems:
if you're performing only one query, you do not care at all
if none of your queries depends on a result from a previous one, you do not care
if you never re-read a record in the same session, same thing
if you always read dependencies of your main record in the same query and do not use lazy loading, no problem
if a small inconsistency between your first and last query will not break your code, then forget about it. But be careful: this can produce an application bug that is very hard to debug (and hard to reproduce). So write robust application code, something which can handle database errors and fail gracefully (or not fail at all) when this occurs (twice a year?).
if you show critical data (I mean bank accounts and not blogs or chats), then you should maybe care about it
if you have a lot of write operations, then the risk of inconsistent reads increases, and you may need to add transactions at least at some key points
you may need to test the impact on performance: having all read requests in transactions while several write transactions are actively altering the data certainly slows down the engine, since it needs to maintain several versions of the data. So you should check whether the impact is too big for your application

In MySQL, why do time-consuming updates get slowed down by other processes waiting for the table(s) being updated?

Once in a while, I need to perform a massive update to a very large table. If users continue hitting the Web site while the update is being run, there will be a line-up of MySQL clients.
It appears that the longer the line-up, the slower the main operation gets (i.e. it updates fewer rows per unit time). Killing those processes can speed things up, but they're bound to come back.
Is there a way to address this (other than by bringing the site down)? I don't mind the users waiting a few minutes, but once the line-up has reached a certain size, the operation never completes.
This applies to UPDATE statements, as well as to statements that result in a temporary table being created (e.g. ALTER TABLE).
The waiting connections take up memory, and they're queueing up lock requests on your large and busy table. Eventually you're going to exhaust your maximum DB connections or one of your memory pools due to the number of connections held open. If I had to guess, I'd guess that your slowdown is due to memory exhaustion and the resultant swap-thrashing.
If the update you're doing doesn't require consistency between rows in the large table, you can try lowering the isolation level of the update transaction, using SET TRANSACTION ISOLATION LEVEL. This will greatly decrease the amount of locking and work that MySQL normally does to provide each client "repeatable reads" on a table being updated and read concurrently. You could also try partitioning your large table and running one update per partition, or otherwise breaking up the update operation into multiple pieces so that the table isn't locked for a long time at any one stretch.
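To picture "breaking up the update into multiple pieces", here is a sketch that walks the primary key in fixed-size chunks and commits each chunk separately; the table, columns, and chunk size are invented, and this assumes intermediate states are acceptable, as described above:
import java.sql.*;

public class ChunkedUpdater {
    // Update a huge table in primary-key ranges so no single transaction
    // holds locks on the whole table for long.
    static void run(Connection conn) throws SQLException {
        final int chunk = 10_000;
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                 "UPDATE big_table SET status = 'archived' " +
                 "WHERE id BETWEEN ? AND ? AND status = 'stale'")) {
            long maxId = maxId(conn);
            for (long lo = 1; lo <= maxId; lo += chunk) {
                ps.setLong(1, lo);
                ps.setLong(2, lo + chunk - 1);
                ps.executeUpdate();
                conn.commit();           // release locks after every chunk
            }
        } finally {
            conn.setAutoCommit(true);
        }
    }

    private static long maxId(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT MAX(id) FROM big_table")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}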
If you do require consistency to be maintained between the rows, i.e. the whole table has to go from state X to X' in a single transaction with no intermediate states ever being visible, you're not going to be able to use the above techniques. You might try cloning the table, doing the update on the new table, then renaming the old table out of the way and renaming the new table into its place. Since it's a large table, this may require a significant increase in the runtime and storage needed for the operation. There are also caveats for doing this when triggers and constraints are present. The benefit is that you avoid holding a write lock on the table being updated, except during the relatively fast rename operations. Your users will only be delayed during that small swap window, and this will likely not take so long as to cause the major slowdowns you've experienced.
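A bare-bones sketch of that clone-and-rename approach, with placeholder names; it assumes no triggers, foreign keys, or concurrent writes to the table during the copy, which is exactly the caveat mentioned above:
import java.sql.*;

public class RebuildAndSwap {
    static void run(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE big_table_new LIKE big_table");
            // The "massive update" happens while building the copy, off to the side.
            st.execute("INSERT INTO big_table_new SELECT id, UPPER(name), other_col FROM big_table");
            // Swap the tables in one atomic rename; readers only wait during this step.
            st.execute("RENAME TABLE big_table TO big_table_old, big_table_new TO big_table");
            st.execute("DROP TABLE big_table_old");
        }
    }
}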