Concurrent MySQL queries causing large query queues - mysql

I have a large MySQL database that receives a large volume of queries; each query takes around 5-10 seconds to run.
Queries involve checking records, updating records and adding records.
I'm experiencing some significant bottlenecks in query execution, which I believe is because incoming queries have to 'queue' whilst current queries hold the records those incoming queries need to access.
Is there a way, besides completely reformatting my database structure and SQL queries, to enable simultaneous use of database records by queries?

An INSERT, UPDATE, or DELETE operation locks the relevant tables (MyISAM) or rows (InnoDB) until the operation completes. Make sure transactions containing these statements are committed quickly, and check the transaction isolation level used by the parts of your application that touch the contended records.
For MySQL internal locking see: https://dev.mysql.com/doc/refman/5.5/en/internal-locking.html
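For example, keeping write transactions short (and, where acceptable, using a less strict isolation level) reduces how long rows stay locked. A minimal sketch, with hypothetical table and column names:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; -- less gap locking on InnoDB

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 42; -- 'accounts' is a placeholder
UPDATE accounts SET balance = balance + 100 WHERE id = 43;
COMMIT; -- commit right away; don't hold the transaction open during application work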
Also remember that MySQL has different storage engines with different features, e.g.:
The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no holes in the data file (deleted rows in the middle), an INSERT statement can be executed to add rows to the end of the table at the same time that SELECT statements are reading rows from the table.
https://dev.mysql.com/doc/refman/5.7/en/concurrent-inserts.html
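If your tables are on MyISAM, one low-effort experiment is to check which engine each table uses and convert the contended ones to InnoDB to get row-level locking. A sketch, with placeholder names:

SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database';

ALTER TABLE your_table ENGINE = InnoDB; -- rebuilds the table; can take a while on large tables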
You may also want to take a look at https://dev.mysql.com/doc/refman/5.7/en/optimization.html

Related

Slow insert statements on SQL Server

A single insert statement is taking, occasionally, more than 2 seconds. The inserts are potentially concurrent, as it depends on our site traffic, which can result in 200 inserts per minute.
The table has more than 150M rows, 4 indexes and is accessed using a simple select statement for reporting purposes.
SHOW INDEX FROM output
How to speed up the inserts considering that all indexes are required?
You haven't provided many details but it seems like you need partitions.
An insertion operation on a database index has, in general, O(log N) time complexity, where N is the number of rows in the table. If your table is really huge, even log N may become too much.
So, to address that scalability issue, you can use index partitions to transparently split your table indexes into smaller internal pieces, reducing that N without changing your application or SQL scripts.
https://dev.mysql.com/doc/refman/5.7/en/partitioning-overview.html
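For illustration, a sketch assuming the table is the output table mentioned above and has an integer primary key id:

ALTER TABLE output
PARTITION BY HASH(id)
PARTITIONS 16; -- each partition gets its own, smaller index trees

Note that MySQL requires every unique key (including the primary key) to include the partitioning column.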
[EDIT]
Considering the information initially added in the comments and now updated in the question itself:
200 potentially concurrent inserts per minute
4 indexes
1 select for reporting purposes
There are a few not mutually exclusive improvements:
Check the output of EXPLAIN for that SELECT and remove indexes that are not being used, or otherwise combine them into a single index.
Make the inserts in batches (see the sketch after this list):
https://dev.mysql.com/doc/refman/5.6/en/insert-optimization.html
https://dev.mysql.com/doc/refman/5.6/en/optimizing-innodb-bulk-data-loading.html
Partitioning is still an option.
Alternatively, change your approach: save the data to a NoSQL store like Redis and populate the MySQL table asynchronously for reporting purposes.
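As an illustration of the batching point above (column names are hypothetical):

-- instead of many single-row statements...
INSERT INTO output (col_a, col_b) VALUES (1, 'x');
INSERT INTO output (col_a, col_b) VALUES (2, 'y');

-- ...send one multi-row INSERT, paying the parse, network, and commit overhead once:
INSERT INTO output (col_a, col_b)
VALUES (1, 'x'), (2, 'y'), (3, 'z');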

Slow MySQL table

I am currently trying to figure out why the site I am working on (Laravel 4.2 framework) is really slow at times, and I think it has to do with my database setup. I am not a pro at all, so I assume that's where the problem is.
My sessions table has roughly 2.2 million records in it; when I run show processlist;, all the queries that take the longest relate to that table.
Here is a picture for example: [image: table structure]
Surely I am doing something wrong, or it's not indexed properly? I'm not sure; I'm not fantastic with databases.
We don't see the complete SQL being executed, so we can't recommend appropriate indexes. But if the only predicate on the DELETE statements is on the last_activity column i.e.
DELETE FROM `sessions` WHERE last_activity <= 'somevalue' ;
Then performance of the DELETE statement will likely be improved by adding an index with a leading column of last_activity, e.g.
CREATE INDEX sessions_IX1 ON sessions (last_activity);
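On MySQL 5.6 and later you can verify that the new index is actually used ('somevalue' stands in for a real timestamp):

EXPLAIN DELETE FROM `sessions` WHERE last_activity <= 'somevalue';
-- the plan should show a range scan on sessions_IX1 rather than a full table scan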
Also, if this table is using MyISAM storage engine, then DML statements cannot execute concurrently; DML statements will block while waiting to obtain exclusive lock on the table. The InnoDB storage engine uses row level locking, so some DML operations can be concurrent. (InnoDB doesn't eliminate lock contention, but locks will be on rows and index blocks, rather than on the entire table.)
Also consider using a different storage mechanism (other than MySQL database) for storing and retrieving info for web server "sessions".
Also, is it necessary (is there some requirement) to persist 2.2 million "sessions" rows? Are we sure that all of those rows are actually needed? If some of that data is historical, and isn't specifically needed to support the current web server sessions, we might consider moving the historical data to another table.
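One possible sketch of that archiving idea (sessions_archive is a hypothetical table, and the cutoff date is made up):

CREATE TABLE sessions_archive LIKE sessions;

INSERT INTO sessions_archive
SELECT * FROM sessions WHERE last_activity <= '2015-01-01';

DELETE FROM sessions WHERE last_activity <= '2015-01-01' LIMIT 10000; -- repeat until no rows are deleted

Deleting in chunks keeps each statement's lock time short.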

Two identical databases on same server with different running times

I have two databases that are identical except that one has about 500,000 entries (distributed over several tables) while the other database is empty.
If I run my program against the empty database, execution takes around 10 minutes, while against the database with the 500k entries it takes around 40 minutes. I have now deleted some of the entries (about 250k) and it sped up the execution by around 10 minutes. The strange thing is that these tables were not heavily queried (just some very simple inserts), so I wonder how this can have such an effect on the execution.
Also, all the SQL statements that I run (and I run a lot of them) are rather simple (no complicated joins, mainly inserts), so I wonder why some tables with 250k entries can have such an effect on performance. Any ideas what the reason could be?
The following things could be the reason, but to find the actual cause you should profile your queries.
Though you think you are making simple inserts, an insert is not a simple operation from the DB's perspective. For every row you insert, the following may have to change and be updated:
Indexes
Constraints
Integrity of the DB (PK-FK), among other things.
These things look simple, but they take time when the volume is high.
Check the volume of queries: if a high number of INSERT queries is being executed then, as you might know, an insert is an exclusive operation, i.e. it locks the table for updating, so high volume means more locking time and more waiting time.
To avoid this you can try batching or bulk operations, e.g.:
Is bulk update faster than single update in db2?
Data distribution also plays an important role. If you are accessing heavily loaded tables, then parsing/accessing/fetching data from such tables will also take time (it doesn't matter for a single query, but it really hurts for a large volume of similar queries). Try to minimize that by tuning your queries.
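To actually profile, MySQL 5.x offers SHOW PROFILE (deprecated in favor of the Performance Schema, but handy for a quick look). A sketch with a made-up statement:

SET profiling = 1;
INSERT INTO my_table (a, b) VALUES (1, 2); -- the statement you suspect is slow
SHOW PROFILES; -- durations of recent statements
SHOW PROFILE FOR QUERY 1; -- stage-by-stage breakdown of that statement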

How to speed up MySQL INSERTs/UPDATEs?

I have three large MySQL tables. They are approaching 2 million records. Two of the tables are InnoDB and are currently around 500 MB in size. The other table is MyISAM and is about 2.5 GB.
We run an import script from FileMaker to insert and update records in these tables but lately it has become very slow - only inserting a few hundred records per hour.
What can I do to increase performance to make inserts and updates happen faster?
For INSERT it could have to do with the indexes you have defined on the tables (they have to be updated after each INSERT). Could you post more information about them? And are there triggers set on the tables?
For UPDATE it is a different story: it could be that it is not the record update that is slow, but finding the record. Could you try changing the UPDATE into a SELECT and see if it is still slow? If so, you should investigate your indexes.
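A hypothetical example of that diagnostic (table and column names are made up):

UPDATE products SET price = 9.99 WHERE sku = 'ABC-123'; -- the slow statement

SELECT * FROM products WHERE sku = 'ABC-123'; -- if this is also slow, the lookup is the problem and sku likely needs an index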
For the InnoDB table, if it's an acceptable risk, I'd consider changing the innodb_flush_log_at_trx_commit level. Some more details in this blog post, along with some more InnoDB tuning pointers.
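For reference, a sketch of that setting change (only if losing up to about one second of committed transactions on a crash is acceptable):

SET GLOBAL innodb_flush_log_at_trx_commit = 2; -- write the log at commit, flush to disk once per second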
For both engines, batching INSERTs together can speed things up to a point. See doc.
What version of MySQL are you running? There have been many improvements with the new InnoDB "Plugin" engine and concurrency of operations on servers with multiple processors.
Is the query slow when executed on MySQL from the command line?
If you're using the Execute SQL Script step from FileMaker, that connects and disconnects after every call, causing major slowdowns when executing large numbers of queries. We've had clients switch to our JDBC plugin (self-promotion disclaimer here) to avoid this, resulting in major speedups.
It turns out the reason for the slowness was on the FileMaker side of things. Exporting the FileMaker records to a CSV and running INSERT/UPDATE commands resulted in very fast execution.

Is MySQL InnoDB appropriate for this scenario?

My MySQL database contains multiple MyISAM tables, each containing millions of rows. There is a heavy insert load on the database, so I cannot issue SELECTs on that live database. Instead, I create a replica of the database for queries and conduct analysis on that.
For the analysis, I need to issue multiple parallel queries. The queries are independent (i.e., the results of the queries are not combined together), but they operate on the same tables most of the time. As far as I know, the entire MyISAM table is locked for each query, which means parallel independent queries would be slow. Ideally, I would prefer an engine that supports "NO LOCKING". I am assuming MySQL doesn't have such an engine, so should I use InnoDB? I might be missing a lot of things here. Please suggest what is the right path to take here.
Thanks
MyISAM read locks are compatible, so the SELECT queries won't lock each other.
If your analysis queries on the replica database don't write, only read, then it's OK to use MyISAM.
You could stick to MyISAM and use INSERT DELAYED:
When a client uses INSERT DELAYED, it gets an okay from the server at once, and the row is queued to be inserted when the table is not in use by any other thread.
Another major benefit of using INSERT DELAYED is that inserts from many clients are bundled together and written in one block. This is much faster than performing many separate inserts.
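A sketch of what that looks like (table and column names are hypothetical; note that INSERT DELAYED only works with MyISAM, MEMORY, and ARCHIVE tables, is deprecated in MySQL 5.6, and was removed in 5.7):

INSERT DELAYED INTO analysis_log (event, created_at)
VALUES ('row_scanned', NOW()); -- returns immediately; the row is queued server-side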