Slow MySQL table

I am currently trying to figure out why the site I am working on (Laravel 4.2 framework) is really slow at times, and I think it has to do with my database setup. I am not a pro at all, so I would assume that is where the problem is.
My sessions table has roughly 2.2 million records in it. When I run SHOW PROCESSLIST;, the queries that take the longest all relate to that table.
(Screenshot of the process list and the table structure omitted.)
Surely I am doing something wrong, or it's not indexed properly? I'm not sure; I'm not fantastic with databases.

We don't see the complete SQL being executed, so we can't recommend appropriate indexes. But if the only predicate on the DELETE statements is on the last_activity column, i.e.
DELETE FROM `sessions` WHERE last_activity <= 'somevalue' ;
then performance of the DELETE statement will likely be improved by adding an index with a leading column of last_activity, e.g.
CREATE INDEX sessions_IX1 ON sessions (last_activity);
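To check that the index will actually be used, you can EXPLAIN the equivalent SELECT (EXPLAIN also works directly on DELETE statements as of MySQL 5.6). The timestamp value below is just a placeholder:
EXPLAIN SELECT * FROM sessions WHERE last_activity <= 1429000000;
If the predicate is selective enough, the key column of the output should show sessions_IX1.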
Also, if this table is using the MyISAM storage engine, then DML statements cannot execute concurrently; DML statements will block while waiting to obtain an exclusive lock on the table. The InnoDB storage engine uses row-level locking, so some DML operations can be concurrent. (InnoDB doesn't eliminate lock contention, but locks will be on rows and index blocks, rather than on the entire table.)
Also consider using a different storage mechanism (other than a MySQL database) for storing and retrieving info for web server "sessions".
Also, is it necessary (is there some requirement) to persist 2.2 million "sessions" rows? Are we sure that all of those rows are actually needed? If some of that data is historical, and isn't specifically needed to support the current web server sessions, we might consider moving the historical data to another table.
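As a rough sketch of that idea (the sessions_archive table name and the 30-day cutoff are assumptions, not from the question; Laravel stores last_activity as a Unix timestamp), old rows could be copied out and then deleted in batches, so no single DELETE holds locks for long:
-- Assumes: CREATE TABLE sessions_archive LIKE sessions;
INSERT INTO sessions_archive
  SELECT * FROM sessions
  WHERE last_activity <= UNIX_TIMESTAMP(NOW() - INTERVAL 30 DAY);
-- Delete in modest batches; repeat until zero rows are affected.
DELETE FROM sessions
  WHERE last_activity <= UNIX_TIMESTAMP(NOW() - INTERVAL 30 DAY)
  LIMIT 10000;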

Related

Concurrent mysql queries causing large query queues

I have a large mysql database that receives large volumes of queries; each query takes around 5-10 seconds to perform.
Queries involve checking records, updating records and adding records.
I'm experiencing some significant bottlenecks in the query executions, which I believe is due to incoming queries having to 'queue' whilst current queries are using records that these incoming queries need to access.
Is there a way, besides completely reformatting my database structure and SQL queries, to enable simultaneous use of database records by queries?
An INSERT, UPDATE, or DELETE operation locks the relevant tables (MyISAM) or rows (InnoDB) until the operation completes. Be sure queries of this type are committed quickly, and also check your transaction isolation level and which parts of the data your transactions actually lock.
For MySQL internal locking see: https://dev.mysql.com/doc/refman/5.5/en/internal-locking.html
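To see whether queries really are queuing on row locks, one option is to query the INFORMATION_SCHEMA InnoDB tables (a sketch; these tables exist in MySQL 5.5-5.7, while 8.0 moved the lock-wait view to performance_schema):
SELECT r.trx_id    AS waiting_trx,
       r.trx_query AS waiting_query,
       b.trx_id    AS blocking_trx,
       b.trx_query AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;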
Also remember that MySQL has different storage engines with different features, e.g.:
The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no holes in the data file (deleted rows in the middle), an INSERT statement can be executed to add rows to the end of the table at the same time that SELECT statements are reading rows from the table.
https://dev.mysql.com/doc/refman/5.7/en/concurrent-inserts.html
You might also take a look at https://dev.mysql.com/doc/refman/5.7/en/optimization.html
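Whether those MyISAM concurrent inserts happen is governed by the concurrent_insert system variable; a small sketch:
SHOW VARIABLES LIKE 'concurrent_insert';
-- 0 (NEVER): disabled; 1 (AUTO, default): only when the data file has no holes;
-- 2 (ALWAYS): allow inserts at the end of the table even when there are deleted rows.
SET GLOBAL concurrent_insert = 2;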

Which MySQL DB engine to use when

I am trying to find out which MySQL table engine is best for each of our tables and requirements.
The tables with many reads (SELECT queries) are MyISAM.
The tables with many writes (INSERT/UPDATE queries) are InnoDB. These are the only two types we have used, but now we have different scenarios and we do not know which DB engine is best.
1) We have a table users that we UPDATE/SELECT very often, like 1 row every second for SELECT and 1 row every second for UPDATE, but the INSERTs are rare, like 1 every 300 seconds. For this we chose MyISAM.
2) We have a table users_data where we INSERT data as often as we do in table users, like every 300 seconds, but we do not UPDATE this table too often; we read from it once every second. For this we chose MyISAM.
3) We have a table transactions where we INSERT data very often, like 1 row every 4-5 seconds, and we SELECT large packs from it every 20-30 seconds (we often compute SUMs from this table based on userid). For this we chose MyISAM.
4) We have a table transactions_logs where we store id (the same as in the transactions table), merchant name and email, and we INSERT data very often, like 1 row every 4-5 seconds, but we read it very rarely. For this we chose InnoDB.
Rarely, we join tables transactions and transactions_logs for statistics.
5) We have a table pages where we only SELECT data, very often, like 1 row per second. For this we chose MyISAM and we turned on the MySQL cache.
Questions:
a) We have another table with 1 INSERT every 100000 seconds, but many SELECT/UPDATE queries per second. What type should this be? We are using MyISAM for now for this type.
We read data from it, we modify it, then we update it, and we do this once every 1-2 seconds. Is MyISAM the best option for this?
b) Do you think that we should have used InnoDB for all tables? I've read that since MySQL 5.6, InnoDB is the default table type, and it has probably been optimised a lot.
Fundamentally, I use the following two differences between MyISAM and InnoDB to choose which one to use in a specific scenario:
InnoDB supports transactions, MyISAM does not.
InnoDB has row-level locking, MyISAM has table-level locking.
(Source: MySQL 5.7 Reference Manual)
My rule of thumb is to use MyISAM when there is a high number of select queries and a low number of update/insert queries. Whenever write performance or data integrity is of importance, I'll use InnoDB.
While the above is useful as a starting point, every database and every application is different. The specific details of your hardware and software setup will ultimately dictate which engine choice is best. When in doubt, test!
However, I will say that, based on the numbers provided, and assuming 'modern' server hardware, you're not anywhere near the performance limits of MySQL, so either engine would suffice.
MyISAM works great for read-only loads and for append-and-read-forever loads. It handles concurrency by locking the entire table on writes, which can make it very slow under write-heavy loads.
InnoDB is a little more complicated and adds some configuration options that must be set reasonably well. It adds support for row-level locking, which is great when rows are added or updated less than about once per second (giving plenty of time for concurrent reads and writes).
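For question b), if you do decide to standardize on InnoDB, the conversion is a single statement per table. A sketch using the users table from the question (the rebuild copies the whole table, so run it in a low-traffic window):
ALTER TABLE users ENGINE=InnoDB;
SHOW TABLE STATUS LIKE 'users'; -- the Engine column should now read InnoDB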

Updating MySQL InnoDB Index Statistics

We have a large MySQL 5.5 database in which many rows are inserted daily and never deleted or updated. There are also users querying the live database. Tables are MyISAM.
But it is effectively impossible to run ANALYZE TABLE because it takes way too long (15 hours, and it sometimes crashes the tables), and so the query optimizer will often pick the wrong index.
We want to try switching to all InnoDB. Will we need to run ANALYZE TABLES or not?
The MySQL docs say:
The cardinality (the number of different key values) in every index of a table is calculated when a table is opened, at SHOW TABLE STATUS and ANALYZE TABLE and on other circumstances (like when the table has changed too much).
But that raises the question: when is a table opened? If that means accessed during a connection, then we need do nothing special. But I do not think that is the case for InnoDB.
So what is the best approach? Run ANALYZE TABLE periodically? Perhaps with an increased dive count?
Or will it all happen automatically?
The querying users use apps to get the data, so each run is a separate connection. They generally do NOT expect the rows to be up-to-date within just minutes.
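For reference, a minimal sketch of the knobs involved, assuming the switch to InnoDB comes with an upgrade to MySQL 5.6 or later (persistent statistics and these variables do not exist in 5.5); mytable is a placeholder:
-- Persistent statistics survive restarts and are recalculated automatically
-- once roughly 10% of a table's rows have changed.
SET GLOBAL innodb_stats_persistent = ON;
SET GLOBAL innodb_stats_auto_recalc = ON;
-- The "dive count": more sampled pages means more accurate cardinality (default 20).
SET GLOBAL innodb_stats_persistent_sample_pages = 64;
ANALYZE TABLE mytable; -- forces an immediate recalculation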

MySQL partitioning for table with huge inserts and deletes

I have a table into which some 20 million entries are inserted (blind insertion, without any constraints) per day. We have two foreign keys, and one of them is a reference id to a table with some 10 million entries.
I am planning to delete all the data in this table older than a month, because that data is not needed anymore. But the problem is that with the huge number of insertions happening, if I start deleting, the table will be locked and insertions will be blocked.
I wanted to know if we can use partitioning on the table based on month. That way, I was hoping, when I try deleting all the data older than 2 months, that data would be in a different partition from the one insertions are happening in, and the delete's lock would not block the inserts.
Please tell me if this is possible. I am fairly new to using a DB, so please let me know if there is something wrong with my thinking.
From the MySQL documentation:
For InnoDB and BDB tables, MySQL uses table locking only if you explicitly lock the table with LOCK TABLES. For these storage engines, avoid using LOCK TABLES at all, because InnoDB uses automatic row-level locking and BDB uses page-level locking to ensure transaction isolation.
I'm not sure you even have an issue. Have you tested this and seen locking issues, or are you just theorizing about them right now?
MySQL has partitioning as of version 5.1.
You can run this query to verify if your version of MySQL supports partitioning:
SHOW VARIABLES LIKE 'have_partitioning';
Then you can read the manual to learn how to use it:
http://dev.mysql.com/doc/refman/5.5/en/partitioning.html
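A hedged sketch of what monthly RANGE partitioning could look like (the table and column names here are invented for illustration; note that MySQL partitioned tables cannot have enforced foreign keys, so the two foreign keys from the question would have to become plain reference columns):
CREATE TABLE events (
  id         BIGINT NOT NULL AUTO_INCREMENT,
  created_at DATETIME NOT NULL,
  ref_id     BIGINT NOT NULL,   -- reference id, not an enforced FK
  PRIMARY KEY (id, created_at)  -- the partition column must appear in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p201501 VALUES LESS THAN (TO_DAYS('2015-02-01')),
  PARTITION p201502 VALUES LESS THAN (TO_DAYS('2015-03-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
Dropping a month is then a quick metadata operation rather than a row-by-row DELETE, so it does not block ongoing inserts into the current partition:
ALTER TABLE events DROP PARTITION p201501;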

Optimize mysql table to avoid locking

How do I optimize MySQL tables so they do not use locking? Can I ALTER a table to 'turn off' locking all the time?
Situation:
I have an app which uses a database of 15M records. Once weekly, scripts do some tasks (insert/update/delete) for 20 hours while app servers feed data to the front end (web server), and that is fine; I see very little performance loss during that time.
Problem:
Once monthly I need to optimize the table. Since a huge number of records is out there, it takes 1-2 hours to finish this task (starting OPTIMIZE from the mysql command line or phpMyAdmin, same thing), and in that period MySQL DOESN'T SERVE data to the front end (I suppose it is about locking the tables for the optimize).
Question:
So how do I optimize tables while avoiding locking? Since there is only reading of data at that time (no inserts or updates), I suppose 'unlocking' during the optimize can't do any damage in this case?
If your table engine is InnoDB and your MySQL version is 5.6.17 or later, the lock effectively won't happen. Actually, there will be a lock, but only for a VERY short period.
Prior to MySQL 5.6.17, OPTIMIZE TABLE does not use online DDL. Consequently, concurrent DML (INSERT, UPDATE, DELETE) is not permitted on a table while OPTIMIZE TABLE is running, and secondary indexes are not created as efficiently.
As of MySQL 5.6.17, OPTIMIZE TABLE uses online DDL for regular and partitioned InnoDB tables. The table rebuild, triggered by OPTIMIZE TABLE and performed under the cover by ALTER TABLE ... FORCE, is performed in place and only locks the table for a brief interval, which reduces downtime for concurrent DML operations.
Optimize Tables Official Ref.
Just be sure to prepare free disk space greater than the space currently occupied by your table, because a whole-table copy can happen for the index rebuild.
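A quick sketch of checking that space and then running the rebuild (mytable is a placeholder):
SHOW TABLE STATUS LIKE 'mytable'; -- Data_length + Index_length approximates the space needed
OPTIMIZE TABLE mytable; -- on InnoDB this is performed as ALTER TABLE mytable FORCE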