MySQL 8 - update query randomly takes a long time

We just upgraded our database from MariaDB 5.5.56 to MySQL 8. We are facing an issue where some UPDATE queries against an InnoDB table randomly take a long time. Note that 99% of the time the query is fast, but then it suddenly takes a huge amount of time before going back to normal. We did NOT face this issue on MariaDB 5.5.56.
CREATE TABLE `users` (
`uid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`pid` int(10) unsigned NOT NULL DEFAULT '0',
`ipaddr` int(10) unsigned DEFAULT '0',
PRIMARY KEY (`uid`,`pid`),
KEY `pid` (`pid`)
) ENGINE=InnoDB AUTO_INCREMENT=161089 DEFAULT CHARSET=utf8
/*!50100 PARTITION BY HASH (`pid`)
PARTITIONS 101 */;
mysql> update users set ipaddr=2148888835 where uid=1 limit 1;
Query OK, 1 row affected (39.51 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update users set ipaddr=2148888835 where uid=1 limit 1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The query is very simple and uses the primary key, as shown below:
mysql> explain update users set ipaddr=123 where uid=1;
+----+-------------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| 1 | UPDATE | users | p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32,p33,p34,p35,p36,p37,p38,p39,p40,p41,p42,p43,p44,p45,p46,p47,p48,p49,p50,p51,p52,p53,p54,p55,p56,p57,p58,p59,p60,p61,p62,p63,p64,p65,p66,p67,p68,p69,p70,p71,p72,p73,p74,p75,p76,p77,p78,p79,p80,p81,p82,p83,p84,p85,p86,p87,p88,p89,p90,p91,p92,p93,p94,p95,p96,p97,p98,p99,p100 | range | PRIMARY | PRIMARY | 4 | const | 1 | 100.00 | Using where |
+----+-------------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+---------------+---------+---------+-------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)
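Note that the partitions column lists all 101 partitions: the table is partitioned by HASH(`pid`), and since the WHERE clause only constrains uid, no partition pruning is possible, so the primary-key lookup has to be repeated in every partition. For comparison, including the partitioning column should prune to a single partition (a sketch; pid=0 is just an example value):
EXPLAIN UPDATE users SET ipaddr = 123 WHERE uid = 1 AND pid = 0;
-- the partitions column should then show a single partition (p0) instead of p0 ... p100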
I enabled profiling, and it shows that nearly all the time is spent in the "updating" stage.
mysql> SHOW PROFILE;
+--------------------------------+-----------+
| Status | Duration |
+--------------------------------+-----------+
| starting | 0.000045 |
| Executing hook on transaction | 0.000005 |
| starting | 0.000005 |
| checking permissions | 0.000004 |
| Opening tables | 0.000017 |
| init | 0.000005 |
| System lock | 0.000193 |
| updating | 39.470708 |
| end | 0.000009 |
| query end | 0.000004 |
| waiting for handler commit | 0.038291 |
| closing tables | 0.000039 |
| freeing items | 0.000024 |
| cleaning up | 0.000028 |
+--------------------------------+-----------+
14 rows in set, 1 warning (0.00 sec)
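One way to see what a stalled UPDATE like this is actually waiting on is to query the lock-wait views from a second connection while it hangs (a sketch, assuming MySQL 8's default performance_schema and sys schema setup):
-- Run from a second session while the UPDATE is stuck in the "updating" stage.
-- Blocked session, blocking session and the blocking query:
SELECT * FROM sys.innodb_lock_waits\G
-- Raw lock-wait detail straight from performance_schema:
SELECT * FROM performance_schema.data_lock_waits\G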
The tables were freshly created by importing a mysqldump. I even tried optimizing the table, but it didn't help.
mysql> optimize table users;
+--------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+--------------+----------+----------+-------------------------------------------------------------------+
| mesibo.users | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| mesibo.users | optimize | status | OK |
+--------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (2 min 3.07 sec)
Below is the config file. Our server has 64 GB of RAM and we allocated 16 GB to InnoDB; however, changing this value or removing it from the configuration does not help.
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
innodb_buffer_pool_size=16G
max_connections = 1024
Any clue what might be wrong here?
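If no lock wait shows up there, the next step would be to capture general engine state from another connection during a stall (a sketch; purely observational):
-- Semaphore waits, pending I/O and the latest lock information:
SHOW ENGINE INNODB STATUS\G
-- The State column of the stalled connection:
SHOW PROCESSLIST;
-- Durability settings that commonly cause commit-time stalls:
SELECT @@innodb_flush_log_at_trx_commit, @@sync_binlog;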

Related

Select count takes two minutes on a two column table

I have a MariaDB table with just under 100000 rows, and selecting the count takes a very long time (almost 2 minutes).
Selecting anything by id from the table though takes only 4 milliseconds.
The text field here contains on average 5000 characters.
How can I speed this up?
MariaDB [companies]> describe company_details;
+---------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+------------------+------+-----+---------+-------+
| id | int(10) unsigned | NO | PRI | NULL | |
| details | text | YES | | NULL | |
+---------+------------------+------+-----+---------+-------+
MariaDB [companies]> explain select count(id) from company_details;
+------+-------------+-----------------+-------+---------------+---------+---------+------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-----------------+-------+---------------+---------+---------+------+-------+-------------+
| 1 | SIMPLE | company_details | index | NULL | PRIMARY | 4 | NULL | 71267 | Using index |
+------+-------------+-----------------+-------+---------------+---------+---------+------+-------+-------------+
MariaDB [companies]> analyze table company_details;
+---------------------------+---------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+---------------------------+---------+----------+----------+
| companies.company_details | analyze | status | OK |
+---------------------------+---------+----------+----------+
1 row in set (0.098 sec)
MariaDB [companies]> select count(id) from company_details;
+-----------+
| count(id) |
+-----------+
| 96544 |
+-----------+
1 row in set (1 min 43.199 sec)
This becomes an even bigger problem when I try to join the table.
For example, to find the number of companies which do not have associated details:
MariaDB [companies]> SELECT COUNT(*) FROM company c LEFT JOIN company_details cd ON c.id = cd.id WHERE cd.id IS NULL;
+----------+
| count(*) |
+----------+
| 42178 |
+----------+
1 row in set (10 min 28.846 sec)
Edit:
After running OPTIMIZE on the table, the SELECT COUNT improved from 1 min 43 sec to just 5 sec, and the join improved from over 10 minutes to 25 seconds.
MariaDB [companies]> optimize table company_details;
+---------------------------+----------+----------+-------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+---------------------------+----------+----------+-------------------------------------------------------------------+
| companies.company_details | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| companies.company_details | optimize | status | OK |
+---------------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (11 min 21.195 sec)
I think there's an OPTIMIZE TABLE command for rebuilding the table and its indexes:
OPTIMIZE TABLE company_details;
This usually takes some time to complete. More details: https://mariadb.com/kb/en/optimize-table/
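Beyond rebuilding the table, a small secondary index can also help: InnoDB's COUNT(*) tends to scan the smallest available index, and with only the primary key present the scan walks the clustered index, which carries the large details values. A sketch (the index name is made up; check with EXPLAIN that the optimizer actually uses it):
ALTER TABLE company_details ADD INDEX idx_id (id);
-- EXPLAIN SELECT COUNT(id) FROM company_details should now show key: idx_id
SELECT COUNT(id) FROM company_details;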

MySQL: why is count(*) very fast after locking a table?

If I don't lock the table, count(*) performance is bad, like this:
mysql> select count(*) from titles;
+----------+
| count(*) |
+----------+
| 443308 |
+----------+
1 row in set (8.79 sec)
mysql> explain select count(*) from titles;
+----+-------------+--------+------------+-------+---------------+---------+---------+------+--------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------+------------+-------+---------------+---------+---------+------+--------+----------+-------------+
| 1 | SIMPLE | titles | NULL | index | NULL | PRIMARY | 209 | NULL | 442843 | 100.00 | Using index |
+----+-------------+--------+------------+-------+---------------+---------+---------+------+--------+----------+-------------+
1 row in set, 1 warning (0.00 sec)
That looks normal. But when I lock the table, count(*) becomes very fast:
mysql> lock tables titles read;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from titles;
+----------+
| count(*) |
+----------+
| 443308 |
+----------+
1 row in set (0.13 sec)
What happens to the count operation after locking the table?
Update (2021-05-29):
mysql> show index from titles;
+--------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+--------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| titles | 0 | PRIMARY | 1 | emp_no | A | 301411 | NULL | NULL | | BTREE | | | YES | NULL |
| titles | 0 | PRIMARY | 2 | title | A | 441772 | NULL | NULL | | BTREE | | | YES | NULL |
| titles | 0 | PRIMARY | 3 | from_date | A | 442843 | NULL | NULL | | BTREE | | | YES | NULL |
| titles | 1 | idx_emp_no | 1 | emp_no | A | 300876 | NULL | NULL | | BTREE | | | YES | NULL |
+--------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
4 rows in set (0.04 sec)
mysql> explain select count(*) from titles;
+----+-------------+--------+------------+-------+---------------+------------+---------+------+--------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------+------------+-------+---------------+------------+---------+------+--------+----------+-------------+
| 1 | SIMPLE | titles | NULL | index | NULL | idx_emp_no | 4 | NULL | 442835 | 100.00 | Using index |
+----+-------------+--------+------------+-------+---------------+------------+---------+------+--------+----------+-------------+
1 row in set, 1 warning (0.00 sec)
mysql> select count(*) from titles;
+----------+
| count(*) |
+----------+
| 443304 |
+----------+
1 row in set (8.79 sec)
I created a smaller index, but it didn't change anything.
I've found no relation between table locks and query time, but looking into the MySQL manual I've found some clues:
- COUNT(*) is calculated by scanning the smallest secondary index, if any; otherwise it uses the clustered index (the primary key, or an auto-generated one).
- Locking the table may end up pulling all the relevant records into the buffer pool, so the query does not need additional disk reads and returns quickly.
- Your primary key length is 209 bytes, which makes the index scan slow, and any SELECT statement can be subject to different optimizations depending on the current table and buffer pool status. (Try a small numeric primary key with a unique row id, plus an additional index over the columns currently in the primary key.)
In the following link you can find more information
https://dev.mysql.com/doc/refman/5.6/en/aggregate-functions.html#function_count
COUNT(expr) [over_clause]
As of MySQL 8.0.13, SELECT COUNT(*) FROM tbl_name query performance
for InnoDB tables is optimized for single-threaded workloads if there
are no extra clauses such as WHERE or GROUP BY.
InnoDB processes SELECT COUNT(*) statements by traversing the smallest
available secondary index unless an index or optimizer hint directs
the optimizer to use a different index. If a secondary index is not
present, InnoDB processes SELECT COUNT(*) statements by scanning the
clustered index.
Processing SELECT COUNT(*) statements takes some time if index records
are not entirely in the buffer pool.
If an approximate row count is sufficient, use SHOW TABLE STATUS.
InnoDB handles SELECT COUNT(*) and SELECT COUNT(1) operations in the
same way. There is no performance difference.
For MyISAM tables, COUNT(*) is optimized to return very quickly if the
SELECT retrieves from one table, no other columns are retrieved, and
there is no WHERE clause.
This optimization only applies to
MyISAM tables, because an exact row count is stored for this storage
engine and can be accessed very quickly. COUNT(1) is only subject to
the same optimization if the first column is defined as NOT NULL.
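The buffer-pool explanation can be checked directly: information_schema exposes which index pages are currently cached (a sketch; this query itself can be slow when the buffer pool is large):
SELECT INDEX_NAME, COUNT(*) AS cached_pages
FROM information_schema.INNODB_BUFFER_PAGE
WHERE TABLE_NAME LIKE '%titles%'
GROUP BY INDEX_NAME;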
Strange. I get the opposite (6x slower with the LOCK):
mysql> SELECT COUNT(*) FROM allcountries;
+----------+
| COUNT(*) |
+----------+
| 11082175 |
+----------+
1 row in set (0.26 sec)
mysql> LOCK TABLES allcountries READ;
mysql> SELECT COUNT(*) FROM allcountries;
+----------+
| COUNT(*) |
+----------+
| 11082175 |
+----------+
1 row in set (1.59 sec)
mysql> UNLOCK TABLES;
mysql> LOCK TABLES allcountries READ;
mysql> SELECT COUNT(*) FROM allcountries;
+----------+
| COUNT(*) |
+----------+
| 11082175 |
+----------+
1 row in set (1.50 sec)
mysql> UNLOCK TABLES;
mysql> SELECT COUNT(*) FROM allcountries;
+----------+
| COUNT(*) |
+----------+
| 11082175 |
+----------+
1 row in set (0.28 sec)
mysql> SELECT @@version;
+-------------------------+
| @@version               |
+-------------------------+
| 8.0.25-0ubuntu0.20.10.1 |
+-------------------------+
+-------------------------+
1 row in set (0.00 sec)
The table is not PARTITIONed. innodb_parallel_read_threads = 4.
Parallel execution
8.0 has a few cases where multiple threads are used for queries. Here are some entries from the Changelogs. (Still, it does not explain why LOCK TABLES is relevant.)
8.0.14: Parallel scan of the clustered (PRIMARY KEY) index for COUNT(*) without WHERE (cf. innodb_parallel_read_threads)
8.0.17: Parallel scanning of partitions (cf. innodb_parallel_read_threads)
8.0.20 Changes to parallel read threads functionality introduced in MySQL 8.0.17 caused a degradation in SELECT COUNT(*) performance. Pages were read from disk unnecessarily. (Bug #30766089)
What was the value of innodb_parallel_read_threads? (SHOW VARIABLES LIKE 'innodb_parallel_read_threads';)
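Since innodb_parallel_read_threads is a session variable in MySQL 8.0, its effect on the same COUNT(*) can be compared directly (a sketch):
SET SESSION innodb_parallel_read_threads = 1;
SELECT COUNT(*) FROM titles;
SET SESSION innodb_parallel_read_threads = 4;
SELECT COUNT(*) FROM titles;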

MySQL ROLLBACK not deleting new records - sequelize

I'm using sequelize v5.1.0 to create a transaction in MySQL 5.7.25-0ubuntu0.18.04.2. It appears to be executing the correct commands according to the MySQL 5.7 Documentation, however, the record that was inserted and rolled back still exists in the database afterwards.
JavaScript:
let promises = []
models.sequelize.transaction(function (t) {
  promises.push(models.alert.create(setter, { transaction: t }))
  promises.push(new Promise((resolve, reject) => {
    reject(new Error('roll it back yall'))
  }))
  return Promise.all(promises)
}).then(function () {
  console.log('SUCCESS!!! (will commit)')
}).catch(function (err) {
  console.log('FAILURE !!! (will rollback)')
  next(err)
})
SQL query log
| 2019-03-21 12:55:17.798200 | root[root] @ [10.211.55.2] | 2151 | 0 | Query | START TRANSACTION |
| 2019-03-21 12:55:19.597304 | root[root] @ [10.211.55.2] | 2151 | 0 | Prepare | INSERT INTO `alerts` (`id`,`user_id`,`alert_name`,`reading_type`,`reading_condition`,`reading_value`,`always_active`,`sensors_global`,`enabled`,`last_updated`,`updated`) VALUES (DEFAULT,?,?,?,?,?,?,?,?,?,?) |
| 2019-03-21 12:55:19.616278 | root[root] @ [10.211.55.2] | 2151 | 0 | Execute | INSERT INTO `alerts` (`id`,`user_id`,`alert_name`,`reading_type`,`reading_condition`,`reading_value`,`always_active`,`sensors_global`,`enabled`,`last_updated`,`updated`) VALUES (DEFAULT,21,'Test Alert','temperature','below',60,1,0,1,'2019-03-21 12:55:17.781','2019-03-21 12:55:17') |
| 2019-03-21 12:55:19.619249 | root[root] @ [10.211.55.2] | 2151 | 0 | Query | ROLLBACK |
Record in the database afterwards:
mysql> select * from alerts where alert_name='Test Alert';
+-------+---------+------------+--------------+-------------------+---------------+---------------+---------------+----------------+---------+---------------------+---------------------+
| id | user_id | alert_name | reading_type | reading_condition | reading_value | alert_message | always_active | sensors_global | enabled | updated | last_updated |
+-------+---------+------------+--------------+-------------------+---------------+---------------+---------------+----------------+---------+---------------------+---------------------+
| 48689 | 21 | Test Alert | temperature | below | 60.00 | NULL | 1 | 0 | 1 | 2019-03-21 06:55:17 | 2019-03-21 12:55:18 |
+-------+---------+------------+--------------+-------------------+---------------+---------------+---------------+----------------+---------+---------------------+---------------------+
1 row in set (0.00 sec)
Update:
Poking around in the MySQL CLI gives a warning:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into contacts ( user_id, contact_name ) values (21, 'Some Person' );
Query OK, 1 row affected (0.00 sec)
mysql> rollback;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+---------------------------------------------------------------+
| Level | Code | Message |
+---------+------+---------------------------------------------------------------+
| Warning | 1196 | Some non-transactional changed tables couldn't be rolled back |
+---------+------+---------------------------------------------------------------+
1 row in set (0.00 sec)
What makes some tables non-transactional?
It appears that the alerts table is using the MyISAM engine, which does not support transactions:
mysql> show table status like 'alerts';
+--------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+--------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+----------------+---------+
| alerts | MyISAM | 10 | Dynamic | 18 | 136 | 2712 | 281474976710655 | 2048 | 256 | 48690 | 2019-03-19 10:38:39 | 2019-03-21 06:55:19 | NULL | utf8_general_ci | NULL | | |
+--------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+----------------+---------+
1 row in set (0.00 sec)
To change the table's storage engine, follow the guidelines here and run:
mysql> ALTER TABLE alerts ENGINE=InnoDB;
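To check whether any other tables in the schema are still on a non-transactional engine, something like this works (a sketch):
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_TYPE = 'BASE TABLE'
  AND ENGINE <> 'InnoDB';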

How to estimate SQL query timing?

I'm trying to get a rough (order-of-magnitude) estimate of how long the following query could take:
mysql> EXPLAIN SELECT t1.col1, t1_col4 FROM t1 LEFT JOIN t2 ON t1.col1=t2.col1 WHERE col2=0 AND col3 IS NULL;
+----+-------------+--------------------+------+---------------+------------+---------+-----------------------------+---------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+------+---------------+------------+---------+-----------------------------+---------+--------------------------+
| 1 | SIMPLE | t1 | ref | foobar | foobar | 4 | const | 9715129 | |
| 1 | SIMPLE | t2 | ref | col1 | col1 | 4 | db2.t1.col1 | 42318 | Using where; Using index |
+----+-------------+--------------------+------+---------------+------------+---------+-----------------------------+---------+--------------------------+
2 rows in set (0.00 sec)
mysql>
This can be done using the SHOW PROFILES syntax.
When you open a MySQL session, set the variable profiling to 1 or ON:
mysql> SET profiling = 1;
All statements sent to the server will then be profiled and stored in a history, which you can display later by typing:
mysql> SHOW PROFILES;
Here is an example from the MySQL manual:
mysql> SET profiling = 1;
Query OK, 0 rows affected (0.00 sec)
mysql> DROP TABLE IF EXISTS t1;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> CREATE TABLE T1 (id INT);
Query OK, 0 rows affected (0.01 sec)
mysql> SHOW PROFILES;
+----------+----------+--------------------------+
| Query_ID | Duration | Query |
+----------+----------+--------------------------+
| 0 | 0.000088 | SET PROFILING = 1 |
| 1 | 0.000136 | DROP TABLE IF EXISTS t1 |
| 2 | 0.011947 | CREATE TABLE t1 (id INT) |
+----------+----------+--------------------------+
3 rows in set (0.00 sec)
mysql> SHOW PROFILE;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| checking permissions | 0.000040 |
| creating table | 0.000056 |
| After create | 0.011363 |
| query end | 0.000375 |
| freeing items | 0.000089 |
| logging slow query | 0.000019 |
| cleaning up | 0.000005 |
+----------------------+----------+
7 rows in set (0.00 sec)
mysql> SHOW PROFILE FOR QUERY 1;
+--------------------+----------+
| Status | Duration |
+--------------------+----------+
| query end | 0.000107 |
| freeing items | 0.000008 |
| logging slow query | 0.000015 |
| cleaning up | 0.000006 |
+--------------------+----------+
4 rows in set (0.00 sec)
mysql> SHOW PROFILE CPU FOR QUERY 2;
+----------------------+----------+----------+------------+
| Status | Duration | CPU_user | CPU_system |
+----------------------+----------+----------+------------+
| checking permissions | 0.000040 | 0.000038 | 0.000002 |
| creating table | 0.000056 | 0.000028 | 0.000028 |
| After create | 0.011363 | 0.000217 | 0.001571 |
| query end | 0.000375 | 0.000013 | 0.000028 |
| freeing items | 0.000089 | 0.000010 | 0.000014 |
| logging slow query | 0.000019 | 0.000009 | 0.000010 |
| cleaning up | 0.000005 | 0.000003 | 0.000002 |
+----------------------+----------+----------+------------+
References (updated 2014-09-04):
- SHOW PROFILE Syntax
- The INFORMATION_SCHEMA PROFILING Table
- How To Use MySQL Query Profiling (an article DigitalOcean published on this topic)
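Note that SHOW PROFILE / SHOW PROFILES have been deprecated for a long time; a rough modern equivalent uses the Performance Schema (a sketch, assuming the statement and stage history consumers are enabled):
-- Recent statements and their total time in seconds:
SELECT EVENT_ID, SQL_TEXT, TIMER_WAIT/1e12 AS seconds
FROM performance_schema.events_statements_history_long
ORDER BY EVENT_ID DESC LIMIT 10;
-- Per-stage breakdown of one statement (use its EVENT_ID from above; 123 is a placeholder):
SELECT EVENT_NAME, TIMER_WAIT/1e12 AS seconds
FROM performance_schema.events_stages_history_long
WHERE NESTING_EVENT_ID = 123;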

In MySQL 5, SELECT COUNT(1) FROM table_name is very slow

I have a MySQL 5.0 database with a few tables containing over 50M rows. But how do I know this? By running "SELECT COUNT(1) FROM foo", of course. This query on one table containing 58.8M rows took 10 minutes to complete!
mysql> SELECT COUNT(1) FROM large_table;
+----------+
| count(1) |
+----------+
| 58778494 |
+----------+
1 row in set (10 min 23.88 sec)
mysql> EXPLAIN SELECT COUNT(1) FROM large_table;
+----+-------------+-------------------+-------+---------------+----------------------------------------+---------+------+-----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------------+-------+---------------+----------------------------------------+---------+------+-----------+-------------+
| 1 | SIMPLE | large_table | index | NULL | fk_large_table_other_table_id | 5 | NULL | 167567567 | Using index |
+----+-------------+-------------------+-------+---------------+----------------------------------------+---------+------+-----------+-------------+
1 row in set (0.00 sec)
mysql> DESC large_table;
+-------------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+---------------------+------+-----+---------+----------------+
| id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| created_on | datetime | YES | | NULL | |
| updated_on | datetime | YES | | NULL | |
| other_table_id | int(11) | YES | MUL | NULL | |
| parent_id | bigint(20) unsigned | YES | MUL | NULL | |
| name | varchar(255) | YES | | NULL | |
| property_type | varchar(64) | YES | | NULL | |
+-------------------+---------------------+------+-----+---------+----------------+
7 rows in set (0.00 sec)
All of the tables in question are InnoDB.
Any ideas why this is so slow, and how I can speed it up?
Counting all the rows in a table is a very slow operation; you can't really speed it up, unless you are prepared to keep a count somewhere else (and of course, that can become out of sync).
People who are used to MyISAM tend to think they get count(*) "for free", but it isn't really free; MyISAM just cheats by not having MVCC, which makes keeping an exact count easy.
The query you're showing is doing a full scan of a not-null secondary index, which is generally the fastest way of counting the rows in an InnoDB table.
It is difficult to guess what your application is from the information you've given, but in general it's fine for users to see a close approximation of the number of rows in a large table.
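The "keep a count somewhere else" idea mentioned above can be sketched with a counter table maintained by triggers (table and trigger names are made up; note that the counter row becomes a hot spot under heavy concurrent writes):
CREATE TABLE row_counts (
  table_name VARCHAR(64) PRIMARY KEY,
  cnt BIGINT NOT NULL
) ENGINE=InnoDB;
INSERT INTO row_counts VALUES ('large_table', (SELECT COUNT(*) FROM large_table));
CREATE TRIGGER large_table_count_ins AFTER INSERT ON large_table
  FOR EACH ROW UPDATE row_counts SET cnt = cnt + 1 WHERE table_name = 'large_table';
CREATE TRIGGER large_table_count_del AFTER DELETE ON large_table
  FOR EACH ROW UPDATE row_counts SET cnt = cnt - 1 WHERE table_name = 'large_table';
-- Instant count:
SELECT cnt FROM row_counts WHERE table_name = 'large_table';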
If you need the result instantly and you don't care whether it's 58.8M or 51.7M, you can find the approximate number of rows by running:
SHOW TABLE STATUS LIKE 'large_table';
See the Rows column.
For more information about the result, take a look at the manual: http://dev.mysql.com/doc/refman/5.1/en/show-table-status.html
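The same estimate is also available from information_schema (for InnoDB it is an approximation, just like SHOW TABLE STATUS):
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'large_table';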
SELECT COUNT(id) FROM large_table will surely run faster.