I recently set up an RDS instance in AWS for MySQL 5.6 with the new Memcached InnoDB plugin. Everything works great and my app can store and retrieve cached items from the mapped table. When I store items I provide a timeout, and memcached correctly does not return an item once its TTL has expired. So far so good...
However when I look at the underlying table, it is full of rows which have already expired.
The MySQL documentation (http://dev.mysql.com/doc/refman/5.6/en/innodb-memcached-intro.html) indicates that item expiration has no effect when using the "innodb_only" caching policy (although it doesn't explicitly indicate which operation it is referring to). In any case my cache_policies table looks like this:
mysql> select * from innodb_memcache.cache_policies;
+--------------+------------+------------+---------------+--------------+
| policy_name  | get_policy | set_policy | delete_policy | flush_policy |
+--------------+------------+------------+---------------+--------------+
| cache_policy | caching    | caching    | innodb_only   | innodb_only  |
+--------------+------------+------------+---------------+--------------+
1 row in set (0.01 sec)
So, per the docs the expiration field should be respected.
For reference my containers table looks like this:
mysql> select * from innodb_memcache.containers;
+---------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| name    | db_schema | db_table  | key_columns | value_columns | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+---------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
| default | sessions  | userData  | sessionID   | data          | c3    | c4         | c5                 | PRIMARY                |
+---------+-----------+-----------+-------------+---------------+-------+------------+--------------------+------------------------+
1 row in set (0.00 sec)
And the data table is:
mysql> desc sessions.userData;
+-----------+---------------------+------+-----+---------+-------+
| Field     | Type                | Null | Key | Default | Extra |
+-----------+---------------------+------+-----+---------+-------+
| sessionID | varchar(128)        | NO   | PRI | NULL    |       |
| data      | blob                | YES  |     | NULL    |       |
| c3        | int(11)             | YES  |     | NULL    |       |
| c4        | bigint(20) unsigned | YES  |     | NULL    |       |
| c5        | int(11)             | YES  |     | NULL    |       |
+-----------+---------------------+------+-----+---------+-------+
5 rows in set (0.00 sec)
One more detail: the MySQL docs state that after modifying caching policies you need to re-install the Memcached plugin, but I did not find a way to do this on RDS. So I removed the Memcached option group, rebooted, added the option group again, and rebooted again... but there was no apparent change in behavior.
So, to conclude, am I missing some step or configuration here? I would hate to have to create a separate process just to delete the expired rows from the table, since I was expecting the Memcached integration to do this for me.
I'm by no means an expert, as I've just started to play around with memcached myself. However, this is from the MySQL documentation's Python tutorial.
It seems to be saying that if you use the InnoDB memcached plugin, MySQL handles cache expiration itself, and it really doesn't matter what you enter for the cache expire time.
And for the flags, expire, and CAS values, we specify corresponding columns based on the settings from the sample table demo.test. These values are typically not significant in applications using the InnoDB memcached plugin, because MySQL keeps the data synchronized and there is no need to worry about data expiring or being stale.
I have the following table (mariadb 10.4) called p:
+-------------+--------------------------------------------------------------+------+-----+---------+----------------+
| Field       | Type                                                         | Null | Key | Default | Extra          |
+-------------+--------------------------------------------------------------+------+-----+---------+----------------+
| id          | int(11)                                                      | NO   | PRI | NULL    | auto_increment |
| description | text                                                         | YES  |     | NULL    |                |
| url         | text                                                         | YES  |     | NULL    |                |
| source      | enum('source_a','source_b','source_c','source_d','source_e') | YES  |     | NULL    |                |
+-------------+--------------------------------------------------------------+------+-----+---------+----------------+
I currently have a couple of million rows in this table with sources a, b, c, and d. Just recently we applied a migration to add source_e, and we started getting the error ERROR 1265 (01000): Data truncated for column 'source' at row 1 when trying to insert a row with source_e. The command that yields the error is the following:
INSERT INTO p (description, url, `source`) VALUES ('test', 'https://google.com.br', 'source_e');
Insertions with any of the other sources are still working.
The behavior changes when editing a row that is already in the db; the error is not shown:
UPDATE `p` SET `source`='source_e' WHERE `id`='3';
Yields:
Query OK, 1 row affected (0.001 sec)
Is there a way to debug this scenario? I've tried changing the log level of the db to get a better insight on the problem (SET GLOBAL log_warnings=3;) but the error message did not change.
I also tried changing the source_e name to source_e_, the error persisted.
By the way, I changed the names of the fields to comply with company policies.
It turns out it was my bad. We happen to have a trigger on insertions into this table that feeds a materialized-view-style table. All I had to do was add 'source_e' to the source enum on the other table.
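In case it helps anyone hitting the same error, here is a minimal sketch of the diagnosis and fix; the downstream table name p_summary is hypothetical:
-- List the triggers that fire on p; one of them writes to a downstream
-- table whose enum column was missing the new value.
SHOW TRIGGERS WHERE `Table` = 'p';
-- Extend the enum on the (hypothetical) downstream table so the
-- trigger's insert no longer truncates the value.
ALTER TABLE p_summary
  MODIFY `source` enum('source_a','source_b','source_c','source_d','source_e');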
I am using mysql for the first time in years to help a friend out. The issue: a mysql table that gets updated a lot with INT and CHAR values. This web app site is hosted on a large generic provider, so I have no direct control of setup/parameters/etc. The performance has gotten really, really bad for this table, to the point where processing a data page that should take a max of 10 seconds is sometimes taking 15 minutes.
I initially tried running all updates as a single transaction, rather than the 50-ish statements in a PHP loop in the web app (written several years ago). The problem, at least as I see it, is that this app is running on a giant MySQL instance with many other generic websites, and the disk speed just isn't able to handle so many updates.
I am able to use cron/batch jobs on this provider. The web app is mainly used during work hours, so I could limit access to it during overnight hours.
I normally work with postgresql or ms sql server, so my knowledge of mysql is fairly limited.
Would performance be increased if I forced the table to be dropped and rewritten overnight? Is there some MySQL function like Postgres's VACUUM? I have tried to search for information, but unfortunately using words like "rewrite table" just brings up references to SQL syntax helpers or performance tuning.
Alternately, I guess I could create a new storage mechanism in MySQL, as long as it could be done via a PHP script. Would there be a better storage engine than the default for something frequently updated?
MySQL performance depends on enough factors that it's hard to give one clear answer for every case. I think we can check the following steps to help figure out what to improve when inserting data into MySQL.
Database Engine.
There are five engines you can use depending on your purpose: MyISAM, Memory, InnoDB, Archive, NDB.
An engine with table-level locking granularity will be slower than one with row-level locking, because it locks the whole table against changes whenever a single record is inserted or updated; with row-level locking, only the affected row is locked.
When performing an INSERT or UPDATE, an engine with B-tree indexes will be slower because it has to maintain its indexes; the trade-off is faster SELECT queries. The number of indexes on a table therefore slows inserting and updating as well.
An index on a CHAR column will be slower than an index on an INT column, because it takes MySQL more time to figure out where to find the right node in the index.
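As a quick sketch of how to act on this (the table name my_table is illustrative), you can check which engine a table uses and rebuild it on InnoDB to get row-level locking:
-- Shows the Engine column for the table
SHOW TABLE STATUS LIKE 'my_table';
-- Rebuilds the table using InnoDB (row-level locking)
ALTER TABLE my_table ENGINE = InnoDB;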
MySQL Statement
MySQL has an estimation system that helps you discover the performance of a query: add EXPLAIN before your statement.
Example
EXPLAIN SELECT SQL_NO_CACHE * FROM Table_A WHERE id = 1;
I worked on a web application where we used MySQL (it's really good!) to scale to really large data.
In addition to what @Lam Nguyen said in his answer, here are a few things to consider:
Check which MySQL engine you are using to see which locks it obtains during select, insert, and update. To check which engine you are using, here is a sample query with which you can run your litmus test:
mysql> show table status where name="<your_table_name>";
+-------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+
| Name  | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time | Check_time | Collation          | Checksum | Create_options | Comment |
+-------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+
| Login | InnoDB |      10 | Dynamic    |    2 |           8192 |       16384 |               0 |            0 |         0 |           NULL | 2019-04-28 12:16:59 | NULL        | NULL       | utf8mb4_general_ci | NULL     |                |         |
+-------+--------+---------+------------+------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+
The default engine that comes with a MySQL installation is InnoDB. InnoDB does not lock the whole table while inserting a row.
SELECT ... FROM is a consistent read, reading a snapshot of the database and setting no locks unless the transaction isolation level is set to SERIALIZABLE.
A locking read, an UPDATE, or a DELETE generally set record locks on every index record that is scanned in the processing of the SQL statement.
InnoDB lock sets
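To see the difference in practice, here is a minimal sketch against the Login table from above (the numeric id column is an assumption):
-- Consistent non-locking read: reads a snapshot, sets no record locks
SELECT * FROM Login WHERE id = 1;
-- Locking read: holds record locks until the transaction ends
START TRANSACTION;
SELECT * FROM Login WHERE id = 1 FOR UPDATE;
COMMIT;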
Check which columns you are indexing. Index the columns you really query a lot, and avoid indexing CHAR columns.
To check which columns of your table are indexed, run:
mysql> show index from BookStore2;
+------------+------------+----------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table      | Non_unique | Key_name       | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+------------+------------+----------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Bookstore2 |          0 | PRIMARY        |            1 | ISBN_NO     | A         |           0 |     NULL | NULL   |      | BTREE      |         |               | YES     | NULL       |
| Bookstore2 |          1 | SHORT_DESC_IND |            1 | SHORT_DESC  | A         |           0 |     NULL | NULL   | YES  | BTREE      |         |               | YES     | NULL       |
| Bookstore2 |          1 | SHORT_DESC_IND |            2 | PUBLISHER   | A         |           0 |     NULL | NULL   | YES  | BTREE      |         |               | YES     | NULL       |
+------------+------------+----------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
3 rows in set (0.03 sec)
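If a column you filter on a lot is missing from that list, you can add an index; the PUBLISH_YEAR column here is hypothetical:
-- Hypothetical: index a non-CHAR column that queries filter on often
CREATE INDEX PUB_YEAR_IND ON Bookstore2 (PUBLISH_YEAR);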
Do not run an inner query on a large data set in a table. To actually see what your query does, run EXPLAIN on it and look at the number of rows iterated:
mysql> explain select * from login;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
|  1 | SIMPLE      | login | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    2 |   100.00 | NULL  |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
1 row in set, 1 warning (0.03 sec)
Avoid joining too many tables.
Make sure you query with a primary key in the criteria, or at least query on an indexed column (a short sketch follows below).
When your table grows too big, make sure you split it across clusters.
With a few tweaks, we should still be able to get query results in minimal time.
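As a minimal sketch of the primary-key point, reusing the login table from the EXPLAIN example (the id primary key and the last_name column are assumptions):
-- Fast: the optimizer can use the primary key
SELECT * FROM login WHERE id = 42;
-- Slow on a large table: full scan over an unindexed column
SELECT * FROM login WHERE last_name = 'Smith';
-- If that filter is frequent, index the column
CREATE INDEX idx_login_last_name ON login (last_name);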
I have made the following table in MySQL:
mysql> use test;
Database changed
mysql> desc NeoTec_test;
+-------------+-------------+------+-----+---------+-------+
| Field       | Type        | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+-------+
| Product_Key | varchar(10) | NO   | PRI | NULL    |       |
| Validation  | date        | YES  |     | NULL    |       |
| Expiry      | date        | YES  |     | NULL    |       |
+-------------+-------------+------+-----+---------+-------+
3 rows in set (0.03 sec)
mysql> select * from NeoTec_test;
+-------------+------------+------------+
| Product_Key | Validation | Expiry     |
+-------------+------------+------------+
| GF427DHH5   | 2017-11-16 | 2017-11-17 |
| GFHJV75HG   | 2017-11-16 | 2017-11-18 |
| GFJYFRTV5   | 2017-11-16 | 2017-11-20 |
+-------------+------------+------------+
3 rows in set (0.00 sec)
Now, coming to the point: I need some help with a part of my project. I want MySQL to automatically delete the product keys that have expired, i.e., I want product keys deleted automatically on their expiry dates, given under the "Expiry" column of the table. How can I do this? I am a total newbie to MySQL events, so I would appreciate the full code... Thank you! :-)
Earlier research I did was not fruitful, but I did find this, which was half helpful:
How to delete a MySQL record after a certain time
You can use the event scheduler to perform the task, like below:
DELIMITER //
CREATE EVENT eventName
  ON SCHEDULE EVERY 1 WEEK
  STARTS 'Some Date to start'
  ENDS 'End date If any'
DO
  BEGIN
    -- Delete every product key whose expiry date has passed
    DELETE FROM NeoTec_test WHERE NOW() > Expiry;
  END//
DELIMITER ;
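Note that the event scheduler is often disabled by default, in which case the event never fires. You can check and enable it with standard statements:
-- Shows ON or OFF
SHOW VARIABLES LIKE 'event_scheduler';
-- Enable it (requires the appropriate privilege)
SET GLOBAL event_scheduler = ON;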
There is no functionality in MySQL to automatically delete a record. You need to trigger the deletion either through a scheduler (MySQL's, as shown in the question you found, or an external scheduler such as cron), or via a database trigger.
The latter is probably overkill.
I would use a scheduler set to a convenient interval, based on your business requirements, to clean up the table.
I have a MySQL 5.5 DB of +-40GB on a 64GB RAM machine in a production environment. All tables are InnoDB. There is also a slave running as a backup.
One table - the most important one - grew to 150M rows, and inserting and deleting became slow. To speed things up I deleted half of the table; this did not help as expected, and inserting and deleting are still slow.
I've read that running OPTIMIZE TABLE can help in such a scenario. As I understand it, this operation requires a read lock on the entire table, and optimizing a big table might take quite a while.
What would be a good strategy to optimize this table while minimizing downtime?
EDIT: The specific table to be optimized has +-91M rows and looks like this:
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | int(11)      | NO   | PRI | NULL    | auto_increment |
| channel_key | varchar(255) | YES  | MUL | NULL    |                |
| track_id    | int(11)      | YES  | MUL | NULL    |                |
| created_at  | datetime     | YES  |     | NULL    |                |
| updated_at  | datetime     | YES  |     | NULL    |                |
| posted_at   | datetime     | YES  |     | NULL    |                |
| position    | varchar(255) | YES  | MUL | NULL    |                |
| dead        | int(11)      | YES  |     | 0       |                |
+-------------+--------------+------+-----+---------+----------------+
Percona Toolkit's pt-online-schema-change does this for you. In this case it worked very well.
300 ms to insert seems excessive, even with slow disks. I would look into the root cause. Optimizing this table is going to take a lot of time. MySQL will create a copy of your table on disk.
Depending on the size of your innodb_buffer_pool (if the table is InnoDB) and the free memory on the host, I would try to preload the whole table into the page cache of the OS, so that at least reading the data will be sped up by a couple of orders of magnitude.
If you're using innodb_file_per_table, or if it's a MyISAM table, it's easy enough to make sure the whole file is cached using "time cat /path/to/mysql/data/db/huge_table.ibd > /dev/null". When you rerun the command and it finishes in under a few seconds, you can assume the file content is sitting in the OS page cache.
You can monitor progress while the OPTIMIZE TABLE is running by watching the size of the temporary file. It's usually in the database data directory, with a temp filename starting with a hash (#) character.
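You can also watch from inside MySQL: while the rebuild runs, the State column for the session typically shows something like "copy to tmp table":
-- Standard statement; find the OPTIMIZE session and check its State
SHOW PROCESSLIST;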
This article suggests first dropping all the indexes in a table, then optimizing it, and then adding the indexes back. It claims a 20x speed difference compared to a plain OPTIMIZE.
Update your version of MySQL; in 8.0.x, OPTIMIZE TABLE is faster than in 5.5.
An OPTIMIZE on a table with 91 million rows could take around 3 hours on your version. You can run it in the early morning, around 3 AM, so as not to disturb the users of your app.
I changed the name of a table from within phpMyAdmin, and immediately it broke. After that, when I try to connect using phpMyAdmin (/phpMyAdmin/index.php), I get this error in the log:
[Wed Aug 08 14:18:58 2012] [error] Query call failed: Table 'mydb.mychangedtbl' doesn't exist (1146)
mychangedtbl is the table whose name was changed. This issue is only in phpMyAdmin; I am able to access the database and tables fine from the CLI. I restarted MySQL, but that did not fix it. It seems like something is stuck for phpMyAdmin. I restarted the browser as well, but that didn't help either.
When I rename this particular table back to what it was, using the command line, phpMyAdmin works fine again. Here is the structure of the table:
mysql> DESCRIBE mychangedtbl;
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| userid    | char(6)     | NO   | PRI | NULL    |       |
| userpass  | varchar(40) | NO   |     | NULL    |       |
| userlevel | char(3)     | NO   |     | o       |       |
| userpcip  | varchar(45) | NO   |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
mysql>
Column userpass has Collation = ascii_bin, which does not show in the output above; the other columns are ascii_general_ci.
Please advise.
This was because Apache was using the same table to do MySQL authentication. I changed the Apache config and restarted; that let me change the table name. All good again.