SQL: Side effects of creating/deleting databases and tables on the fly - MySQL

I need to simulate SQL by creating a wrapper over MySQL (customer requirement :P), and hence my application needs to create/drop tables (and possibly databases) at runtime.
The frequency of such create/drop operations will not be very high. I'm not a database expert, but I believe that such operations could lead to some side effects over the long term.
Is it advisable to go ahead with this creation/deletion of databases, and what are the possible complications I can run into?

This is only a problem under two scenarios.
SCENARIO #1
For InnoDB tables, the InnoDB buffer pool should optimally be set to the sum of all data pages and index pages that make up the InnoDB tables.
It is even worse if innodb_file_per_table is disabled (the default).
That produces a file called /var/lib/mysql/ibdata1 which can grow and never shrinks, no matter how many times you drop and create databases.
If one forgets to make the necessary changes in /etc/my.cnf, this can also leave the InnoDB buffer pool under-utilized until the data fills back up.
The change to make for InnoDB is straightforward.
Run this query:
SELECT CONCAT(KeyBuf,'M') BufferPoolSetting FROM
(
    SELECT CEILING(SumInnoDB/POWER(1024,2)) KeyBuf FROM
    (
        SELECT SUM(data_length+index_length) SumInnoDB
        FROM information_schema.tables
        WHERE engine='InnoDB'
        AND table_schema NOT IN ('information_schema','mysql')
    ) A
) AA;
The output of this query should be used as the innodb_buffer_pool_size in /etc/my.cnf just before you drop all databases and create new ones.
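For example, if the query returned 512M (a purely hypothetical value), the relevant lines in /etc/my.cnf would look something like this:
[mysqld]
innodb_buffer_pool_size = 512M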
SCENARIO #2
For MyISAM tables, the key buffer should optimally be set to the combined size of all .MYI files.
If one forgets to make the necessary changes in /etc/my.cnf, this can also leave the MyISAM key cache (key buffer) under-utilized until the data fills back up.
The change to make for MyISAM is straightforward.
Run this query:
SELECT CONCAT(KeyBuf,'M') KeyBufferSetting FROM
(
    SELECT CEILING(SumIndexes/POWER(1024,2)) KeyBuf FROM
    (
        SELECT SUM(index_length) SumIndexes
        FROM information_schema.tables
        WHERE engine='MyISAM'
        AND table_schema NOT IN ('information_schema','mysql')
    ) A
) AA;
The output of this query should be used as the key_buffer_size in /etc/my.cnf just before you drop all databases and create new ones.
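Again as a hypothetical illustration, if the query returned 96M, /etc/my.cnf would carry something like:
[mysqld]
key_buffer_size = 96M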

Related

How to optimize the tables in the information_schema database

I am seeing free space (fragmentation issues) in my information_schema database.
An alert shows that there is 1500% free space in some tables such as COLUMNS and ROUTINES.
I am wondering how this is possible, because I don't have any routines in my database, and how I can optimize information_schema, since it is a memory-based database created when the MySQL service starts.
Also, when I run "SHOW CREATE TABLE" on any of the information_schema tables, it reports InnoDB as the engine of these tables, but I think it should be MEMORY.
Any idea how to optimize these tables without a restart?
Thanks
When you have innodb_file_per_table = OFF, InnoDB tables are created in the system 'tablespace', ibdata1. It could be that you have created and manipulated a lot of tables there.
Data_free is a confusing term in SHOW TABLE STATUS and certain tables in information_schema...
For MyISAM tables, it is an accurate amount of the space that could be recovered from the .MYD file (but not the .MYI file).
For InnoDB it means one of two things...
If the table you are looking at was created with innodb_file_per_table = ON, then Data_free is some of the unused space. Often, not all of it can be recovered by any means.
If the table you are looking at was created with innodb_file_per_table = OFF, then Data_free is the free space in ibdata1. That free space will be used for new inserts and new tables, thereby decreasing Data_free. However, the size of ibdata1 cannot be shrunk, at least not without a lot of effort (dump everything, remove, reload).
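As a rough sketch of that "dump everything, remove, reload" procedure (the file paths and dump file name are assumptions; adjust them for your installation, and verify the dump is complete before deleting anything):
mysqldump --all-databases --routines --triggers > /root/alldatabases.sql
mysqladmin -u root -p shutdown
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
# optionally add innodb_file_per_table = 1 under [mysqld] in /etc/my.cnf
service mysql start
mysql -u root -p < /root/alldatabases.sql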

If I change the MySQL engine from MyISAM to InnoDB, will it affect my data?

I am new to MySQL.
Will it affect my data on the server if I change the MySQL engine from MyISAM to InnoDB?
Thanks
Changing the engine from MyISAM to InnoDB should not affect your data, but to be on the safe side you can easily take a backup of your table before changing the engine.
Taking backup:
CREATE TABLE backup_table LIKE your_table;
INSERT INTO backup_table SELECT * FROM your_table;
It may affect the performance of your queries. You need to configure InnoDB-specific system variables, e.g.
innodb_buffer_pool_size = 16G
innodb_additional_mem_pool_size = 2G
innodb_log_file_size = 1G
innodb_log_buffer_size = 8M
Changing engine to INNODB:
ALTER TABLE table_name ENGINE=INNODB;
I found two caveats when converting MyISAM tables to InnoDB: row size and support for full-text indexes.
I ran into an issue converting some tables from an off-the-shelf application from MyISAM to InnoDB because of the maximum record size. MyISAM supports longer rows than InnoDB does. The maximum in a default InnoDB installation is about 8000 bytes. You can work around this by TRUNCATE'ing a table that fails, but this will bite you later on the INSERT. You might have to break your data up into multiple tables or restructure it with variable length column types such as TEXT (which can be slower).
A stock InnoDB installation doesn't support FULLTEXT indexes. This may or may not impact your application. For one application's table I was converting, we decided to look in other fields for the data we needed rather than doing full text scans. (I did an "ALTER TABLE DROP INDEX..." on it to remove the FULLTEXT index before converting to InnoDB.) I wouldn't recommend full-text indexes for a write-heavy table anyway.
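If you want to check up front which tables would be affected (a quick sketch, not part of the original answer), something like this lists all FULLTEXT indexes on the server:
SELECT table_schema, table_name, index_name
FROM information_schema.statistics
WHERE index_type = 'FULLTEXT'
GROUP BY table_schema, table_name, index_name;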
If converting a big table full of data with "ALTER TABLE..." works on the first try, you're probably okay.
http://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html
Lastly, if you want to convert from MyISAM to InnoDB on a running system that is read heavy (where UPDATEs and INSERTs are rare), you can run this without interrupting your users. (The RENAME runs atomically. It will back out all of the changes if any of them don't work.)
CREATE TABLE mytable_ind AS SELECT * FROM mytable;
ALTER TABLE mytable_ind ENGINE=InnoDB;
RENAME TABLE mytable TO mytable_myi, mytable_ind TO mytable;
DROP TABLE mytable_myi;

Performance difference between InnoDB and MyISAM in MySQL

I have a MySQL table with over 30 million records that was originally stored with MyISAM. Here is a description of the table:
I would run the following query against this table which would generally take around 30 seconds to complete. I would change #eid each time to avoid database or disk caching.
select count(fact_data.id)
from fact_data
where fact_data.entity_id=#eid
and fact_data.metric_id=1
I then converted this table to InnoDB without making any other changes, and afterwards the same query returns in under a second every single time I run it. Even when I randomly set #eid to avoid caching, the query returns in under a second.
I've been researching the differences between the two storage engines to try to explain the dramatic improvement in performance, but I haven't been able to come up with anything. In fact, much of what I read indicates that MyISAM should be faster.
The queries I'm running are against a local database with no other processes hitting the database at the time of the tests.
That's a surprisingly large performance difference, but I can think of a few things that may be contributing.
MyISAM has historically been viewed as faster than InnoDB, but for recent versions of InnoDB, that is true for a much, much smaller set of use cases. MyISAM is typically faster for table scans of read-only tables. In most other use cases, I typically find InnoDB to be faster. Often many times faster. Table locks are a death knell for MyISAM in most of my usage of MySQL.
MyISAM caches indexes in its key buffer. Perhaps you have set the key buffer too small for it to effectively cache the index for your somewhat large table.
MyISAM depends on the OS to cache table data from the .MYD files in the OS disk cache. If the OS is running low on memory, it will start dumping its disk cache. That could force it to keep reading from disk.
InnoDB caches both indexes and data in its own memory buffer. You can tell the OS not to also use its disk cache if you set innodb_flush_method to O_DIRECT, though this isn't supported on OS X.
InnoDB usually buffers data and indexes in 16kb pages. Depending on how you are changing the value of #eid between queries, it may have already cached the data for one query due to the disk reads from a previous query.
Make sure you created the indexes identically. Use explain to check if MySQL is using the index. Since you included the output of describe instead of show create table or show indexes from, I can't tell if entity_id is part of a composite index. If it was not the first part of a composite index, it wouldn't be used.
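As a sketch (the table and column names come from the question; the literal entity_id value and the composite index are assumptions), running EXPLAIN and checking the key column shows whether an index such as (entity_id, metric_id) is actually being used:
EXPLAIN SELECT COUNT(fact_data.id)
FROM fact_data
WHERE fact_data.entity_id = 12345
AND fact_data.metric_id = 1;
SHOW CREATE TABLE fact_data;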
If you are using a relatively modern version of MySQL, run the following command before running the query:
set profiling = 1;
That will turn on query profiling for your session. After running the query, run
show profiles;
That will show you the list of queries for which profiles are available. I think it keeps the last 20 by default. Assuming your query was the first one, run:
show profile for query 1;
You will then see the duration of each stage in running your query. This is extremely useful for determining what (e.g., table locks, sorting, creating temp tables, etc.) is causing a query to be slow.
My first suspicion would be that the original MyISAM table and/or indexes became fragmented over time resulting in the performance slowly degrading. The InnoDB table would not have the same problem since you created it with all the data already in it (so it would all be stored sequentially on disk).
You could test this theory by rebuilding the MyISAM table. The easiest way to do this would be to use a "null" ALTER TABLE statement:
ALTER TABLE mytable ENGINE = MyISAM;
Then check the performance to see if it is better.
Another possibility would be if the database itself is simply tuned for InnoDB performance rather than MyISAM. For example, InnoDB uses the innodb_buffer_pool_size parameter to know how much memory should be allocated for storing cached data and indexes in memory. But MyISAM uses the key_buffer parameter. If your database has a large innodb buffer pool and a small key buffer, then InnoDB performance is going to be better than MyISAM performance, especially for large tables.
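A quick way to compare how the server is currently tuned (not from the original answer, just a sanity check):
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';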
What are your index definitions? There are ways of creating indexes for MyISAM in which your index fields will not be used when you think they would be.

MySQL - Finding the cause of temp disk tables

I've recently noticed my MySQL server is creating a reasonably large number of disk tables [created temp disk tables: 67, created temp tables: 304].
I've been trying to identify what queries are creating these tables, but I've had no luck. I've enabled the slow query log for queries taking more than 1 second, but the queries showing up in there don't make sense. The only queries showing up regularly in the slow query log are updates to a single row on a user table, using the primary key as the where clause.
I've run 'explain' on all the queries that run regularly, and I'm coming up blank on the culprit.
The EXPLAIN report may say "Using filesort" but that's misleading. It doesn't mean it's writing to a file, it only means it's sorting without the benefit of an index.
The EXPLAIN report may say "Using temporary" but this doesn't mean it's using a temporary table on disk. It can make a small temp table in memory. The table must fit within the lesser of max_heap_table_size and tmp_table_size. If you increase tmp_table_size, you should also increase max_heap_table_size to match.
See also http://dev.mysql.com/doc/refman/5.1/en/internal-temporary-tables.html for more info on temporary table management.
But 4 gigs for that value is very high! Consider that this memory can potentially be used per connection. The default is 16 megs, so you've increased it by a factor of 256.
So we want to find which queries caused the temp disk tables.
If you run MySQL 5.1 or later, you can SET GLOBAL long_query_time=0 to make all queries output to the slow query log. Be sure to do this only temporarily and set it back to a nonzero value when you're done! :-)
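For example (the 1-second value just restores the threshold mentioned in the question):
SET GLOBAL long_query_time = 0;
-- let the server log everything for a representative period, then:
SET GLOBAL long_query_time = 1;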
If you run Percona Server, the slow query log is extended with additional information and configurability, including whether the query caused a temp table or a temp disk table. You can even filter the slow-query log to include only queries that cause a temp table or temp disk table (the docs I link to).
You can also process Percona Server's slow-query log with mk-query-digest and filter for queries that cause a temp disk table.
mk-query-digest /path/to/slow.log --no-report --print \
  --filter '($event->{Disk_tmp_table}||"") eq "Yes"'
If you are using MySQL 5.6 or above, you can use the performance schema. Try something like:
select * from events_statements_summary_by_digest where SUM_CREATED_TMP_DISK_TABLES>0\G
Often queries using an ORDER BY will have to use temp table(s). If you run EXPLAIN on those queries you might see:
Using temporary; Using filesort
Look for queries with an ORDER BY.
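As a hypothetical illustration (the table and column names are made up), a query that groups by one column and orders by a different one will typically show both flags in the Extra column of EXPLAIN:
EXPLAIN SELECT customer_id, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id
ORDER BY order_count DESC;
-- Extra: Using temporary; Using filesort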

MySQL optimization tips specific to a completely static database?

I have a database that is about 20 GB in size. I want to know if there are any optimization tips specific to working with a database that is static. By static I don't mean that it changes infrequently; I mean that it won't change at all. Are there any extreme settings or other things, normally avoided with a volatile database, that can benefit a truly static database, especially considering there will only be SELECT statements and absolutely no INSERT statements? I'm using MyISAM tables.
-- roschler
Since everything is MyISAM, you need to focus on two major things:
KEY CACHE
The main mechanism used for caching is the key cache. It only caches index pages from .MYI files. To size your key cache, run the following query:
SELECT CONCAT(ROUND(KBS/POWER(1024,IF(PowerOfTwo<0,0,IF(PowerOfTwo>3,0,PowerOfTwo)))+0.4999),
       SUBSTR(' KMG',IF(PowerOfTwo<0,0,IF(PowerOfTwo>3,0,PowerOfTwo))+1,1)) recommended_key_buffer_size
FROM
(
    SELECT LEAST(POWER(2,32),KBS1) KBS FROM
    (
        SELECT SUM(index_length) KBS1
        FROM information_schema.tables
        WHERE engine='MyISAM'
        AND table_schema NOT IN ('information_schema','mysql')
    ) AA
) A,
(SELECT 2 PowerOfTwo) B;
This will give the recommended setting for the MyISAM key cache (key_buffer_size) given your current data set (the query will cap the recommendation at 4G / 4096M). For a 32-bit OS, 4GB is the limit. For a 64-bit OS, it is 8GB.
FULLTEXT Indexes
You may want to change the stopword list, because MySQL will not index its default list of 643 stopwords. Try creating your own stopword list and changing the minimum word length.
Step 1) Create a stop word list of your own. You could add 'a','an', and 'the'.
echo "a" > /var/lib/mysql/custom_stopwords.txt<BR>
echo "an" >> /var/lib/mysql/custom_stopwords.txt<BR>
echo "the" >> /var/lib/mysql/custom_stopwords.txt
Step 2) Add these options to /etc/my.cnf
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/custom_stopwords.txt
Step 3) service mysql restart
Step 4) Create new FULLTEXT indexes. Any FULLTEXT indexes that existed before the MySQL restart should be rebuilt.
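For a MyISAM table, rebuilding the FULLTEXT index can be done with a quick repair or by dropping and re-adding the index (the table, index, and column names below are made up):
REPAIR TABLE articles QUICK;
ALTER TABLE articles DROP INDEX ft_body, ADD FULLTEXT INDEX ft_body (body);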
Give These Ideas a Try !!!