I have a database that contains many tables, all of which store text. The total size of the tables is less than 10 MB, but the database file on the server is larger than 1 GB.
The tables use MyISAM. Where is the problem?
thanks
You can use OPTIMIZE TABLE to reduce disk usage on MyISAM tables (usually after deleting a lot of data from them or making major changes to their structure).
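For example, a minimal sketch (table name hypothetical; the Data_free column of SHOW TABLE STATUS shows how much reclaimable space a table holds):

SHOW TABLE STATUS LIKE 'my_table';
OPTIMIZE TABLE my_table;  -- rebuilds the table and defragments the data file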
For InnoDB, that is considerably more challenging; see Howto: Clean a mysql InnoDB storage engine?
Both MyRocks (MySQL) and Cassandra use an LSM architecture to store their data. I populated around 5 million rows in MySQL with MyRocks as the storage engine, and the same rows in Cassandra. In Cassandra the data takes only 1.7 GB of disk space, while in MySQL with MyRocks it takes 19 GB.
Am I missing something? Both use the same LSM mechanism, so why do they differ so much in data size?
Update:
I guess it has something to do with the text column. My table structure is (bigint, bigint, varchar, text).
Rows populated: 300,000
In MyRocks the data size is 185 MB.
In Cassandra it is 13 MB.
But if I remove the text column, then:
MyRocks - 21.6 MB
Cassandra - 11 MB
Any idea about this behaviour?
Well, the reason for the above behaviour turned out to be the rocksdb_block_size being set to 4 KB. With such small data blocks, the compressor sees less data at a time to compress. Setting it to 16 KB solved the issue; now I get a data size similar to Cassandra's.
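For reference, a sketch of that change, assuming a standard MyRocks build. As far as I know, rocksdb_block_size is not a dynamic variable, so it goes in my.cnf and needs a server restart (16384 bytes = 16 KB):

[mysqld]
rocksdb_block_size=16384

After the restart you can verify it with:

SHOW VARIABLES LIKE 'rocksdb_block_size';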
Not 100% sure about MyRocks, but Cassandra is an LSM-based key-value store, which means that if your column is null, it isn't stored on disk at all. A traditional RDBMS will still consume some space for it (varchars, null pointers etc.), so this may account for your lost space.
Additionally, Cassandra compresses data by default. Try:
ALTER TABLE myTable WITH compression = { 'enabled' : 'false' };
Here is the situation I am stuck in.
Situation
We want to move from the MyISAM to the InnoDB engine, so that there will be no table-level locks.
Catch
We can get a maximum of 1 hour of service downtime, and not a minute more than that.
Our DB machine's hardware spec is very low: 8 GB RAM.
Learnings
Recently we learnt that migrating our DB engine would take 3-4 hours, including the engine conversion and re-indexing (this was emulated with a live DB dump in an offline environment).
This is because the engine migration re-creates the schema with InnoDB as the engine and re-inserts all table data into the new schema.
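For context, the straightforward in-place conversion (table name hypothetical) looks like this; it copies the whole table row by row, which is where the hours go:

ALTER TABLE my_table ENGINE=InnoDB;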
What I found
One interesting fact I found is that after the MySQL dump file is created, if I replace the text MyISAM with InnoDB in the dump file and then import it into the new DB, the maximum time taken was 50 minutes, and all tables were converted to InnoDB with the right indexes.
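For illustration, one way to do that replacement and re-import (file, user and database names hypothetical):

sed 's/ENGINE=MyISAM/ENGINE=InnoDB/g' full_dump.sql > innodb_dump.sql
mysql -u username -p target_db < innodb_dump.sql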
My Question
Is the approach I took correct?
Does it lead to any data corruption or index corruption?
I did it with no problems. Beware of the features that exist only in MyISAM, such as an auto-increment column that is part of a multi-column key, or fulltext indexing (InnoDB only gained fulltext support in MySQL 5.6).
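A quick way to check for one of those before converting, a sketch using the standard information_schema (lists any fulltext indexes that would block a clean conversion on pre-5.6 servers):

SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
FROM information_schema.STATISTICS
WHERE INDEX_TYPE = 'FULLTEXT';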
I'm trying to create a copy of the Wikipedia database (around 50 GB), but I'm having problems with the largest SQL files.
I've split the multi-gigabyte files into chunks of 300 MB using the Linux split utility, e.g.:
split -d -l 50 ../enwiki-20070908-page page.input.
On average, a 300 MB file takes 3 hours on my server. I'm running Ubuntu 12.04 Server and MySQL 5.5.
I'm importing them like this:
mysql -u username -ppassword database < category.sql
Note: these files consist of INSERT statements; they are not CSV files.
Wikipedia offers database dumps for download, so everybody can create a copy of Wikipedia.
You can find example files here: Wikipedia Dumps
I think the import is slow because of my MySQL server settings, but I don't know what I should change. I'm using the standard Ubuntu MySQL config on a machine with a decent processor and 2 GB of RAM. Could someone help me out with a suitable configuration for my system?
I've tried setting innodb_buffer_pool_size to 1 GB, but to no avail.
Since you have less than 50GB of memory (so you can't buffer the entire database in memory), the bottleneck is the write speed of your disk subsystem.
Tricks to speed up imports (a combined sketch follows this list):
MyISAM is not transactional, so it is much faster for single-threaded inserts. Try loading into MyISAM, then ALTER the table to InnoDB
Use ALTER TABLE .. DISABLE KEYS to avoid updating indexes row by row (MyISAM only)
Set bulk_insert_buffer_size above your insert size (MyISAM only)
Set unique_checks = 0 so that unique constraints are not checked.
For more, see Bulk Data Loading for InnoDB Tables in the MySQL Manual.
Note: if the original table has foreign key constraints, using MyISAM as an intermediate format is a bad idea.
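Putting those together, a sketch of the load-then-convert sequence (the table name page matches the dump file in the question; the buffer size is an arbitrary example):

SET SESSION unique_checks = 0;
SET SESSION bulk_insert_buffer_size = 256 * 1024 * 1024;
ALTER TABLE page DISABLE KEYS;
-- run the INSERT-statement chunks here, e.g. mysql ... < page.input.00
ALTER TABLE page ENABLE KEYS;
ALTER TABLE page ENGINE=InnoDB;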
Use MyISAM, which is usually much faster than InnoDB, if your database isn't transaction-oriented. Did you research using any table partitioning/sharding techniques?
Converting a huge MyISAM table into InnoDB will again run into performance issues, so I am not sure I would do that. But disabling and re-enabling keys could be of help...
I have several databases in MySQL with the InnoDB engine. All together they take around 30 GB on the filesystem. A couple of days ago I removed a lot of data from those databases (~10-15 GB), but the used space on the filesystem is the same, and reading data_length and index_length from information_schema.TABLES also gives almost the old size.
I dumped a 3.3 GB database and imported it on my workstation, where it takes only 1.1 GB (yes, it is a 1:1 copy). So how can I calculate the size an InnoDB database would need if I re-imported it into a new system?
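A sketch for comparing the logical size against allocated-but-unused space per schema (data_free is the standard information_schema column for unreclaimed space):

SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024) AS used_mb,
       ROUND(SUM(data_free) / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
GROUP BY table_schema;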
Optimize your tables after deleting large amounts of data.
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
InnoDB doesn't free disk space; it's a PITA. Basically, you can free it by dropping the database and restoring it from a backup (as you've noticed by chance) - see here for examples.
So you can't calculate in advance how big a database will be after you restore a backup, but it will never be bigger than the original: the original still contains the unreclaimed space from deleted data, while the restored copy does not.
This can be worked around to some extent using the innodb_file_per_table option; more details in the first link from this post.
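A sketch of that workaround (table name hypothetical): enable the option in my.cnf (it only affects tables created or rebuilt afterwards, and the shared ibdata1 file itself never shrinks), then rebuild each table so it moves into its own .ibd file, which later rebuilds can actually shrink.

[mysqld]
innodb_file_per_table = 1

ALTER TABLE my_table ENGINE=InnoDB;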
I have a MySQL database that contains millions of rows per table, and there are 9 tables in total. The database is fully populated, and all I am doing is reads, i.e. there are no INSERTs or UPDATEs. Data is stored in MyISAM tables.
Given this scenario, which Linux file system would work best? Currently I have XFS, but I read somewhere that XFS has horrible read performance. Is that true? Should I move the database to an ext3 file system?
Thanks
What about a RAM disk?
It's not about the FS, but a RAM disk can improve your SELECTs. Did you evaluate MySQL table partitioning?
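If you want to try partitioning, a minimal sketch (table and column names hypothetical; MySQL 5.1+ syntax):

CREATE TABLE readings (
    id BIGINT NOT NULL,
    payload TEXT
) ENGINE=MyISAM
PARTITION BY HASH(id) PARTITIONS 8;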