I'm having a problem with .ibd MySQL files.
Scenario:
I have an Ubuntu server with 200 GB of storage, running a Django application that uses MySQL.
The nature of my application is to store huge amounts of data and do some processing on it. One table has 5 to 6 million records and has grown to almost 60 GB (the space occupied by the tablename.ibd file).
I tried running OPTIMIZE TABLE tablename, but the .ibd file does not shrink.
The table uses InnoDB with innodb_file_per_table enabled (hence the per-table .ibd file).
PROBLEM
Firstly, storage is running out as the file keeps getting larger.
Secondly, when I run a migration to add a column to this table, the server runs out of space, because during the migration the .ibd file grows even bigger.
I will be very thankful if someone helps me out with this.
Note: I cannot purge data from the table, as the data is very important to me.
(UPDATED)
SHOW CREATE TABLE table_name;

CREATE TABLE `table_name` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(255) DEFAULT NULL,
`price` double DEFAULT NULL,
`item_identifier` varchar(20) NOT NULL,
`upc` varchar(20) DEFAULT NULL,
`mpn` varchar(100) DEFAULT NULL,
`weight` double DEFAULT NULL,
`weight_unit` varchar(10) DEFAULT NULL,
`main_category` varchar(50) DEFAULT NULL,
`sub_category` varchar(50) DEFAULT NULL,
`category_tree` varchar(500) DEFAULT NULL,
`description` varchar(3800) DEFAULT NULL,
`color` varchar(50) DEFAULT NULL,
`brand` varchar(150) DEFAULT NULL,
`main_image` varchar(2048) DEFAULT NULL,
`secondary_images` varchar(255) DEFAULT NULL,
`shipping` double,
`stock` int(11) NOT NULL,
`sale_rank` varchar(100) DEFAULT NULL,
`itemHeight` double DEFAULT NULL,
`itemLength` double DEFAULT NULL,
`itemWeight` double DEFAULT NULL,
`itemWidth` double DEFAULT NULL,
`manufacturer` varchar(100) DEFAULT NULL,
`product_model` varchar(150) DEFAULT NULL,
`variations` longtext,
`pack_count` int(11),
`size` varchar(100) DEFAULT NULL,
`flavor` varchar(100) DEFAULT NULL,
`successfully_stored` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `item_identifier` (`item_identifier`),
KEY `table_name_upc_3ca3d702` (`upc`)
) ENGINE=InnoDB AUTO_INCREMENT=7279139 DEFAULT CHARSET=latin1;
SHOW TABLE STATUS LIKE 'table_name'\G
*************************** 1. row ***************************
Name: table_name
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 7439966
Avg_row_length: 8807
Data_length: 65530740736
Max_data_length: 0
Index_length: 323633152
Data_free: 5242880
Auto_increment: 7279139
Create_time: 2021-06-11 21:26:17
Update_time: 2021-06-12 18:08:06
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.01 sec)
InnoDB disk space is 2-3 times as much as you might think. This is because of several different "overhead" things. They provide performance and features; live with it.
60GB / 5M = 12KB per row. It sounds like you have one or more big TEXT or BLOB columns. Please provide SHOW CREATE TABLE so we can further discuss the layout of the table.
(OPTIMIZE TABLE is rarely of any use; don't bother using it.)
Sizes
Bill covered most of the size-related things (DOUBLE->FLOAT, etc); alas they will shrink the disk footprint by only a few percent in your case.
It seems that variations must be the bulkiest column. What do you get from SELECT AVG(LENGTH(variations)) FROM table_name; ? I suspect it is a few thousand. Most "text" can easily be compressed 3:1 by standard compression libraries. If the average is 3000, then the potential saving is about 2KB per row, which is something like 20-30% of the table. (It may save more due to the "off-record" storage mechanism, but the computation is complex.)
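To see which columns are the bulkiest, a quick check of average lengths can help; this is just a sketch against the column names shown above:

SELECT AVG(LENGTH(variations))    AS avg_variations,
       AVG(LENGTH(description))   AS avg_description,
       AVG(LENGTH(category_tree)) AS avg_category_tree,
       AVG(LENGTH(main_image))    AS avg_main_image
FROM table_name;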
Compressing a single column requires the cooperation of the client. That is, code in Django needs to compress and uncompress the column between client and server.
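As a server-side illustration of the same idea (your Django code could do the equivalent with zlib), MySQL's built-in COMPRESS()/UNCOMPRESS() functions work like this; the variations_comp column is hypothetical:

-- Add a BLOB column to hold the compressed text
ALTER TABLE table_name ADD COLUMN variations_comp LONGBLOB;

-- Populate it in id-range batches to limit undo-log growth
UPDATE table_name
SET variations_comp = COMPRESS(variations)
WHERE id BETWEEN 1 AND 100000;

-- Read it back
SELECT UNCOMPRESS(variations_comp) FROM table_name WHERE id = 12345;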
Using ROW_FORMAT=COMPRESSED gives about 2:1 compression for the whole table and is transparent to the client. So, overall, this is probably better.
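A sketch of that change, assuming innodb_file_per_table is ON (and, on older versions, innodb_file_format=Barracuda); note that it rebuilds the table, so it temporarily needs free disk space -- exactly what you are short of:

ALTER TABLE table_name
    ROW_FORMAT=COMPRESSED
    KEY_BLOCK_SIZE=8;   -- 8KB compressed pages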
As Bill points out, all of this is a temporary fix -- you will run out of disk space as the table grows. That is, OPTIMIZE, smaller datatypes, and compression are only temporary fixes. You really need more disk space.
Get a server with larger storage volumes.
Alternative: Get a second server running MySQL Server, and move some of the data in your current instance to that new instance.
Re your update with the table definition and status:
The table status shows that the data length, that is, the rows, use about 61 GiB, and the secondary indexes use about 0.3 GiB. So it's unlikely that you can save space by dropping indexes.
The average row size is 8807 bytes (this is an estimate, it's just the data_length divided by the number of rows). You might be able to reduce the average row size a little bit by changing some data types.
For example, each double takes 8 bytes. Could you get enough precision using float or numeric(9,2) instead? These take 4 bytes each. Similarly, there are some int columns that might be able to be smallint and still store the range of values you need.
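For illustration, a sketch of such a change on a few of the DOUBLE columns (like OPTIMIZE TABLE, this rebuilds the table and needs temporary disk space):

ALTER TABLE table_name
    MODIFY price      FLOAT DEFAULT NULL,
    MODIFY weight     FLOAT DEFAULT NULL,
    MODIFY itemHeight FLOAT DEFAULT NULL,
    MODIFY itemWeight FLOAT DEFAULT NULL;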
You should read about the storage requirements of each data type and make decisions about how best to define your columns. See https://dev.mysql.com/doc/refman/8.0/en/storage-requirements.html
The variable-length data types like varchar and longtext already store only the length of the content in the column on each row, not the max length allowed. So for example changing varchar(200) to varchar(100) doesn't make any difference if the strings in them are already shorter than 100 characters.
There are some cases of varchar that might be replaced by an integer reference to a lookup table. An integer may take less space than repeating the same string on every row.
You could use the InnoDB COMPRESSED row format. This has variable results depending on your data, but it might shrink strings by about half.
Changing data types and the row format do require you to run ALTER TABLE, so there needs to be enough storage space for the copy of the table temporarily, similar to running OPTIMIZE TABLE. If you don't have enough space to do that, then you can't alter the table.
Even with these techniques, your table will still be quite large, and databases tend to grow over time as we store more rows of data in them. Even if you shrink it a bit today, you will still need a plan for getting a larger storage volume eventually.
Related
Looking for some guidance on how to best tackle partitioning on some database tables for the purpose of archiving/deleting data over a certain age. The main reason for this is to resolve some issues in database size.
You can think of the data as akin to telemetry data: it grows over time, but once it enters the database it doesn't change outside of the first 10-15 minutes, in the event there is any form of conflicting data that requires the application to update a recent record (max 15 mins).
Current database size is approaching 500GB and is sitting on NVMe storage across a 3x Node Galera cluster in three cities. Backups are becoming increasingly larger and if an SST is needed between nodes this can take a couple of hours to complete which is less than ideal.
The plan to deal with this is archiving: we plan to off-board historical data to another server (say, once a year) with slower storage that can then be backed up once and won't change for 12 months. The historical data will be rarely accessed, and in the event it is, our application will query the archive server for anything older than a certain date instead of the production servers that are relied on heavily for "recent" data.
We have 3 tables per customer, and they reference each other in a sort of hierarchy. There are no foreign keys in the tables, but they do hold references to one another and are used in JOIN queries. E.g. the summary table is the top of the hierarchy and holds one record per "event". Under this is the details table, and there could be 1-10 detail records sitting under the summary event. Under details is the digits table, which could hold 0-10 records per detail record.
CREATE TABLE statements below:
CREATE TABLE `summary_X` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`start_utc` datetime DEFAULT NULL,
`end_utc` datetime DEFAULT NULL,
`total_duration` smallint(6) DEFAULT NULL,
`legs` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `start_utc` (`start_utc`)
) ENGINE=InnoDB
CREATE TABLE `details_X` (
`xid` bigint(20) NOT NULL AUTO_INCREMENT,
`id` int(11) NOT NULL,
`duration` smallint(6) DEFAULT NULL,
`start_utc` timestamp NULL DEFAULT NULL,
`end_utc` timestamp NULL DEFAULT NULL,
`event` varchar(2) DEFAULT NULL,
`event_time` smallint(6) DEFAULT NULL,
`event_a` varchar(7) DEFAULT NULL,
`event_b` varchar(7) DEFAULT NULL,
`ani` varchar(20) DEFAULT NULL,
`dnis` varchar(10) DEFAULT NULL,
`first_time` varchar(30) DEFAULT NULL,
`final_time` varchar(30) DEFAULT NULL,
`digits_count` int(2) DEFAULT 0,
`sys_a` varchar(3) DEFAULT NULL,
`sys_b` varchar(3) DEFAULT NULL,
`log_id_a` varchar(12) DEFAULT NULL,
`seq_a` varchar(1) DEFAULT NULL,
`log_id_b` varchar(12) DEFAULT NULL,
`seq_b` varchar(1) DEFAULT NULL,
`assoc_log_id_a` varchar(12) DEFAULT NULL,
`assoc_log_id_b` varchar(12) DEFAULT NULL,
PRIMARY KEY (`xid`),
KEY `start_utc` (`start_utc`),
KEY `end_utc` (`end_utc`),
KEY `event_a` (`event_a`),
KEY `event_b` (`event_b`),
KEY `id` (`id`),
KEY `final_digits` (`final_digits`),
KEY `log_id_a` (`log_id_a`),
KEY `log_id_b` (`log_id_b`)
) ENGINE=InnoDB
CREATE TABLE `digits_X` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`leg_id` bigint(20) DEFAULT NULL,
`sequence` int(2) NOT NULL,
`digits` varchar(30) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `digits` (`digits`),
KEY `leg_id` (`leg_id`)
) ENGINE=InnoDB
My first thought was to partition by year. That sounds easy enough, but we don't have a date column on the digits table, so records there could be orphaned away from their mapped details records and no longer match in a JOIN on the archive server.
We can also have a similar issue with summary, where the timestamps on the "details" records could span multiple years. E.g. a summary event starts at 2021-12-31 23:55:00; the first detail record has the same timestamp, and the next detail under the same event could be at 2022-01-01 00:11:00. If the 2021 partition were archived off to the other server, the 2022 detail would be orphaned and would no longer JOIN to the 2021 summary event.
One alternative could be not to partition at all and do SELECT/INSERT/DELETE, which isn't practical with the volume of data. Some tables have 30M-40M rows per year, so this would be very taxing on resources. There are also 400+ customers, each with their own set of tables.
Another idea was to add a "year" column to all three tables to partition on, holding the year of the first event across the set so that all related records land on the same partition/server, but this seems like a waste of space and there should be a better way.
Any thoughts or guidance would be appreciated.
To add PARTITIONing will require copying the entire table over. That will involve downtime and disk space. If you can live with that, then...
PARTITION BY RANGE(...) where the expression involves, say, TO_DAYS(...) or possibly TO_SECONDS(...). Then set up cron jobs to add a new partition periodically. (There is nothing automated for such.) And to detach the oldest partition. See Partition for a discussion of the details. (TO_DAYS avoids the need for a 'year' column.)
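A sketch of what that could look like on the summary table; the partition names and date boundaries are made up, and the first ALTER is needed because InnoDB requires the partitioning column to be part of every unique key (assuming start_utc can be made NOT NULL):

ALTER TABLE summary_X
    MODIFY start_utc DATETIME NOT NULL,
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (id, start_utc);

ALTER TABLE summary_X
    PARTITION BY RANGE (TO_DAYS(start_utc)) (
        PARTITION p2021 VALUES LESS THAN (TO_DAYS('2022-01-01')),
        PARTITION p2022 VALUES LESS THAN (TO_DAYS('2023-01-01')),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );

-- Cron job: split pmax to add the next period
ALTER TABLE summary_X REORGANIZE PARTITION pmax INTO (
    PARTITION p2023 VALUES LESS THAN (TO_DAYS('2024-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);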
Note that partitioning is implemented as several sub-tables under a table. With "transportable tablespaces", you can detach a partition from the big table, turning it into a table unto itself. At that point, you are free to move it to another server or something.
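A sketch of detaching one partition into its own table so it can be shipped to the archive server (the table and partition names are hypothetical):

-- Empty table with the same structure, minus partitioning
CREATE TABLE summary_X_2021 LIKE summary_X;
ALTER TABLE summary_X_2021 REMOVE PARTITIONING;

-- Swap the old partition's contents into the standalone table
ALTER TABLE summary_X EXCHANGE PARTITION p2021 WITH TABLE summary_X_2021;

-- Export it with transportable tablespaces
FLUSH TABLES summary_X_2021 FOR EXPORT;
-- copy the .ibd and .cfg files to the archive server, then:
UNLOCK TABLES;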
In a situation like yours, I might consider the following.
Write the raw data to a file (perhaps one per day) for archiving;
Insert into a table that will live only briefly; this will be purged by some means frequently;
Update "normalization" tables
"Summarize" the data into Summary Tables, where each set of rows covers one hour (or whatever makes sense).
Write "reports" from the summary table(s).
Be aware that each Partition takes an extra 5.5MB (average), so do not make many partitions. Or do you need only 2, each containing 15 minutes' data?
Meanwhile, I would look carefully at the schema. Can an INT (4 bytes) be turned into a SMALLINT (2 bytes)? Can more things be normalized?
digits_count int(2) -- that is a 4-byte INT; the (2) has no meaning and has been removed in MySQL 8. (MariaDB may follow suit someday.) It sounds like you need only a 1-byte TINYINT UNSIGNED (range: 0..255).
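A sketch of that change (like any column change, it rebuilds the table):

ALTER TABLE details_X
    MODIFY digits_count TINYINT UNSIGNED DEFAULT 0;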
Since this is log info, be aware of Daylight Saving Time with respect to DATETIME. (One hour per year is missing; another hour repeats.) This problem does not occur with TIMESTAMP. DATETIME takes 5 bytes and TIMESTAMP takes 4 (plus extra bytes if you include fractional seconds).
(I can't advise on unnecessary indexes without seeing the queries.) SHOW TABLE STATUS will tell you how much space is being consumed by all the indexes.
Are the 3 tables of similar size?
Re "orphaning" -- You need at least 2 partitions -- one being filled (0-100% full) and an older partition (100% full)
"30M-40M rows per year" times 400 customers. Does that add up to 500 rows inserted per second? Are they INSERTed one row at a time? High speed ingestion
Are there more deletes and selects than inserts? And/or do they involve more than single rows? (I'm fishing for more info to help with some other issues you either have or are threatening to have.) Even with Deletes and no Partitioning, the disk growth will slow down as free space is generated, then reused. ("Rinse and repeat.")
Without partitioning, see Huge Deletes. But... DELETEing data from a table does not shrink its disk footprint. However, if each 'customer' has 1/400th of the data, and (of course) you do each customer separately, then there may not be any disk problem.
I've given you a lot to think about. Answer some of my questions; I may have more advice.
We would like to add a normal index on fields 3 & 4 of the following MySQL table and would like to understand the impact on server performance before doing so. E.g., will the index take up additional RAM and slow down the database as a result?
We understand it will take time initially to create the index; we're not concerned about that. Rather, we want to know whether we need to upgrade our server to handle the potential increase in load/memory pressure on the database after adding the index. Our DBA insists that we must increase RAM from 16GB to 48GB, as he believes the new index will be kept in RAM, causing the server to run out of memory for other operations. It would be great to confirm whether that is necessary.
Thanks in advance for your expert advice.
MySQL version: 5.5.30
OS: CentOS
Hardware config: 8 Core, 32G RAM, 1TB Disk
Table size: 490GB
No. of rows: 67M
CREATE TABLE `mytable` (
`field_1` text NOT NULL,
`field_2` varchar(200) NOT NULL,
`field_3` varchar(100) NOT NULL,
`field_4` text NOT NULL,
`field_5` char(8) NOT NULL,
`field_6` varchar(100) NOT NULL DEFAULT '',
`field_7` varchar(100) DEFAULT '',
`field_8` varchar(20) NOT NULL,
`field_9` char(16) NOT NULL,
`field_0` varchar(25) NOT NULL,
`field_a` varchar(50) NOT NULL DEFAULT '',
`field_b` varchar(20) DEFAULT '',
`field_c` varchar(35) DEFAULT '',
`field_d` varchar(35) DEFAULT '',
`field_e` varchar(30) NOT NULL DEFAULT '',
`field_f` varchar(30) DEFAULT '',
`field_g` varchar(3) NOT NULL DEFAULT 'xx',
`field_h` varchar(50) DEFAULT '',
`field_i` varchar(100) DEFAULT '',
`field_j` char(8) NOT NULL,
`field_k` varchar(10) NOT NULL DEFAULT '',
`field_l` datetime NOT NULL,
PRIMARY KEY (`field_9`),
KEY `field_j_idx` (`field_j`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
First of all, indexes are stored on disk, not in memory. Both MyISAM and InnoDB may cache certain index blocks in memory to enable faster access to the most commonly used blocks. For InnoDB, the size of this cache is controlled by the innodb_buffer_pool_size server system variable.
As you can see from the description, the setting of this variable is not affected by the addition or removal of indexes. So, unless you decide to increase the size of this variable, there is no direct impact of adding a new index on MySQL memory usage.
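You can confirm the current setting, and how much of the pool is actually in use, with:

-- Buffer pool size in bytes; adding an index does not change it
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Pages allocated, free, and dirty right now
SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';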
Obviously, adding a new index to a large existing table will have a performance impact during the creation of the index. There will be an obvious impact after the index is added on any insert / update / delete operations, since MySQL will have to update the additional index data as well.
It depends. What version of MySQL do you have? With newer versions, ALGORITHM=INPLACE makes adding a secondary, non-unique, index relatively fast and painless.
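A sketch, assuming MySQL 5.6 or later (on the 5.5.30 server mentioned above the ALGORITHM/LOCK clauses are not available and the ALTER copies the whole table); the index names are made up:

ALTER TABLE mytable
    ADD INDEX idx_field_3 (field_3),
    ADD INDEX idx_field_4 (field_4(100)),   -- field_4 is TEXT, so a prefix length is required
    ALGORITHM=INPLACE, LOCK=NONE;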
You have another potential problem looming. If this table is really half the size of the disk and you ever need to do an ALTER that cannot be done INPLACE, it will probably fail for lack of disk space. Consider getting a bigger disk before this happens, and/or think about ways to shrink the table.
CHAR(8) -- what kind of data is in it? If it is always hex or plain letters, it should be declared CHARACTER SET ascii (or latin1), not utf8 -- which takes 24 bytes. Field_j already takes double that because of the index.
If some of the columns have repeated values, consider "normalizing" them. Then replace the bulky string with MEDIUMINT UNSIGNED (3 bytes, 16M max) or INT UNSIGNED.
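A sketch of such normalization; the lookup table and column names are hypothetical:

CREATE TABLE field_i_lookup (
    id  MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
    val VARCHAR(100) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY (val)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Seed it from the existing data; the big table then stores the 3-byte id
-- instead of repeating the string on every row.
INSERT INTO field_i_lookup (val)
SELECT DISTINCT field_i FROM mytable WHERE field_i IS NOT NULL;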
(I understand your need for obfuscation of the column names, but it makes it hard to give you concrete suggestions.)
field_4 is TEXT, which cannot be indexed directly (only via a prefix or FULLTEXT index). Please describe further what type of text is in it; we may be able to suggest workarounds.
I assume innodb_file_per_table=ON when you built the table? And is still ON? Else, all hope is lost.
I have a query that runs really slowly (15-20 seconds) when the data is not in memory, and quite fast when it is (0.6 s - 2 s):
select count(distinct(concat(conexiones.tMacAdres,date_format(conexiones.fFecha,'%Y%m%d')))) as Conexiones,
sum(if(conexiones.tEvento='megusta',1,0)) as MeGusta,sum(if(conexiones.tEvento='megusta',conexiones.nAmigos,0)) as ImpactosMeGusta,
sum(if(conexiones.tEvento='checkin',1,0)) as CheckIn,sum(if(conexiones.tEvento='checkin',conexiones.nAmigos,0)) as ImpactosCheckIn,
min(conexiones.fFecha) Fecha_Inicio, now() Fecha_fin,datediff(now(),min(conexiones.fFecha)) as dias
from conexiones, instalaciones
where conexiones.idInstalacion=instalaciones.idInstalacion and conexiones.idInstalacion=190
and (fFecha between '2014-01-01 00:00:00' and '2016-06-18 23:59:59')
group by instalaciones.tNombre
order by instalaciones.idCliente
These are the table schemas:
Instalaciones with 1332 rows:
CREATE TABLE `instalaciones` (
`idInstalacion` int(10) unsigned NOT NULL AUTO_INCREMENT,
`idCliente` int(10) unsigned DEFAULT NULL,
`tRouterSerial` varchar(50) DEFAULT NULL,
`tFacebookPage` varchar(256) DEFAULT NULL,
`tidFacebook` varchar(64) DEFAULT NULL,
`tNombre` varchar(128) DEFAULT NULL,
`tMensaje` varchar(128) DEFAULT NULL,
`tWebPage` varchar(128) DEFAULT NULL,
`tDireccion` varchar(128) DEFAULT NULL,
`tPoblacion` varchar(128) DEFAULT NULL,
`tProvincia` varchar(64) DEFAULT NULL,
`tCodigoPosta` varchar(8) DEFAULT NULL,
`tLatitud` decimal(15,12) DEFAULT NULL,
`tLongitud` decimal(15,12) DEFAULT NULL,
`tSSID1` varchar(40) DEFAULT NULL,
`tSSID2` varchar(40) DEFAULT NULL,
`tSSID2_Pass` varchar(40) DEFAULT NULL,
`fSincro` datetime DEFAULT NULL,
`tEstado` varchar(10) DEFAULT NULL,
`tHotspot` varchar(10) DEFAULT NULL,
`fAlta` datetime DEFAULT NULL,
PRIMARY KEY (`idInstalacion`),
UNIQUE KEY `tRouterSerial` (`tRouterSerial`),
KEY `idInstalacion` (`idInstalacion`)
) ENGINE=InnoDB AUTO_INCREMENT=1332 DEFAULT CHARSET=utf8;
Conexiones with 2370365 rows
CREATE TABLE `conexiones` (
`idConexion` int(10) unsigned NOT NULL AUTO_INCREMENT,
`idInstalacion` int(10) unsigned DEFAULT NULL,
`idUsuario` int(11) DEFAULT NULL,
`tMacAdres` varchar(64) DEFAULT NULL,
`tUsuario` varchar(128) DEFAULT NULL,
`tNombre` varchar(64) DEFAULT NULL,
`tApellido` varchar(64) DEFAULT NULL,
`tEmail` varchar(64) DEFAULT NULL,
`tSexo` varchar(20) DEFAULT NULL,
`fNacimiento` date DEFAULT NULL,
`nAmigos` int(11) DEFAULT NULL,
`tPoblacion` varchar(64) DEFAULT NULL,
`fFecha` datetime DEFAULT NULL,
`tEvento` varchar(20) DEFAULT NULL,
PRIMARY KEY (`idConexion`),
KEY `idInstalacion` (`idInstalacion`),
KEY `tMacAdress` (`tMacAdres`) USING BTREE,
KEY `fFecha` (`fFecha`),
KEY `idUsuario` (`idUsuario`),
KEY `insta_fecha` (`idInstalacion`,`fFecha`)
) ENGINE=InnoDB AUTO_INCREMENT=2370365 DEFAULT CHARSET=utf8;
This is EXPLAIN
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE instalaciones const PRIMARY,idInstalacion PRIMARY 4 const 1
1 SIMPLE conexiones ref idInstalacion,fFecha,insta_fecha idInstalacion 5 const 110234 "Using where"
Thanks !
(Edited)
SHOW TABLE STATUS LIKE 'conexiones'\G
*************************** 1. row ***************************
           Name: conexiones
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 2305296
 Avg_row_length: 151
    Data_length: 350060544
Max_data_length: 0
   Index_length: 331661312
      Data_free: 75497472
 Auto_increment: 2433305
    Create_time: 28/06/2016 22:26
    Update_time: NULL
     Check_time: NULL
      Collation: utf8_general_ci
       Checksum: NULL
 Create_options:
        Comment:
Here's why it is so slow. And I will end with a possible speedup.
First, please do
SELECT COUNT(*) FROM conexiones
WHERE idInstalacion=190
and fFecha >= '2014-01-01'
and fFecha < '2016-06-19'
in order to see how many rows we are dealing with. The EXPLAIN suggests 110234, but that is only a crude estimate.
Assuming there are 110K rows of conexiones involved in the query, and assuming the rows were (approximately) inserted in chronological order by fFecha, then...
There are a lot of rows to work with, and
They are scattered around the table on disk, hence
The query takes a lot of I/O, unless it is cached.
Let's further check on my last claim... How much RAM do you have? What is the value of innodb_buffer_pool_size? It should be about 70% of available RAM. Use a lower percentage if you have less than 4GB of RAM.
Assuming that conexiones is too big to be 'cached' in the 'buffer_pool', we need to find a way to decrease the I/O.
There are 1332 different values for idInstalacion. Perhaps you insert 1332 rows every few minutes/hours into conexiones? Since the PRIMARY KEY is merely an AUTO_INCREMENT, those rows will be 'appended' to the end of the table.
Now let's look at where the idInstalacion=190 rows are. A new one of them occurs every 1332 (or so) rows. That means they are spread out. It means that (probably) no two rows are in the same block (16KB in InnoDB). That means the 110234 rows will be in 110234 different blocks. That's about 2GB. If the buffer_pool is smaller than that, then there will be I/O. Even if it is bigger than that, that's a lot of data to touch.
But what to do about it? If we could arrange the =190 rows to be consecutive in the table, then the 2GB might drop to, say, 20MB -- a much more manageable and cacheable size. But how can that be done? By changing the PRIMARY KEY.
PRIMARY KEY(idInstalacion, fFecha, idConexion),
INDEX(idConexion)
and DROP any other indexes starting with idInstalacion or idConexion. To explain:
Since the PK is "clustered" with the data, all idInstalacion=190 rows over any consecutive fFetcha range will be consecutive in the data. So, fetching one block will get about 100 rows -- much less I/O.
A PK must be unique. Assuming (idInstalacion, fFecha) is not unique, I tacked on idConexion to make it unique.
I added INDEX(idConexion) to make AUTO_INCREMENT happy.
Potential drawback... Since this change rearranges the order of the data, other queries, including the INSERTs, may be slowed down. The INSERTs will be scattered, but not really slowed down. 1332 "hot spots" would be accepting the new rows; that many blocks can easily be cached.
Arithmetic... If you have spinning drives, I would expect the existing structure to take about 1102 seconds (perhaps under 110 seconds for SSD) for 110234 rows. Since it is taking under 20 seconds, I suspect there is some caching (or you have SSDs) or the 110234 is grossly overestimated. My suggested change should decrease the "worst" time significantly, and slightly improve the "in memory" time. This "slight improvement" comes from being able to use the PK instead of a secondary key.
Caveat: Since 110234 * 1332 is nowhere near 2370365, much of my numerical analysis is probably nowhere near correct. For example, 2370365 rows with that schema is possibly less than 1GB. Please provide SHOW TABLE STATUS LIKE 'conexiones'.
Addenda
"server has 2GB Ram and innodb_buffer_pool_size is 5368709120" -- Either that is a typo or it is terrible. Since the buffer_pool needs to reside in RAM, do not set the buffer_pool to 5GB. 500MB might be OK for your tiny 2GB of RAM.
The SHOW TABLE STATUS confirms that it (data + indexes) won't quite fit in 500M, so you may periodically experience I/O bound queries with 500M.
Increasing your RAM and buffer_pool would temporarily (until the data gets bigger) help performance.
Before putting this into production, test the ALTER and time the various queries you use:
ALTER TABLE conexiones
    DROP PRIMARY KEY,
    DROP INDEX insta_fecha,
    DROP INDEX idInstalacion,
    ADD PRIMARY KEY(idInstalacion, fFecha, idConexion),
    ADD INDEX(idConexion);
Caution: The ALTER will need about 1GB of free disk space.
When timing, run with the Query Cache off, and run twice -- the first may involve I/O; the second is the 'in memory' as you mentioned.
Revised analysis: Since the bigger table has 300MB of data and some amount of indexes in use, and assuming 500MB buffer pool, I suspect that blocks are bumped out of the buffer pool some of the time. This fits well with your initial comment on the query's speed. My suggested index changes should help avoid the speed variance, but may hurt the performance of other queries.
Try using a multi-column index:
CREATE INDEX idx_nn_1 ON conexiones (idInstalacion, fFecha);
You might need it the other way around depending on the data, so test both. This avoids reading all the records matching the BETWEEN condition on fFecha just to check the idInstalacion condition, and should improve performance.
Try the following:
Either delete the idInstalacion INDEX or tell the engine to use the correct key in the from clause:
from conexiones use index (insta_fecha), instalaciones
And you don't need to JOIN, GROUP or ORDER. You are joining on a constant value (190) with one row. And you don't use any column from instalaciones.
So all you need is this:
select count(distinct(concat(conexiones.tMacAdres,date_format(conexiones.fFecha,'%Y%m%d')))) as Conexiones,
sum(if(conexiones.tEvento='megusta',1,0)) as MeGusta,sum(if(conexiones.tEvento='megusta',conexiones.nAmigos,0)) as ImpactosMeGusta,
sum(if(conexiones.tEvento='checkin',1,0)) as CheckIn,sum(if(conexiones.tEvento='checkin',conexiones.nAmigos,0)) as ImpactosCheckIn,
min(conexiones.fFecha) Fecha_Inicio, now() Fecha_fin,datediff(now(),min(conexiones.fFecha)) as dias
from conexiones -- use index (insta_fecha)
where conexiones.idInstalacion=190
and (fFecha between '2014-01-01 00:00:00' and '2016-06-18 23:59:59')
However - it doesn't mean it will be faster. MySQL will probably optimize all that stuff away.
We have an analytics product. We give each of our customers a JavaScript snippet that they put on their websites. When a user visits a customer's site, the JavaScript code hits our server so that we can store the page visit on behalf of that customer. Each customer has a unique domain name.
We store these page visits in a MySQL table.
Following is the table schema.
CREATE TABLE `page_visits` (
`domain` varchar(50) DEFAULT NULL,
`guid` varchar(100) DEFAULT NULL,
`sid` varchar(100) DEFAULT NULL,
`url` varchar(2500) DEFAULT NULL,
`ip` varchar(20) DEFAULT NULL,
`is_new` varchar(20) DEFAULT NULL,
`ref` varchar(2500) DEFAULT NULL,
`user_agent` varchar(255) DEFAULT NULL,
`stats_time` datetime DEFAULT NULL,
`country` varchar(50) DEFAULT NULL,
`region` varchar(50) DEFAULT NULL,
`city` varchar(50) DEFAULT NULL,
`city_lat_long` varchar(50) DEFAULT NULL,
`email` varchar(100) DEFAULT NULL,
KEY `sid_index` (`sid`) USING BTREE,
KEY `domain_index` (`domain`),
KEY `email_index` (`email`),
KEY `stats_time_index` (`stats_time`),
KEY `domain_statstime` (`domain`,`stats_time`),
KEY `domain_email` (`domain`,`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
We don't have primary key for this table.
MySQL server details
It is Google Cloud MySQL (version 5.6) with a storage capacity of 10 TB.
As of now we have 350 million rows in the table and its size is 300 GB. We store all customers' data in the same table even though there is no relation between one customer and another.
Problem 1: A few of our customers have a huge number of rows in the table, so queries against those customers are very slow.
Example Query 1:
SELECT count(DISTINCT sid) AS count,count(sid) AS total FROM page_views WHERE domain = 'aaa' AND stats_time BETWEEN CONVERT_TZ('2015-02-05 00:00:00','+05:30','+00:00') AND CONVERT_TZ('2016-01-01 23:59:59','+05:30','+00:00');
+---------+---------+
| count | total |
+---------+---------+
| 1056546 | 2713729 |
+---------+---------+
1 row in set (13 min 19.71 sec)
I will add more queries here. We need results in under 5-10 seconds; is that possible?
Problem 2: The table size is increasing rapidly; we might hit 5 TB by the end of this year, so we want to shard the table. We want to keep all records related to one customer on one machine. What are the best practices for this sharding?
We are considering the following approaches for the above issues; please suggest best practices for overcoming them.
Create a separate table for each customer
1) What are the advantages and disadvantages of creating a separate table for each customer? We currently have 30k customers and might hit 100k by the end of this year, which would mean 100k tables in the DB. We access all tables simultaneously for reads and writes.
2) Keep the same table and create partitions based on date range.
UPDATE: Is a "customer" determined by the domain? Answer: yes.
Thanks
First, a critique of the excessively large datatypes:
`domain` varchar(50) DEFAULT NULL, -- normalize to MEDIUMINT UNSIGNED (3 bytes)
`guid` varchar(100) DEFAULT NULL, -- what is this for?
`sid` varchar(100) DEFAULT NULL, -- varchar?
`url` varchar(2500) DEFAULT NULL,
`ip` varchar(20) DEFAULT NULL, -- too big for IPv4, too small for IPv6; see below
`is_new` varchar(20) DEFAULT NULL, -- flag? Consider `TINYINT` or `ENUM`
`ref` varchar(2500) DEFAULT NULL,
`user_agent` varchar(255) DEFAULT NULL, -- normalize! (add new rows as new agents are created)
`stats_time` datetime DEFAULT NULL,
`country` varchar(50) DEFAULT NULL, -- use standard 2-letter code (see below)
`region` varchar(50) DEFAULT NULL, -- see below
`city` varchar(50) DEFAULT NULL, -- see below
`city_lat_long` varchar(50) DEFAULT NULL, -- unusable in current format; toss?
`email` varchar(100) DEFAULT NULL,
For IP addresses, use inet6_aton(), then store in BINARY(16).
For country, use CHAR(2) CHARACTER SET ascii -- only 2 bytes.
country + region + city + (maybe) latlng -- normalize this to a "location".
All these changes may cut the disk footprint in half. Smaller --> more cacheable --> less I/O --> faster.
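For instance, the IP change could look like this sketch (the ip_bin column is hypothetical; VARBINARY lets the 4-byte IPv4 form round-trip cleanly through INET6_NTOA):

ALTER TABLE page_visits ADD COLUMN ip_bin VARBINARY(16);

-- Backfill in limited batches; a single UPDATE over 350M rows is not practical
UPDATE page_visits
SET ip_bin = INET6_ATON(ip)
WHERE ip_bin IS NULL
LIMIT 100000;

-- Read it back
SELECT INET6_NTOA(ip_bin) FROM page_visits LIMIT 10;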
Other issues...
To greatly speed up your sid counter, change
KEY `domain_statstime` (`domain`,`stats_time`),
to
KEY dss (domain_id,`stats_time`, sid),
That will be a "covering index", hence won't have to bounce between the index and the data 2713729 times -- the bouncing is what cost 13 minutes. (domain_id is discussed below.)
This is redundant with the above index, DROP it:
KEY domain_index (domain)
Is a "customer" determined by the domain?
Every InnoDB table must have a PRIMARY KEY. There are 3 ways to get a PK; you picked the 'worst' one -- a hidden 6-byte integer fabricated by the engine. I assume there is no 'natural' PK available from some combination of columns? Then, an explicit BIGINT UNSIGNED is called for. (Yes that would be 8 bytes, but various forms of maintenance need an explicit PK.)
If most queries include WHERE domain = '...', then I recommend the following. (And this will greatly improve all such queries.)
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
domain_id MEDIUMINT UNSIGNED NOT NULL, -- normalized to `Domains`
PRIMARY KEY(domain_id, id), -- clustering on customer gives you the speedup
INDEX(id) -- this keeps AUTO_INCREMENT happy
Recommend you look into pt-online-schema-change for making all these changes. However, I don't know if it can work without an explicit PRIMARY KEY.
"Separate table for each customer"? No. This is a common question; the resounding answer is No. I won't repeat all the reasons for not having 100K tables.
Sharding
"Sharding" is splitting the data across multiple machines.
To do sharding, you need to have code somewhere that looks at domain and decides which server will handle the query, then hands it off. Sharding is advisable when you have write scaling problems. You did not mention such, so it is unclear whether sharding is advisable.
When sharding on something like domain (or domain_id), you could use (1) a hash to pick the server, (2) a dictionary lookup (of 100K rows), or (3) a hybrid.
I like the hybrid -- hash to, say, 1024 values, then look up into a 1024-row table to see which machine has the data. Since adding a new shard and migrating a user to a different shard are major undertakings, I feel that the hybrid is a reasonable compromise. The lookup table needs to be distributed to all clients that redirect actions to shards.
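A sketch of the hybrid lookup (the table and column names are made up):

CREATE TABLE shard_map (
    bucket SMALLINT UNSIGNED NOT NULL,   -- 0..1023, from hashing the domain
    shard  VARCHAR(64) NOT NULL,         -- which MySQL server holds this bucket
    PRIMARY KEY (bucket)
) ENGINE=InnoDB;

-- Client-side routing: hash the domain into a bucket, then look up the shard
SELECT shard FROM shard_map WHERE bucket = CRC32('customer-domain.com') % 1024;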
If your 'writing' is running out of steam, see high speed ingestion for possible ways to speed that up.
PARTITIONing
PARTITIONing is splitting the data across multiple "sub-tables".
There are only a limited number of use cases where partitioning buys you any performance. You have not indicated that any apply to your use case. Read that blog and see if you think that partitioning might be useful.
You mentioned "partition by date range". Will most of the queries include a date range? If so, such partitioning may be advisable. (See the link above for best practices.) Some other options come to mind:
Plan A: PRIMARY KEY(domain_id, stats_time, id) But that is bulky and requires even more overhead on each secondary index. (Each secondary index silently includes all the columns of the PK.)
Plan B: Have stats_time include microseconds, then tweak the values to avoid having dups. Then use stats_time instead of id. But this requires some added complexity, especially if there are multiple clients inserting data. (I can elaborate if needed.)
Plan C: Have a table that maps stats_time values to ids. Look up the id range before doing the real query, then use both WHERE id BETWEEN ... AND stats_time .... (Again, messy code.)
Summary tables
Are many of the queries of the form of counting things over date ranges? Suggest having Summary Tables based perhaps on per-hour. More discussion.
COUNT(DISTINCT sid) is especially difficult to fold into summary tables. For example, the unique counts for each hour cannot be added together to get the unique count for the day. But I have a technique for that, too.
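As an illustration only (not the technique alluded to above), a per-domain, per-hour summary covers the plain counts; the hypothetical table below still cannot answer COUNT(DISTINCT sid) for a whole day, since hourly distinct counts cannot simply be added:

CREATE TABLE page_visits_hourly (
    domain VARCHAR(50)  NOT NULL,
    hr     DATETIME     NOT NULL,
    views  INT UNSIGNED NOT NULL,
    PRIMARY KEY (domain, hr)
) ENGINE=InnoDB;

INSERT INTO page_visits_hourly (domain, hr, views)
SELECT domain, DATE_FORMAT(stats_time, '%Y-%m-%d %H:00:00'), COUNT(*)
FROM page_visits
WHERE stats_time >= '2016-01-01 00:00:00'
  AND stats_time <  '2016-01-01 01:00:00'
  AND domain IS NOT NULL
GROUP BY domain, DATE_FORMAT(stats_time, '%Y-%m-%d %H:00:00')
ON DUPLICATE KEY UPDATE views = views + VALUES(views);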
I wouldn't do this if I were you. The first thing that comes to mind: on receiving a pageview message, send it to a queue so that a worker can pick it up and insert it into the database later (in bulk, maybe); also increment a siteid:date counter in Redis (for example). Doing the count in SQL is just a bad idea for this scenario.
I have a high CPU problem with MySQL; "top" (Linux) shows CPU peaks of 90%.
I tried to find the source of the problem and turned on the general log and the slow query log.
The slow query log did not find anything.
The DB contains a few small tables and one large table with almost 100k rows; the storage engine is MyISAM. A strange thing I have noticed is that on the large table SELECT and INSERT are very fast, but an UPDATE takes 0.2 - 0.5 seconds.
I have already run OPTIMIZE and REPAIR with no improvement.
The table is updated frequently; could this be the source of the high CPU usage?
What can I do to improve this?
Does your MySQL server have ganglia setup on it? Regular ganglia metrics along with the mysql_stats plugin for ganglia might reveal what's going on.
I found mytop extremely helpful.
First of all, could you identify which query is overloading the server? If so, please paste it here and maybe we can give you a hand with it.
Also, please look at the table structure. Tables with many indexes tend to have slow updates.
I also recommend giving us more data about the problem.
Hope that helps,
The first thing that pops into mind is indexing but that doesn't fit since your selects and inserts are fast. It's usually inserts and updates that will slow down on an "overindexed" table. That leaves triggers... do you have an update trigger on that table that could be doing a lot of work and causing the spike?
A query that takes 0.5 seconds won't show up as 100% CPU in top. It's too small.
Also try "SHOW FULL PROCESSLIST"; verify your my.cnf and even try reducing the slow-query threshold (long_query_time). The slow query log can catch anything that is slow for long enough.
Any update statement on that table based on the table's key is slow.
for example UPDATE customers SET CustMoney = 1 WHERE CustUID = 'someid'
CREATE TABLE IF NOT EXISTS `customers` (
`CustFullName` varchar(45) NOT NULL,
`CustPassword` varchar(45) NOT NULL,
`CustEmail` varchar(128) NOT NULL,
`SocialNetworkId` tinyint(4) NOT NULL,
`CustUID` varchar(64) character set ascii NOT NULL,
`CustMoney` bigint(20) NOT NULL default '0',
`LastIpAddress` varchar(45) character set ascii NOT NULL,
`LastLoginTime` datetime NOT NULL default '1900-10-10 10:10:10',
`SmallPicURL` varchar(120) character set ascii default '',
`LargePicURL` varchar(120) character set ascii default '',
`LuckyChips` int(10) unsigned NOT NULL default '0',
`AccountCreationTime` datetime NOT NULL default '2009-11-11 11:11:11',
`AccountStatus` tinyint(4) NOT NULL default '1',
`CustLevel` int(11) NOT NULL default '0',
`City` varchar(32) NOT NULL default '',
`State` varchar(32) NOT NULL default '0',
`Country` varchar(32) NOT NULL default '',
`Zip` varchar(16) character set ascii NOT NULL,
`CustExp` bigint(20) NOT NULL default '0',
PRIMARY KEY (`CustUID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Again, I'm not sure this is the cause of the high CPU usage, but it seems to me that it's not normal for an UPDATE statement to take that long (0.5 sec).
The table is currently updated up to 5 times per second, and in the future it will be updated more frequently.
What kind of server is this? I've seen slow writes and relatively fast reads on virtual machines. What does hdparm (http://en.wikipedia.org/wiki/Hdparm) have to say?
What CPU/RAM do you have on it? What is the load average?