I have found myself looking after an old TestLink installation; all the people responsible have left, and it is years since I did any serious SQL work.
The underlying MySQL server is version 5.5.24-0ubuntu0.12.04.1.
I do not have all the passwords, but I have enough rights to do a backup without locks:
mysqldump --all-databases --single-transaction -u testlink -p --result-file=dump2.sql
I really do not want to have to attempt to restore the data!
We need to increase the length of the name field in TestLink, and various pages point me towards increasing the length of a column in the nodes_hierarchy table.
The backup yielded this:
CREATE TABLE `nodes_hierarchy` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(100) DEFAULT NULL,
`parent_id` int(10) unsigned DEFAULT NULL,
`node_type_id` int(10) unsigned NOT NULL DEFAULT '1',
`node_order` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `pid_m_nodeorder` (`parent_id`,`node_order`)
) ENGINE=MyISAM AUTO_INCREMENT=184284 DEFAULT CHARSET=utf8;
I really have only one chance to get this right and cannot lose any data. Does this look exactly right?
ALTER TABLE nodes_hierarchy MODIFY name VARCHAR(150) DEFAULT NULL;
That is the correct syntax.
Backup
You should back up the database regardless of how safe this operation is. It seems like you are already planning on it. It is unlikely you will have problems; the backup is just an insurance policy against unlikely occurrences.
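If you want an extra belt-and-braces copy right before the change, a single-table dump is cheap. This mirrors the options already used in your command; the database name (testlink) is an assumption, so substitute your actual schema name, and note that --single-transaction does not give a MyISAM table a consistent snapshot (it is harmless here, just not a guarantee):
mysqldump --single-transaction -u testlink -p --result-file=nodes_hierarchy_before.sql testlink nodes_hierarchy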
Test table
You seem to have ~200K records. I'd recommend you make a copy of this table by just doing:
CREATE TABLE `test_nodes_hierarchy` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(100) DEFAULT NULL,
`parent_id` int(10) unsigned DEFAULT NULL,
`node_type_id` int(10) unsigned NOT NULL DEFAULT '1',
`node_order` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `test_pid_m_nodeorder` (`parent_id`,`node_order`)
) ENGINE=MyISAM AUTO_INCREMENT=184284 DEFAULT CHARSET=utf8;
Populate test table
Populate the test table with:
INSERT INTO test_nodes_hierarchy
SELECT *
FROM nodes_hierarchy;
Run the ALTER statement on the test table
Find out how long the ALTER statement takes on the test table.
ALTER TABLE test_nodes_hierarchy
MODIFY name VARCHAR(150) DEFAULT NULL;
Rename test table
Practice renaming the test table using:
RENAME TABLE test_nodes_hierarchy TO test2_nodes_hierarchy;
Once you know the time it takes, you know what to expect on the main table. If something goes awry, you can drop the nodes_hierarchy table and simply rename the test_nodes_hierarchy table to take its place.
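If you want that fallback to be a single step, MySQL's multi-table RENAME runs as one atomic operation, so a sketch like the following (using the table names above) swaps the altered copy into place while keeping the old table around instead of dropping it:
RENAME TABLE nodes_hierarchy TO nodes_hierarchy_old,
             test_nodes_hierarchy TO nodes_hierarchy;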
That'll just build confidence around the operation.
Related
MySql: AUTO_INCREMENT is missing from some tables after running for about one month.
Initially (SHOW CREATE TABLE Foo):
CREATE TABLE `Foo` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`name` varchar(10) NOT NULL,
`type` tinyint(2) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8
After one month:
CREATE TABLE `Foo` (
`id` bigint(20) NOT NULL,
`name` varchar(10) NOT NULL,
`type` tinyint(2) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
AUTO_INCREMENT is missing. What is the cause?
MySQL Server version: 5.6.25, Linux
Someone must have changed it. This change does not happen spontaneously.
I can reproduce this change myself:
CREATE TABLE Foo ( id BIGINT AUTO_INCREMENT, ...
ALTER TABLE Foo MODIFY COLUMN id BIGINT;
SHOW CREATE TABLE Foo\G
*************************** 1. row ***************************
Table: foo
Create Table: CREATE TABLE `foo` (
`id` bigint(20) NOT NULL,
`name` varchar(10) NOT NULL,
`type` tinyint(2) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Now the column shows it is BIGINT but not AUTO_INCREMENT.
Every time you MODIFY COLUMN or CHANGE COLUMN, you must repeat all the column options like NOT NULL and AUTO_INCREMENT and DEFAULT, or else it will revert to defaults (i.e. not auto-increment).
So I would interpret this as showing that someone did an ALTER TABLE and didn't remember to include the AUTO_INCREMENT column option.
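If that is indeed what happened, the fix is another ALTER that repeats every option the column should have (a sketch based on the original definition above):
ALTER TABLE Foo MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;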
Just a thought.
If you have binary logs, you may be able to see the ALTER query in the logs and when it was run. :)
Check whether the binary log is enabled with:
SHOW VARIABLES LIKE 'log_bin';
If the binary log is enabled, find the likely period during which the query could have been executed and then use mysqlbinlog to help you find it.
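For example (the log file path and the date window below are placeholders, not values from your server):
mysqlbinlog --start-datetime="2015-07-01 00:00:00" --stop-datetime="2015-07-31 23:59:59" /var/lib/mysql/mysql-bin.000123 | grep -i "alter table"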
If the binary log is not enabled, bad luck; as the previous post by Bill Karwin has suggested, MySQL does not change this on its own, so someone must have changed it.
I have this table
CREATE TABLE llegada (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`emc_id` int(10) unsigned DEFAULT NULL,
`cuartel_id` int(10) unsigned DEFAULT NULL,
`fecha` datetime DEFAULT NULL,
`nro_entrada` int(10) unsigned DEFAULT NULL,
`valor` varchar(10) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `ind_llegada` (`emc_id`,`cuartel_id`,`fecha`,`nro_entrada`)
) ENGINE=MyISAM AUTO_INCREMENT=18822145 DEFAULT CHARSET=latin1
This table has approximately 100,000,000 records. To improve performance I would like to partition this table into 6 parts depending on the year. But two problems come up first: I'm not sure how to do it, and I don't know whether it would mean modifying the queries made against the table. Ideally I would not have to modify the query page that accesses the database.
Thanks in advance.
I have never heard of a way to partition the SQL itself without just creating multiple databases. As you said, you'll need to modify the query page, or the way information is stored across the various databases on your site, which you should do anyway because otherwise that's going to be a lot of wasted processing time. I'm surprised it hasn't already affected your user experience.
When I create a table in MySQL specifying smallint for a column, but then use SHOW CREATE TABLE or even mysqldump, MySQL has added (5) after the smallint definition, as below.
I'm guessing it doesn't really matter as far as the data is concerned, but can anyone explain why and if/how I can stop it doing this?
As an aside, I am attempting to change an existing database table to exactly match that of a new sql script. I could always alter the new sql script, but I'd prefer to alter the existing table if possible (think software install versus software upgrade).
DROP TABLE IF EXISTS `test`;
CREATE TABLE `test` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`status` varchar(100) NOT NULL DEFAULT '',
`port` smallint unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
SHOW CREATE TABLE test;
CREATE TABLE `test` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`status` varchar(100) NOT NULL DEFAULT '',
`port` smallint(5) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
No, you can't stop the SHOW CREATE TABLE from including the display width attribute for integer types.
If a value for the display width is not included in the column declaration of an integer type, MySQL supplies a default value for it. A value of 5 is the default value for SMALLINT UNSIGNED.
The display width doesn't have any effect on the values that can be stored or retrieved. Client applications can make use of the value for formatting a result set.
Reference: http://dev.mysql.com/doc/refman/5.6/en/numeric-type-attributes.html
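As a quick illustration that the display width is only cosmetic, the following throwaway table (made up for this example) stores the full SMALLINT UNSIGNED range in both columns, regardless of the (3) or (5):
CREATE TABLE width_demo (a SMALLINT(3) UNSIGNED, b SMALLINT(5) UNSIGNED);
INSERT INTO width_demo VALUES (65535, 65535);  -- both succeed; the width does not limit the range
SELECT * FROM width_demo;                      -- returns 65535, 65535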
MySQL is simply setting the (displayed) length of the column to match the data type (max value 65535, five digits). To change this, you can write:
port smallint(3) unsigned NOT NULL DEFAULT '0',
if you like.
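If you ever want the existing table to carry that narrower width, a plain MODIFY does it; this is just a sketch against the test table from the question, and remember to repeat the other column options (as discussed in the AUTO_INCREMENT question above):
ALTER TABLE test MODIFY port smallint(3) unsigned NOT NULL DEFAULT '0';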
Try this and start adding values to your table.
mysql> CREATE TABLE test(
    -> ID SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    -> Name VARCHAR(100) NOT NULL
    -> );
I have a MySQL DB with a table that holds version information for multiple other tables. In order to link to the same family of versions I have a version_master table that holds a primary key for the family of versions that the link refers to. I was wondering if there is a more elegant solution that avoids the need for a version_master table.
CREATE TABLE IF NOT EXISTS `version` (
`version_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`version_master_id` int(10) unsigned NOT NULL,
`major` int(10) unsigned NOT NULL DEFAULT '0',
`minor` int(10) unsigned NOT NULL DEFAULT '0',
`patch` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`version_id`));
CREATE TABLE IF NOT EXISTS `version_master` (
`version_master_id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
CREATE TABLE IF NOT EXISTS `needs_versions` (
`needs_versions_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`date_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`name` varchar(128) COLLATE utf8_unicode_ci NOT NULL,
`description` text COLLATE utf8_unicode_ci NOT NULL,
`version_master_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`needs_versions_id`));
In this example you can certainly eliminate the version_master table and use the combination of the version_id and version_master_id fields as an index. I think you can just drop it, because nothing seems to refer to it with a foreign key.
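In concrete terms, that could be as little as the following (a sketch against the definitions shown above; the index name is made up):
ALTER TABLE version ADD UNIQUE KEY uk_master_version (version_master_id, version_id);
DROP TABLE version_master;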
However, having version_master would be a good idea if you had additional information associated with each family of versions.
Also, you are trying to make a primary key out of the undefined column offer_type_id. It is not clear whether you can logically merge needs_versions with version_master or not. The name itself is not very descriptive; I would recommend not using verbs in table names.
The other common way to do this is to use SEQUENCEs.
But MySQL does not seem to support them; at least, the MySQL manual contains a section on how to simulate sequences using a one-row, one-column table:
Create a table to hold the sequence counter and initialize it:
CREATE TABLE sequence (id INT NOT NULL);
INSERT INTO sequence VALUES (0);
Use the table to generate sequence numbers like this:
UPDATE sequence SET id=LAST_INSERT_ID(id+1);
SELECT LAST_INSERT_ID();
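Tying that back to the question, the generated number could stand in for version_master_id whenever a new family of versions is started (a sketch; LAST_INSERT_ID() is per-connection, so both statements must run on the same connection):
UPDATE sequence SET id=LAST_INSERT_ID(id+1);
INSERT INTO version (version_master_id, major, minor, patch)
VALUES (LAST_INSERT_ID(), 1, 0, 0);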
The following query is using temporary and filesort. I'd like to avoid that if possible.
SELECT lib_name, description, count(seq_id), floor(avg(size))
FROM libraries l JOIN sequence s ON (l.lib_id=s.lib_id)
WHERE s.is_contig=0 and foreign_seqs=0 GROUP BY lib_name;
The EXPLAIN says:
id,select_type,table,type,possible_keys,key,key_len,ref,rows,Extra
1,SIMPLE,s,ref,libseq,contigs,contigs,4,const,28447,Using temporary; Using filesort
1,SIMPLE,l,eq_ref,PRIMARY,PRIMARY,4,s.lib_id,1,Using where
The tables look like this:
libraries
CREATE TABLE `libraries` (
`lib_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`lib_name` varchar(30) NOT NULL,
`method_id` int(10) unsigned DEFAULT NULL,
`lib_efficiency` decimal(4,2) unsigned DEFAULT NULL,
`insert_avg` decimal(5,2) DEFAULT NULL,
`insert_high` decimal(5,2) DEFAULT NULL,
`insert_low` decimal(5,2) DEFAULT NULL,
`amtvector` decimal(4,2) unsigned DEFAULT NULL,
`description` text,
`foreign_seqs` tinyint(1) NOT NULL DEFAULT '0' COMMENT '1 means the sequences in this library are not ours',
PRIMARY KEY (`lib_id`),
UNIQUE KEY `lib_name` (`lib_name`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1;
sequence
CREATE TABLE `sequence` (
`seq_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`seq_name` varchar(40) NOT NULL DEFAULT '',
`lib_id` int(10) unsigned DEFAULT NULL,
`size` int(10) unsigned DEFAULT NULL,
`add_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`sequencing_date` date DEFAULT '0000-00-00',
`comment` text DEFAULT NULL,
`is_contig` int(10) unsigned NOT NULL DEFAULT '0',
`fasta_seq` longtext,
`primer` varchar(15) DEFAULT NULL,
`gc_count` int(10) DEFAULT NULL,
PRIMARY KEY (`seq_id`),
UNIQUE KEY `seq_name` (`seq_name`),
UNIQUE KEY `libseq` (`lib_id`,`seq_id`),
KEY `primer` (`primer`),
KEY `sgitnoc` (`seq_name`,`is_contig`),
KEY `contigs` (`is_contig`,`seq_name`) USING BTREE,
CONSTRAINT `FK_sequence_1` FOREIGN KEY (`lib_id`) REFERENCES `libraries` (`lib_id`)
) ENGINE=InnoDB AUTO_INCREMENT=61508 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;
Are there any changes I can make so the query goes faster? If not, when (for a web application) is it worth putting the results of a query like the above into a MEMORY table?
First strategy: make it faster for MySQL to locate the records you want summarized.
You've already got an index on sequence.is_contig. You might try indexing on libraries.foreign_seqs. I don't know if that will help, but it's worth a try.
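For example (the index name is made up):
ALTER TABLE libraries ADD INDEX idx_foreign_seqs (foreign_seqs);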
Second strategy: see if you can get your sort to run in memory, rather than in a file. Try making the sort_buffer_size parameter bigger. This will consume RAM on your server, but that's what RAM is for.
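For example (the 8 MB figure is only a starting point to experiment with, not a recommendation):
SET SESSION sort_buffer_size = 8 * 1024 * 1024;  -- affects the current connection only
SET GLOBAL sort_buffer_size = 8 * 1024 * 1024;   -- affects connections opened after this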
Third strategy: if your application needs to do this query a lot but updates the underlying data only a little, take your own suggestion and create a summary table. Perhaps use an EVENT to remake the summary table, and run it once every few minutes. If you're going to follow that strategy, start by creating a view with this table in it and have your app retrieve information from the view. Then get the summary table stuff working, drop the view, and give the summary table the same name as the view. That way your data model work and your application design work can proceed independently of each other.
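A sketch of the summary-table-plus-EVENT idea, with made-up table and event names, keeping the aggregation identical to the original query (the event scheduler has to be switched on for the refresh to run):
SET GLOBAL event_scheduler = ON;

CREATE TABLE lib_summary AS
SELECT lib_name, description, COUNT(seq_id) AS seq_count, FLOOR(AVG(size)) AS avg_size
FROM libraries l JOIN sequence s ON (l.lib_id = s.lib_id)
WHERE s.is_contig = 0 AND l.foreign_seqs = 0
GROUP BY lib_name;

DELIMITER //
CREATE EVENT refresh_lib_summary
ON SCHEDULE EVERY 5 MINUTE
DO
BEGIN
  -- rebuild the summary from scratch on each run
  TRUNCATE TABLE lib_summary;
  INSERT INTO lib_summary
  SELECT lib_name, description, COUNT(seq_id), FLOOR(AVG(size))
  FROM libraries l JOIN sequence s ON (l.lib_id = s.lib_id)
  WHERE s.is_contig = 0 AND l.foreign_seqs = 0
  GROUP BY lib_name;
END//
DELIMITER ;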
Final suggestion: if this is truly slowly-changing summary data, switch to MyISAM. It's a little faster for this kind of data wrangling.