"Cannot convert to a SELECT statement" What does this mean - mysql

I'm trying to make our MySQL database run faster. I've analyzed our slow query log, and the most common slow query is this:
CREATE TABLE IF NOT EXISTS `wp_bad_behavior` (
`id` INT(11) NOT NULL auto_increment,
`ip` TEXT NOT NULL,
`date` DATETIME NOT NULL default '0000-00-00 00:00:00',
`request_method` TEXT NOT NULL,
`request_uri` TEXT NOT NULL,
`server_protocol` TEXT NOT NULL,
`http_headers` TEXT NOT NULL,
`user_agent` TEXT NOT NULL,
`request_entity` TEXT NOT NULL,
`key` TEXT NOT NULL,
INDEX (`ip`(15)),
INDEX (`user_agent`(10)),
PRIMARY KEY (`id`) );
I'm trying to understand why this query keeps getting called, because after the table is set up it should not keep happening.
The EXPLAIN result for this is: Cannot convert to a SELECT statement.
Any ideas on this would be fantastic!
Paul

EXPLAIN only works on SELECT queries; that's why it complains. As to why the query is in the slow query log: either the table is being deleted and recreated (check the regular query log for DROP TABLE statements), or the CREATE simply blocks because the table/database is busy (check the other slow queries first, especially the ones on the same table).
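If the general query log isn't already enabled, a quick way to hunt for stray DROP TABLE statements is to switch it on temporarily (a minimal sketch; the log file path is just an example):
SET GLOBAL general_log_file = '/tmp/mysql-general.log';
SET GLOBAL general_log = 'ON';
-- ...let the workload run for a while, then grep the file for DROP TABLE...
SET GLOBAL general_log = 'OFF';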

Of course you can't convert a CREATE statement to a SELECT statement...
The real question, therefore, is why a creation statement is called so frequently. It uses MySQL's IF NOT EXISTS, so it might not even be a design issue: the table only gets created once, even though the query is called a bazillion times. Maybe the system that uses the database issues this statement from every single method, as a way to make sure the actual database structure matches its expectations. Just a foolproof measure in case somebody deletes this super important table.

It's probably there as an easy way to recover from someone deleting the table. A plausible reason someone would delete the table is that it's an easy way to get rid of old log entries. However, this check obviously affects performance, so verify that none of the code deletes the table and then remove the check. You will then have to recreate the table manually whenever you delete old logs.

Related

Mysqldump fails on dumping virtual column

I have a largish (4 GB) database that I would like to dump, but when using the mysqldump tool (the MariaDB version: Ver 10.19 Distrib 10.4.21-MariaDB, for Linux (x86_64)), the dumping process always fails at the same table, with this not-so-helpful error message:
mysqldump: Couldn't execute 'SHOW CREATE TABLE `AffiliateProgramsCampaigns`': Lost connection to MySQL server during query (2013)
I've tried to debug this error, but none of the obvious solutions worked for me, so I did a little experimenting and found the culprit. The table in question contains a VIRTUAL column; strangely, if I remove it, the dump finishes successfully. I've dug a little more, but found no such error reported anywhere else relating to dumping MariaDB databases with virtual columns. Adding the --verbose option to the dump doesn't help either, as it gives no other significant information.
As the query fails at the SHOW CREATE TABLE part, I figured it has something to do with the structure of the CREATE TABLE query, but when I try to dump only the structure of this database, everything works like a charm. So I am stuck at the moment trying to solve this issue. I could give up on the virtual column in this specific table, but if there were any alternative, even a different dump tool, I would rather go with that solution. Any advice on how to fix this, or at least how to debug the problem more thoroughly, would be appreciated!
Here is some other debug information that could be helpful:
This is the end of the --verbose dump output:
-- Retrieving view structure for table ActionLogReferences...
-- It's base table, skipped
-- Retrieving view structure for table ActionLogs...
-- It's base table, skipped
-- Retrieving view structure for table AffiliatePrograms...
-- It's base table, skipped
-- Retrieving view structure for table AffiliateProgramsCampaigns...
mysqldump: Couldn't execute 'SHOW CREATE TABLE `AffiliateProgramsCampaigns`': Lost connection to MySQL server during query (2013)
And here is the CREATE TABLE syntax for the table in question:
CREATE TABLE `AffiliateProgramsCampaigns` (
`AffiliateProgramsCampaignId` bigint(20) NOT NULL AUTO_INCREMENT,
`Name` varchar(255) NOT NULL,
`Description` tinytext NOT NULL,
`StartDate` datetime NOT NULL,
`EndDate` datetime NOT NULL,
`IsActivated` tinyint(1) NOT NULL DEFAULT 0 COMMENT 'This column shows if this campaign was manually activated.',
`Status` tinyint(4) GENERATED ALWAYS AS (if(`IsActivated`,if(curdate() between `StartDate` and `EndDate`,1,0),0)) VIRTUAL COMMENT 'The final, computed status of the campaign. When querying, you should use this to check the status.',
`affiliatePrograms_AffiliateProgramId` mediumint(9) NOT NULL,
`images_ImageId_BaseImage` bigint(20) DEFAULT NULL COMMENT 'The id of the base image.',
`images_ImageId_CoverImage` bigint(20) DEFAULT NULL COMMENT 'The id of the cover image.',
PRIMARY KEY (`AffiliateProgramsCampaignId`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1
The query reported by mysqldump, by the way, reproduces the error every single time I run it, both from phpMyAdmin and from the command-line mysql client. I also tried dumping with different users, even the root user, but I always get the same error at the same spot.
The problem was with the CURDATE() function used in the virtual column. By changing it to CURRENT_TIMESTAMP(), the issue was solved.
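A minimal sketch of that fix, dropping and re-adding the generated column (the exact DDL here is my assumption based on the CREATE TABLE shown above; adjust the comment text to taste):
ALTER TABLE `AffiliateProgramsCampaigns`
DROP COLUMN `Status`,
ADD COLUMN `Status` tinyint(4) GENERATED ALWAYS AS
(if(`IsActivated`,if(current_timestamp() between `StartDate` and `EndDate`,1,0),0))
VIRTUAL COMMENT 'The final, computed status of the campaign.';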
Also posted a bug report on the official boards: https://jira.mariadb.org/browse/MDEV-26619

Is there a way to compare two CREATE TABLE queries and alter the existing table by adding the new columns?

So in this case, I will get the whole database schema multiple times, but each time the table structures might be slightly different from the previous ones. Since I already have data inside, is there a way to write a query that compares the new schema with the existing tables and just adds the new columns?
For example I already have this table in my database.
CREATE TABLE `Ages` (
`AgeID` int(11) DEFAULT NULL,
`AgeName` varchar(32) DEFAULT NULL,
`AgeAbbreviation` varchar(13) DEFAULT NULL,
`YouthAge` varchar(15) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
And the new schema that I get has the same table, but with different columns.
CREATE TABLE `Ages` (
`AgeID` int(11) DEFAULT NULL,
`AgeName` varchar(32) DEFAULT NULL,
`AgeAbbreviation` varchar(13) DEFAULT NULL,
`YouthAge` varchar(15) DEFAULT NULL,
`AgeLimit` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
In this case, the column AgeLimit needs to be added to the existing table.
You should be able to do it by looking at the table definitions in the metadata tables (information_schema):
1. Look into the existing schema using the information_schema database, which holds the metadata.
2. Import your new schema into a temporary database, creating all tables according to the new schema, and then look into its metadata the same way.
3. You might be able to use dynamic SQL inside a stored procedure to execute ALTER TABLE statements built from the differences at runtime.
But I think this is a lot easier from the backend Node.js server, because you can easily do steps 1 and 2 from Node.js as well (it's in fact just querying a bunch of tables), and you have far more possibilities to calculate the differences and to create and execute the appropriate queries.
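For steps 1 and 2, a minimal sketch of the metadata comparison, assuming the current data lives in a database named live_schema and the new schema was imported into new_schema (both names are placeholders; this only finds added columns, not changed ones):
SELECT CONCAT('ALTER TABLE `', n.TABLE_NAME, '` ADD COLUMN `', n.COLUMN_NAME, '` ', n.COLUMN_TYPE, ';') AS alter_stmt
FROM information_schema.COLUMNS n
LEFT JOIN information_schema.COLUMNS o
ON o.TABLE_SCHEMA = 'live_schema'
AND o.TABLE_NAME = n.TABLE_NAME
AND o.COLUMN_NAME = n.COLUMN_NAME
WHERE n.TABLE_SCHEMA = 'new_schema'
AND o.COLUMN_NAME IS NULL;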
EDIT 1
If you don't have the possibility of creating a temporary database from the new schema, you will have to find some other way to extract information from it. I suspect you have an SQL script with (among others) a bunch of CREATE TABLE ... statements, because that's typically what mysqldump creates. So you'll have to parse this script. Again, this seems way easier in JavaScript, if it is even possible in a MySQL stored procedure. If your schema is as well structured as your examples, it's actually just a few lines of code.
EDIT 2
And maybe you can even get some inspiration from here: Compare two MySQL databases. Some tools are mentioned there which synchronize two databases.

mysql 5.6 adjust varchar length to longer value without table locking

We have a varchar column right now that is 255 chars in length. We're about to up it to 400 using this statement:
ALTER TABLE `resources` CHANGE `url` `url` varchar(400) NOT NULL;
I've read the docs about online DDL, which state:
Operation                  | In-Place? | Copies Table? | Allows Concurrent DML? | Allows Concurrent Query?
---------------------------|-----------|---------------|------------------------|-------------------------
Change data type of column | No        | Yes           | No                     | Yes
And I have these two questions:
does changing the col from varchar(255) to varchar(400) constitute a changing of data type?
will this lock the table for writes?
On question two, it just seems unclear what "concurrent DML" really means. Does it mean I can't write to this table at all, or that the table goes through the copy/swap process?
We only have about 2.5 million rows in this table, so the migration only takes about 30 seconds, but I'd prefer the table not be locked out during the time period.
I had the same question and ran some tests based on advice from Percona. Here are my findings:
ALTER TABLE `resources` CHANGE `url` `url` varchar(400), ALGORITHM=INPLACE, LOCK=NONE;
Running this on 5.6 should produce something similar to:
[SQL]ALTER TABLE `resources` CHANGE `url` `url` varchar(400), ALGORITHM=INPLACE, LOCK=NONE;
[Err] 1846 - ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.
What this means is that you cannot perform this operation in place: MySQL considers it a column type change (a varchar of up to 255 bytes stores a 1-byte length prefix, while varchar(400) needs a 2-byte prefix, so the storage format itself changes), and therefore a full table copy must be performed.
So let's try to use the COPY algorithm as suggested in the output, but set LOCK=NONE:
ALTER TABLE `resources` CHANGE `url` `url` varchar(400), ALGORITHM=COPY, LOCK=NONE;
And we get:
[SQL]ALTER TABLE `resources` CHANGE `url` `url` varchar(400), ALGORITHM=COPY, LOCK=NONE;
[Err] 1846 - LOCK=NONE is not supported. Reason: COPY algorithm requires a lock. Try LOCK=SHARED.
Trying to set LOCK=SHARED and attempting an insert on the table results in the query waiting for a metadata lock.
I believe you are trying this in production and don't want to disrupt your system. You can do this another way. Let's say you want to change a column url (varchar 255) to url (varchar 400):
1. Create another column, url2 (varchar 400).
2. Copy all data from url to url2.
3. Rename the url column to url3.
4. Rename the url2 column to url.
Run the queries for the two rename steps (3 and 4) together; they take only milliseconds to execute. So there will be no long table lock and your application will run smoothly.
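A hedged sketch of those steps, assuming the column is NOT NULL as in the question (on a busy table the UPDATE should be batched, and rows written between the copy and the swap would need re-copying):
ALTER TABLE `resources` ADD COLUMN `url2` varchar(400) NOT NULL DEFAULT '';
UPDATE `resources` SET `url2` = `url`;
ALTER TABLE `resources`
CHANGE `url` `url3` varchar(255) NOT NULL,
CHANGE `url2` `url` varchar(400) NOT NULL;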
I'm 99% certain that an ALTER against any table, regardless of engine type, will result in the table being locked until the operation is complete, even with InnoDB's row-level locking capabilities.
If you can stand a 30-45 second outage where some connections may hang and ultimately be lost, then the easiest choice is to just pull the trigger. Or you could implement one of the following:
Put your site into 'maintenance mode' a few minutes before the operation, execute it, then take the site out of maintenance mode.
Or, if you have a master-master replication setup with a floating IP and DNS, you could do this:
Stop replication on standby master
Run alter
Switch floating ip to standby master
Stop replication on primary master
Run alter
Restart replication on both masters
Switch floating ip back to primary master
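A hedged sketch of the per-master commands, assuming classic replication; SET sql_log_bin = 0 keeps the ALTER out of the binary log so it is not replicated to the peer:
-- On the standby master:
STOP SLAVE;
SET SESSION sql_log_bin = 0;
ALTER TABLE `resources` CHANGE `url` `url` varchar(400) NOT NULL;
SET SESSION sql_log_bin = 1;
-- switch the floating IP over, repeat the ALTER on the other master, then:
START SLAVE;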

What is the best way to store single non-repeating data to a database?

What is best practice for storing data in a database when only a single entry is ever required? An example would be configuration data which relates to the entire application/website. Is it common to create a table for this which has only a single entry?
I'm asking under the context of a MongoDB database though I think the question is also valid for SQL databases.
An example of an auxiliary table commonly found in databases would be one called Constants, which may hold such values as pi, the idea being that all applications using the database are required to use the same scale and precision. In standard SQL, to ensure there is at most one row, you would write e.g. (from Joe Celko):
CREATE TABLE Constants
(
lock CHAR(1) DEFAULT 'X' NOT NULL PRIMARY KEY,
CHECK (lock = 'X'),
pi FLOAT DEFAULT 3.141592653 NOT NULL,
e FLOAT DEFAULT 2.71828182 NOT NULL,
phi FLOAT DEFAULT 1.6180339887 NOT NULL,
...
);
Because MySQL doesn't support CHECK constraints, a trigger is required to achieve the same effect.
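A minimal sketch of such a trigger, assuming MySQL 5.5+ for SIGNAL (the backticks are needed because lock is a reserved word in MySQL):
DELIMITER //
CREATE TRIGGER constants_one_row BEFORE INSERT ON Constants
FOR EACH ROW
BEGIN
    -- The PRIMARY KEY already limits the table to one row per value of `lock`;
    -- this enforces that the only allowed value is 'X', as the CHECK would.
    IF NEW.`lock` <> 'X' THEN
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Constants must contain exactly one row with lock = X';
    END IF;
END//
DELIMITER ;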
A table would be fine; there's no reason not to use one just because it will have only one row.
I just had the weirdest idea (I wouldn't implement it, but for some reason I thought of it). You can create a hard-coded view like this:
create view myConfigView
as
select 'myConfigvalue1' as configValue1, 'myConfigvalue2' as configValue2
and do select * from myConfigView :)
But again, there's no reason not to use a table just because it will only have one row.
If you are using a SQL DB, you will probably have columns like name and value, and each attribute will be stored as its own row.
In MongoDB, you can store all related configuration as a single JSON document
I use a config table with a name (config_name) and a value (config_value). I even add a help field so that users can see what the name/value pair is intended for, or where it is used.
CREATE TABLE config (
config_id bigint unsigned NOT NULL auto_increment,
config_name varchar(128) NOT NULL,
config_value text NOT NULL,
config_help text COMMENT 'help',
PRIMARY KEY (config_id),
UNIQUE KEY ix_config_name (config_name)
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
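A couple of illustrative rows and a lookup (the names and values here are made up):
INSERT INTO config (config_name, config_value, config_help)
VALUES ('site_title', 'My Site', 'Shown in the page header');
SELECT config_value FROM config WHERE config_name = 'site_title';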
Then the following PHP code retrieves the value for a key, or returns an empty string. It assumes $db is an open database connection. Lookups are forced to lower case.
function getConfigValue($name) {
    $retval = '';
    $db = $this->db;
    // NOTE: $name is interpolated directly here; in real code, escape it or
    // use a prepared statement to avoid SQL injection.
    $sql = 'select config_value from config where LOWER(config_name)="' . strtolower($name) . '"';
    $result = $db->Query($sql);
    if ($result) {
        $row = $db->FetchAssoc($result);
        if ($row) {
            $retval = $row['config_value'];
        }
    }
    return $retval;
}
All mysql/php in this instance, but the general principle remains.
For MongoDB databases, I usually just make a new "table" (collection), but for SQL databases that entails a lot more (especially when others are also working on the same database; SQL isn't as malleable), so you might want to be a bit more careful with it.
I would just create a table for configuration, as rainecc said, and then cache the whole table in memory and use it from there. That will work best.

MySQL: how to create a table and automatically track users who add or delete rows/tables

I would like some kind of revision control history for my sql database.
I would like this table to keep updating with a record of who deleted what, when, and so on.
I am connecting to MySQL using Perl.
Approach 1: Create a separate "audit" table and use triggers to populate the info.
Here's a brief guide for MySQL (and Postrges): http://www.go4expert.com/forums/showthread.php?t=7252
Approach 2: Populate the audit info from your Perl database access code, ideally as part of the same transaction. There's no significant win over the first approach and many downsides (you don't catch changes made OUTSIDE of your code, for one).
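A minimal sketch of approach 1, assuming a hypothetical widgets table with an integer id column; the audit table, trigger name, and columns are all illustrative:
CREATE TABLE widgets_audit (
    audit_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    widget_id INT NOT NULL,
    action CHAR(6) NOT NULL,
    changed_by VARCHAR(128) NOT NULL,  -- CURRENT_USER() gives the DB user, not the application user
    changed_at DATETIME NOT NULL
);
DELIMITER //
CREATE TRIGGER widgets_after_delete AFTER DELETE ON widgets
FOR EACH ROW
BEGIN
    INSERT INTO widgets_audit (widget_id, action, changed_by, changed_at)
    VALUES (OLD.id, 'DELETE', CURRENT_USER(), NOW());
END//
DELIMITER ;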
Disclaimer: I faced this situation in the past, but in PHP. The concepts are from PHP but could be applied to Perl with some thought.
I played with the idea of adding AFTER INSERT, AFTER UPDATE, and AFTER DELETE triggers to each table to accomplish the same thing. The problems with this were:
the trigger didn't know the 'admin' user, just the db user (CURRENT_USER)
Biggest issue was that it wasn't feasible to add these triggers to all my tables (I suppose I could have written a script to add the triggers).
Maintainability of the triggers. If you change how things are tracked, you'd have to update all triggers. I suppose having the trigger call a stored procedure would mostly fix that issue.
Either way, for my situation, I found the best course of action was in the application layer (not DB layer):
create a DB abstraction layer if you haven't already (a class that handles all interaction with the database)
create a function for each action (insert, update, delete)
in each of these functions, after a successful query call, add another query that inserts the relevant information into your tracking table
If done properly, any action you perform to update any table will be tracked. I had to add overrides for specific tables that should not be tracked (what's the point of tracking inserts on the 'track_table' table, for instance). Here's an example tracking-table schema:
CREATE TABLE `track_table` (
`id` int(16) unsigned NOT NULL,
`userID` smallint(16) unsigned NOT NULL,
`tableName` varchar(255) NOT NULL DEFAULT '',
`tupleID` int(16) unsigned NOT NULL,
`date_insert` datetime NOT NULL,
`action` char(12) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `userID` (`userID`),
KEY `tableID` (`tableName`,`tupleID`,`date_insert`)
) ENGINE=InnoDB
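For example, the extra query issued by the update function might look like this (all values are illustrative; the application supplies the id, user, and row information, since this schema has no auto_increment):
INSERT INTO track_table (id, userID, tableName, tupleID, date_insert, action)
VALUES (1, 42, 'customers', 1001, NOW(), 'update');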