Error Importing SQL To MySQL Cluster. Missing Primary Key - mysql

I hope someone can help me with this. I'm moving a MySQL database (WordPress) to a MySQL Cluster running Master-Master replication. When I try to import the SQL into the database I get the following error message:
Plugin group_replication reported: 'Table wpmk_actionscheduler_actions does not have any PRIMARY KEY. This is not compatible with Group Replication.'
OK, so I know what this means, or I thought I did. When I inspect this table in phpMyAdmin, I can see it does have a primary key. I ran the following command to find the tables without a primary key:
SELECT
    tab.table_schema AS database_name,
    tab.table_name AS table_name,
    tab.table_rows AS table_rows
FROM information_schema.tables tab
LEFT JOIN information_schema.table_constraints tco
    ON (tab.table_schema = tco.table_schema
        AND tab.table_name = tco.table_name
        AND tco.constraint_type = 'PRIMARY KEY')
WHERE tab.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND tco.constraint_type IS NULL
  AND tab.table_type = 'BASE TABLE';
But this doesn't return any tables (because they all have keys). And yet, my damn cluster is saying a key is missing when importing the SQL. I'm totally stuck.
I attached a screenshot of the table before I exported it. What am I missing here?

I fixed this myself by downloading MySQL Workbench, opening the .sql file, and editing the CREATE statements to include an ID column set as the primary key. Then I removed the ALTER statements that were supposed to add the primary keys; I guess those came from developer updates. That ordering seems to have been the real problem: the dump created the tables without keys, inserted the rows, and only added the primary keys at the end, so Group Replication rejected the inserts into the still-keyless tables.
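For anyone hitting the same wall, a minimal sketch of the kind of edit involved (the column names here follow the stock Action Scheduler schema and may differ in your dump): declare the key inside the CREATE statement itself, so no insert ever touches a keyless table.
CREATE TABLE `wpmk_actionscheduler_actions` (
    `action_id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    `hook` VARCHAR(191) NOT NULL,
    `status` VARCHAR(20) NOT NULL,
    PRIMARY KEY (`action_id`)  -- declared up front instead of via a later ALTER
) ENGINE=InnoDB;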

Related

MySQL idempotent version of add column failing

MySQL here. I'm trying to add a column to a table in an idempotent fashion. In reality it will be a SQL script that gets run as part of an application data migration, so it will be run over and over, and I want to make sure we only add the column if it does not already exist.
My best attempt so far:
IF NOT EXISTS (SELECT 1
               FROM information_schema.COLUMNS
               WHERE TABLE_SCHEMA = 'myapp'
                 AND TABLE_NAME = 'mytable'
                 AND COLUMN_NAME = 'fizzbuzz')
BEGIN
    alter table myapp.mytable
        add column fizzbuzz tinyint(1) not null default false;
END
yields a vague syntax error:
"IF" is not valid at this position, expecting EOF, ALTER, ANALYZE, BEGIN, BINLOG, CACHE, ...
Can anyone spot where my syntax is going awry?
MySQL only allows IF ... THEN flow control inside stored programs (procedures, functions, triggers, events); that IF ... BEGIN ... END block is SQL Server syntax, which is why the parser rejects it. If you are actually on MariaDB (10.0.2+), the check can be done inline:
ALTER TABLE myapp.mytable
    ADD COLUMN IF NOT EXISTS fizzbuzz TINYINT(1) NOT NULL DEFAULT FALSE;
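On stock MySQL, which lacks that clause, a sketch of an equivalent that stays idempotent without a stored procedure (names reused from the question): check information_schema, pick the DDL string with the IF() function, and run it through a prepared statement.
SET @ddl = (
    SELECT IF(COUNT(*) = 0,
              'ALTER TABLE myapp.mytable ADD COLUMN fizzbuzz TINYINT(1) NOT NULL DEFAULT FALSE',
              'SELECT 1')  -- harmless no-op when the column already exists
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'myapp'
      AND TABLE_NAME = 'mytable'
      AND COLUMN_NAME = 'fizzbuzz'
);
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;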

MYSQL - set default as NULL to all columns, where default is not set

I have about 12 databases, each with 50 tables and most of them with 30+ columns. The DB was running with strict mode OFF, but now we had to migrate to the ClearDB service, which has strict mode ON by default.
In all the tables with NOT NULL columns, the inserts have stopped working because the default values are not being passed; with strict mode OFF, if a value is not provided, MySQL assumes the implicit default for the column's datatype.
Is there a script I can use to read the metadata for all columns of all tables and generate a script that alters such columns to default to NULL?
You should consider using the information_schema tables to generate DDL statements to alter the tables. This kind of query will get you the list of offending columns (note that IS_NULLABLE is a 'YES'/'NO' string, and a missing default shows up as a NULL COLUMN_DEFAULT):
SELECT CONCAT_WS('.', TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME) col
FROM information_schema.COLUMNS
WHERE IS_NULLABLE = 'NO'
  AND COLUMN_DEFAULT IS NULL
  AND TABLE_SCHEMA IN ('db1', 'db2', 'db3')
You can do similar things to generate ALTER statements to change the tables. But beware: MySQL rewrites the entire table for many ALTER operations, so this can take a long time.
DO NOT attempt to UPDATE the information_schema directly!
You could try changing the sql_mode setting when you connect to the SaaS service, so your software keeps working compatibly.
This is a large project and probably important business for ClearDB. Why not ask them for help in changing the strict-mode setting?
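If you go the connection-setting route, the override is a one-liner per session; a minimal sketch (an empty mode list disables strict mode entirely, which may be broader than you want):
-- disable strict mode for the current connection only
SET SESSION sql_mode = '';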
This is what I came up with, based on @Ollie-Jones's script:
https://gist.github.com/brijrajsingh/efd3c273440dfebcb99a62119af2ecd5
SELECT CONCAT_WS('.', TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME) col,
       CONCAT('alter table ', TABLE_NAME, ' MODIFY COLUMN ', COLUMN_NAME, ' ',
              DATA_TYPE, '(', CHARACTER_MAXIMUM_LENGTH, ') NULL DEFAULT NULL') AS script_col
FROM information_schema.COLUMNS
WHERE IS_NULLABLE = 'NO'
  AND COLUMN_DEFAULT IS NULL
  AND CHARACTER_MAXIMUM_LENGTH IS NOT NULL
  AND TABLE_SCHEMA = 'immh'
UNION
SELECT CONCAT_WS('.', TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME) col,
       CONCAT('alter table ', TABLE_NAME, ' MODIFY COLUMN ', COLUMN_NAME, ' ',
              DATA_TYPE, ' NULL DEFAULT NULL') AS script_col
FROM information_schema.COLUMNS
WHERE IS_NULLABLE = 'NO'
  AND COLUMN_DEFAULT IS NULL
  AND CHARACTER_MAXIMUM_LENGTH IS NULL
  AND TABLE_SCHEMA = 'immh'

Convert MyISAM to InnoDB database

I'm trying to convert a whole database from MyISAM to InnoDB with this statement:
use information_schema;
SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;')
FROM information_schema.tables
WHERE engine = "MyISAM" AND table_type = "BASE TABLE" AND table_schema = "database";
and while the output lists an ALTER for every table, for example:
ALTER TABLE database.action ENGINE=InnoDB;
when I check the table engines they're still MyISAM. The weird thing is that if I run the command separately
ALTER TABLE action ENGINE='InnoDB';
it works fine for that table.
Any tips on how to do the conversion for the whole database?
The SELECT statement you are running only generates strings; the strings it generates are not executed as SQL statements. You need to take the resultset from that query and then actually execute those statements as a separate step.
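One way to do that execution step in pure SQL, sketched as a stored procedure (the procedure name is made up; adapt the schema name to yours): cursor over information_schema and run each generated ALTER through a prepared statement.
DELIMITER //
CREATE PROCEDURE convert_to_innodb(IN db_name VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE tbl VARCHAR(64);
    DECLARE cur CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE engine = 'MyISAM' AND table_type = 'BASE TABLE'
          AND table_schema = db_name;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    conv_loop: LOOP
        FETCH cur INTO tbl;
        IF done THEN LEAVE conv_loop; END IF;
        -- build and execute one ALTER at a time; PREPARE cannot batch them
        SET @ddl = CONCAT('ALTER TABLE `', db_name, '`.`', tbl, '` ENGINE=InnoDB');
        PREPARE stmt FROM @ddl;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;
CALL convert_to_innodb('database');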

Query to find foreign keys

I have a database where I need to drop some foreign keys, but I don't know beforehand whether the foreign keys still exist.
I've found some stored procedures (http://forums.mysql.com/read.php?97,218825,247526) that do the trick, but I don't want to create a stored procedure for this.
I've tried to use the query inside the stored procedure, but I get an error using "IF EXISTS (SELECT NULL FROM etc.. etc...
Can I only use IF EXISTS in stored procedures?
right now, the only thing I can run is
SELECT * FROM information_schema.TABLE_CONSTRAINTS
WHERE information_schema.TABLE_CONSTRAINTS.CONSTRAINT_TYPE = 'FOREIGN KEY'
AND information_schema.TABLE_CONSTRAINTS.TABLE_SCHEMA = 'myschema'
AND information_schema.TABLE_CONSTRAINTS.TABLE_NAME = 'mytable';
and I've tried this too
IF EXISTS (SELECT NULL FROM information_schema.TABLE_CONSTRAINTS WHERE CONSTRAINT_SCHEMA = DATABASE() AND CONSTRAINT_NAME = parm_key_name) THEN
(...) do something (...)
END IF;
but I get a "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF' at line 1" error.
I've looked for examples in forums with simple queries and I can't make sense of why this isn't working.
NOTE: Edit to correct broken link
All the information about primary and foreign keys lives in information_schema:
SELECT * FROM information_schema.TABLE_CONSTRAINTS T;
You don't need to be root for this; information_schema shows each account the objects it has privileges on. Using this table you can find the table, the database, and whether it has a foreign key.
As to your question: no, IF ... THEN blocks are flow-control statements and only work inside stored programs, so for ad-hoc queries you have to filter with WHERE, as above.
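If you want the conditional drop itself without a stored procedure, a sketch along the same lines (the constraint and table names are placeholders): choose the DDL string with the IF() function and feed it to a prepared statement, so it degrades to a no-op when the key is already gone.
SET @ddl = (
    SELECT IF(COUNT(*) > 0,
              'ALTER TABLE myschema.mytable DROP FOREIGN KEY fk_mytable_parent',
              'SELECT 1')  -- nothing to drop
    FROM information_schema.TABLE_CONSTRAINTS
    WHERE CONSTRAINT_SCHEMA = 'myschema'
      AND TABLE_NAME = 'mytable'
      AND CONSTRAINT_NAME = 'fk_mytable_parent'
      AND CONSTRAINT_TYPE = 'FOREIGN KEY'
);
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;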
Why don't you use INFORMATION_SCHEMA for this?
SELECT *
FROM information_schema.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
You can also query information_schema.key_column_usage, which lists each foreign key column together with the column it references:
select
concat(table_name, '.', column_name) as 'foreign key',
concat(referenced_table_name, '.', referenced_column_name) as 'references'
from
information_schema.key_column_usage
where
referenced_table_name is not null;
See also: list-foreign-keys-in-mysql

Is there a way to ignore columns that don't exist on INSERT?

I'm using a MySQL GUI to migrate some sites to a new version of a CMS by selecting certain tables and running the INSERT statement generated from a backup dump into an empty table (the new schema). There are a few columns in the old tables that don't exist in the new one, so the script stops with an error like this:
Script line: 1 Unknown column 'user_id' in 'field list'
Cherry-picking the desired columns to export, or editing the dump file would be too tedious and time consuming. To work around this I'm creating the unused columns as the errors are generated, importing the data by running the query, then dropping the unused columns when I'm done with that table. I've looked at INSERT IGNORE, but this seems to be for ignoring duplicate keys (not what I'm looking for).
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
To clarify, I'm working with a bunch of backup files and importing the data to a local database for testing before moving it to the live server. Example of the kind of solution I'm hoping for:
-- Don't try to insert data from columns that don't exist in "new_table"
INSERT INTO `new_table` {IGNORE UNKNOWN COLUMNS} (`id`, `col1`, `col2`) VALUES
(1, '', ''),
(2, '', '');
If something like this simply doesn't exist, I'm happy to accept that as an answer and continue to use my current workaround.
Your current technique seems practical enough; just one small change.
Rather than waiting for the error and then creating columns one by one, you can export the schemas, diff them, and find all the missing columns in all the tables at once.
That way it is less work.
Your GUI may be able to export just the schema; otherwise, the following mysqldump switch dumps structure only, which is useful for finding all the missing columns.
mysqldump --no-data -uuser -ppassword --databases dbname1 > dbdump1.sql
mysqldump --no-data -uuser -ppassword --databases dbname2 > dbdump2.sql
Diffing dbdump1.sql and dbdump2.sql will show you all the differences between the two schemas.
You can write a stored function like
sf_getcolumns(table_name varchar(100))
that returns a string containing the field list, like this:
'field_1,field_2,field_3,...'
Then create a stored procedure
sp_migrite (IN src_db varchar(50), IN target_db varchar(50))
that runs through the tables, gets the field list for each one, builds a string like
SET cmd = CONCAT('insert into ', target_db, '.', table_name, ' (', fields_list, ') SELECT ', fields_list, ' FROM ', src_db, '.', table_name);
and then executes that string for each table. (The last answer on this page fleshes out the same idea with GROUP_CONCAT and a prepared statement.)
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
No, there is no "painless" way to do so.
Instead, you must explicitly handle those columns which do not exist in the final tables. For example, you can remove them from the input stream, drop them after the fact, or play dirty tricks (ENGINE=BLACKHOLE plus triggers that INSERT only what you want into the true target schema; a sketch follows below), whatever works.
Now, this doesn't necessarily need to be manual -- there are tools (as Devart noted) and ways to query the db catalog to determine column names. However, it's not as easy as simply annotating your INSERT statements.
Perhaps the CMS vendor can supply a reasonable migration script?
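To make that BLACKHOLE idea concrete, here is a minimal sketch with made-up schemas, where the dump carries an extra user_id column that the target lacks. BEFORE INSERT triggers fire before the row reaches the storage engine, so they run even though BLACKHOLE discards the data:
-- decoy table matching the OLD schema; rows inserted here are thrown away
CREATE TABLE old_table (
    id INT NOT NULL,
    col1 VARCHAR(255),
    col2 VARCHAR(255),
    user_id INT  -- exists only in the old schema
) ENGINE=BLACKHOLE;
-- forward only the surviving columns into the real target table
CREATE TRIGGER forward_rows BEFORE INSERT ON old_table
FOR EACH ROW
    INSERT INTO new_table (id, col1, col2)
    VALUES (NEW.id, NEW.col1, NEW.col2);
Point the dump's INSERT statements at old_table, and only the wanted columns land in new_table.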
dbForge Studio for MySQL will give you an opportunity to compare and synchronize data between two databases.
By default, data comparison is performed only for objects with the same names; you can use automatic mapping or map database objects manually. dbForge Studio allows you to customize the mapping of tables and columns, so you can compare data between objects with different names, owners, and structures. You may also map columns with different types, though this may result in data truncation, rounding, and errors during synchronization for certain types.
I carefully read all these posts because I have the same challenge. Please review my solution. I did it in C#, but you can do it in any language:
Check the INSERT statement for column names. If any are missing from your actual table, ADD them as TEXT columns, since TEXT can absorb anything.
When you've finished inserting into that table, drop the added columns.
Done.
I found your question interesting.
I knew that there was a way to select the column names from a table in MySQL: it's SHOW COLUMNS FROM tablename. What I'd forgotten was that all of the MySQL table metadata is held in a special database called information_schema.
This is the logical solution, but it doesn't work:
mysql> insert into NEW_TABLE (select column_name from information_schema.columns where table_name='NEW_TABLE') values ...
I'll keep looking, though. If it's possible to grab a comma-delimited value from the select column_name query, you might be in business.
Edit:
You can use the SELECT ... INTO OUTFILE command to generate a one-line CSV, like the following:
mysql> select column_name from information_schema.columns where table_name='NEW_TABLE' into outfile 'my_file' fields terminated by '' lines terminated by ', '
I don't know how to get this to output to the MySQL CLI stdout, though.
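For what it's worth, a sketch of a simpler route to the same comma-delimited list, printed straight to the CLI instead of a file (the same GROUP_CONCAT trick the answer below builds on):
SELECT GROUP_CONCAT(column_name ORDER BY ordinal_position)
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 'NEW_TABLE';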
If your tables and databases are on the same server and you know that the common columns are compatible, you can easily do this with GROUP_CONCAT, INFORMATION_SCHEMA.COLUMNS, and a prepared statement.
This example creates two similar tables
and inserts the data from the common columns in
table_a into table_b. Since the two tables have
a common primary key column, it is excluded. (Note: the database [table_schema] I am using is called 'test')
create table if not exists `table_a` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
`d` varchar(2) default null,
PRIMARY KEY (`id`)
);
create table if not exists `table_b` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
PRIMARY KEY (`id`)
);
insert into table_a
(a, b, c, d)
values
('a1', 'b1', 'c1', 'd1'),
('a2', 'b2', 'c2', 'd2'),
('a3', 'b3', 'c3', 'd3');
-- This creates a comma delimited list of common
-- columns in table_a and table_b. It also excludes
-- any columns you don't want from table_a
set @common_columns = (
select
group_concat(column_name)
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_a'
and
column_name not in ('id')
and
column_name in (
select
column_name
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_b'
)
);
set @stmt = concat(
'insert into table_b (', @common_columns, ') ',
'select ', @common_columns, ' from table_a;'
);
prepare stmt from @stmt;
execute stmt;
deallocate prepare stmt;
select * from table_b;
The prepared statement ends up looking like this:
insert into table_b (a, b, c)
select a, b, c from table_a;
Don't forget to change the values for table_name and table_schema to match your tables and database (table_schema). If it is useful you could create a stored procedure to do this task.