MATCH in MySQL 5.5.24 not working

I am writing the following query:
select * from student where match(name,middle name) against('amar');
I am getting the error: The used table type doesn't support FULLTEXT indexes.
I am using MySQL version 5.5.24 on a WAMP server.
How do I solve this issue?
Thank you

Before MySQL 5.6, full-text search is supported only by the MyISAM engine, not InnoDB; it seems you are using the InnoDB engine for this table.
It also seems that you did not create a FULLTEXT index on the table; otherwise you would have gotten an error at that point as well.
A FULLTEXT index is different from the default B-tree index.
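A minimal sketch of the usual way out on MySQL 5.5, using the table and column names from the question (I'm assuming the second column is actually named middle_name): either convert the table to MyISAM and add a FULLTEXT index covering the same columns used in MATCH(), or upgrade to 5.6+ where InnoDB gained FULLTEXT support.
SHOW CREATE TABLE student;  -- check which engine the table currently uses
ALTER TABLE student ENGINE=MyISAM;  -- option on 5.5: switch the table to MyISAM
ALTER TABLE student ADD FULLTEXT INDEX ft_name (name, middle_name);  -- index must cover the MATCH() columns
SELECT * FROM student WHERE MATCH(name, middle_name) AGAINST('amar');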

Related

How to migrate MySQL 5.7 to 8 without recreating a table with a dangling FULLTEXT index?

I'm trying to upgrade my RDS MySQL from version 5.7 to 8, but I'm getting errors in the precheck log telling me I have issues with the FULLTEXT index.
I tried to delete the FULLTEXT index but I'm still getting this error:
Table xxxx contains dangling FULLTEXT index. Kindly recreate the
table before upgrade.
It's a really big table and I can't recreate it easily.
Does anyone have a workaround I can use without needing to recreate this table?
Thanks
This is an error specific to AWS Aurora. It is not a MySQL error (I searched the MySQL source tree and there is no occurrence of the word "kindly").
This AWS documentation page describes the error: https://docs.aws.amazon.com/en_us/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.mysql80-upgrade-procedure.html
Their recommended fix:
First, we go back to the original cluster. Then we run OPTIMIZE TABLE tbl_name [, tbl_name] ... on the tables causing the following error:
Table `tbl_name` contains dangling FULLTEXT index. Kindly recreate the table before upgrade.
They also describe creating a new, empty table, and copying the old data to the new table. This is nearly the same operation, and takes just as long.
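A hedged sketch of that recommended step, with a placeholder table name (substitute each table flagged in the precheck log). For InnoDB, OPTIMIZE TABLE is executed as a full table rebuild, which is what clears out the leftover FULLTEXT structures:
OPTIMIZE TABLE my_large_table;  -- placeholder name; InnoDB rebuilds the table here
SHOW CREATE TABLE my_large_table;  -- afterwards, confirm no unwanted FULLTEXT index remains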

MySQL Error: Index column size too large. The maximum column size is 767 bytes

I've unsuccessfully been through the AWS forum and Stack Overflow trying to find a solution to the following error:
Index column size too large. The maximum column size is 767 bytes
I am running a WordPress website with 1.5M records in the postmeta table. I recently added an index to the postmeta table, and all was testing OK. However, I had an incident with my server today (a botnet scan drained my RDS credits), and restarted both my Lightsail instance and my RDS MySQL instance. After the restart I noticed that the site wasn't working properly, and upon further investigation found the postmeta table was returning the error Index column size too large. The maximum column size is 767 bytes.
I'm running on MySQL 8.0.20
The table is:
Engine = InnoDB
Charset = utf8mb4
Collation = utf8mb4_0900_ai_ci
Row Format = Compact
Many existing "solutions" talk about recreating the table; however, I need the data that's currently in the table.
Unfortunately this issue is present in my oldest AWS RDS snapshot, so backups don't appear to be an option.
Every time I try to run an ALTER or SELECT statement, I get the same error, Index column size too large. The maximum column size is 767 bytes.
I've tried:
Changing the ROW_FORMAT to DYNAMIC
Converting the charset and records to utf8
Changing the meta_value column from 255 to 191
Removing the custom index
Dumping the table
I can see that the default ROW_FORMAT is now "DYNAMIC"; however, this table is still "COMPACT" from when it was running on MySQL 5.7.
I've also tried updating the AWS RDS MySQL from 8.0.20 to 8.0.23, however the update fails because it reports the table is corrupt in PrePatchCompatibility.log.
Ref: https://dba.stackexchange.com/questions/234822/mysql-error-seems-unfixable-index-column-size-too-large#answer-283266
There are some other suggestions about modifying the environment and file system, and running "innodb_force_recovery".
https://dba.stackexchange.com/questions/116730/corrupted-innodb-table-mysqlcheck-and-mysqldump-crash-server
However being an RDS instance, I don't have access to this lower level of the instance.
I suspect this issue is caused by the column length and utf8mb4; however, my main priority is getting the data that's currently in the table.
I also understand that changing the ROW_FORMAT to DYNAMIC should fix this issue - however I'm getting the same error.
Ref: http://mysql.rjweb.org/doc.php/limits#767_limit_in_innodb_indexes
I have also tried the "RDS Export to S3" option with no luck.
Please help, I'm lost as to what else to try.
I had, and solved, the same problem. Here's the situation.
In legacy MySQL table formats, the maximum size of an index on a VARCHAR or BLOB column is 767 bytes (not characters). These wp_somethingmeta WordPress tables have key columns (like meta_key) with the VARCHAR(255) datatype. When utf8mb4 is the character set, each character can take up to four of those 767 bytes. That means indexes have to be defined as prefix indexes, e.g. meta_key(191).
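To make the byte arithmetic concrete, here's a sketch using the standard WordPress names wp_postmeta and meta_key: a full index on a utf8mb4 VARCHAR(255) column would need 255 * 4 = 1020 bytes, which exceeds the 767-byte limit, while a 191-character prefix needs at most 191 * 4 = 764 bytes and fits.
ALTER TABLE wp_postmeta ADD KEY meta_key (meta_key(191));  -- 764 bytes, within the 767-byte limit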
What makes a MySQL table into a legacy table?
MyISAM access method, or
An old version (5.5, early 5.6) of MySQL which only supports the older InnoDB Antelope on-disk file format and not the newer Barracuda file format, or
InnoDB and the ROW_FORMAT is COMPACT (or REDUNDANT).
So, to get away from prefix indexes on the varchar(255) columns, the table needs to be InnoDB and use the DYNAMIC (or COMPRESSED) ROW_FORMAT.
There's no need to rebuild a legacy table from scratch. You can convert it by saying
ALTER TABLE whatever ENGINE=InnoDB, ROW_FORMAT=DYNAMIC;
Then you stop having the prefix-key (191) issue.
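As a hedged follow-up (whatever is the placeholder table name from the statement above, and the index/column names below are illustrative): with InnoDB and ROW_FORMAT=DYNAMIC on current defaults (MySQL 5.7.7+ and 8.0), the index key prefix limit rises to 3072 bytes, so an index on a utf8mb4 VARCHAR(255) column no longer needs the (191) prefix.
SHOW CREATE TABLE whatever;  -- expect ENGINE=InnoDB ROW_FORMAT=DYNAMIC after the conversion
ALTER TABLE whatever DROP KEY meta_key, ADD KEY meta_key (meta_key);  -- optional: replace the prefix index with a full-column index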
Back up your database before you do this kind of thing. You knew that.
And, upgrade to a recent version of MySQL or MariaDB. Seriously. MySQL 5.6 long-term support ended on 1-February-2021, and the newer versions are better. (GoDaddy! I'm looking at you.)
WordPress' wp_postmeta table normally has an index on its meta_key column, which is varchar(255). That's too long.
First, drop the index that is too large.
SHOW CREATE TABLE wp_postmeta; -- to verify the name of the index
ALTER TABLE wp_postmeta DROP KEY meta_key;
I'm assuming the name of the index will be meta_key, which is the default name for an index on that column. But double-check the index name to be sure.
Then, add the index back, but make it a prefix index such that it's not larger than 767 bytes.
Since you're using utf8mb4, which allows multibyte characters up to 4 bytes per character, you can define the index with a prefix length of floor(767/4), or 191.
ALTER TABLE wp_postmeta ADD KEY (meta_key(191));
That index length will be permitted by the COMPACT row format, and it should be more than long enough to make the index just as useful as it was before. There's virtually no chance that you have a lot of meta key values that have the same leading characters and differ only after the 191st character.
Another alternative is to create a new table:
CREATE TABLE wp_postmeta_new (
meta_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- matches wp_postmeta so inserts keep working after the swap
post_id BIGINT UNSIGNED,
meta_key VARCHAR(255),
meta_value LONGTEXT,
KEY (post_id),
KEY (meta_key)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
Double-check that it created this table with the DYNAMIC row format.
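For instance, one quick way to check (table name from the example above):
SHOW TABLE STATUS LIKE 'wp_postmeta_new';  -- the Row_format column should read "Dynamic"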
Copy all the old data into it:
INSERT INTO wp_postmeta_new SELECT * from wp_postmeta;
Then swap the tables:
RENAME TABLE wp_postmeta TO wp_postmeta_old,
wp_postmeta_new TO wp_postmeta;
I'm assuming there are no new posts being created while you're doing this operation. You'd have to ensure no one is adding content to this WP instance so you don't miss any data in the process.

Error 2013: lost connection to MariaDB when adding a FULLTEXT index

I am using MariaDB 10.1.8, the latest stable version available, and I have dumped around 15 million records into the more_bar_codes table. When I tried to alter the table to add a FULLTEXT index on one of its columns, I got the error:
2013: Lost connection to MySQL server during query.
The syntax used is:
ALTER TABLE more_bar_codes ADD FULLTEXT INDEX dl_full_text_bar_code (bar_code);
Any idea how to fix this one?
No timeout setting worked for me. One workaround is to create a temporary table like the old table, add the FULLTEXT index to it, and then copy the data over. That resolved my issue.
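A sketch of that workaround using the table and index names from the question (the _new/_old names are made up here):
CREATE TABLE more_bar_codes_new LIKE more_bar_codes;
ALTER TABLE more_bar_codes_new ADD FULLTEXT INDEX dl_full_text_bar_code (bar_code);  -- cheap while the table is empty
INSERT INTO more_bar_codes_new SELECT * FROM more_bar_codes;  -- load the ~15M rows
RENAME TABLE more_bar_codes TO more_bar_codes_old, more_bar_codes_new TO more_bar_codes;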

How to fix "The used table type doesn't support FULLTEXT indexes" without losing data?

Today I tried to convert my WordPress blog's MySQL database table engines (only the WordPress system tables) from MyISAM to InnoDB. I can convert all the WordPress system tables except the _posts table. When I run this command,
ALTER TABLE table_prefix_here_posts ENGINE=InnoDB;
I get following error.
#1214 - The used table type doesn't support FULLTEXT indexes
I searched on Google and found that I can fix it by dropping the table. But in my situation, as far as I know, if I drop the _posts table, I lose all my blog posts. Therefore, is there any way to convert my _posts table to InnoDB without losing my posts (data)?
FULLTEXT indexes are supported in InnoDB tables only starting from MySQL 5.6, so try to update MySQL and after that alter the table's engine.
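For example (table name taken from the question), once the server is on MySQL 5.6 or later the same conversion should go through and keep the existing FULLTEXT index:
ALTER TABLE table_prefix_here_posts ENGINE=InnoDB;
SHOW CREATE TABLE table_prefix_here_posts;  -- verify ENGINE=InnoDB and that the FULLTEXT key is still there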

Changing Table Engine in MySQL

I am using MySQL and MySQL Workbench. I created 5 tables with the InnoDB engine. I checked their engine and it was InnoDB before I inserted data into them. I inserted data from 5 MyISAM tables and now my InnoDB tables are MyISAM. I can't change them. I used ALTER TABLE ... ENGINE=InnoDB but it doesn't work.
From the manual: http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
For example, to convert a table to be an InnoDB table, use this statement:
ALTER TABLE t1 ENGINE = InnoDB;
The outcome of attempting to change a table's storage engine is affected by whether the desired storage engine is available and the setting of the NO_ENGINE_SUBSTITUTION SQL mode, as described in Section 5.1.11, “Server SQL Modes”.
https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_no_engine_substitution
When you create the table, do you get any warning about the engine type being unavailable?
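A few hedged checks that can reveal silent engine substitution (t1 is the placeholder table name from the manual excerpt above):
SHOW ENGINES;  -- is InnoDB listed with Support = YES or DEFAULT?
SELECT @@sql_mode;  -- does it include NO_ENGINE_SUBSTITUTION?
ALTER TABLE t1 ENGINE = InnoDB;
SHOW WARNINGS;  -- an engine-substitution warning would show up here
SHOW CREATE TABLE t1;  -- confirm the output now says ENGINE=InnoDB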
It's not obvious. If you edit the table and then select the Columns tab, the engine widget is not immediately visible. On the upper right of the edit window you will see two downward-pointing chevrons. Click them once and additional widgets will appear. In the upper right-hand corner there will now be widgets for the schema and engine.