Adding Fulltext index crashes MySQL Service - mysql

I'm trying to add a Fulltext index to a table. When I run the query,
ALTER TABLE thisTable ADD FULLTEXT(thisText);
I get the message
SQL Error (2013): Lost connection to MySQL server during query
and the mysql service does indeed stop. If I restart the service and try to add the index again, I get another error:
SQL Error (1813): Tablespace for table 'thisTable/#sql-ib21134' exists. Please DISCARD the tablespace before IMPORT.
The engine is InnoDB and I run MySQL 5.6.12, so FULLTEXT indexes should be supported. The column is a TEXT column.
I'd be very grateful if someone could point me in the right direction where the error is coming from.

The problem is related to the sort buffer size. It is a known bug in MySQL/MariaDB/Percona.
Even many months after I reported this bug, it was not fixed (we are using the latest MariaDB).

The second error happens because the table (or the fulltext index table) was partially modified (or created) when the server crashed. Drop and re-create your table from scratch.
Now, why did the server crash? Hard to tell for sure, but it is likely that some buffer reached capacity. The usual suspect is innodb_buffer_pool_size. Try to increase it progressively.
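A rough sketch of what that looks like (the value is illustrative, not a recommendation): in MySQL 5.6, innodb_buffer_pool_size cannot be changed at runtime, so it has to be set in the configuration file and the server restarted.

```ini
# my.cnf sketch - illustrative value, tune to your available RAM
[mysqld]
innodb_buffer_pool_size = 1G
```

After restarting, SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; confirms the value in effect.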

On Ubuntu Server 20.04,
despite the reasonable-sounding advice to increase innodb_buffer_pool_size, I identified that the OOM Killer killed MySQL when that buffer was too large.
The OOM Killer is a function of the Linux kernel intended to kill rogue processes that request more memory than the OS can allocate, so that the system can survive.
Excerpt from syslog:
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/mysql.service,task=mysqld,pid=98524,uid=114
Jan 10 06:55:40 vps-5520a8d5 kernel: [66118.230690] Out of memory:
Killed process 98524 (mysqld) total-vm:9221052kB, anon-rss:5461436kB,
file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:11232kB oom_score_adj:0
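To read such a log line, the memory figures can be pulled out programmatically. A minimal sketch (the field names are the standard ones the kernel prints, as in the excerpt above):

```python
import re

def parse_oom_kill(line):
    """Extract the memory figures (in kB) from a kernel OOM-kill log line."""
    fields = re.findall(r"(total-vm|anon-rss|file-rss|shmem-rss):(\d+)kB", line)
    return {name: int(kb) for name, kb in fields}

line = ("Killed process 98524 (mysqld) total-vm:9221052kB, anon-rss:5461436kB, "
        "file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:11232kB oom_score_adj:0")
stats = parse_oom_kill(line)
print(stats["total-vm"] // 1024)  # virtual size in MB -> 9004
print(stats["anon-rss"] // 1024)  # resident size in MB -> 5333
```

Here mysqld had roughly 9 GB of virtual memory and over 5 GB resident when the kernel killed it.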
So this means that, due to the huge innodb_buffer_pool_size setting, MySQL was trying to allocate chunks of memory that were too big, too aggressively.
Using that fact, logic dictates reducing the size of innodb_buffer_pool_size or restoring its default value. I removed it completely from /etc/mysql/mysql.conf.d/mysqld.cnf, then restarted the server with
systemctl restart mysql
and now adding fulltext on my 61M table is not crashing.

Bad luck pal...
InnoDB tables do not support FULLTEXT indexes in MySQL 5.5 and earlier; support was only added in MySQL 5.6.4.
Source - http://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html

Related

How to clear the database cache in mysql 8 innodb

I am attempting to run the same query multiple times on the same mysql 8 database and table.
I need to carry out experiments to determine if tweaking the query and/or the table itself improves performance. However, after the first attempt the response time is much faster, I assume because the data is cached.
What options do I have to clear the cache so the data is fetched from scratch?
It appears the answers that have been proposed before all relate to MySQL 5, not MySQL 8. Most of the commands now seem to be deprecated.
Clear MySQL query cache without restarting server
The question you link to is about the query cache, which is removed in MySQL 8.0 so there's no need to clear it anymore.
Your wording suggests you are asking about the buffer pool, which is not the same as the query cache. The buffer pool caches data and index pages, whereas the query cache (when it existed) cached results of queries.
There is no command to clear the buffer pool without restarting the MySQL Server. Pages remain cached in the buffer pool until they are evicted by other pages.
The buffer pool is in RAM so its contents are cleared if you restart the MySQL Server process. So if you want to start from scratch, you would need to restart that process (you don't need to reboot the whole OS, just restart the MySQL service).
The caveat is that in MySQL 8.0, the contents of the buffer pool are not entirely cleared when you restart. A percentage of the content of the buffer pool is saved during shutdown, and reloaded automatically on startup. This feature is enabled by default, but you can optionally disable it.
Read more about this:
https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html
https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_buffer_pool_dump_at_shutdown
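If you want restarts to always start with a cold buffer pool, both halves of the feature have to be turned off. A sketch (note that only the dump-at-shutdown variable is dynamic; the load-at-startup one can only be set at server startup):

```sql
-- Dynamic: stop saving the buffer pool contents at the next shutdown
SET GLOBAL innodb_buffer_pool_dump_at_shutdown = OFF;

-- Not dynamic: must go under [mysqld] in the configuration file instead:
--   innodb_buffer_pool_load_at_startup = OFF
```

With both disabled, restarting the MySQL service gives you an empty buffer pool for your timing experiments.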

MariaDB 10.3.22 innodb_status_output keeps turning on

MariaDB 10.3.22 innodb_status_output keeps turning on automatically
Per MySQL docs, https://dev.mysql.com/doc/refman/8.0/en/innodb-troubleshooting.html,
"InnoDB temporarily enables standard InnoDB Monitor output under the following conditions:
1. A long semaphore wait
2. InnoDB cannot find free blocks in the buffer pool
3. Over 67% of the buffer pool is occupied by lock heaps or the adaptive hash index"
MariaDB docs don't mention "InnoDB temporarily enables standard InnoDB Monitor", https://mariadb.com/kb/en/xtradb-innodb-monitors/
Running the commands below does turn off the monitors, but they come back on, probably due to the conditions mentioned above:
SET GLOBAL innodb_status_output=OFF;
SET GLOBAL innodb_status_output_locks=OFF;
I'd like to prevent MariaDB from temporarily turning on the InnoDB Monitor. I understand we could fix our db to prevent the conditions above, but we'd prefer that the InnoDB Monitor not be turned on automatically. Thanks for the help.
I didn't find an exact solution. I had to turn off all logging to stop the output of the standard InnoDB Monitor; I could not find a way to keep error logging without MariaDB automatically turning on the standard InnoDB Monitor output. Note that the standard InnoDB Monitor outputs millions of lines per day for a moderately active db. I'm still interested if anyone finds a solution. Thanks, K

mysql table crashes only when replica server running

I have a table that crashes often, but only seems to crash when the replica is running.
The table is MyISAM. The table has 2 mediumtext fields. The error I get when making a delete statement is this: "General error: 1194 Table 'outlook_emails' is marked as crashed and should be repaired".
I wonder if this has to do with the binary log. However, it doesn't seem to happen when the binary log is running but the replica is down.
Any idea what is happening or what I can do to solve it or investigate further?
"Table '...' is marked as crashed and should be repaired"
That error usually occurs when the MySQL server has been rudely restarted and the table is ENGINE=MyISAM.
The temporary fix is to run CHECK TABLE, which will then suggest that you run REPAIR TABLE. The tool myisamchk is a convenient way to do them, especially since there could be several tables so marked.
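For example (table name taken from the question; run REPAIR only if CHECK reports a problem):

```sql
CHECK TABLE outlook_emails;
REPAIR TABLE outlook_emails;
```

From the command line, with the server stopped, myisamchk --recover on the tables' .MYI files does the same job and can process many tables in one pass.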
The Engine InnoDB has radically different internals. It avoids the specific issue that MyISAM has, and does a much more thorough job of recovering from crashes.
Switching to InnoDB requires ALTERing your tables. Here is more discussion: http://mysql.rjweb.org/doc.php/myisam2innodb
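The conversion itself is one statement per table; a sketch, using the table from the question:

```sql
ALTER TABLE outlook_emails ENGINE=InnoDB;
```

Read the linked page first; a few MyISAM features (for example, FULLTEXT indexes before MySQL 5.6) do not carry over.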

how to perform hourly backup of mysql MyISAM tables

I have a WAMP local server with only MyISAM tables. Total size is 350 MB, increasing by about 10 MB per day. I would like an hourly backup solution, as the WAMP server recently crashed and I lost data. Should I create a trigger, schedule a task, or use some utility? It would be great if someone could guide me, so that the database backup is taken either on the local network or on a different partition of the server HDD. I have mirror RAID enabled on the server.
Since InnoDB does not lose data in a crash, and since you are talking about backing up a lot of data, and you are in the 'early stages', the only real answer to your question is to switch to InnoDB.
When the server crashes, MyISAM almost always leaves some indexes in need of REPAIR TABLE. This problem vanishes with InnoDB.
If the server crashes in the middle of a multi-row INSERT, UPDATE, or DELETE on a MyISAM table, some rows will be done and other rows won't be. And you have no way to know how much was done. With InnoDB, the entire query is "rolled back", making it as if it had not occurred. Very predictable.
See my tips on converting.
For better recovery from a crash where the server cannot be restarted at all, consider Replication.
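Whichever engine you end up on, a scheduled mysqldump covers the hourly requirement. A sketch for a Unix-like host (paths and credentials are illustrative; on Windows/WAMP, schedule the equivalent mysqldump command with Task Scheduler instead of cron):

```shell
# /etc/cron.d/mysql-hourly-backup - illustrative sketch
# Dump all databases at the top of every hour, cycling through 24 files
# on a different partition. --lock-all-tables gives a consistent snapshot
# of MyISAM tables; with InnoDB you would use --single-transaction instead.
0 * * * * root mysqldump --all-databases --lock-all-tables > /mnt/backup/all-$(date +\%H).sql
```

Because the filename only contains the hour, each dump overwrites the one from 24 hours earlier, giving a rolling day of backups.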

Sql databases get corrupt after every backup

I have a problem with my Unix server. This started a week ago. One day, after a backup (I used to keep 3 backup files), I visited a website on the server but it wouldn't work. I restarted the server and it seemed to be working fine, except for the mysql service. My attempts to restart it failed. Then I figured that was because the server was full, so I deleted one of the backups, cleaned up some space, and the mysql service restarted successfully. Then I found that tables in one of the databases (MyISAM tables) were corrupt. So I repaired them with the myisamchk command via SSH and all worked fine. However, the very next day they were corrupt again (despite mysql working fine), and this time there was no disk space problem on the server. I repaired them again. The next day the same thing happened; this time InnoDB tables that were part of another database were corrupt as well. I've fixed those too, so now all is working well, but I guess the same thing will happen after tonight's backup.
I can't identify the problem and I don't know what logs to look into to understand the problem. Can anyone please help me out? Thanks very much in advance.
No easy answer here. My immediate thought is that the database is still busy when the backups commence, possibly corrupting indexes, interfering with caches, etc. Turn on full logging and check for problems when the backup starts. Maybe you will find something.
Look for the my.cnf file. On my CentOS it is located at /etc/my.cnf. It will have a config setting for the location of the error log.
My strongest suspect is OOM kill by the kernel or some other issue that results from running the system out of memory. Try this:
Start top on the server and press M to sort by memory so the biggest memory user is at the top.
note the pid of mysqld
manually perform the backup as you observe the value of the RES column in the top output (resident memory size)
once the backup is over see if the pid of mysqld has changed
If the pid has changed (meaning restart took place), and you saw the memory footprint of mysqld take up something comparable to the total amount of system memory, then my suspicion is correct, and we need to lower some settings in my.cnf to make it use less memory, e.g key_buffer_size and innodb_buffer_pool_size.
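A sketch of what that could look like (the values are illustrative; tune them to the machine's RAM and workload):

```ini
# my.cnf sketch - illustrative values for a small server
[mysqld]
key_buffer_size         = 64M    # MyISAM index cache
innodb_buffer_pool_size = 256M   # InnoDB data and index cache
```

After lowering these, repeat the top/backup observation above to confirm the footprint stays within the available memory.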
EDIT - From the log you posted, there are additional issues, although it is not clear how they could be contributing to the table corruption. Your server appears to be running with --skip-innodb, and your backup script is not able to deal with the absence of the InnoDB storage engine: it prints exception error messages but nevertheless continues. It is also attempting a repair, which fails due to lack of system privileges (error 1 is "Operation not permitted"). It is possible that encountering those errors triggers some faulty logic in your backup script that leaves the tables corrupted.
At this point I would recommend disabling MySQL backup using the cPanel tool, and using mysqldump or some other solution (e.g. Xtrabackup (https://www.percona.com/doc/percona-xtrabackup/2.3/index.html)) from a cron job instead.
EDIT 2 - From the test results: the manual backup does not run the system out of memory and does not crash the server. The jury is still out on the automatic one.
Don't kill mysqld; shut it down gracefully.
Switch from MyISAM to InnoDB; the latter does not suffer from that error.