What MySQL command makes all data inaccessible? - mysql

My database is periodically being "deleted" by an automated command from the server (because the table is too big). What happens is that all data in a certain table becomes inaccessible, e.g. via SELECT. But if I do a "repair" on the table, all the data comes back. I would like to stop this nonsense, but I can't find the command that does this. Any help?
Edit: I should note that the DB is on an external machine that I do not have access to.
I have now tried to do a SELECT while the DB was in this curious state. The table says it has 0 entries but takes up 2.5 GB of storage space. When I selected everything, I got one tuple and no errors.
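For reference, the "repair" I'm doing looks like this (the table name is a placeholder):
CHECK TABLE mytable;    -- reports the problem in this state
REPAIR TABLE mytable;   -- afterwards, SELECT returns the data again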

It's likely your DB is becoming corrupt somehow. There's no command that does that (I hope).

Do yourself a favor and alter each and every one of your tables so they use the InnoDB engine instead of MyISAM. It'll still be MySQL, but it'll be a lot less prone to data corruption.
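A minimal sketch of the conversion, run once per table (the table name is a placeholder):
ALTER TABLE mytable ENGINE=InnoDB;   -- rewrites the whole table, so expect it to take a while on big tables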
And if changing DB altogether is an option, look into using PostgreSQL instead.

Related

Not sure whether ANALYZE/OPTIMIZE TABLE in MySQL (InnoDB) was processed or not

Steps:
1. Tried to rebuild the index using the following options in MySQL (InnoDB) Workbench.
2. On clicking "Analyze Table" / "Optimize Table", we immediately got an OK response within seconds, with no background process.
3. Not sure if the indexes have been rebuilt accordingly.
How can we tell whether the index rebuild process has completed, and how can we validate it?
Compare the values from SHOW TABLE STATUS before and after the OPTIMIZE. (ANALYZE does not rebuild the indexes.) If the indexes were rebuilt, the values may change.
If the table is only a few thousand rows, the rebuild will be so fast that you will have trouble noticing it.
You can get finer control by using the mysql command-line tool instead of Workbench.
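A minimal sketch from the mysql client (the table name is a placeholder):
SHOW TABLE STATUS LIKE 'mytable';   -- note Data_length, Index_length, Update_time
OPTIMIZE TABLE mytable;
SHOW TABLE STATUS LIKE 'mytable';   -- compare against the earlier values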
OPTIMIZE is essentially never needed for InnoDB tables.

Deleting Specified Number of Top Rows from Database (Java SQL)

I have a table PRI in a database, and I just realized that I have entered wrong data in the first 100 rows, so I want to delete them. Now I don't have anything to ORDER the rows by, so how should I go about the deletion process?
If TOP is an actual keyword, you are on the wrong DBMS (TOP is SQL Server syntax, not MySQL). Otherwise, you need to read up again on how to delete rows.
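In MySQL, a minimal sketch, assuming a hypothetical auto-increment column id that defines "first":
DELETE FROM PRI
ORDER BY id   -- id is hypothetical; without some ordering column, "the first 100 rows" is undefined
LIMIT 100;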
General tip:
If you mess up, use an external DB tool (SQL Developer, HeidiSQL, etc.) and connect to your database. Do your clean-up until you have a sane database state again.
Then continue coding. Not before. Never use code to undo your failures.

MySQL MyISAM insert SLOWER than InnoDB

I have a MySQL server with a production database and a staging database. I am using InnoDB for the production tables, but I don't need its features for staging, which just moves data around. So I decided to switch those tables to MyISAM, expecting an increase in insert performance.
The results were horrible. It looked more like inserting into an InnoDB table with autocommit still enabled. If I keep the table InnoDB and turn off autocommit during the insert, I can get several thousand inserts per second. As soon as I change the table to MyISAM, I'm getting maybe a couple dozen inserts per second.
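For comparison, the InnoDB pattern referred to above is roughly this (a minimal sketch; the table and values are placeholders):
SET autocommit = 0;
INSERT INTO staging_table VALUES (1, 'a');
INSERT INTO staging_table VALUES (2, 'b');
-- ... thousands more inserts ...
COMMIT;             -- one flush for the whole batch instead of one per statement
SET autocommit = 1;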
I thought maybe it was because I'm getting the data via our legacy backend using SSIS, but that doesn't appear to be the issue. Using SSIS and going from our production DB to the staging DB (MySQL to MySQL), I still see the same results... InnoDB (no autocommit) far outperforms MyISAM.
This makes no sense to me. If nothing else, in my experience, MyISAM should at least be comparable, hopefully even better.
Is there anything obvious I am overlooking? I haven't included specifics in hopes that it is something in general I am missing.
EDIT:
This appears to be related to SSIS and the ODBC Destination component. I'm using an ODBC Source that has my SELECT statement, and the output goes to the ODBC Destination, which is a table on the same server but in a different DB. Since the DBs are on the same server, I ran an INSERT in SQLyog using the same SELECT query as the ODBC Source, and it finished in a couple of seconds. I will see if I can find a solution.

Converting a big MyISAM table to InnoDB

I'm trying to convert a 10-million-row MySQL MyISAM table to InnoDB.
I tried ALTER TABLE, but that made my server get stuck, so I killed the MySQL process manually. What is the recommended way to do this?
Options I've thought about:
1. Making a new table which is InnoDB and inserting parts of the data each time.
2. Dumping the table to a text file and then doing LOAD DATA INFILE
3. Trying again and just keeping the server non-responsive until it finishes (I tried for 2 hours, and the server is a production server, so I prefer to keep it running)
4. Duplicating the table, removing its indexes, then converting, and then adding the indexes back
Changing the engine of a table requires a rewrite of the table, and that's why the table is unavailable for so long. Removing indexes, then converting, and adding the indexes back may speed up the initial conversion, but adding an index creates a read lock on the table, so the effect in the end will be the same.
Making a new table and transferring the data is the way to go. Usually this is done in two parts: first copy the records, then replay any changes that were made while copying. If you can afford to disable inserts/updates on the table while leaving reads enabled, this is not a problem. If not, there are several possible solutions. One of them is to use Facebook's online schema change tool. Another option is to set the application to write to both tables while migrating the records, then switch over to the new table only. This depends on the application code, and the crucial part is handling unique keys / duplicates, since in the old table you may update a record while in the new one you need to insert it. (Here the transaction isolation level may also play a crucial role; lower it as much as you can.)
The "classic" way is to use replication, which, as far as I know, is also done in two parts: you start by recording the master position, then import a dump of the database into the second server, then start it as a slave to catch up with the changes.
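A minimal sketch of the copy-then-swap approach (all names are placeholders, and it assumes an auto-increment PK id for copying in ranges):
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ENGINE=InnoDB;
INSERT INTO mytable_new SELECT * FROM mytable WHERE id BETWEEN 1 AND 1000000;
-- ... repeat for the remaining id ranges, replay changes made during the copy, then:
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;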
Have you tried ordering your data by the PK first? E.g.:
ALTER TABLE tablename ORDER BY PK_column;
This should speed up the conversion.

MySQL ALTER TABLE on Production Databases - Any Issues?

I have about 100 databases (all the same structure, just on different servers) with approximately a dozen tables each. Most tables are small (let's say 100 MB or less). There are occasional edge cases where a table may be large (let's say 4 GB+).
I need to run a series of ALTER TABLE commands on just about every table in each database. Mainly adding some columns to the structure, but also a few changes like changing a column from VARCHAR to TINYTEXT (or vice versa). Also adding a few new indexes (but indexing the new columns, not existing ones, so I assume that isn't a big deal).
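For reference, the kinds of statements in question look roughly like this (all table, column, and index names here are placeholders):
ALTER TABLE mytable ADD COLUMN notes TINYTEXT;
ALTER TABLE mytable MODIFY COLUMN title TINYTEXT;    -- was VARCHAR(255)
ALTER TABLE mytable ADD INDEX idx_notes (notes(32)); -- TEXT columns need an index prefix length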
I am wondering how safe this is to do, and whether there are any best practices for this process.
First, is there any chance I may corrupt or delete data in the tables? I suspect not, but I need to be certain.
Second, I presume that for the larger tables (4 GB+), this may be a several-minutes to several-hours process?
I am interested in learning anything and everything I should know about performing ALTER TABLE commands on a production database.
If it's of any value knowing, I am planning on issuing the commands via phpMyAdmin for the most part.
Thanks -
First off, before applying any changes, make backups. There are two ways you can do it: mysqldump everything, or copy your MySQL data folder.
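A minimal sketch of the mysqldump route (the user and file name are placeholders):
mysqldump -u root -p --all-databases > full_backup.sql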
Secondly, you may want to use mysql from the command line. phpMyAdmin will probably time out: most PHP servers have a timeout of less than 10 minutes. Or you might accidentally close the browser.
Here is my suggestion:
1. Fail over the apps first (make sure there are no connections to any of the DBs).
2. Create the indexes using CREATE INDEX statements; don't use ALTER TABLE ... ADD INDEX statements (see the sketch below).
3. Do all of this from a script: keep the statements in a file and run it with SOURCE.
It looks like your table sizes are very small, so this won't cause any headaches.
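A minimal sketch of points 2 and 3 (names and the file path are placeholders):
CREATE INDEX idx_newcol ON mytable (newcol);
-- save all such statements to a file, then from the mysql client:
SOURCE /path/to/index_changes.sql;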