Recently, phpMyAdmin has been showing a message at the top of the records in my database. The message says:
"The number of records for InnoDB tables is not correct.
phpMyAdmin uses a quick method to get the row count, and this method only returns an approximate count in the case of InnoDB tables. See $cfg['MaxExactCount'] for a way to modify those results, but this could have a serious impact on performance."
I would like to know: will it affect my database data if I ignore it?
Or should I clear my database and re-create the data?
Thanks.
I would like to know: will it affect my database data if I ignore it?
It won't affect you if you ignore it.
Or should I clear my database and re-create the data?
There's no need to re-create the data; doing so won't get rid of the message anyway.
All that message is telling you is that the numbers shown in the Rows column might not be exact. This isn't a problem with the data or the database; it's just a shortcut phpMyAdmin takes to speed up rendering that page, because counting all the rows in a large InnoDB table takes a long time.
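If you want to see the difference yourself, you can compare the estimate with an exact count. A minimal sketch, assuming a table named my_table (a placeholder; substitute one of your own):

-- Approximate row count, which is what phpMyAdmin reads for InnoDB tables
SHOW TABLE STATUS LIKE 'my_table';

-- Exact row count; this can be slow on large InnoDB tables
SELECT COUNT(*) FROM my_table;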
As the title says, what I did is not the normal way of altering a table to add one column.
We are using MySQL 5.5, with Toad for database operations. There is one table, appln_doc, with nearly 300K records storing applicants' documents, such as images, some of them very large.
Now we have to add a new column is_signature of type tinyint. We tried three different ways:
Using Toad's built-in provision: double-click the table, add a new column with its name, type, and size under the Columns tab of the window that opens, and click the Alter button.
Running an ALTER TABLE query in Toad itself.
Logging into MySQL via PuTTY and executing the same query.
All three attempts ran into the same problem: "Lock wait timeout exceeded; try restarting transaction" on a 'stuck' MySQL table. So we tried to kill the waiting process and alter the table again, but the result was the same.
We killed the process once more, restarted the MySQL server, and tried to add the column again; still the problem remained.
Finally, we exported all the table data to an Excel sheet and truncated the table. After that, adding the column succeeded. We then added an is_signature column to the exported sheet, with 0 as the default value for every row, and imported the data back into the table. That's why I said I didn't add the column the normal way.
Has anybody faced a situation like this and found a better solution? Can anybody tell me why this happens? Is it because of the bulk and size of the data stored in that table?
PS: The table appln_doc has a child table, appln_doc_details, with no data. Only the parent table has this problem when being altered.
At the end of the day, no matter what tools you use, it all boils down to one of two scenarios:
An ALTER TABLE statement
Create new table/copy data/delete old table/rename new table.
The first one is generally faster; the second one is generally more flexible (some things about a table cannot be altered in place).
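As a rough, untested sketch of the second scenario, using the table and column names from the question (note that foreign keys from the child table appln_doc_details would need to be handled as well):

CREATE TABLE appln_doc_new LIKE appln_doc;
ALTER TABLE appln_doc_new ADD COLUMN is_signature TINYINT NOT NULL DEFAULT 0;

-- Copy the existing rows, filling the new column with 0
INSERT INTO appln_doc_new SELECT *, 0 FROM appln_doc;

-- Swap the tables in one step, then drop the old one
RENAME TABLE appln_doc TO appln_doc_old, appln_doc_new TO appln_doc;
DROP TABLE appln_doc_old;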
Either way, handling this much data just takes a lot of time, and there's nothing you can really do about it.
On the bright side, almost all timeouts are configurable somewhere. I don't know how to configure this particular one, but I'm 99% sure that you can. Find out how and increase it to be big enough. For 300K records, I think that the operation will take around 10 minutes or less, but of course it depends. Set it to 10 hours. :)
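For what it's worth, the timeout in that error message is, I believe, MySQL's innodb_lock_wait_timeout (50 seconds by default), which can be changed at runtime:

-- Check the current value
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Raise it (in seconds) for the current session before running the ALTER
SET SESSION innodb_lock_wait_timeout = 36000;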
I have a table PRI in my database, and I just realized that I entered wrong data in the first 100 rows, so I want to delete them. I don't have anything to ORDER the rows by, so how should I go about the deletion?
If TOP is an actual keyword for you, you are on the wrong DBMS. Otherwise, you need to read up again on how to delete rows.
General tip:
If you mess up, use an external DB tool (SQL Developer, HeidiSQL, etc.) and connect to your database. Do your clean-up until you have a sane database state again.
Then continue coding. Not before. Never use code to undo your failures.
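A minimal sketch of what such a clean-up might look like once you can identify the bad rows by their contents (the column name some_column and its value are hypothetical):

-- First inspect the rows you believe are wrong
SELECT * FROM PRI WHERE some_column = 'bad value';

-- Only when the SELECT returns exactly the bad rows, delete with the same WHERE clause
DELETE FROM PRI WHERE some_column = 'bad value';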
I'm having another issue with a Microsoft Access database. Every so often, some records get corrupted: strange shapes, Chinese characters, and wrong data appear in them. I found a way to avoid losing the corrupted records by keeping a daily backup of that table. Still, it's a bit of an annoyance, especially when an update is run.
I've tried looking for different solutions to this problem, but none have really worked. It's a database that can be used by multiple users at the same time, and an older one that I've had to update a bit. I don't have any memo fields present in the table either.
If you are using an autonumber field as a primary key, that could increase the corruption risk: if the autonumber seed is reset, it begins duplicating existing values. This has since been fixed, but you may need to update your Jet Engine service pack.
If you are in a multi-user environment and have not split your database, you should try that. You can split the database using the database tools tab on the ribbon in the "Move Data" section. That can reduce corruption risk by better managing concurrent updates to the same record. See further discussion here.
Unfortunately, I can't tell you the cause without more information about your tables and relationships. If the corruption commonly follows your update query, I would start by looking through your update routine for errors.
I have a big table which stores data keyed by an ID from an external API. The ID is kept in an int field. When I developed the system I encountered no problems, because the IDs of records in the external API were always below 2147483647.
The system has been fetching data from the API for the last few months, and apparently the IDs crossed the 2147483647 mark. I now have a database with thousands of unusable records whose ID is 2147483647.
It is not possible to fetch this information from the database again (basically, the API only allows us to look up data from at most x days ago).
I am pretty sure that I am doomed. But might there be any backlog, or any other way, to retrieve the original input queries, or the numbers that MySQL truncated to fit into the int field?
As already discussed in the comments, there is no way to retrieve the information from the table. It was silently(?!!!) truncated to 32 bits.
First, call the API provider, explain your situation, and see if you can redo the queries. The best case is they say yes and you don't have to reconstruct things from logs. The worst case is they say no and you're back where you are now.
Then there are some logs I would check.
First is the MySQL General Query Log. If you had it turned on, it may contain the queries that were run. Another possibility is the Slow Query Log, which is more often enabled, though it only helps if the relevant queries happened to be slow.
In MySQL, data truncation is a warning by default. It's possible those warnings went into a log and included the original data. The MySQL Error Log is one possibility. On Windows, they may have gone into the Windows Event Log. On a Mac, they might be in a log visible in the Console. On Unix, they might have gone to syslog.
Then it's possible the API queries themselves are logged somewhere. If you used a proxy it might contain them in its log. The program fetching from the API and adding to the database may also have its own logs. It's a long shot.
As a last resort, try grepping all of /var/log and /var/local/log and anywhere else you might think could contain a log.
In the future, there are some things you can do to prevent this sort of thing from happening again. The most important is to turn on strict SQL mode. This turns warnings, such as data being truncated, into errors.
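A minimal sketch of enabling it (STRICT_ALL_TABLES applies to all storage engines; you may want to append it to your existing modes rather than replace them):

-- Check the current modes first
SELECT @@GLOBAL.sql_mode;

-- Turn truncation warnings into hard errors
SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';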
Set UNIQUE constraints on unique columns. Had your API ID column been declared UNIQUE, the duplicate values would have been detected immediately.
Use UNSIGNED BIGINT for numeric IDs. 2 billion is a number easily exceeded these days. It will mean 4 extra bytes per row or about 8 gigabytes extra to store 2 billion rows. Disk is cheap.
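Putting those two suggestions together, a sketch against a hypothetical table big_table whose API ID lives in api_id (note that adding the unique key will fail until the existing duplicated 2147483647 rows are removed):

-- Widen the ID column and reject duplicate API IDs from now on
ALTER TABLE big_table
    MODIFY api_id BIGINT UNSIGNED NOT NULL,
    ADD UNIQUE KEY uq_api_id (api_id);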
Consider turning on ANSI SQL mode. This will disable a lot of MySQL extensions and make your SQL more portable.
Finally, consider switching to PostgreSQL. Over the years MySQL has accumulated a lot of bad ideas, mish-mashes of functions, and bad default behaviors. You just got bit by one. PostgreSQL is far better designed, more powerful and flexible, and usually as fast or faster.
In Postgres, you would have gotten an error.
test=# CREATE TABLE foo ( id INTEGER );
CREATE TABLE
test=# INSERT INTO foo (id) VALUES (2147483648);
ERROR: integer out of range
If you have binary logging enabled, you still have backups of the binlogs, and your binlog_format is not set to ROW, then your original INSERT and/or UPDATE statements should be preserved there, and you could extract them and replay them into another server with a more appropriate table definition.
If you don't have the binlog enabled and/or you aren't archiving the binlogs in perpetuity... this is one of the reasons why you should consider doing it.
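If those conditions hold, you can peek at a binlog from within MySQL (the file name below is a placeholder); for actually extracting and replaying the statements, the stock mysqlbinlog command-line tool is the usual route:

-- List the first events recorded in a given binlog file
SHOW BINLOG EVENTS IN 'mysql-bin.000042' LIMIT 100;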
A really weird (to me, at least) problem has been occurring lately. In an application that accepts user-submitted data, the following happens at random:
Rows disappear from the database table where the user-submitted data is stored.
Please note that there is NO DELETE, DROP, TRUNCATE, or other destructive SQL statement issued against the table; only INSERT statements.
Could this be a bug in MySQL? I did some research on mysql.com (forums, bugs, etc.) and found two similar cases, but without a solid answer (just suggestions).
Some info you might find useful:
Storage Engine: InnoDB
User-submitted data is sanitized and checked for SQL injection attempts
I'd appreciate any suggestions or info.
Regards,
Here are three possibilities:
The data never got to the database in the first place. Something happened elsewhere and the data disappeared: maybe intermittent network issues, an overloaded server, or an application bug.
A database transaction was not committed and got rolled back. Maybe a bug in your application code, maybe some invalid data screwed things up, maybe a concurrency exception occurred, etc.
A bug in MySQL.
I'd look at 1 and 2 first.
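For possibility 2, here's a minimal sketch (with a hypothetical table name) of the pattern whose failure makes rows silently vanish:

START TRANSACTION;
INSERT INTO submissions (user_id, payload) VALUES (42, '...');
-- If the application errors out before reaching COMMIT, the INSERT above
-- is rolled back and the row never appears in the table.
COMMIT;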
A table on which you only ever insert (and presumably select) and never update or delete should be really stable. Are you absolutely certain you're protecting thoroughly against SQL injection attacks? Because a successful one could (of course) delete rows.
You mention the table uses InnoDB; it's well worth running whatever diagnostic tools there are for it on the table in question. Generically (this works for several engine types), use the CHECK TABLE statement; on a MyISAM table you could also run myisamchk.
Have you had issues with the underlying storage? It may be worth checking for those.
Activating the binlog and periodically monitoring for DELETE queries can help identify the culprit.
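A lighter-weight variant is the general query log, which records every statement the server receives and can be toggled at runtime (the file path is a placeholder):

-- Log every incoming statement so stray DELETEs show up; this adds overhead,
-- so turn it off again once the culprit is found
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';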
One more case to add to the above: there may also be separate client-side and server-side parts of the application, where changes initiated on the client side are processed on the server side by additional code logic.
For example, in our case a local admin panel updated an order with pay_date = NULL, and the PHP website processed this table to clean up overdue orders. As the PHP logic was developed by another programmer, it looked strange when updating orders resulted in records disappearing after some time.
The same applies to cron jobs operating on the MySQL database on a schedule.