I have a table PRI in my database and I just realized that I entered wrong data in the first 100 rows, so I want to delete them. I don't have anything to ORDER the rows by, so how should I go about the deletion?
If TOP is an actual keyword, you are on the wrong DBMS. Otherwise, you need to read up again on how to delete rows.
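For what it's worth, on MySQL a DELETE statement accepts a LIMIT clause even without an ORDER BY, though which 100 rows get removed is then undefined:

delete from PRI limit 100;  -- removes some 100 rows; without ORDER BY there is no guarantee which ones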
General tip:
If you mess up, use an external DB tool (SQLDeveloper, HeidiSQL, etc.) and connect to your database. Do your cleanup until you have a sane database state again.
Then continue coding. Not before. Never use code to undo your failures.
Related
We are running a service where we have to set up a new database for each new site. The databases are exactly the same, so we can simply restore from a backup dump or clone from a sample database (which exists only for cloning purposes; no transactions are run there, so there is no worry about corrupting data) on the same server. The database itself contains around 100 tables with some data, taking around 1-2 minutes to import, which is too slow.
I'm trying to find a way to do it as fast as possible. The first thought that came to mind was to copy the files within the sample database's data_dir, but it seems I also need to somehow edit the table lists, or MySQL won't be able to read my new database's tables even though it still shows them there.
You're duplicating the database the wrong way; it will be much faster if you do it properly.
Here is how you duplicate a database:
create database new_database;
create table new_database.table_one select * from source_database.table_one;
create table new_database.table_two select * from source_database.table_two;
create table new_database.table_three select * from source_database.table_three;
...
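Note that CREATE TABLE ... SELECT copies the data but not indexes, keys, or auto-increment settings. If you need those preserved, a two-step variant (sketched here with the same table names) should work:

create table new_database.table_one like source_database.table_one;  -- copies the full structure
insert into new_database.table_one select * from source_database.table_one;  -- then copies the data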
I just did a performance test: this takes 81 seconds to duplicate 750 MB of data across 7 million table rows. Presumably your database is smaller than that?
I don't think you are going to find anything faster. One thing you could do is keep a queue of duplicate databases on standby, ready to be picked up and used at any time. Then you don't need to create a new database at all; you just rename an existing database from the queue of available ones, and have a cron job running to make sure the queue never runs empty.
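MySQL has no RENAME DATABASE statement, but you can get the same effect by moving each table with RENAME TABLE, which is a fast metadata-only operation. A minimal sketch, assuming a prepared standby database with the hypothetical name standby_01:

create database new_site_db;
rename table standby_01.table_one to new_site_db.table_one,
             standby_01.table_two to new_site_db.table_two;
-- ...repeat for the remaining tables, then drop the emptied standby database
drop database standby_01;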
Why is MySQL not able to read them? What did you change in the table lists?
I think there may be a problem with MySQL's permissions to read the copied files; otherwise it would be fine.
Thanks
Recently, phpMyAdmin has been showing this message at the top of the records in my database.
The description of this message is:
"The number of records for InnoDB tables is not correct.
phpMyAdmin uses a quick method to get the row count, and this method only returns an approximate count in the case of InnoDB tables. See $cfg['MaxExactCount'] for a way to modify those results, but this could have a serious impact on performance."
I would like to know will it further affect my database data if I ignore it?
Or should I clear my database and re-create the data?
Thanks.
I would like to know will it further affect my database data if I ignore it?
It won't affect your data if you ignore it.
Or should I clear my database and re-create the data?
There's no need to re-create the data; doing so won't get rid of the message anyway.
All that message is telling you is that the numbers shown in the Rows column might not be exact. This isn't a problem with the data or the database; it's just something phpMyAdmin does to speed up rendering that page, because counting all the rows exactly takes a long time.
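If you ever need an exact figure, you can count it yourself. The first statement below is exact but can be slow on large InnoDB tables; the second shows the same approximate estimate phpMyAdmin uses (the table name is illustrative):

select count(*) from my_table;        -- exact row count, may be slow on big InnoDB tables
show table status like 'my_table';    -- the Rows column here is the fast approximation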
My database is periodically being "deleted" by an automated command from the server (because the table is too big). What happens is that all data in a certain table becomes inaccessible with e.g. SELECT. But if I do a "repair" on the table, all the data comes back. I would like to stop this nonsense, but I can't find the command that does this. Any help?
Edit: I should note that the DB is on an external machine that I do not have access to.
I have now tried to do a SELECT while the DB was in this curious state. The table says it has 0 entries, but takes 2.5 GB of storage space. When I selected everything I got one tuple, no errors.
It's likely your DB is becoming corrupt somehow. There's no command that does that (I hope).
Do yourself a favor and alter each and every one of your tables so they use the InnoDB engine instead of MyISAM. It'll still be MySQL, but it'll be a lot less prone to data corruption.
And if changing DB altogether is an option, look into using PostgreSQL instead.
A really weird (for me) problem has been occurring lately. In an application that accepts user-submitted data, the following occurs at random:
Rows from the database table where the user-submitted data is stored are disappearing.
Please note that there is NO DELETE, DROP, TRUNCATE or other destructive SQL statement issued on the database table; the only statement used is INSERT.
Could this be a MySQL bug? I did some research on mysql.com (forums, bugs, etc.) and found 2 similar cases, but without a solid answer (just suggestions).
Some info you might find useful:
Storage Engine: InnoDB
User Submitted Data sanitized and checked for SQL Injection attempts
Appreciate any suggestions, info.
regards,
Here are 3 possibilities:
1. The data never got to the database in the first place. Something happened elsewhere so the data disappeared: maybe intermittent network issues, an overloaded server, or an application bug.
2. A database transaction was not committed and got rolled back. Maybe a bug in your application code, maybe some invalid data screwed things up, maybe a concurrency exception occurred, etc.
3. A bug in MySQL.
I'd look at 1. and 2. first.
A table on which you only ever insert (and presumably select) and never update or delete should be really stable. Are you absolutely certain you're protecting thoroughly against SQL injection attacks? Because those could (of course) delete rows and such if successful.
You haven't mentioned which table engine you're using (there are several), but it's well worth running whatever diagnostic tools there are for it on the table in question. For instance, on a MyISAM table, run myisamchk. Or more generically (this works for several table types), use the CHECK TABLE statement.
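For example, assuming the table in question is called submissions, a check (and, where the engine allows it, a repair) might look like:

check table submissions;
repair table submissions;  -- MyISAM and a few other engines only; does not apply to InnoDB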
Have you had issues with the underlying storage? It may be worth checking for those.
Activating binlog and periodically monitoring DELETE queries can help to identify the culprit.
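A rough sketch of what that inspection could look like from a MySQL client, assuming binary logging is already enabled (the log file name will differ on your server):

show variables like 'log_bin';   -- confirm binary logging is on
show binary logs;                -- list the available log files
show binlog events in 'mysql-bin.000001' limit 100;  -- with statement-based logging, any DELETE shows up as SQL text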
One more case to add to the above: there can also be client-side and server-side parts of the application, and changes initiated on the client side can be processed on the server side by additional code logic.
For example, in our case, a local admin panel updated an order's information with pay_date = NULL, and the PHP website processed this table to clean up overdue orders. As the PHP logic was developed by another programmer, it looked strange that updating orders caused records to disappear after some time.
The same applies to cron jobs operating on the MySQL database on a schedule.
I'm being given a data source weekly that I'm going to parse and put into a database. The data will not change much from week to week, but I should be updating the database on a regular basis. Besides this weekly update, the data is static.
For now rebuilding the entire database isn't a problem, but eventually this database will be live and people could be querying the database while I'm rebuilding it. The amount of data isn't small (couple hundred megabytes), so it won't load that instantaneously, and personally I want a bit more of a foolproof system than "I hope no one queries while the database is in disarray."
I've thought of a few different ways of solving this problem, and was wondering what the best method would be. Here's my ideas so far:
Instead of replacing entire tables, query for the difference between my current database and what I want to place in the database. This seems like it could be an unnecessary amount of work, though.
Creating dummy data tables, then doing a table rename (or having the server code point towards the new data tables).
Just telling users that the site is going through maintenance and put the system offline for a few minutes. (This is not preferable for obvious reasons, but if it's far and away the best answer I'm willing to accept that.)
Thoughts?
I can't speak for MySQL, but PostgreSQL has transactional DDL. This is a wonderful feature, and means that your second option, loading new data into a dummy table and then executing a table rename, should work great. If you want to replace the table foo with foo_new, you only have to load the new data into foo_new and run a script to do the rename. This script should execute in its own transaction, so if something about the rename goes bad, both foo and foo_new will be left untouched when it rolls back.
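A minimal sketch of that rename script (foo and foo_new as in the text above; the old table is kept under a backup name so you can drop it once you're happy):

BEGIN;
ALTER TABLE foo RENAME TO foo_old;   -- move the live table out of the way
ALTER TABLE foo_new RENAME TO foo;   -- promote the freshly loaded table
COMMIT;                              -- on any error, the rollback leaves both tables untouched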
The main problem with that approach is that it can get a little messy to handle foreign keys from other tables that key on foo. But at least you're guaranteed that your data will remain consistent.
A better approach in the long term, I think, is just to perform the updates on the data directly (your first option). Once again, you can stick all the updating in a single transaction, so you're guaranteed all-or-nothing semantics. Even better would be online updates, just updating the data directly as new information becomes available. This may not be an option for you if you need the results of someone else's batch job, but if you can do it, it's the best option.
BEGIN;
DELETE FROM my_table;                           -- wipe the old contents
INSERT INTO my_table SELECT * FROM my_staging;  -- load the new data (my_table/my_staging are placeholder names)
COMMIT;
Users will see the changeover instantly when you hit COMMIT. Any queries started before the commit will run on the old data; anything afterwards will run on the new data. The database will actually clear out the old row versions once the last user is done with them. Because everything is "static" (you're the only one who ever changes it, and only once a week), you don't have to worry about lock issues or timeouts. For MySQL, this depends on InnoDB. PostgreSQL does it too, and SQL Server calls it "snapshotting"; I can't remember the details off the top of my head since I rarely use the thing.
If you Google "transaction isolation" + the name of whatever database you're using, you'll find appropriate information.
We solved this problem by using PostgreSQL's table inheritance/constraints mechanism.
You create a trigger that auto-creates sub-tables partitioned based on a date field.
This article was the source I used.
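For a rough idea of the mechanism, here is a minimal sketch (table and column names are made up, and for brevity it routes to one pre-created weekly child table instead of auto-creating them as described above):

CREATE TABLE measurements (logdate date NOT NULL, value numeric);

CREATE TABLE measurements_2024_w01 (
    CHECK (logdate >= DATE '2024-01-01' AND logdate < DATE '2024-01-08')
) INHERITS (measurements);

CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- route each row into the matching weekly child table
    INSERT INTO measurements_2024_w01 VALUES (NEW.*);
    RETURN NULL;  -- the row is stored in the child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();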
Which database server are you using? SQL Server 2005 and above provide an isolation level called "snapshot". It allows you to open a transaction, do all of your updates, and then commit, all while users of the database continue to view the pre-transaction data. Normally, your transaction would lock your tables and block their queries, but snapshot isolation would be perfect in your case.
More info here: http://blogs.msdn.com/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx
But it requires SQL Server, so if you're using something else....
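If that's the route you take, enabling and using it might look like this (the database name is illustrative):

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- one-time setup on the database

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- ...do all of your updates here; readers keep seeing the pre-transaction data...
COMMIT;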
Several database systems (since you didn't specify yours, I'll keep this general) do offer the SQL:2003 standard statement called MERGE, which will basically allow you to:
insert new rows into a target table from a source which don't exist there yet
update existing rows in the target table based on new values from the source
optionally even delete rows from the target that don't show up in the import table anymore
SQL Server 2008 is the first Microsoft offering to have this statement.
Other database systems will probably have similar implementations - it's a SQL:2003 standard statement, after all.
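A rough sketch of the statement, with made-up table and column names (the final DELETE clause is a SQL Server extension, not part of the standard):

MERGE INTO target_table AS t
USING import_table AS s
   ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.value = s.value           -- update rows that changed in the import
WHEN NOT MATCHED THEN
    INSERT (id, value) VALUES (s.id, s.value)  -- add rows that are new
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                                -- drop rows no longer in the import (SQL Server extension)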
Marc
Use dated table names (mytable_[yyyy]_[wk]) and a view to provide a constant name (mytable). Once a new table is completely imported, update your view so that it uses the new table.
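A minimal sketch of the swap, assuming this week's table (name illustrative) has just finished loading:

CREATE OR REPLACE VIEW mytable AS SELECT * FROM mytable_2024_05;  -- repoint queries to the fresh table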