I have run a delete query on my table on my local system:
delete from table_name
Now I want to recover the data. How can I do this? I am using PHP and MySQL. Please help me.
Unfortunately, no. If you were running the server with the default configuration, go get your backups (you have backups, right?) - generally, a database doesn't keep previous versions of your data or a history of changes, only the current state.
Alternately, if you deleted the data through a custom frontend, it is quite possible that the frontend doesn't actually issue a DELETE: many tables have an is_deleted field or similar, and this is simply toggled by the frontend.
Note that this is a "soft delete" implemented in the frontend app - the data is not actually deleted in such cases. If you actually issued a DELETE, TRUNCATE or a similar SQL command, this does not apply.
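To illustrate the difference (the is_deleted column and WHERE clause here are just an example):

-- a "soft delete": the row is only flagged and can be "undeleted" later
UPDATE table_name SET is_deleted = 1 WHERE id = 123;

-- a real DELETE: the rows are gone and only come back from a backup
DELETE FROM table_name WHERE id = 123;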
I need to somehow store the changes made to the database structure (on CREATE, DELETE, ALTER, RENAME). The goal is to create an API that will return to a client a list of changes made to the specified database structure since some date (user_who_made_changes, changes_date, DDL_query).
Is it possible with MySQL?
Unfortunately DDL triggers are not supported in MySQL.
I also looked at logging in MySQL (the general or binary log), but it seems there's no way to filter logged statements by a specific schema.
Any ideas? I'll be thankful for any advice.
I exported a couple of entries from a database I have stored locally on my MySQL server through phpMyAdmin, and I'd like to replace only those entries on my destination database hosted online. Unfortunately, when I try to do so, phpMyAdmin says that those posts already exist and therefore it can't overwrite them.
It'll take me a lot of time to search for those entries manually among the rest of the posts and delete them one at a time, so I was wondering if there's any workaround to overwrite those entries on import.
Thanks in advance!
A great option is to handle this on your initial export from phpMyAdmin locally. When exporting from phpMyAdmin:
Export method: Custom
Format: SQL
Format-specific options - choose "data" (instead of "structure" or "structure and data")
In Data creation options - Function to use when dumping data: Switch "Insert" to "Update" <-- This is the ticket!
Click Go!
Import into your production database. (Always back up your production database beforehand, just in case.)
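For illustration, a dump exported this way contains UPDATE statements instead of INSERTs, so importing it overwrites the matching rows. With a hypothetical posts table it looks roughly like this:

UPDATE `posts` SET `title` = 'New title', `body` = 'Edited text' WHERE `posts`.`id` = 42;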
I know this is an old post, but it actually helped me find a solution built into phpMyAdmin. Hope it helps someone else!
This is a quick and dirty way to do it. Others may have a better solution:
It sounds like you're trying to run INSERT queries, and phpMyAdmin is telling you they already exist. If you use UPDATE queries, you could update the info.
I would copy the queries you have into a text editor - preferably one that can handle find and replace, like Notepad++ or Gedit - and then change the queries from INSERT to UPDATE.
See: http://dev.mysql.com/doc/refman/5.0/en/update.html
OR, you could just delete them, then run your INSERT queries.
You might be able to use some logic with find and replace to make a DELETE query that gets rid of them first.
http://dev.mysql.com/doc/refman/5.0/en/delete.html
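For example, assuming the exported rows are keyed by an id column (table and column names here are made up), you could run something like this before re-importing:

-- remove the existing copies of the exported rows
DELETE FROM posts WHERE id IN (101, 102, 103);

-- then run the original INSERT statements from the export
INSERT INTO posts (id, title, body) VALUES (101, 'First post', 'Full text here');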
Check out INSERT ... ON DUPLICATE KEY UPDATE. You can either add the syntax to your entries stored locally, or import into a temporary database, then run an INSERT ... SELECT ... ON DUPLICATE KEY UPDATE. If you could post a schema, it would help us guide you better.
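A rough sketch of the second option, assuming the local entries were imported into a temporary database tmp_db and the posts table has an id primary key (all names here are guesses):

INSERT INTO live_db.posts (id, title, body)
SELECT id, title, body FROM tmp_db.posts
ON DUPLICATE KEY UPDATE title = VALUES(title), body = VALUES(body);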
I am facing a problem for a task I have to do at work.
I have a MySQL database which holds the information of several clients of my company, and I have to create a backup/restore procedure to back up and restore such information for any single client. To clarify, if client A loses his data, I have to be able to recover it while being sure I am not modifying the data of clients B, C, ...
I am not a DB administrator, so I don't know if I can do this using standard mysql tools (such as mysqldump) or any other backup tools (such as Percona Xtrabackup).
For the backup, my research (and my intuition) led me to this possible solution:
create the restore insert statement using the insert-select syntax (http://dev.mysql.com/doc/refman/5.1/en/insert-select.html);
save these inserts into an SQL file, either in the proper order or having the script temporarily disable foreign key checks to satisfy the foreign key constraints;
of course, I do this for all my clients on a daily basis, using a file for each client (and day).
Then, in the case I have to restore the data for a specific client:
I delete all of that client's remaining data;
I restore the correct data using the SQL file I created for that client during the backup.
This way I believe I can recover the right data of client A without touching the data of client B. Will my solution actually work? Is there any better way to achieve the same result? Or do you need more information about my problem?
Please forgive me if this question is not well-formed, but I am new here and this is my first question, so I may be imprecise... thanks anyway for the help.
Note: we will also backup the entire database with mysqldump.
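To make it concrete, this is roughly the kind of per-client backup and restore I have in mind (table and column names here are invented):

-- daily backup for client 42: copy his rows into a dated backup table
-- (note: CREATE TABLE ... AS does not copy indexes or constraints)
CREATE TABLE backup_orders_42_20140101 AS
  SELECT * FROM orders WHERE client_id = 42;

-- restore for client 42: delete whatever is left, then reload his rows
SET FOREIGN_KEY_CHECKS = 0;
DELETE FROM orders WHERE client_id = 42;
INSERT INTO orders SELECT * FROM backup_orders_42_20140101;
SET FOREIGN_KEY_CHECKS = 1;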
You can use mysqldump's --where parameter and provide a condition like client_id=N. Of course, I am making an assumption, since you don't provide any information on your schema.
If you have a star schema, then you could probably write a small script that backs up all lookup tables (assuming they are adequately small) using the --tables parameter, and use the --where condition for your client data table. For additional performance, perhaps you could partition the table by client_id.
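For example (database, table and column names are assumptions):

mysqldump -u backup_user -p --where="client_id=42" company_db orders invoices > client_42.sql
mysqldump -u backup_user -p company_db --tables countries currencies > lookups.sql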
I am very new to this and a good friend is in a bind. I am at my wits' end. I have used GUIs like Navicat and SQLyog to do this, but only manually.
His band info data (schedules and whatnot) is in a MySQL database on a server (admin server).
I am putting together a basic site for him written in Perl that grabs data from a database that resides on my server (public server) and displays schedule info, previous gig newsletters and some fan interaction.
He uses an administrative interface, which he likes and desires to keep, to manage the data on the admin server.
The admin server db has a bunch of tables and even table data the public db does not need.
So, I created tables on the public side that only contain relevant data.
I basically used a GUI to export the data, then inserted it on the public side whenever he made updates to the admin db (copy and paste).
(FYI, I am using the DBI module to access the data in my public db Perl script.)
I could access the admin server directly to grab only the data I need, but the whole purpose of this is to "mirror" the data, not access the admin server on every query. Also, some tables are THOUSANDS of rows, and parsing every row in a loop seemed too "bulky" to me. There is, however, a "time" column which could be used for comparison.
I cannot simply "sync", because the structures are different; I only need the relevant table data from three tables.
SO...... I desire to automate!
I read "copy" was a fast way but, my findings in how to implement were too advanced for my level.
I do not have the luxury of placing a script on the admin server to notify when there was an update.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin server's db.
I would then desire to update or insert the new or changed data to the public server's db.
This "check" could be set up in a cron job, I guess, or triggered when a specific page loads on the public side (the same subroutine called by the cron, I would assume).
This data does not need to be "real time", but if he updates something it would be nice to have it appear as quickly as possible.
I have done much reading, module research and experimenting, but here I am again at Stack Overflow, where I always get great advice and examples.
Much of the terminology is still quite over my head so verbose examples with explanations really help me learn quicker.
Thanks in advance.
The two terms you are looking for are either "replication" or "ETL".
First, replication approach.
Let's assume your admin server has tables T1, T2, T3 and your public server has tables TP1, TP2.
So, what you want to do (since you have different table structures, as you said) is:
Take the tables from public server, and create exact copies of those tables on the admin server (TP1 and TP2).
Create a trigger on the admin server's original tables to populate the data from T1/T2/T3 into admin server's copy of TP1/TP2 (a minimal sketch follows after these steps).
You will also need to do initial data population from T1/T2/T3 into admin server's copy of TP1/TP2. Duh.
Set up the "replication" from admin server's TP1/TP2 to public server's TP1/TP2
A different approach is to write a program (such programs are called ETL - Extract-Transform-Load) which will extract the data from T1/T2/T3 on the admin server (the "E" part of "ETL"), massage the data into a format suitable for loading into the TP1/TP2 tables (the "T" part of "ETL"), transfer (via ftp/scp/whatnot) those files to the public server, and the second half of the program (the "L" part) will load the files into the tables TP1/TP2 on the public server. Both halves of the program would be launched by cron or your scheduler of choice.
There's an article with a very good example of how to start building Perl/MySQL ETL: http://oreilly.com/pub/a/databases/2007/04/12/building-a-data-warehouse-with-mysql-and-perl.html?page=2
If you prefer not to build your own, here's a list of open source ETL systems, never used any of them so no opinions on their usability/quality: http://www.manageability.org/blog/stuff/open-source-etl
I think you've misunderstood ETL as a problem domain, which is complicated, versus ETL as a one-off solution, which is often not much harder than writing a report. Unless I've totally misunderstood your problem, you don't need a general ETL solution, you need a one-off solution that works on a handful of tables and a few thousand rows. ETL and Schema mapping sound scarier than they are for a single job. (The generalization, scaling, change-management, and OLTP-to-OLAP support of ETL are where it gets especially difficult.) If you can use Perl to write a report out of a SQL database, you probably know enough to handle the ETL involved here.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin server's db. I would then desire to update or insert the new or changed data to the public server's db.
If every table you need to pull from has an update timestamp column, then your cron job includes some SELECT statements with WHERE clauses based on the last time the cron job ran to get only the updates. Tables without an update timestamp will probably need a full dump.
I'd use a one-to-one table mapping unless normalization was required... just simpler, in my opinion. Why complicate it with "big" schema changes if you don't have to?
some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me.
Limit your queries to only the columns you need; if there are no BLOBs or exceptionally big columns in what you need, a few thousand rows should not be a problem via DBI with a fetchall method (e.g. fetchall_arrayref). Loop all you want locally, just make as few trips to the remote database as possible.
If a row has a newer date, update it. I will also have to check for new rows for insertion.
Each table needs one SELECT ... WHERE updated_timestamp_columnname > last_cron_run_timestamp. That result set will contain all rows with newer timestamps, which includes newly inserted rows (if the timestamp column behaves as I'd expect). For updating your local database, check out MySQL's ON DUPLICATE KEY UPDATE syntax... this will let you do it in one step.
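A rough sketch of that pattern, with made-up table and column names:

-- on the admin server: grab only rows changed since the last cron run
SELECT id, venue, show_date, updated_at
FROM schedule
WHERE updated_at > '2012-06-01 03:00:00';

-- on the public server: upsert each fetched row in one step
INSERT INTO schedule (id, venue, show_date)
VALUES (42, 'The Fillmore', '2012-07-04')
ON DUPLICATE KEY UPDATE venue = VALUES(venue), show_date = VALUES(show_date);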
... how to implement were too advanced for my level ...
Yes, I have actually done this already, but I have to update manually...
Some questions to help us understand your level... Are you hitting the database from the mysql client command-line or from a GUI? Have you gotten to the point where you've wrapped your SQL queries in Perl and DBI, yet?
If the two databases have different schemas, you'll need an ETL solution to map from one schema to the other.
If the schemas are the same, all you have to do is replicate the data from one to the other.
Why not just create a structure on the 'slave' server identical to that on the master server? Then create a small table that keeps track of the last timestamp or id for the updated tables.
Then select from the master all rows changed since the last timestamp or greater than the id. Insert them into the matching table on the slave server.
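A minimal sketch of that bookkeeping table and the pull query (table and column names are made up):

-- on the slave: remember how far we got, per table
CREATE TABLE sync_state (
  table_name VARCHAR(64) PRIMARY KEY,
  last_synced_at DATETIME NOT NULL
);

-- on the master: fetch only rows changed since that marker (bound as a parameter)
SELECT * FROM schedule WHERE updated_at > ?;

-- on the slave, after a successful run: move the marker forward
UPDATE sync_state SET last_synced_at = NOW() WHERE table_name = 'schedule';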
You will need to be careful of updated rows. If a row on the master is updated but the timestamp doesn't change then how will you tell which rows to fetch? If that's not an issue the process is quite simple.
If it is an issue, then you need to be more sophisticated, but without knowing the data structure and update mechanism it's a wild goose chase to give pointers on it.
The script could be called by cron every so often to update the changes.
If the database structures must be different on the two servers, then a simple translation step may need to be added, but most of the time that can be done within the SQL SELECT statement, maybe with a join or two.
I am working with multiple databases in a PHP/MySQL application. I have development, testing, staging and production databases to keep in sync.
Currently we're still building the thing, so it's easy to keep them in sync. I use my dev db as the master, and when I want to update the others I just nuke them and recreate them from mine. However, in the future, once there's real data, I can't do this.
I would like to write SQL scripts as text files that I can version with the PHP changes that accompany them in svn, then apply the scripts to each db instance as I update them.
I would like to use transactions so that if there are any errors during the script, it will roll back any partial changes made. All tables are InnoDB.
When I try to add a column that already exists, and add one new column like this:
SET FOREIGN_KEY_CHECKS = 0;
START TRANSACTION;
ALTER TABLE `projects` ADD COLUMN `foo1` varchar(255) NOT NULL after `address2`;
ALTER TABLE `projects` ADD COLUMN `foo2` varchar(255) NOT NULL after `address2`;
COMMIT;
SET FOREIGN_KEY_CHECKS = 1;
... it still commits the new column even though it failed to add the first one, of course, because I issued COMMIT instead of ROLLBACK.
I need it to issue the rollback command conditionally upon error. How can I do this in an ad hoc SQL script?
I know of the 'declare exit handler' feature of stored procs, but I don't want to store this; I just want to run it as an ad hoc script.
Do I need to make it into a stored proc anyway in order to get conditional rollbacks, or is there another way to make the whole transaction atomic in a single ad hoc SQL script?
Any links to examples welcome - I've googled but am only finding stored proc examples so far
Many thanks
Ian
EDIT - This is never going to work; ALTER TABLE causes an implicit commit when encountered: http://dev.mysql.com/doc/refman/5.0/en/implicit-commit.html Thanks to Brian for the reminder
I learned the other day that data definition language (DDL) statements in MySQL always cause an implicit commit when they are executed, so they cannot be rolled back. I think you'll probably have to do this interactively if you want to be sure of success.
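A quick illustration of that behavior, reusing the projects table from the question (the foo3 column is just an example):

START TRANSACTION;
ALTER TABLE `projects` ADD COLUMN `foo3` varchar(255) NOT NULL; -- implicitly commits right here
ROLLBACK; -- has no effect: the new column stays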
I can't find the question on this website where this was discussed (it was only a couple of days ago).
If you need to keep multiple databases in synch, you could look into replication. Although replication isn't to be trifled with, it may be what you need. See http://dev.mysql.com/doc/refman/5.0/en/replication-features.html