Restore deleted database - mysql

I just deleted two days of work because I thought I had a backup, but I didn't. Now I need to create the database from scratch, and I just wonder: isn't there a built-in backup system, just in case someone does something stupid? It's running on localhost and I haven't exported it before.

If you had binary logging enabled, then you might be lucky enough to be able to use it to restore your database.
Note: If you ask me 'how do I know if I had binary logging enabled?' this pretty much means you didn't, because it's disabled by default.
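If it does turn out to be enabled, a recovery attempt looks roughly like this (the binary log name and paths below are examples; check your server's data directory and my.cnf for the real ones):
# dump the binary log to SQL, optionally limited to a time window
mysqlbinlog --start-datetime="2012-01-01 00:00:00" /var/lib/mysql/mysql-bin.000001 > recovery.sql
# review recovery.sql, then replay it
mysql -u root -p < recovery.sql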

There are several options. They're covered in detail at http://dev.mysql.com/doc/refman/5.1/en/backup-and-recovery.html
If you are developing some kind of app, I'd also recommend storing your DB structure in your version control system together with your source code.
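A minimal sketch of that, assuming command-line access and with "mydb" standing in for your database name, is to dump the structure only and commit it:
# structure only (no rows), plus stored routines
mysqldump --no-data --routines mydb > schema.sql
git add schema.sql
git commit -m "Update schema dump"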

No.
Computers will do anything you command them to - but it is your responsibility to know what you're doing.
Put another way, "if you wanted backups, you would have made them" - power-user tools (such as databases) are optimized for performance, not for being user-proof.

Install MySQL Workbench:
In the left-side panel (under the Administration section), you will find the "Data Import/Restore" option.
Clicking it takes you to the Data Import tab. Select the option "Import from Dump Project Folder" and then select the folder whose timestamp matches the time when the database was deleted.
Click on "Load Folder Content".
Under the database objects to import, check whether the deleted database is visible or not.
If it is visible, select all the tables and click the Start Import button.
If it is not visible, start again at step 2 and work through the dump files one by one.
At least I was able to recover my entire database, which had accidentally been deleted during a database restore (i.e. the initialization process).
This saved me weeks of effort.

Related

Relationships disappeared - MS Access 2016

I have a split access database that's been in use for almost two years. The database resides on a computer which I access remotely via Remote Utilities, where I transfer the db to my local PC, work on it, then transfer back to the remote machine. We use EaseUS Todo Backups to create an image file every 30 minutes of the database file. I am currently in the process of doing some refactoring and have run into the following issue:
All of the relationships in the database have somehow disappeared. Here is what is strange about this:
About a week prior to discovering this issue I had taken a copy of the database and did not have this issue.
The relationships are gone whether I open it on my local machine or the remote machine.
Upon finding this, the first thing I did was try to restore a backup to see if the relationships were there -- they were not.
This is what I can't figure out -- I had copied the file, everything was OK, then a week later no relationships were found in either the current copy or any of the backups made before the issue appeared.
I have tried the following to resolve this:
Updating Access on both machines.
Hiding all tables then adding back and showing 'all relationships' in relationships tab.
Looking for relationships in the database documenter.
Restoring old backups as mentioned.
I'm sure this could be a result of corruption -- but how could this corruption extend to the .pbd image files generated by EaseUS, that were created before the issue occurred?
Click the All Relationships button.
Repeatedly click the Hide Table button; each click hides one table. Keep clicking until the Hide Table button grays out, which means you've hidden every last table.
Click the All Relationships button again. This will make all of the tables reappear in rows of four, with their connecting lines intact.

How to selectively export mysql data for a github repo

We're an open-source project and would like to collaboratively edit our website through a public GitHub repo.
Any ideas on the best way to export the MySQL data to GitHub (since MySQL can hold some sensitive info), and on how we can version the changes that happen to it?
The answer is that you don't hold data in the repo.
You may want to hold your DDL, and maybe some configuration data, but that's it.
If you want to version control your data, there are other options; Git isn't one of them.
It seems dbdeploy is what you are looking for
Use a blog engine "backed by git", forget about MySQL, commit on github.com, push and pull, dominate!
Here is a list of the best:
http://jekyllrb.com/
http://nestacms.com/
http://cloudhead.io/toto
https://github.com/colszowka/serious
and just in case ... a simple, Git-powered wiki with a sweet API and local frontend:
https://github.com/github/gollum
Assuming that you have a small quantity of data that you wish to treat this way, you can use mysqldump to dump the tables that you wish to keep in sync, check that dump into git, and push it back into your database on checkout.
Write a shell script that does the equivalent of:
mysqldump [options] database table1 table2 ... tableN > important_data.sql
to create or update the file. Check that file into git and when your data changes in a significant way you can do:
mysql [options] database < important_data.sql
Ideally that last step would be in a git post-receive hook, so you'd never forget to apply your changes.
So that's how you could do it. I'm not sure you'd want to do it. It seems pretty brittle, especially if Team Member 1 makes some laborious changes to the tables of interest while Team Member 2 is doing the same. One of them is going to check in their changes first, and in the best case you'll have some nasty merge issues. The worst case is that one of them loses all their changes.
You could mitigate those issues by always making your changes in the important_data.sql file, but the ease or difficulty of that depends on your application. If you do this, you'll want to play around with the mysqldump options so you get a nice, readable, git-mergeable file.
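For example, options along these lines tend to produce output that diffs and merges more cleanly (one INSERT per row, no timestamp comment); the database and table names are placeholders:
mysqldump --skip-extended-insert --skip-dump-date --skip-comments database table1 table2 > important_data.sql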
You can export each table as a separate SQL file. Only when a table changes does it need to be committed and pushed again.
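A rough sketch of that approach, assuming shell access ("mydb" and the dump/ directory are example names):
mkdir -p dump
for t in $(mysql -N -B -e 'SHOW TABLES' mydb); do
  mysqldump mydb "$t" > "dump/$t.sql"   # one file per table, so only changed tables show up in git
done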
If you were talking about configuration then I'd recommend SQL dumps or similar to seed the database, as per Ray Baxter's answer.
Since you've mentioned Drupal, I'm guessing the data concerns users/content. As such you really ought to be looking at having a single database that each developer connects to remotely - i.e. one single version. This is because concurrent modifications to MySQL tables will be extremely difficult to reconcile (e.g. two new users both with user.id = 10, each making a new post with post.id = 1 and post.user_id = 10, etc.).
It may make sense, of course, to back this up with an sql dump (potentially held in version control) in case one of your developers accidentally deletes something critical.
If you just want a partial dump, phpMyAdmin will do that. Run your SELECT statement and when it's displayed there will be an export link at the bottom of the page (the one at the top does the whole table).
You can version mysqldump files which are simply sql scripts as stated in the prior answers. Based on your comments it seems that your primary interest is to allow the developers to have a basis for a local environment.
Here is an excellent ERD for Drupal 6. I don't know what version of Drupal you are using or if there have been changes to these core tables between v6 and v7, but you can check that using a dump, or phpMyAdmin or whatever other tool you have available to you that lets you inspect the database structure. Drupal ERD
Based on the ERD, the data that would be problematic for a Drupal installation is in the users, user_roles, and authmap tables. There is a quick way to omit those, although it's important to keep in mind that content that gets added will have relationships to the users that added it, and Drupal may have problems if there aren't rows in the user table that correspond to what has been added.
So to script the mysqldump, you would simply exclude the problem tables, or at the very least the user table.
mysqldump -u drupaldbuser --password=drupaluserpw --ignore-table=drupaldb.user drupaldb > drupaldb.sql
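If you want to skip all three of the tables mentioned above rather than just the user table, --ignore-table can simply be repeated (the exact table names here follow the ERD and may differ by Drupal version, so verify them against your own schema first):
mysqldump -u drupaldbuser --password=drupaluserpw --ignore-table=drupaldb.users --ignore-table=drupaldb.user_roles --ignore-table=drupaldb.authmap drupaldb > drupaldb.sql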
You would need to create a mock user table with a bunch of test users with known name/password combinations that you would only need to dump and version once, but ideally you want enough of these to match or exceed the number of real Drupal users who will be adding content. This is just to make the permissions relationships match up.

How to generate the whole database script in MySQL Workbench?

I want to take a backup of the whole database. Where do I find the database file?
And is there a way to write the whole database with all data to a text file (like the one in SQL Server)?
How to generate SQL scripts for your database in Workbench
In Workbench Central (the default "Home" tab) connect to your MySQL instance, opening a SQL Editor tab.
Click on the SQL Editor tab and select your database from the SCHEMAS list in the Object Browser on the left.
From the menu select Database > Reverse Engineer and follow the prompts. The wizard will lead you through connecting to your instance, selecting your database, and choosing the types of objects you want to reverse engineer.
When you're all done, you will have at least one new tab called MySQL Model. You may also have a tab called EER Diagram which is cool but not relevant here.
Click in the MySQL Model tab
Select Database > Forward Engineer
Follow the prompts. Many options present themselves, including Generate INSERT Scripts for Tables which allows you to script out the data contained within your tables (perfect for lookup tables).
Soon you will see the generated script in front of you. At this point you can Copy to Clipboard or Save to Text File.
The wizard will take you further, but if you just want the script you can stop here.
A word of caution: the scripts are generated with CREATE commands. If you want ALTER you'll have to (as far as I can tell) manually change the CREATEs to ALTERs.
This is guaranteed to work, I just did it tonight.
Q#1: I would guess that it's somewhere on your MySQL server?
Q#2: Yes, this is possible. You have to establish a connection via Server Administration. There you can clone any table or the entire database.
This tutorial might be useful.
EDIT
Since the provided link is no longer active, here's a SO answer outlining the process of creating a DB backup in Workbench.
In MySQL Workbench 6, commands have been repositioned as the "Server Administration" tab is gone.
You now find the option "Data Export" under the "Management" section when you open a standard server connection.
There is a Data Export option in MySQL Workbench.
I found this question by searching Google for "mysql workbench export database sql file". The answers here did not help me, but I eventually did find the answer, so I am posting it here for future generations to find:
Answer
In MySQLWorkbench 6.0, do the following:
Select the appropriate database under MySQL Connections
On the top-left hand side of screen, under the MANAGEMENT heading, select "Data Export".
None of these worked for me. I'm using Mac OS 10.10.5 and Workbench 6.3. What worked for me is Database -> Migration Wizard... Follow the steps very carefully.
In the top menu of MySQL Workbench click on database and then on forward engineer. In the options menu with which you will be presented, make sure to have "generate insert statements for tables" set.
Try the export function of phpMyAdmin.
I think there is also a possibility to copy the database files from one server to another, but I do not have a server available at the moment so I can't test it.
Using Windows 10 and MySQL Workbench 8.0:
Go to the Server tab
Go to Database Export
Select the schema to export in the Tables to Export section
Click the Start Export button
Surprisingly, the Data Export feature in MySQL Workbench is not just for data; in fact it is ideal for generating SQL scripts for the whole database (including views, stored procedures and functions) with just a few clicks. If you want just the scripts and no data, simply select the "Skip table data" option. It can generate separate files or a single self-contained file. Here are more details about the feature: http://dev.mysql.com/doc/workbench/en/wb-mysql-connections-navigator-management-data-export.html
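If you'd rather do the same thing from the command line, a rough equivalent is the following ("mydb" is a placeholder; --no-data corresponds to the "Skip table data" option):
mysqldump --no-data --routines --triggers --events mydb > mydb-structure.sql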
In MySQL Workbench, go to Server > Data Export,
then follow the instructions. It will generate INSERT statements for all table data; each table gets its own .sql file containing its data.

if we only have access to phpMyAdmin what's the best way to backup the entire db?

For someone who is not used to MySQL, what is the recommended setup in the phpMyAdmin administration program to back up the entire database, with all tables and data?
Most of those options are fine, but check the Structure -> "Add DROP TABLE..." and "Add CREATE PROCEDURE", then Data -> "Extended inserts" (this decreases loading time when re-inserting the data and isn't essential). Then click "Save as file" and export, the rest of the options are suitable.
I'm assuming that your database is large and you're having problems with phpMyAdmin timing out. If that's the case, and you don't have shell access, then you may have to write a PHP script, which executes a mysqldump command, and then call it asynchronously so there is no browser timeout issue. It would be a sloppy workaround, but if the access is limited, something like that may be your only option.
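The command such a script might hand off to the shell could look roughly like this (paths and the credentials file are placeholders; nohup and the trailing & keep the dump running after the web request returns, and creds.cnf would hold the [client] user and password so nothing has to be typed interactively):
nohup mysqldump --defaults-extra-file=/path/to/creds.cnf mydb > /path/to/backup.sql 2> /path/to/backup.err &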
Although it has its places, phpMyAdmin's strong suit is not database backups.
Having said that, if your database is small, using the default settings via an .sql export should be fine.
If your database is large (i.e. more than a couple MB) and/or you care about it, backups should be done on the server level via logical copy, or by doing a mysqldump.

How to set up a development environment in MS Access

I have created an MS Access 2003 application, set up as a split front-end/back-end configuration, with a user group of about five people. The front end .mdb sits on a network file server, and it contains all the queries, forms, reports, and VBA code, plus links to all the tables in the back end .mdb and some links to ODBC data sources like an AS/400. The back end sits on the same network file server, and it just has the table data in it.
This was working well until I "went live" and my handful of users started coming up with enhancement requests, bug reports, etc. I have been rolling out new code by developing/testing in my own copy of the front-end .mdb in another network folder (which is linked to the same back-end .mdb), then posting my completed file in a "come-and-get-it" folder, alerting the users, and they go copy/paste the new front-end file to their own folders on the network. This way, each user can update their front end when they're at a 'stopping point' without having to boot everyone out at once.
I've found that when I'm developing now, sometimes Access becomes extremely slow. Like, when I am developing a form and attempt to click a drop-down on the properties box, the drop-down arrow will push in, but it will take a few seconds before the list of options appears. Or there's tons of lag in selecting & moving controls on a form. Or lots of keyboard lag.
Then, at other times, there's no lag at all.
I'm wondering if it's because I'm linked to the same back end as the other users. I did make a reasonable effort to set up the queries, forms, reports etc. with minimal record locking, if any at all, depending on the need. But I may have missed something, or perhaps there is some other performance issue I need to address.
But I'm wondering if there is an even better way for me to set up my own development back-end .mdb, so I can be testing my code on "safe" data instead of the same live data as the rest of the users. I'm afraid that it's only a matter of time before I corrupt some data, probably at the worst possible moment.
Obviously, I could just set up a separate back-end .mdb and manually reconfigure the table links in the front end every time, using the Linked Table Manager. But I'm hoping there is a more elegant solution than that.
And I'm wondering if there are any other performance issues I should be considering in this multi-user, split database configuration.
EDIT: I should have added that I'm stuck with MS Access (not MS-SQL or any other "real" back end); for more details see my comment to this post.
If all your users are sharing the front end, that's THE WRONG CONFIGURATION.
Each user should have an individual copy of the front end. Sharing a front end is guaranteed to lead to frequent corruption of the shared front end, as well as odd corruptions of forms and modules in the front end.
It's not clear to me how you could be developing in the same copy of the front end that the end users are using, since starting with A2000, that is prohibited (because of the "monolithic save model," where the entire VBA project is stored in a single BLOB field in a single record in one of the system tables).
I really don't think the problems are caused by using the production data (though it's likely not a good idea to develop against production data, as others have said). I think they are caused by poor coding practices and lack of maintenance of your front end code.
turn off COMPILE ON DEMAND in the VBE options.
make sure you require OPTION EXPLICIT.
compile your code frequently, after every few lines of code -- to make this easy, add the COMPILE button to your VBE toolbar (while I'm at it, I also add the CALL STACK button).
periodically make a backup of your front end and decompile and recompile the code. This is accomplished by launching Access with the /decompile switch, opening your front end, closing Access, opening your front end with Access (with the SHIFT key held down to bypass the startup code), then compacting the decompiled front end (with the SHIFT key held down), then compiling the whole project and compacting one last time. You should do this before any major code release.
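For reference, the decompile launch is typically done from a shortcut or command line along these lines (the MSACCESS.EXE path varies by Office version, and the front-end path is a placeholder):
"C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" /decompile "C:\Path\To\FrontEnd.mdb"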
A few other thoughts:
you don't say if it's a Windows server. Linux servers accessed over SAMBA have exhibited problems in the past (though some people swear by them and say they're vastly faster than Windows servers), and historically Novell servers have needed to have settings tweaked to enable Jet files to be reliably edited. There are also some settings (like OPLOCKS) that can be adjusted on a Windows server to make things work better.
store your Jet MDBs in shares with short paths. \\Server\Data\MyProject\MyReallyLongFolderName\Access\Databases\ is going to be much slower for reading data than \\Server\Databases\. This really makes a huge difference.
linked tables store metadata that can become outdated. There are two easy steps and one drastic one to be taken to fix it. First, compact the back end, and then compact the front end. That's the easy one. If that doesn't help, completely delete the links and recreate them from scratch.
you might also consider distributing an MDE to your end users instead of an MDB, as it cannot uncompile (which an MDB can).
see Tony Toews's Performance FAQ for other generalized performance information.
1) Relink Access tables from code
http://www.mvps.org/access/tables/tbl0009.htm
Once I'm ready to publish a new MDE to the users I relink the tables, make the MDE and copy the MDE to the server.
2) I specifically created the free Auto FE Updater utility so that I could make changes to the FE MDE as often as I wanted and be quite confident that the next time someone went to run the app it would pull in the latest version. For more info on the errors or the utility, see the free Auto FE Updater utility at http://www.granite.ab.ca/access/autofe.htm on my website; it keeps the FE on each PC up to date.
3) Now when working on site at a client's I make the updates to the table structure after hours when everyone is out of the system. See HOW TO: Detect User Idle Time or Inactivity in Access 2000 (Q210297) http://support.microsoft.com/?kbid=210297 and ACC: How to Detect User Idle Time or Inactivity (Q128814) http://support.microsoft.com/?kbid=128814
However we found that the code which runs on the timer event must be disabled for the programmers. Otherwise weird things start happening when you're editing code.
Also, print preview would sometimes not allow the users to run a menu item to export the report to Excel or other formats. They had to right-click on the previewed report to get some kind of internal focus back on the report before they could export it. This was also helped by extending the timer to five minutes.
The downside to extending the timer to five minutes was if a person stays in the same form and at the same control for considerable parts of the day, ie someone doing the same inquiries, the routine didn't realize that they had actually done something. I'll be putting in some logic sometime to reset this timer whenever they do something in the program.
4) In reference to another person's comment about scripts and such to update the schema, see Compare'Em http://home.gci.net/~mike-noel/CompareEM-LITE/CompareEM.htm. While it has its quirks, it does create the VBA code to update tables, fields, indexes and relationships.
Use VBA to unlink and re-link your tables to the new target when switching from dev to prod. It's been too many years for me to remember the syntax--I just know the function was simple to write.
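A minimal DAO sketch of such a function (the procedure name and the strNewBackEnd path are placeholders; it assumes linked Jet tables, whose Connect strings start with ";DATABASE="):
Public Sub RelinkTables(strNewBackEnd As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' only touch linked Jet tables; ODBC links have a different Connect prefix
        If Left$(tdf.Connect, 10) = ";DATABASE=" Then
            tdf.Connect = ";DATABASE=" & strNewBackEnd
            tdf.RefreshLink
        End If
    Next tdf
End Sub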
Or use MS-Access to talk to MS-Access through ODBC, or some other data connection that lives outside of the client mdb.
As with all file-based databases, you will eventually run into problems with peak usage or when you go over a small magical number somewhere between 2 and 30.
Also, Access tends to corrupt frequently, so backup, compact and repair need to be done on a frequent basis. 3rd party tools used to exist to automate this task.
As far as performance goes, the data is being processed client side, so you might want to use something like netmeter to watch how much data is going over the wire. The same principles about indexing and avoiding table scans apply to file-based dbs as well.
Many good suggestions from other people. Here's my 2 millicents' worth. My backend data is on a server accessed through a drive mapping, in my case the Y drive. Production users get the mapping through a login script using Active Directory. Then the following scenarios are easily done by batch file:
Develop against the local computer by doing a subst command in a batch file (see the sketch after this list)
run reports against last night's data by pointing Y to the backup server (read only)
run reports against end of month data by pointing to the right directory
test against specialized scenarios by keeping a special directory
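As a sketch of the first scenario, the batch file can be as simple as this (the drive letter and path are examples; the production login script would map Y: to the real server share with net use instead):
@echo off
rem remove any existing substituted Y: drive (harmless error if there isn't one), then point Y: at a local copy of the back end
subst Y: /D
subst Y: C:\Dev\AccessBackEnd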
In my environment (average 5 simultaneous users, 1000's of rows, not 10,000's) corruption has occurred, but it's rare and manageable. Only once in the last several years have we resorted to the previous day's backup. We use SQL Server for our higher volume stuff, but it's not as convenient to develop against, probably because we don't have a SQL admin on site.
You might also find some of the answers to this question (how to extract schemas from access) to be useful as well. Once you've extracted a schema using one of the techniques that were suggested you gain a whole range of new options like the ability to use source control on the schemas, as well as being able to easily build "clean" testing environments.
Edit to respond to comment:
There's no easy way to source control an Access database in its native format, but schema files are just text files like any other. Hence, you can check them in and out of the source control software of your choice for easy version control/rollbacks.
Of course, it relies on you having a series of scripts set up to re-build your database from the schema. Once you do, it's normally fairly trivial to create an option/alternative version that rebuilds it in a different location, allowing you to build test environments from any previous committed version of the schema. I hope that clarifies a bit!
If you want to update the back end MDB schema automatically when you release a new FE to the clients, then Compare'Em (http://home.gci.net/~mike-noel/CompareEM-LITE/CompareEM.htm) will happily generate the VBA code needed to recreate an MDB, or the code to create the differences between two MDBs so you can do a version upgrade of the already existing BE MDB. It's a bit quirky but it works.
I use it all the time.
You need to understand that a shared mdb file for the data is not a robust solution. Microsoft would suggest that SQL Server or some other server based database would be a far better solution and would allow you to use the same access front end. The migration wizard would help you make the changeover if you wanted to go that way.
As another user pointed out, corruption will occur. It is simply a question of how often, not if.
To understand the performance issues you need to understand that to the server the mdb file with the data in it is simply that, a file. Since no code runs on the server, the server does not understand transactions, record locking, etc. It simply knows that there is a file that a bunch of people are trying to read and write simultaneously.
With a database system such as SQL Server, Oracle, DB2, MySQL, etc., the database program runs on the server and looks to the server like a single program accessing the database file. It is the database program (running on the server) that handles record locking, transactions, concurrency, logging, data backup/recovery and all the other nice things one wants from a database.
Since a database program designed to run on the server is designed to do that and only that, it can do it far better and more efficiently than a program like Access reading and writing a shared file (mdb).
There are two rules for developing against live data:
The first rule is . . . never develop against live data. Not ever.
The second rule is . . . never develop against live data. Not ever.
You can programmatically change the bindings for linked tables, so you can write a macro to change your links when you're deploying a new version.
The application is slow because it's MS Access, and it doesn't like many concurrent users (where many is any number > 1).