Copy specific table from one WordPress database to another - mysql

I recently discovered that my WordPress website (and its database) were compromised and corrupted for reasons that are unknown (according to my webhost, iPower). No local backups exist, and iPower has no backups to restore to.
Certain essential parts of the site database are missing, but many of the most important tables still exist. To be specific, in my case the table 'wp_hlrv_options' was damaged, but all the other tables are intact.
My question is: would it be possible to 'copy' the 'wp_hlrv_options' table from a fresh WordPress install to my goofed-up database?
If that isn't possible, I imagine I could copy the other intact tables to the fresh install, but simply replacing 'wp_hlrv_options' seems like it would be the fastest/easiest way to go about salvaging my site.
Any feedback/suggestions would be awesome, and I'm happy to provide more specific details if necessary!

Back up the database (as is), especially the table you are going to import, and then just try it. In the worst case you will simply empty the database and reimport it from the backed-up data.
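A minimal sketch of that copy, assuming shell access; the database names (damaged_wp, fresh_wp) and the dbuser account are placeholders for your actual ones:
# safety net: full dump of the damaged database first
mysqldump -u dbuser -p damaged_wp > damaged_wp_full_backup.sql
# dump only the options table from the fresh install
mysqldump -u dbuser -p fresh_wp wp_hlrv_options > wp_hlrv_options.sql
# load it into the damaged database (the dump includes DROP TABLE IF EXISTS by default, so the broken table is replaced)
mysql -u dbuser -p damaged_wp < wp_hlrv_options.sql
One caveat: the options table holds site-specific rows such as siteurl, home, and the active theme/plugin lists, so after the import you would need to fix those values to match your site rather than the fresh install.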

Related

MySQL databases corrupted after upgrade?

I have been dealing with this issue for a while now. For some reason, when I went to run an Ubuntu upgrade, the mysql-server package upgrade failed. This was on about 8/10. This had happened before due to a "DATADIR" link (I won't go into detail on that just now). I went through hell trying to get the package to upgrade, and eventually managed it by creating a new MySQL database structure (after moving mine somewhere else). Once I did that (with some steps involved), the package upgrade completed.
Then, when I tried to replace the "new" databases with my old ones, the service wouldn't start. I came to find out that the "mysql" (system) database folder was just completely gone.
So, I took the "new" database and overlayed it on my "old" database files. This got me in! Of course, old users, and anything else in the system database, was gone. So I started to rebuild them.
The problem occurred when I tried to go into some old databases. About half of them report that the table does not exist when I try to load it. Mostly it is all of the tables in a particular database, but there are a few databases where some tables "don't exist" and others do.
The thing is that the tables do exist. I believe they are simply corrupt.
So I'm really trying here, but I can't seem to figure out how to get all of the tables to load. I have a backup from the 13th, presumably taken after the upgrade failed but before I really started messing with things, and I'm going to try to use that. But if anyone knows how/why some tables are suddenly corrupted while others are not, and especially if someone knows how to fix this, that would be absolutely wonderful.
Unfortunately, my regular backups haven't been working for months, and the latest backup I currently have access to is 2 years old. Quite a bit has changed in the database since then, but as a last-ditch effort I may try to import that data and use "mysql_upgrade" to restore it, then overlay any new databases I have created since then into the directory structure and see if they import that way.
Thanks for any suggestions you may offer.
--mobrien
I believe this was due to a permissions issue that had some files locked; when I fixed the permissions issue, the tables that were accessible were corrupted. I restored the same backup again and this time it worked. The only folder that was missing was the "mysql" folder; for that I recreated a new one, patched it in, then created new user permissions for the existing tables. This was working, but then I ran into another issue, so I will open a new question for that. This has been a nightmare, and the moral of the story is: keep better backups and test them!
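For anyone hitting similar symptoms: the stock mysqlcheck client can at least report which tables MySQL itself considers corrupt, and can repair MyISAM tables (it cannot repair InnoDB tables; those need a restore or the innodb_force_recovery route):
# report the state of every table in every database
mysqlcheck -u root -p --all-databases --check
# attempt repairs (MyISAM only)
mysqlcheck -u root -p --all-databases --repair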

Merge design of MySQL between localhost and server?

I'm kinda new to this kind of problem. I'm developing a web app and keep changing the DB design as I try to improve it and add new tables.
Until a few days ago the app wasn't published, so I could simply dump all the tables on the server and import my local version. But now we've passed version 1 and users have started using it.
So I can't just wipe the server, yet I still need to update the server DB's design whenever I publish a new version. What are the best practices here?
I'd like to know how I can manage the differences between local and server in MySQL.
I need to preserve the data on the server and change only the design; the data in the local DB is only for testing.
Before this, all my other apps were small and I would change a single table or column by hand, but I can't keep track of all the changes now, since I might revert many of them later, and managing this across all team members manually is impossible.
Assuming you are not using a framework that provides a migration tool for database, you need to keep track of the changes manually.
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, they create a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE, etc. So basically the file contains all the statements used to update the dev database.
Name the files so that they're easy to manage and statements for the same feature are grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql (a sample file is sketched after these steps).
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files that have been created since your last deployment, and execute them in the order they were created.
Should you need to roll back a file, check the queries it contains and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is "non-trivial", as many changes cannot be rolled back fully (e.g. you can re-create a dropped column, but you will have lost the data inside).
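For illustration, a hypothetical upgrade file and its hand-written rollback (all table and column names here are invented):
-- sql_upgrades/20150825-queries-for-feature-foobar.sql
ALTER TABLE users ADD COLUMN foobar_enabled TINYINT(1) NOT NULL DEFAULT 0;
UPDATE users SET foobar_enabled = 1 WHERE role = 'admin';
-- sql_upgrades/20150825-queries-for-feature-foobar-rollback.sql
-- dropping the column loses its data, which is exactly the "non-trivial" part
ALTER TABLE users DROP COLUMN foobar_enabled;
Applying a file in production is then just:
mysql -u produser -p production_db < sql_upgrades/20150825-queries-for-feature-foobar.sql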
Many web frameworks (such as Ruby on Rails) have tools that will do exactly this process for you. They usually work together with the ORM provided by the framework. Keeping track of the changes manually in SQL works just as well.

Big problems with shared SugarCRM database

Here is my situation:
I have two hosting sites with a domain each, each with SugarCRM installed. I started with one hosting site and began creating a database through Sugar. Then I started a SECOND hosting site with a new domain, and I believe I have accidentally linked the two databases: if I change a value in the database on one site, it gets reflected on the other.
So, the original domain/hosting site is expiring, and I would like to move the SQL database over to the new site permanently. I have made a backup of the database from the original site and have it on my desktop.
My questions:
1. Can I just drag the SQL file into the new site's database location (I use FileZilla) and everything will be OK?
2. I cannot find the location in the new site's file manager where I would drag this database into! I use GoDaddy, and the newer site uses cPanel.
**Other problem: I have accidentally upgraded the newer site's SugarCRM version, which has created huge problems because the original site is not upgraded, and the sites do not like that very much since the database is shared. The original site is unreachable (it says you cannot use the newer version's database with the old Sugar version), and the new site has visible problems but is workable.
As you can tell, I am a totally inexperienced n00b, and am learning as I go. I have spent weeks setting up this database, and would appreciate any help on maintaining its integrity.
Thank you very much!
Tom
I'm assuming you're using MySQL for your database.
Unless your tables are all MyISAM tables, simply copying the database files won't work.
Whenever you want to move a MySQL database it's a good idea to dump the database, move that file over, then recreate the database. Read up on the mysqldump command.
If you're using Oracle or something else, I would think a similar technique would be desirable. Basically dump your database to a backup format that your database server can use to recreate your database. Don't just copy database files around.
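A hedged sketch of that cycle for this case (user names and database names are placeholders; the database SugarCRM actually uses is whatever its config.php points at):
# on the old host
mysqldump -u old_user -p old_sugar_db > sugar_backup.sql
# on the new host, into a database created there
mysql -u new_user -p new_sugar_db < sugar_backup.sql
After importing, point the new site's SugarCRM configuration at the new database so the two installs stop sharing one.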

How to prevent MySQL data files being copied out

Just wondering, is there any way to prevent MySQL data files from being copied by others?
I am a developer using MySQL as the database, and I want to protect my tables so that no one can copy them and use them with their own program; simply put, I don't want others to see my table design.
As we are aware, MySQL keeps three files for each table (for MyISAM tables: .frm, .MYD and .MYI) in the MySQL data folder, so anyone can copy those files and put them into their own server.
There's no MySQL-specific way to achieve that.
If you mean something like file encryption, MySQL doesn't support that natively.
Have a look at this 3rd party product:
http://www.vormetric.com/products/encryption/database-encryption
But as zerkms correctly commented: if you have admin/root permissions on a machine, you can do whatever you want.
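The closest practical measure is at the operating-system level: keep the data directory readable only by the MySQL service account, so other (non-root) users on the machine cannot copy the files. A sketch assuming a typical Linux layout (adjust the path for your install):
chown -R mysql:mysql /var/lib/mysql
chmod -R go-rwx /var/lib/mysql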

How to selectively export MySQL data for a GitHub repo

We're an open-source project and would like to collaboratively edit our website through a public GitHub repo.
Any ideas on the best way to export the MySQL data to GitHub, given that MySQL can hold some sensitive info, and on how we can version the changes that happen in it?
The answer is that you don't hold data in the repo.
You may want to hold your DDL, and maybe some configuration data. But that's it.
If you want to version control your data, there are other options; Git isn't one of them.
It seems dbdeploy is what you are looking for.
Use a blog engine backed by Git, forget about MySQL, commit on github.com, push and pull, dominate!
Here is a list of some of the best:
http://jekyllrb.com/
http://nestacms.com/
http://cloudhead.io/toto
https://github.com/colszowka/serious
And just in case, a simple, Git-powered wiki with a sweet API and local frontend:
https://github.com/github/gollum
Assuming that you have a small quantity of data that you wish to treat this way, you can use mysqldump to dump the tables that you wish to keep in sync, check that dump into git, and push it back into your database on checkout.
Write a shell script that does the equivalent of:
mysqldump [options] database table1 table2 ... tableN > important_data.sql
to create or update the file. Check that file into git and when your data changes in a significant way you can do:
mysql [options] database < important_data.sql
Ideally that last step would be in a Git post-receive hook, so you'd never forget to apply your changes.
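A minimal sketch of such a hook, assuming the common bare-repo-plus-checkout deployment layout; every path, database name, and credential below is a placeholder:
#!/bin/sh
# hooks/post-receive in the bare repository:
# refresh the working copy, then reload the versioned dump
GIT_WORK_TREE=/srv/site git checkout -f
mysql --defaults-extra-file=/srv/db-credentials.cnf database < /srv/site/important_data.sql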
So that's how you could do it. I'm not sure you'd want to do it. It seems pretty brittle, especially if Team Member 1 makes some laborious changes to the tables of interest while Team Member 2 is doing the same. One of them is going to check in their changes first, and best case you'll have some nasty merge issues. Worst case is that one of them loses all their changes.
You could mitigate those issues by always making your changes in the important_data.sql file, but the ease or difficulty of that depends on your application. If you do this, you'll want to play around with the mysqldump options so you get a nice readable, git-mergeable file.
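Two mysqldump flags help with that: --skip-extended-insert writes one INSERT statement per row instead of one giant line, and --order-by-primary makes the row order deterministic, both of which keep Git diffs and merges readable:
mysqldump --skip-extended-insert --order-by-primary [options] database table1 table2 ... tableN > important_data.sql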
You can export each table as a separate SQL file; then only a table that has changed needs to be pushed again.
If you were talking about configuration, then I'd recommend SQL dumps or similar to seed the database, as per Ray Baxter's answer.
Since you've mentioned Drupal, I'm guessing the data concerns users/content. As such, you really ought to be looking at having a single database that each developer connects to remotely, i.e. one single version. This is because concurrent modifications to MySQL tables will be extremely difficult to reconcile (e.g. two new users both with user.id = 10, each making a new post with post.id = 1, post.user_id = 10, etc.).
It may make sense, of course, to back this up with an SQL dump (potentially held in version control) in case one of your developers accidentally deletes something critical.
If you just want a partial dump, phpMyAdmin will do that. Run your SELECT statement, and when it's displayed there will be an export link at the bottom of the page (the one at the top does the whole table).
You can version mysqldump files, which are simply SQL scripts, as stated in the prior answers. Based on your comments, it seems your primary interest is to give the developers a basis for a local environment.
Here is an excellent ERD for Drupal 6. I don't know what version of Drupal you are using, or if there have been changes to these core tables between v6 and v7, but you can check that using a dump, or phpMyAdmin, or whatever other tool you have available that lets you inspect the database structure: Drupal ERD
Based on the ERD, the data that would be problematic for a Drupal installation is in the users, user_roles, and authmap tables. There is a quick way to omit those, although it's important to keep in mind that content that gets added will have relationships to the users that added it, and Drupal may have problems if there aren't rows in the user table that correspond to what has been added.
So to script the mysqldump, you would simply exclude the problem tables, or at very least the user table.
mysqldump -u drupaldbuser --password=drupaluserpw --ignore-table=drupaldb.users drupaldb > drupaldb.sql
You would need to create a mock user table with a bunch of test users with known name/password combinations that you would only need to dump and version once, but ideally you want enough of these to match or exceed the number of real drupal users you'll have that will be adding content. This is just to make the permissions relationships match up.
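For example, a hypothetical seed for a Drupal 6 style users table (column list abbreviated here; Drupal 6 stored plain MD5 password hashes, so verify against your version's schema before relying on this):
INSERT INTO users (uid, name, pass, mail, status, created)
VALUES (10, 'testuser1', MD5('testpass1'), 'test1@example.com', 1, UNIX_TIMESTAMP()),
       (11, 'testuser2', MD5('testpass2'), 'test2@example.com', 1, UNIX_TIMESTAMP());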