Keep MySQL files as 'per-project'? - mysql

Currently I work in two places - at work and at home - and I have a problem keeping everything in sync. For files I've solved it (I use a private SVN repository and commit from PhpStorm), but I still have no idea what to do about MySQL. Right now I just export my tables before leaving, but that isn't a very good approach (I know myself, sooner or later I'll forget to do it).
My question is: can I store MySQL data files on a per-project basis, so I could commit them into SVN along with the other files?

You could make use of a post-commit hook that dumps the database, and a hook before update that imports the dump.
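Since Subversion has no client-side hooks, one way to approximate this is a pair of small wrapper scripts around commit and update. A minimal sketch, assuming the dump lives at db/myproject.sql and is already under version control (database name and credentials are placeholders):
#!/bin/sh
# commit-with-db.sh - hypothetical wrapper: dump the database, then commit it along with the code
mysqldump -u devuser --password=devpw myproject > db/myproject.sql
svn commit "$@"
#!/bin/sh
# update-with-db.sh - hypothetical wrapper: update the working copy, then load the committed dump
svn update
mysql -u devuser --password=devpw myproject < db/myproject.sql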

Related

How to properly Update Docker Azerothcore with customizations to both code (scripts), modules and database (added quests, vendors, items)

I'm running AzerothCore-WotLK inside a Docker container.
I would like to update the server since I read there's an important security fix.
However I never updated the server since I first installed it last year (December 2019). Since then, I have customized the server in several ways:
I have customized a few boss scripts to work properly with two players.
I have installed a few modules, including one that also required some extra code to be compiled, and some SQL queries to be run.
I have modified the database myself, adding Quests, NPCs, Vendors and Items
As such, I'm extremely concerned I would end up messing everything up. I would appreciate your assistance on how to proceed to update the server to the latest version while keeping all the customizations I have made.
I'm especially concerned about the database changes, as I figure I could back up the updated boss scripts, do a git pull and put them back before building (I should have made a fork, I just didn't think about it)...
But in any case I would be extremely thankful if you could guide me step by step along the way, considering I am using a Docker installation.
For anything database-related I use HeidiSQL, so I could use that for any database procedure. I'm not very proficient with SQL queries, but I should be able to import .sql files as needed.
I realize I'm asking a lot, so please don't feel pressured to answer right away. I will be most thankful if you could help me whenever you have the chance.
Thank you for your time :)
I'll try to answer all points you mentioned:
1. The boss scripts.
The worst thing that can happen is that you get merge conflicts while pulling the latest changes using git. So you would have to manually solve them. It's not necessarily difficult, especially in your case. It's just boss scripts, so by nature, they are quite self-contained and you are sure to not break anything else when messing with them.
2. Modules
The modules should not be a problem at all. Modules exist exactly for this reason: being isolated and not causing issues in case of updating the core or similar.
My only concern here would be the module that required a core change. I don't know which module you installed, but normally this shouldn't happen: a proper AzerothCore module should not require any core change.
However, again, the worst thing you can get is some git merge conflicts, nothing too big I hope (depending on how big and invasive the changes required by the module were).
3. Custom database changes.
The golden rule is: always store your custom SQL queries somewhere, in a way that they can be easily re-applied. For example, always use DELETE before INSERT, prefer UPDATE when possible, etc...
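For example, a re-runnable change file could look like this (the entry ID and the column list are made up for illustration, trimmed to a few columns):
-- custom_vendor.sql - hypothetical custom NPC, written so the file can be re-applied safely
DELETE FROM `creature_template` WHERE `entry`=900000;
INSERT INTO `creature_template` (`entry`, `name`, `subname`) VALUES (900000, 'Custom Vendor', 'Example');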
So all you need to have is a file (or a bunch of files) containing all your SQL code corresponding to the custom changes you made. If you don't have it, you can still extract it from your DB.
Then you can always re-apply them after you update your core, if you feel it's needed. It might also be the case that you don't need to re-apply them at all. Or maybe you want to start from a fresh AzerothCore world database and re-apply your changes. This really depends on the specific case, but anyway you will be fine (as long as you keep your changes in SQL files).
You can use Keira3 to edit your database, or just extract your changes in case you need to. For example, you can open an entity and copy its "full query".
Backup first
Before starting the upgrade procedure, create a backup of:
your DB (a minimal dump sketch follows this list)
the source files that you have modified (e.g. bosses, etc...)
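For example, with the standard azerothcore-wotlk Docker setup something along these lines should work (the container name, credentials and database names are assumptions - adjust them to your docker-compose file):
docker exec ac-database mysqldump -uroot -ppassword --databases acore_auth acore_characters acore_world > acore_backup_$(date +%F).sql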
Update frequently!
However I never updated the server since I first installed it last year (December 2019).
This is not recommended at all! You are supposed to update your AzerothCore frequently (at least once a week). There are a lot of good reasons to do so, one of them being that updating is much easier when you do it often.
How to update AzerothCore when using Docker
A generic question about updating AC with Docker has been asked already here: How to update azerothcore-wotlk docker container

Using mercurial, I added a new file and wrote code in it, then deleted that file. Can I retrieve it?

Pretty much the title. I've looked at a lot of similar questions asked here, and I can't seem to find something that applies.
Started by syncing with HEAD. Created a few new files. Filled in those files, they were being tracked at this point. I then not only deleted the files, but also removed them from being tracked (because of stupid UI). According to my understanding, those files are gone for good, but I thought I'd check with people who are smarter than me: Is it possible to retrieve them?
Mercurial does not store uncommitted changes, so if you did not commit the files then they are lost.
If you did commit them, then hg update -C will restore them - and every other file - to the latest commit of your working directory (so make sure there are no other uncommitted changes you want to keep).
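In practice, something like this at the repository root (review the first command's output before running the second, since -C discards any uncommitted changes):
hg status -A     # check which files are tracked, removed or missing
hg update -C .   # reset the working directory to its parent commit, restoring the committed files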

Is there a tool that converts git diff output to SQL INSERT and DELETE statements?

I'm working on a WordPress site by doing development on my laptop and then deploying the changes by pushing with git to the server. This works great for files and I want to do the same thing with content changes to the database.
My first iteration at solving the problem used git hooks to dump the database using mysqldump before commits and then restoring the dumps after checkouts. This works but drops and recreates the whole database each time. This is not OK because WordPress is also making changes automatically to the database that I want to keep, like records of which products are sold, so I don't want the whole thing dropped and restored every checkout.
I'm thinking a better solution would be to continue dumping the database during commits and then for checkouts use a new tool that reads the output of git diff HEAD^ and converts it to INSERT and DELETE SQL statements fed to mysql. That way the database would be patched incrementally with my changes while preserving changes made by others (such as WordPress). Example:
git diff:
(83,NULL,550,'TI-99/4A','',0,0,0,0,'',0,0,0),
-(85,NULL,2000,'Banana Jr. 6000','',0,0,0,0,'',0,0,0),
+(85,NULL,2000,'Banana Jr. 6000 (now with tint control!)','',0,0,0,0,'',0,0,0),
(88,NULL,150,'Symbolics 3645','',0,0,0,0,'',0,0,0),
converted to SQL:
DELETE FROM `wp_yak_product` WHERE `post_id`='85';
INSERT INTO `wp_yak_product` VALUES (85,NULL,2000,'Banana Jr. 6000 (now with tint control!)','',0,0,0,0,'',0,0,0);
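Roughly, I imagine the conversion could be scripted along these lines (a rough sketch only: it assumes one row per line in the dump, that the key is the first column, and it hard-codes the table name):
git diff HEAD^ -- wp_yak_product.sql | awk '
/^-\(/ { id = $0; sub(/^-\(/, "", id); sub(/,.*/, "", id)
         printf "DELETE FROM `wp_yak_product` WHERE `post_id`=%s;\n", id }
/^\+\(/ { row = substr($0, 2); sub(/,$/, "", row)
          printf "INSERT INTO `wp_yak_product` VALUES %s;\n", row }
' | mysql [options] database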
I've searched around and can't find anything like this. I'm considering writing it myself.
Does something like this exist? Is this a good or a bad idea?
To my knowledge that type of tool does not exist. The best option that I know of to produce similar output would be to use the "Synchronize Model" functionality in MySQL Workbench.
That said, I would recommend tracking the changes that you make in your development database in a SQL file, checked into git, which can be executed on your production server.
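For example, each change could live in its own small file under version control and be run on the server at deploy time (the file name is made up; the statements reuse the row from your diff):
-- 20130510_tint_control.sql, applied with: mysql [options] database < 20130510_tint_control.sql
DELETE FROM `wp_yak_product` WHERE `post_id`=85;
INSERT INTO `wp_yak_product` VALUES (85,NULL,2000,'Banana Jr. 6000 (now with tint control!)','',0,0,0,0,'',0,0,0);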
I thought of a possible approach:
I dump the database into a different file for each table:
wp_commentmeta.sql
wp_comments.sql
wp_links.sql
etc.
Perhaps I could separate the tables into categories of content vs. bookkeeping, like the distinction between the usr and var directories in Unix, and add the bookkeeping tables to my .gitignore so they're not clobbered when I update the content.
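A sketch of how the per-table dump could look (credentials, database name and the skip list are placeholders; the skipped tables are the "bookkeeping" ones that would go into .gitignore):
SKIP='wp_options|wp_yak_orders'
for t in $(mysql -N -u wpuser --password=wppw -e 'SHOW TABLES' wordpress | grep -Ev "^($SKIP)$"); do
  mysqldump -u wpuser --password=wppw --skip-extended-insert wordpress "$t" > "$t.sql"
done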

How to selectively export mysql data for a github repo

We're an open-source project and would like to collaboratively edit our website through a public GitHub repo.
Any ideas on the best way to export the MySQL data to GitHub, given that MySQL can hold some sensitive info, and how we can version the changes that happen to it?
The answer is: you don't hold data in the repo.
You may want to hold your DDL, and maybe some configuration data. But that's it.
If you want to version-control your data, there are other options. Git isn't one of them.
It seems dbdeploy is what you are looking for.
Use a blog engine "backed by git", forget about MySQL, commit on github.com, push and pull, dominate!
Here is a list of the best:
http://jekyllrb.com/
http://nestacms.com/
http://cloudhead.io/toto
https://github.com/colszowka/serious
And just in case... a simple, Git-powered wiki with a sweet API and local frontend:
https://github.com/github/gollum
Assuming that you have a small quantity of data that you wish to treat this way, you can use mysqldump to dump the tables that you wish to keep in sync, check that dump into git, and push it back into your database on checkout.
Write a shell script that does the equivalent of:
mysqldump [options] database table1 table2 ... tableN > important_data.sql
to create or update the file. Check that file into git and when your data changes in a significant way you can do:
mysql [options] database < important_data.sql
Ideally that last step would be in a git post-receive hook, so you'd never forget to apply your changes.
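A minimal sketch of such a hook, assuming a bare repository on the server that deploys to a working tree (the paths and credentials are placeholders):
#!/bin/sh
# hooks/post-receive - hypothetical: check out the pushed code, then load the committed dump
GIT_WORK_TREE=/var/www/site git checkout -f
mysql [options] database < /var/www/site/important_data.sql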
So that's how you could do it. I'm not sure you'd want to do it. It seems pretty brittle, especially if Team Member 1 makes some laborious changes to the tables of interest while Team Member 2 is doing the same. One of them is going to check in their changes first, and in the best case you'll have some nasty merge issues. The worst case is that one of them loses all their changes.
You could mitigate those issues by always making your changes in the important_data.sql file, but the ease or difficulty of that depends on your application. If you do this, you'll want to play around with the mysqldump options so you get a nice readable, git-mergeable file.
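For example, options along these lines tend to produce dumps that diff and merge more cleanly (check which of them your mysqldump version supports):
mysqldump --skip-extended-insert --skip-dump-date --order-by-primary [options] database table1 table2 ... tableN > important_data.sql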
You can export each table as a separate SQL file; then only the tables that have changed need to be committed and pushed again.
If you were talking about configuration then I'd recommend SQL dumps or similar to seed the database, as per Ray Baxter's answer.
Since you've mentioned Drupal, I'm guessing the data concerns users/content. As such you really ought to be looking at having a single database that each developer connects to remotely - i.e. one single version. This is because concurrent modifications to mysql tables will be extremely difficult to reconcile (e.g. two new users both with user.id = 10, each making a new post with post.id = 1, post.user_id = 10, etc.).
It may make sense, of course, to back this up with an sql dump (potentially held in version control) in case one of your developers accidentally deletes something critical.
If you just want a partial dump, phpMyAdmin will do that. Run your SELECT statement and when it's displayed there will be an export link at the bottom of the page (the one at the top does the whole table).
You can version mysqldump files, which are simply SQL scripts, as stated in the prior answers. Based on your comments it seems that your primary interest is to allow the developers to have a basis for a local environment.
Here is an excellent ERD for Drupal 6. I don't know what version of Drupal you are using or if there have been changes to these core tables between v6 and v7, but you can check that using a dump, or phpMyAdmin or whatever other tool you have available to you that lets you inspect the database structure. Drupal ERD
Based on the ERD, the data that would be problematic for a Drupal installation is in the users, user_roles, and authmap tables. There is a quick way to omit those, although it's important to keep in mind that content that gets added will have relationships to the users that added it, and Drupal may have problems if there aren't rows in the user table that correspond to what has been added.
So to script the mysqldump, you would simply exclude the problem tables, or at very least the user table.
mysqldump -u drupaldbuser --password=drupaluserpw --ignore-table=drupaldb.users drupaldb > drupaldb.sql
You would need to create a mock user table with a bunch of test users with known name/password combinations that you would only need to dump and version once, but ideally you want enough of these to match or exceed the number of real drupal users you'll have that will be adding content. This is just to make the permissions relationships match up.
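To exclude all three of the problem tables mentioned above, the option can simply be repeated (credentials and database name are the ones from the example; double-check the exact table names against your schema):
mysqldump -u drupaldbuser --password=drupaluserpw --ignore-table=drupaldb.users --ignore-table=drupaldb.user_roles --ignore-table=drupaldb.authmap drupaldb > drupaldb.sql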

I need a Git hook to sync MySql schema

One of the biggest problems I have today is that every time I make a commit to git I have to apply the changes to the database by hand. What I want is for the database schema to always be up to date.
I would like to have a pre-commit hook that captures the database schema and includes it as part of the commit, and also to have the database get updated every time I make a pull.
Anyone has something like this already?
(I have a LAMP server, but I'm willing to install anything that helps me with this)
Like this?
http://www.edmondscommerce.co.uk/git/using-git-to-track-db-schema-changes-with-git-hook/
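That article tracks schema changes with a client-side git hook. A minimal sketch of the idea (database name and credentials are placeholders; note that the schema-only dump contains DROP TABLE statements, so applying it blindly on the pull side will wipe table data - a proper migration tool is safer there):
#!/bin/sh
# .git/hooks/pre-commit - hypothetical: dump the schema (no data) and stage it so it rides along with the commit
mysqldump -u devuser --password=devpw --no-data --skip-dump-date myapp > schema.sql
git add schema.sql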