One of the biggest problems I have today is that every time I make a commit to git I have to make the changes on the database by hand. I don't want that; I'd like the schema of the database to always be up to date.
I would like to have a pre-commit hook that checks the database schema and includes it as part of the commit, and also that every time I make a pull the database gets updated.
Does anyone have something like this already?
(I have a LAMP server, but I'm willing to install anything that helps me with this)
Like this?
http://www.edmondscommerce.co.uk/git/using-git-to-track-db-schema-changes-with-git-hook/
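Something along these lines might work (my own sketch, not the article's exact script; the database name, credentials and dump path are placeholders):

#!/bin/sh
# .git/hooks/pre-commit -- dump the schema only (no data) and stage it,
# so the schema file always travels with the commit
mysqldump --no-data --skip-dump-date -u dbuser -pdbpass myapp > db/schema.sql
git add db/schema.sql

For the pull side, a .git/hooks/post-merge hook could feed the same file back with mysql myapp < db/schema.sql, but keep in mind that a plain schema dump drops and recreates the tables, so that is only safe on databases whose data you can afford to rebuild.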
I'm running Azerothcore-WOLTK inside a Docker container.
I would like to update the server since I read there's an important security fix.
However I never updated the server since I first installed it last year (December 2019). Since then, I have customized the server in several ways:
I have customized a few boss scripts to work properly with two players.
I have installed a few modules, including one that also required some extra code to be compiled, and some SQL queries to be run.
I have modified the database myself, adding Quests, NPCs, Vendors and Items.
As such, I'm extremely concerned I would end up messing everything up. I would require your assistance on how to proceed to update the server to the latest version while maintaining all the customization I have performed.
I'm especially concerned about the database changes, as I figure I could back up the updated boss scripts, do a git pull and put them back before building (I should do a fork afterwards, I didn't think about it)...
But in any case I would be extremely thankful if you could guide me step by step along the way, considering I am using a Docker installation.
For anything database-related I use HeidiSQL, so I could use that for any database procedure. I'm not very proficient in SQL queries, but I should be able to import .sql files as needed.
I realize I'm asking a lot, so please don't feel pressured to answer right away. I will be most thankful if you could help me whenever you have the chance.
Thank you for your time :)
I'll try to answer all points you mentioned:
1. The boss scripts.
The worst thing that can happen is that you get merge conflicts while pulling the latest changes using git, so you would have to solve them manually. It's not necessarily difficult, especially in your case. They are just boss scripts, so by nature they are quite self-contained, and you can be sure you won't break anything else when messing with them.
2. Modules
The modules should not be a problem at all. Modules exist exactly for this reason: being isolated and not causing issues in case of updating the core or similar.
My only concern here would be the module that required a core change. I don't know which module you installed, but normally this shouldn't happen: a proper AzerothCore module should not require any core change.
However, again, the worst thing you can get is some git merge conflicts, nothing too big I hope (it depends on how big and invasive the changes required by the module were).
3. Custom database changes.
The golden rule is: always store your custom SQL queries somewhere, in a way that they can be easily re-applied. For example, always use DELETE before INSERT, prefer UPDATE when possible, etc...
So all you need to have is a file (or a bunch of files) containing all your SQL code corresponding to the custom changes you made. If you don't have it, you can still extract it from your DB.
Then you can always re-apply them after you update your core, if you feel it's needed. It might also be the case that you don't need to re-apply them at all. Or maybe you want to start from a fresh AzerothCore world database and re-apply your changes. This really depends on the specific case, but anyway you will be fine (as long as you keep your changes in SQL files).
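A tiny sketch of what that can look like in practice (the table, entry, file and database names below are made up for illustration; what matters is the pattern of keeping each change in its own file and deleting before inserting):

#!/bin/sh
# each custom change lives in its own idempotent .sql file
cat > custom-sql/my_custom_vendor.sql <<'SQL'
-- delete first so re-running the file never creates duplicates
DELETE FROM `creature_template` WHERE `entry` = 990001;
INSERT INTO `creature_template` (`entry`, `name`, `subname`)
VALUES (990001, 'My Custom Vendor', 'Two-Player Realm');
SQL

# so the whole set can be re-applied in one go after a core update
for f in custom-sql/*.sql; do
    mysql acore_world < "$f"   # credentials via ~/.my.cnf or -u/-p as you prefer
done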
You can use Keira3 to edit your database, or just extract your changes in case you need to. For example, you can open an entity and copy its "full query".
Backup first
Before starting the upgrade procedure, create a backup of:
your DB
the source files that you have modified (e.g. bosses, etc...)
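For the DB part, a minimal sketch assuming a standard docker-compose setup (the ac-database container name, the root credentials and the three database names are assumptions; adjust them to your setup):

#!/bin/sh
# dump the three AzerothCore databases from the DB container into dated files
for db in acore_auth acore_characters acore_world; do
    docker exec ac-database mysqldump -u root -ppassword "$db" > "backup_${db}_$(date +%F).sql"
done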
Update frequently!
However I never updated the server since I first installed it last year (December 2019).
This is not recommended at all! You are supposed to update your AzerothCore frequently (at least once a week). There are a lot of good reasons to do so, one of them being that it's way easier if you do it often.
How to update AzerothCore when using Docker
A generic question about updating AC with Docker has been asked already here: How to update azerothcore-wotlk docker container
The whenever gem is installed. I see all kinds of cron and whenever tutorials about scheduling tasks with minutes, hours, days, etc. I watched the RailsCast video on cron/whenever.
But I've yet to find any examples about how to write a job itself, other than rake tasks.
I want to schedule a task that checks the database for changes. The idea is that the database will have tags that tell you which particular row has changed. Whenever should poll the database periodically to check for these changes. Then it hopefully can push, or let the client know it needs to update the page dynamically using ajax.
If I were doing this manually, I'd use commands like:
rails dbconsole
select blah from blah;
Is there a way to write mysql commands in whenever? Is this the correct/best way to poll the database for changes?
I know there are ways to poll a database from mysql itself, but I've been specifically told to do it from the rails side.
I'm a newbie to all of these technologies (Rails, databases, ajax) so that's probably why the answer isn't clear to me.
On the client end, I have buttons that use jquery to add/delete/change row data, just to assure myself I know how to change things in the table once I can get stuff from the database. Those buttons will eventually be removed.
Right now, the page uses ajax to refresh the entire html table. But they would like just a row refresh/update through ajax.
Look at the RailsCast for cron/whenever again. You'll notice an example line of code like this:
runner "MyModel.some_process"
The code in the strings is evaluated and run. So whatever you want whenever to run, just write that code yourself and have a way for it to be called.
So if you create a class named DatabaseWatcher, store it in lib, and give it a class-level method named .run, you'd do the following:
runner "DatabaseWatcher.run"
And that's it. In your .run method is where you'd put your logic. As for how to actually write that code, that depends on your requirements. Are you looking for if the updated_at time is within 1 minute of now? Do you store a time when you last checked the DB, and then you can see if the updated_at time is greater than that? Do you have a table that stores every time the model is changed? That all depends on you.
I'm working on a WordPress site by doing development on my laptop and then deploying the changes by pushing with git to the server. This works great for files and I want to do the same thing with content changes to the database.
My first iteration at solving the problem used git hooks to dump the database using mysqldump before commits and then restore the dumps after checkouts. This works, but it drops and recreates the whole database each time. That's not OK, because WordPress is also automatically making changes to the database that I want to keep, like records of which products have sold, so I don't want the whole thing dropped and restored on every checkout.
I'm thinking a better solution would be to continue dumping the database during commits and then for checkouts use a new tool that reads the output of git diff HEAD^ and converts it to INSERT and DELETE SQL statements fed to mysql. That way the database would be patched incrementally with my changes while preserving changes made by others (such as WordPress). Example:
git diff:
(83,NULL,550,'TI-99/4A','',0,0,0,0,'',0,0,0),
-(85,NULL,2000,'Banana Jr. 6000','',0,0,0,0,'',0,0,0),
+(85,NULL,2000,'Banana Jr. 6000 (now with tint control!)','',0,0,0,0,'',0,0,0),
(88,NULL,150,'Symbolics 3645','',0,0,0,0,'',0,0,0),
converted to SQL:
DELETE FROM `wp_yak_product` WHERE `post_id`='85';
INSERT INTO `wp_yak_product` VALUES (85,NULL,2000,'Banana Jr. 6000 (now with tint control!)','',0,0,0,0,'',0,0,0);
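To make the idea concrete, here is a rough sketch of the kind of converter I'm imagining (it assumes the dump keeps each row of the INSERT on its own line as above, that the first column is the key, and it hard-codes the table and key column from the example just for illustration):

#!/bin/sh
git diff HEAD^ -- wp_yak_product.sql | awk '
  # a removed row becomes a DELETE keyed on the first value in the tuple
  /^-[(]/ { id = $0
            sub(/^-[(]/, "", id); sub(/,.*/, "", id)
            print "DELETE FROM `wp_yak_product` WHERE `post_id`=" id ";" }
  # an added row becomes an INSERT of the new tuple
  /^[+][(]/ { row = substr($0, 2)
              sub(/,$/, "", row)
              print "INSERT INTO `wp_yak_product` VALUES " row ";" }
'   # pipe the output into mysql once it looks right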
I've searched around and can't find anything like this. I'm considering writing it myself.
Does something like this exist? Is this a good or a bad idea?
To my knowledge that type of tool does not exist. The best option that I know of to produce similar output would be to use the "Synchronize Model" functionality in MySQL Workbench.
That said, I would recommend tracking the changes that you make in your development database in a SQL file, checked into git, which can be executed on your production server.
I thought of a possible approach:
I dump the database into a different file for each table:
wp_commentmeta.sql
wp_comments.sql
wp_links.sql
etc.
Perhaps I could separate the tables into categories of content vs. bookkeeping, like the distinction between the usr and var directories in Unix, and add the bookkeeping tables to my .gitignore so they're not clobbered when I update the content.
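A minimal sketch of that per-table dump (it assumes the WordPress database is named wordpress and that credentials come from ~/.my.cnf; both are placeholders):

#!/bin/sh
# one file per table keeps the git diffs small and per-table
for table in $(mysql -N -B -e 'SHOW TABLES' wordpress); do
    mysqldump --skip-extended-insert --skip-dump-date wordpress "$table" > "db/${table}.sql"
done
# the bookkeeping tables (orders, sessions, ...) can then be listed in .gitignore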
Currently I work in two places - at work and at home. I have a problem with keeping things up to date. With files I solved my problem (I use a private SVN and commit from phpStorm), but I still have no idea about MySQL. Currently I just export my tables when I'm going out, but that isn't a very good way (I know myself, sooner or later I'll forget to do it).
My question is: can I store MySQL data files on a per-project basis, so I could commit them into SVN along with the other files?
You could make use of a post-commit hook that dumps the database, and a pre-update hook that loads the dump.
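One way to approximate that without server-side Subversion hooks is a pair of small wrapper scripts run on each machine; a rough sketch (the database name and dump path are placeholders):

#!/bin/sh
# commit.sh -- dump the project database, then commit it together with the code
mysqldump --skip-extended-insert myproject > db/myproject.sql
svn commit

# update.sh -- pull the latest revision, then load the committed dump
svn update
mysql myproject < db/myproject.sql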
We're an open-source project and would like to collaboratively edit our website through a public GitHub repo.
Any ideas on the best way to export the MySQL data to GitHub (MySQL can hold some sensitive info), and on how we can version the changes that happen in it?
The answer is that you don't hold data in the repo.
You may want to hold your DDL, and maybe some configuration data. But that's it.
If you want to version control your data, there are other options. Git isn't one of them.
It seems dbdeploy is what you are looking for.
Use a blog engine backed by git, forget about MySQL, commit on github.com, push and pull, dominate!
Here is a list of the best:
http://jekyllrb.com/
http://nestacms.com/
http://cloudhead.io/toto
https://github.com/colszowka/serious
And just in case, a simple Git-powered wiki with a sweet API and local frontend:
https://github.com/github/gollum
Assuming that you have a small quantity of data that you wish to treat this way, you can use mysqldump to dump the tables that you wish to keep in sync, check that dump into git, and push it back into your database on checkout.
Write a shell script that does the equivalent of:
mysqldump [options] database table1 table2 ... tableN > important_data.sql
to create or update the file. Check that file into git and when your data changes in a significant way you can do:
mysql [options] database < important_data.sql
Ideally that last would be in a git post-receive hook, so you'd never forget to apply your changes.
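A hedged sketch of such a hook, assuming a bare repository on the server with a deployed working tree at /var/www/site (the path, database name and options are placeholders):

#!/bin/sh
# hooks/post-receive on the server: update the working tree, then load the dump
GIT_WORK_TREE=/var/www/site git checkout -f
mysql [options] database < /var/www/site/important_data.sql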
So that's how you could do it. I'm not sure you'd want to do it. It seems pretty brittle, especially if Team Member 1 makes some laborious changes to the tables of interest while Team Member 2 is doing the same. One of them is going to check in their changes first, and in the best case you'll have some nasty merge issues. The worst case is that one of them loses all their changes.
You could mitigate those issues by always making your changes in the important_data.sql file, but the ease or difficulty of that depends on your application. If you do this, you'll want to play around with the mysqldump options so you get a nice, readable, git-mergeable file.
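For example, one combination that tends to keep the dump diff-friendly (one row per INSERT line and no embedded dump timestamp) is:

mysqldump --skip-extended-insert --skip-dump-date [options] database table1 table2 ... tableN > important_data.sql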
You can export each table as a separate SQL file. Only when a table changes does it need to be pushed again.
If you were talking about configuration then I'd recommend SQL dumps or similar to seed the database, as per Ray Baxter's answer.
Since you've mentioned Drupal, I'm guessing the data concerns users/content. As such you really ought to be looking at having a single database that each developer connects to remotely - i.e. one single version. This is because concurrent modifications to MySQL tables will be extremely difficult to reconcile (e.g. two new users both with user.id = 10, each making a new post with post.id = 1, post.user_id = 10, etc.).
It may make sense, of course, to back this up with an sql dump (potentially held in version control) in case one of your developers accidentally deletes something critical.
If you just want a partial dump, phpMyAdmin will do that. Run your SELECT statement and when it's displayed there will be an export link at the bottom of the page (the one at the top does the whole table).
You can version mysqldump files, which are simply SQL scripts, as stated in the prior answers. Based on your comments it seems that your primary interest is to allow the developers to have a basis for a local environment.
Here is an excellent ERD for Drupal 6. I don't know what version of Drupal you are using or if there have been changes to these core tables between v6 and v7, but you can check that using a dump, or phpMyAdmin or whatever other tool you have available to you that lets you inspect the database structure. Drupal ERD
Based on the ERD, the data that would be problematic for a Drupal installation is in the users, user_roles, and authmap tables. There is a quick way to omit those, although it's important to keep in mind that content that gets added will have relationships to the users that added it, and Drupal may have problems if there aren't rows in the user table that correspond to what has been added.
So to script the mysqldump, you would simply exclude the problem tables, or at the very least the user table.
mysqldump -u drupaldbuser --password=drupaluserpw --ignore-table=drupaldb.user drupaldb > drupaldb.sql
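A hedged variant that excludes all three of the tables mentioned above (the database name and credentials are the same placeholders as in the example; double-check the table names against your actual schema before relying on them):

mysqldump -u drupaldbuser --password=drupaluserpw \
    --ignore-table=drupaldb.users \
    --ignore-table=drupaldb.user_roles \
    --ignore-table=drupaldb.authmap \
    drupaldb > drupaldb.sql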
You would need to create a mock user table with a bunch of test users with known name/password combinations that you would only need to dump and version once, but ideally you want enough of these to match or exceed the number of real Drupal users you'll have adding content. This is just to make the permissions relationships match up.