Update Table if source modified - mysql

I know the title is not self-explanatory, but I don't know how else to express this.
There is a site which keeps publishing updates to a small open-source database in many different formats, and I have a table in my DB that I want to be "linked" to that database, so that whenever the source is updated my table gets cleared and repopulated with the new data.
I am working with Rails and MySQL as the database. I don't know if this is possible, but any help would be appreciated, even a hint on what to google...
Thanks!

I assume you have access to the codebase of both apps.
Suppose you have apps A and B. You can create a rake task in A which is triggered periodically via a cron job. The rake task populates B's database through an API exposed by B (accessible with a secret API key).
Hope you got the idea.
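In case a concrete shape helps: below is a minimal sketch of that periodic job in Python rather than as a rake task (the source URL, table name, credentials and CSV format are all placeholders you would replace with your actual setup). The idea is simply: fetch the published file, clear the table, repopulate it, and let cron run the script on a schedule.

# Minimal sketch (assumptions: the source publishes a CSV, the target table's
# columns match the CSV columns, and all names/credentials below are placeholders).
import csv
import io

import pymysql    # pip install pymysql
import requests   # pip install requests

SOURCE_URL = "https://example.org/open-data/latest.csv"   # placeholder

def refresh_table():
    response = requests.get(SOURCE_URL, timeout=30)
    response.raise_for_status()
    rows = list(csv.reader(io.StringIO(response.text)))
    header, data = rows[0], rows[1:]

    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="myapp_production")
    try:
        with conn.cursor() as cur:
            # Note: TRUNCATE commits implicitly in MySQL, so the table is
            # briefly empty; use DELETE if that matters to you.
            cur.execute("TRUNCATE TABLE linked_data")   # placeholder table name
            placeholders = ", ".join(["%s"] * len(header))
            cur.executemany(
                "INSERT INTO linked_data VALUES ({})".format(placeholders), data)
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    refresh_table()   # schedule with cron, e.g. once a day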

How to reconcile WordPress data between staging and production [duplicate]

I've had a hard time trying to find good examples of how to manage database schemas and data between development, test, and production servers.
Here's our setup. Each developer has a virtual machine running our app and the MySQL database. It is their personal sandbox to do whatever they want. Currently, developers will make a change to the SQL schema and do a dump of the database to a text file that they commit into SVN.
We're wanting to deploy a continuous integration development server that will always be running the latest committed code. If we do that now, it will reload the database from SVN for each build.
We have a test (virtual) server that runs "release candidates." Deploying to the test server is currently a very manual process, and usually involves me loading the latest SQL from SVN and tweaking it. Also, the data on the test server is inconsistent. You end up with whatever test data the last developer to commit had on his sandbox server.
Where everything breaks down is the deployment to production. Since we can't overwrite the live data with test data, this involves manually re-creating all the schema changes. If there were a large number of schema changes or conversion scripts to manipulate the data, this can get really hairy.
If the problem were just the schema, it would be easier, but there is "base" data in the database that is updated during development as well, such as meta-data in security and permissions tables.
This is the biggest barrier I see in moving toward continuous integration and one-step-builds. How do you solve it?
A follow-up question: how do you track database versions so you know which scripts to run to upgrade a given database instance? Is a version table like Lance mentions below the standard procedure?
Thanks for the reference to Tarantino. I'm not in a .NET environment, but I found their DataBaseChangeMangement wiki page to be very helpful. Especially this Powerpoint Presentation (.ppt)
I'm going to write a Python script that checks the names of *.sql scripts in a given directory against a table in the database and runs the ones that aren't there, in order, based on an integer that forms the first part of the filename. If it turns out to be a pretty simple solution, as I suspect it will be, then I'll post it here.
I've got a working script for this. It handles initializing the DB if it doesn't exist and running upgrade scripts as necessary. There are also switches for wiping an existing database and importing test data from a file. It's about 200 lines, so I won't post it (though I might put it on pastebin if there's interest).
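For anyone who wants the general shape of such a script, here is a rough, untested sketch along the lines described above (the schema_version table name, the NNN_description.sql naming and the connection details are just assumptions):

# Sketch of a migration runner: apply the *.sql files not yet recorded in a
# version table, ordered by the integer at the start of the filename.
import pathlib

import pymysql  # pip install pymysql

MIGRATIONS_DIR = pathlib.Path("migrations")   # e.g. 001_create_users.sql

def run_migrations():
    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="myapp", autocommit=True)
    with conn.cursor() as cur:
        cur.execute("""CREATE TABLE IF NOT EXISTS schema_version
                       (filename VARCHAR(255) PRIMARY KEY)""")
        cur.execute("SELECT filename FROM schema_version")
        applied = {row[0] for row in cur.fetchall()}

        pending = sorted(
            (p for p in MIGRATIONS_DIR.glob("*.sql") if p.name not in applied),
            key=lambda p: int(p.name.split("_", 1)[0]))
        for script in pending:
            # Naive statement split; fine for simple DDL, not for stored procedures.
            for stmt in script.read_text().split(";"):
                if stmt.strip():
                    cur.execute(stmt)
            cur.execute("INSERT INTO schema_version (filename) VALUES (%s)",
                        (script.name,))
            print("applied", script.name)
    conn.close()

if __name__ == "__main__":
    run_migrations()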
There are a couple of good options. I wouldn't use the "restore a backup" strategy.
Script all your schema changes, and have your CI server run those scripts on the database. Have a version table to keep track of the current database version, and only execute the scripts if they are for a newer version.
Use a migration solution. These solutions vary by language, but for .NET I use Migrator.NET. This allows you to version your database and move up and down between versions. Your schema is specified in C# code.
Your developers need to write change scripts (schema and data changes) for each bug/feature they work on, not simply dump the entire database into source control. These scripts will upgrade the current production database to the new version in development.
Your build process can restore a copy of the production database into an appropriate environment and run all the scripts from source control on it, which will update the database to the current version. We do this on a daily basis to make sure all the scripts run correctly.
Have a look at how Ruby on Rails does this.
First, there are so-called migration files, which transform the database schema and data from version N to version N+1 (or, when downgrading, from version N+1 to N). The database has a table which records the current version.
Test databases are always wiped clean before unit tests and populated with fixed data from files.
The book Refactoring Databases: Evolutionary Database Design might give you some ideas on how to manage the database. A short version is also readable at http://martinfowler.com/articles/evodb.html
In one PHP+MySQL project I've had the database revision number stored in the database, and when the program connects to the database, it will first check the revision. If the program requires a different revision, it will open a page for upgrading the database. Each upgrade is specified in PHP code, which will change the database schema and migrate all existing data.
You could also look at using a tool like SQL Compare to script the difference between various versions of a database, allowing you to quickly migrate between versions.
Name your databases as follows: dev_<<db>>, tst_<<db>>, stg_<<db>>, prd_<<db>> (obviously you should never hardcode db names).
That way you would be able to deploy even the different types of db on the same physical server (I do not recommend that, but you may have to ... if resources are tight).
Ensure you are able to move data between those automatically.
Separate the db creation scripts from the data population: it should always be possible to recreate the db from scratch and populate it (from the old db version or an external data source).
Do not hardcode connection strings in the code (not even in the config files); put connection string templates in the config files and populate them dynamically. Any reconfiguration of the application layer that requires a recompile is BAD.
Do use database versioning and db object versioning; if you can afford it, use ready-made products, and if not, develop something of your own.
Track each DDL change and save it into some history table (example here).
DAILY backups! Test how fast you would be able to restore something lost from a backup (use automated restore scripts; a rough sketch follows this list).
Even if your DEV database and PROD have exactly the same creation script, you will have problems with the data, so allow developers to create an exact copy of prod and play with it (I know I will get downvotes for this one, but a change in the mindset and the business process will cost you much less when the shit hits the fan; so make the coders sign whatever legal papers it takes, but ensure this one).
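As a rough illustration of the backup point above, here is a small Python wrapper around mysqldump/mysql that takes a dump and then times a restore into a scratch database (all names and paths are placeholders; credentials are assumed to come from a .my.cnf or similar):

# Sketch: back up a database and measure how long a restore actually takes.
import subprocess
import time

DB = "prd_myapp"                      # placeholder production database
SCRATCH = "restore_test"              # throwaway database for the restore test
DUMP_FILE = "/backups/prd_myapp.sql"  # placeholder path

def backup():
    with open(DUMP_FILE, "w") as out:
        subprocess.run(["mysqldump", "--single-transaction", DB],
                       stdout=out, check=True)

def timed_restore():
    subprocess.run(
        ["mysql", "-e",
         "DROP DATABASE IF EXISTS {0}; CREATE DATABASE {0}".format(SCRATCH)],
        check=True)
    start = time.time()
    with open(DUMP_FILE) as dump:
        subprocess.run(["mysql", SCRATCH], stdin=dump, check=True)
    print("restore took {:.1f}s".format(time.time() - start))

if __name__ == "__main__":
    backup()
    timed_restore()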
This is something that I'm constantly unsatisfied with - our solution to this problem that is. For several years we maintained a separate change script for each release. This script would contain the deltas from the last production release. With each release of the application, the version number would increment, giving something like the following:
dbChanges_1.sql
dbChanges_2.sql
...
dbChanges_n.sql
This worked well enough until we started maintaining two lines of development: Trunk/Mainline for new development, and a maintenance branch for bug fixes, short term enhancements, etc. Inevitably, the need arose to make changes to the schema in the branch. At this point, we already had dbChanges_n+1.sql in the Trunk, so we ended up going with a scheme like the following:
dbChanges_n.1.sql
dbChanges_n.2.sql
...
dbChanges_n.3.sql
Again, this worked well enough, until one day we looked up and saw 42 delta scripts in the mainline and 10 in the branch. ARGH!
These days we simply maintain one delta script and let SVN version it - i.e. we overwrite the script with each release. And we shy away from making schema changes in branches.
So, I'm not satisfied with this either. I really like the concept of migrations from Rails. I've become quite fascinated with LiquiBase. It supports the concept of incremental database refactorings. It's worth a look and I'll be looking at it in detail soon. Anybody have experience with it? I'd be very curious to hear about your results.
We have a very similar setup to the OP.
Developers develop in VM's with private DB's.
[Developers will soon be committing into private branches]
Testing is run on different machines (actually in VMs hosted on a server)
[Will soon be run by Hudson CI server]
Test by loading the reference dump into the db.
Apply the developers' schema patches,
then apply the developers' data patches.
Then run unit and system tests.
Production is deployed to customers as installers.
What we do:
We take a schema dump of our sandbox DB.
Then an SQL data dump.
We diff those against the previous baseline (a rough sketch of this step is below).
That pair of deltas upgrades n-1 to n.
We configure the dumps and deltas.
So to install version N clean, we run the dump into an empty db.
To patch, apply the intervening patches.
(Juha mentioned Rails' idea of having a table recording the current DB version; it is a good one and should make installing updates less fraught.)
Deltas and dumps have to be reviewed before beta test.
I can't see any way around this as I've seen developers insert test accounts into the DB for themselves.
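For the dump-and-diff step mentioned above, a rough Python sketch might look like the following (database name and directories are placeholders; the diff output is only for review, and turning it into a runnable delta script is still the manual part of the process):

# Sketch: dump schema and data separately, then diff against the previous
# baseline so the deltas for release n can be reviewed.
import difflib
import pathlib
import subprocess

DB = "sandbox_db"                     # placeholder
BASELINE = pathlib.Path("baseline")   # dumps kept from release n-1
CURRENT = pathlib.Path("current")     # dumps taken for release n

def dump(target_dir):
    target_dir.mkdir(exist_ok=True)
    schema = subprocess.run(["mysqldump", "--no-data", "--skip-comments", DB],
                            capture_output=True, text=True, check=True).stdout
    data = subprocess.run(["mysqldump", "--no-create-info", "--skip-comments", DB],
                          capture_output=True, text=True, check=True).stdout
    (target_dir / "schema.sql").write_text(schema)
    (target_dir / "data.sql").write_text(data)

def delta(name):
    old = (BASELINE / name).read_text().splitlines()
    new = (CURRENT / name).read_text().splitlines()
    return "\n".join(difflib.unified_diff(old, new, "n-1/" + name, "n/" + name))

if __name__ == "__main__":
    dump(CURRENT)
    for f in ("schema.sql", "data.sql"):
        print(delta(f) or "no changes in " + f)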
I'm afraid I'm in agreement with other posters. Developers need to script their changes.
In many cases a simple ALTER TABLE won't work; you need to modify existing data too. Developers need to think about what migrations are required and make sure they're scripted correctly (of course you need to test this carefully at some point in the release cycle).
Moreover, if you have any sense, you'll get your developers to script rollbacks for their changes as well so they can be reverted if need be. This should be tested as well, to ensure that their rollback not only executes without error, but leaves the DB in the same state as it was in previously (this is not always possible or desirable, but is a good rule most of the time).
How you hook that into a CI server, I don't know. Perhaps your CI server needs to keep a snapshot of a known build, which it reverts to each night before applying all the changes made since then. That's probably best; otherwise a broken migration script will break not just that night's build, but all subsequent ones.
Check out dbdeploy; there are Java and .NET tools already available. You could follow their standards for the SQL file layouts and the schema version table and write your Python version.
We are using the command-line mysql-diff: it outputs the difference between two database schemas (from a live DB or a script) as an ALTER script. mysql-diff is executed at application start, and if the schema has changed, it reports this to the developer. So developers do not need to write ALTERs manually; schema updates happen semi-automatically.
If you are in the .NET environment then the solution is Tarantino (archived). It handles all of this (including which sql scripts to install) in a NANT build.
I've written a tool which (by hooking into Open DBDiff) compares database schemas, and will suggest migration scripts to you. If you make a change that deletes or modifies data, it will throw an error, but provide a suggestion for the script (e.g. when a column is missing in the new schema, it will check if the column has been renamed and create an xx - generated script.sql.suggestion containing a rename statement).
http://code.google.com/p/migrationscriptgenerator/ SQL Server only I'm afraid :( It's also pretty alpha, but it is VERY low friction (particularly if you combine it with Tarantino or http://code.google.com/p/simplescriptrunner/)
The way I use it is to have a SQL scripts project in your .sln. You also have a db_next database locally which you make your changes to (using Management Studio or NHibernate Schema Export or LinqToSql CreateDatabase or something). Then you execute migrationscriptgenerator with the _dev and _next DBs, which creates the SQL update scripts for migrating across.
For Oracle databases we use the oracle-ddl2svn tools.
This tool automates the following process:
for every DB schema, get the schema DDLs
put them under version control
changes between instances are resolved manually

Migrate Table Data Only from MSSQL to MySQL

Been tasked with moving a code first database from MSSQL to MySQL. After a few hours of kung fu, I was able to get the asp.net core project to properly deploy all migrations to mysql. Now I need to migrate the data inside of the existing mssql tables. I saw posts mentioning MySQL Migration Toolkit but that appears to be old. Also attempted to do so with MySQL Workbench and DBLoad's Data Loader but haven't had any luck.
Table structure is pretty simple with incremental integer keys + the usual crap with the asp.net core identity framework (GUIDs). Just need to keep that consistent during the migration. What is the best way to migrate the data now that the table structure is set up in MySQL? Any recommendations would be greatly appreciated!
Update: Some more details...
I attempted a direct migration of the database from MSSQL to MySQL using MySQL WorkBench and DBLoader. But failed on the ASP.net Identity tables big time plus other issues. .net core api took a huge dump in multiple places so that idea is out.
From that point, I migrated the api controller over to mysql and then had to fix a myriad of issues related to mssql fks being too long and a few other issues.
So at this point, the controller works on mysql. I just need to dump all of the data into MySQL and keep the FKs consistent.
I have had a few thoughts, such as CSV export > import and/or trying a few other things. Any recommendations?
You can use the Data Export wizard built into SQL Server Management Studio (Tasks -> Export Data),
or use an SSIS package to migrate the data.
Tried MySQL Workbench, DBLoad from DBLoad.com etc to migrate the data directly and they all failed.
So ended up finding a "solution"...
First off, I modified the ASP.net Core project with:
// services.AddDbContext<ApplicationDbContext>(options =>
//     options.UseSqlServer(
//         Configuration.GetConnectionString("DefaultConnection")));
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseMySql(
        Configuration.GetConnectionString("DefaultConnection")));
Then went to Package Manager Console and ran: update-database
This created all of the tables in MySQL.
Then opened up Microsoft SQL Server Management Studio. Right-clicked on the database, then Tasks > Generate Scripts. Saved everything to one file with the Advanced option Schema and Data selected.
Then opened the db script in Notepad++ and applied the following edits using Replace with Extended Search Mode enabled:
GO -> blank
[dbo]. -> blank
[ -> blank
] -> blank
)\r\n -> );\r\n
\r\n' -> '
DateTime2 -> DATETIME
After edits were made in Notepad++, I removed all of the SET, ALTER and CREATE related stuff from the text file and then copied all of the insert lines into MySQL Workbench in order to ensure foreign keys were already populated before that table's data was inserted. Thank goodness there were no Stored Procedures to deal with! What a pain in the tucus!
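If you'd rather script those edits than repeat them in Notepad++ next time, a rough Python equivalent of the replace list above could be (file names are placeholders; note the SSMS-generated file may be UTF-16, so adjust the encoding if needed):

# Sketch: apply the same replacements as the Notepad++ list above.
import re

SRC, DST = "script.sql", "script.mysql.sql"   # placeholder file names

with open(SRC, encoding="utf-8-sig", newline="") as f:   # newline="" keeps \r\n
    text = f.read()

text = re.sub(r"^\s*GO\s*$", "", text, flags=re.MULTILINE)  # drop GO batch separators
text = text.replace("[dbo].", "")
text = text.replace("[", "").replace("]", "")
text = text.replace(")\r\n", ");\r\n")   # terminate statements with semicolons
text = text.replace("\r\n'", "'")        # rejoin literals split across lines
text = text.replace("DateTime2", "DATETIME")

with open(DST, "w", newline="") as f:
    f.write(text)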
Also, on a side note, the app is hosted on Azure. Spent a couple of hours fighting the API not connecting to the database. This was not apparent at first, as the app was misleadingly throwing a 404 error to Postman. When I first attempted to wire up the API controllers to the MySQL DB, I entered the database connection string into Azure's App Service Configuration. It didn't work at all even though running the app locally worked fine. Ended up finding another post on this here site mentioning getting rid of the database connection string in the App Service > Configuration window. Worked like a champ after that. All of the data with its auto-incremented keys linked up without issue.
I am very pleased with the results and hope I never have to go through this process again! It is always a nice feeling to know an app now runs on a completely open source infrastructure. Hope this helps someone. Good luck.

merge design of mysql between localhost and server?

I'm kinda new to this kind of problem. I'm developing a web app and changing the DB design, trying to improve it and add new tables.
Well, until a few days ago we had not published the app,
so what I would do was dump all the tables on the server and import my local version, but now we've passed version 1 and users are starting to use it.
So I can't just dump over the server, but I still need to update the design of the server DB when I want to publish a new version. What are the best practices here?
I'd like to know how I can manage differences between local and server in MySQL.
I need to preserve the data on the server and just change the design; the data in the local DB is only for testing.
Before this, all my other apps were small and I would change a single table or column, but I can't keep track of all changes now, since I might revert many of them later, and managing all team members on this is impossible.
Assuming you are not using a framework that provides a migration tool for the database, you need to keep track of the changes manually.
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, he creates a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE etc. So basically the file contains all the statements used to update the dev database.
Name the files so that it's easy to manage, and that statements for the same feature are grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files that have been created since your last deployment, and execute them in the order in which they were created.
Should you need to rollback a file, check the queries it contains, and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is "non-trivial", as many changes cannot be rolled back fully (e.g. you can recreate a dropped column, but you will have lost the data inside).
Many web frameworks (such as Ruby on Rails) have tools that will do exactly that process for you. They usually work together with the ORM provided by the framework. Keeping track of the changes manually in SQL works just as well.

Talend Data warehousing tool

Question: I have two databases: one is the client's database (the live database) and the other is mine. I am using MySQL. I should not access the client's database directly, so I created my own database. Using the 'Talend' data warehousing tool I created a job for each table, and by executing all the jobs I can get all of the updated data from the client's (live) database into my database. I currently need to execute these jobs manually to get updated data into my database. My question is: is there any process which will automatically remind me when the client inserts or updates data in their database, so that I can execute those jobs manually to get the updated data into my database? Or, if the client updates any of their tables, can the associated job execute/run automatically? Please help me with this.
You would need to set up a database trigger that somehow notifies the Talend job and runs it. To do this you'd typically call the job as a web service using a stored procedure or user defined function. This link shows a typical way that a web service may be called on an update trigger for example.
If your source data tables are large, then rather than extracting all of the data from the table and then (I guess) dropping your table and recreating it, you could use a tMysqlCDC component to pick up only the changes. The built-in tutorial for the component looks like it pretty much covers a useful example of this in practice. If you are seeing regular changes in the source database, this could make your job much more performant.
If you have absolutely no access to your client's database then you could alternatively just run the job with some scheduler. The Enterprise versions of Talend come with the Talend Administration Console, which allows you to set CRON triggers for a job and could easily be set to run every minute or at any other interval (not seconds). Alternatively you could use your operating system's scheduler to run the job at your desired intervals.
If you can't modify your client's database (i.e. add triggers), and there is no other way to identify changed records (i.e. some kind of audit table), then you're out of luck.

Cleaning a Production Database for use in Testing

I'm building a local VM for doing web dev rather than using our on-site development environment. I need a database locally, but I don't want to just pull down a production db and use that, as it has information that, while not protected by HIPAA or anything, should not be available in the case of laptop theft. Are there any apps or recommended practices to sanitize this data so that I am able to pull down a db, clean it, and install it in my VM?
Clarification: What I'm really looking for is an app that would allow me to mark the specific columns as sensitive and whack those ones whenever I imported a new copy of the DB.
Sounds like what you need is a data generator, one that will populate your database with bogus data. Redgate has a good one, but I don't know if it will work with mysql. Maybe this will help you out?
TRUNCATE table;
or
DELETE FROM table WHERE true;
on any table that you don't want to keep the data of, and then either set dummy values for any sensitive user data, or delete all user data and just turn a few accounts into local testing accounts (user 'testadmin', password 'password', etc).
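For the clarification in the question (mark specific columns as sensitive and whack them on every import), a small hand-rolled script is usually enough. Here is a rough Python sketch; the table/column map and the replacement expressions are obviously specific to your own schema:

# Sketch: overwrite sensitive columns with dummy values after importing a
# fresh copy of the production dump into the local VM's database.
import pymysql  # pip install pymysql

# table -> {column: SQL expression producing a harmless replacement}
# (assumes, for example, that the users table has an id column to build fake emails from)
SENSITIVE = {
    "users": {"email": "CONCAT('user', id, '@example.test')",
              "phone": "NULL",
              "password_hash": "'dev-password-hash'"},
    "invoices": {"billing_address": "'REDACTED'"},
}

def sanitize():
    conn = pymysql.connect(host="localhost", user="dev", password="dev",
                           database="myapp_dev", autocommit=True)
    with conn.cursor() as cur:
        for table, columns in SENSITIVE.items():
            assignments = ", ".join("{} = {}".format(col, expr)
                                    for col, expr in columns.items())
            cur.execute("UPDATE {} SET {}".format(table, assignments))
            print("sanitized {} rows in {}".format(cur.rowcount, table))
    conn.close()

if __name__ == "__main__":
    sanitize()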
The more interesting question that you should be asking yourself is: Why does my database not already have skeleton sql migrations that I can run to create a clean database? What happens when you need to create a separate production instance on another server?