Skip exceptions in Doctrine migrations

I know it seems strange to want to avoid exceptions in Doctrine, but I need to do it because I am working on an old project where someone in the past executed some migrations and then decided to remove them, so right now it is complicated to replicate the production environment locally without crashes. That is why I need to execute some queries (remove a foreign key and add it again) to be sure that I have the same environment locally as in production.
Is it possible to do that? Of course I tried with try/catch, but it is not working. I read the Doctrine documentation, but there is no information about this, so it seems it is not possible. Maybe there is still an approach to do it, though.

I was not able to catch the exceptions, so to solve this I decided to create two different migration files, one for development and one for production. To run each one only in the correct environment, I call skipIf() so the query is executed only when I am not in production:
$this->skipIf('prod' === getenv('SYMFONY_ENV'), "This migration is only executed in the develop and test environments");
I know it is not the best approach, but it is a good enough one for this problem.
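For illustration, the development-only migration would essentially just run the drop/re-add pair; the table, constraint, and column names below are hypothetical and have to be replaced with the real ones from the schema:
-- Hypothetical names; adjust to the actual schema before use.
ALTER TABLE orders DROP FOREIGN KEY FK_orders_customer;
ALTER TABLE orders ADD CONSTRAINT FK_orders_customer FOREIGN KEY (customer_id) REFERENCES customer (id);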

Related

Do I need to create DB migrations if I already have a designed DB in MySQL?

I designed a database in MySQL. Now I want to connect it to my Laravel application. After researching, I understood that I need to migrate the database into Laravel by creating migration files, but redoing everything I already did in MySQL is tedious. I think there must be another way to do this. While searching I found an article on Stack Overflow, but it relates to the Yii framework. I hope someone here knows the solution to my problem.
I don't see a problem here: you can just connect Laravel to your DB (by editing your .env file) without any issues. Migration files are simply an easier way to design and implement your tables (altering your tables' schema in production is also a very useful use of migrations), but since your schema already exists, no further actions are required and you are good to go!
It's pretty straightforward: update the database settings in the .env file and use DB::table('your_table_name') to run whatever query you want.
The upside of using migrations is that the exact same database can be created on different systems (e.g. co-workers' machines or acceptance/production servers).
Laravel migrations have an up and a down function, so you can roll back to a specific version if something goes wrong, which is very useful when publishing your code.
That being said, co-workers could also review the database changes and, for example, suggest using more indexes and foreign keys.
As the other comments said, using migrations is not a requirement, but it has considerable advantages over creating and updating the database manually.

Test strategy for Liquibase changes

I am part of a team that works on a SpringBoot application and we use liquibase for maintaining our database changes.
Config data that has to be added/removed/modified goes in as part of a change-set.
The problem is that we have no way of validating Liquibase changes on our end. There have been a couple of occurrences lately where an incorrect config-data modification change-set broke functionality.
Is there a way I can implement some sort of test case for the Liquibase change-sets to prevent such things from happening in the future? If so, could you please point me in the right direction, as I am completely clueless about how to implement that.
Thanks in advance.
Liquibase recently introduced a new capability that can help: quality checks.
This capability lets you scan your changelogs and changesets for patterns that are of concern to you. For example, it can detect when privileges are being granted, or when a SQL statement or changeset contains a DROP TABLE ... statement. If you know the patterns that cause problems, then this capability might be of help.
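To make that concrete, here is a minimal sketch of a config-data change-set written in Liquibase's SQL-formatted changelog syntax; the table, author, and changeset id are hypothetical. Files like this are what the quality checks scan, and the explicit rollback also gives an automated test something to exercise:
--liquibase formatted sql

--changeset config-team:add-feature-flag
-- Hypothetical config-data change: insert a single flag row.
INSERT INTO app_config (config_key, config_value) VALUES ('feature_x_enabled', 'true');
--rollback DELETE FROM app_config WHERE config_key = 'feature_x_enabled';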

How to properly update Docker AzerothCore with customizations to code (scripts), modules, and database (added quests, vendors, items)

I'm running AzerothCore-WotLK inside a Docker container.
I would like to update the server since I read there's an important security fix.
However, I have never updated the server since I first installed it last year (December 2019). Since then, I have customized the server in several ways:
I have customized a few boss scripts to work properly with two players.
I have installed a few modules, including one that also required some extra code to be compiled, and some SQL queries to be run.
I have modified the database myself, adding Quests, NPCs, Vendors, and Items.
As such, I'm extremely concerned I would end up messing everything up. I would appreciate your assistance on how to update the server to the latest version while keeping all the customizations I have made.
I'm especially concerned about the database changes, as I figure I could back up the updated boss scripts, do a git pull, and put them back before building (I should create a fork afterwards; I didn't think of it at the time)...
But in any case, I would be extremely thankful if you could guide me step by step along the way, considering I am using a Docker installation.
For anything database-related I use HeidiSQL, so I could use that for any database procedure. I'm not very proficient with SQL queries, but I should be able to import .sql files as needed.
I realize I'm asking a lot, so please don't feel pressured to answer right away. I will be most thankful if you could help me whenever you have the chance.
Thank you for your time :)
I'll try to answer all points you mentioned:
1. The boss scripts.
The worst thing that can happen is that you get merge conflicts while pulling the latest changes with git, so you would have to resolve them manually. It's not necessarily difficult, especially in your case: they are just boss scripts, so by nature they are quite self-contained, and you can be fairly sure not to break anything else when messing with them.
2. Modules
The modules should not be a problem at all. Modules exist exactly for this reason: being isolated and not causing issues in case of updating the core or similar.
My only concern here would be the module that required a core change. I don't know which module you installed; normally this shouldn't happen, as a proper AzerothCore module should not require any core changes.
However, again, the worst thing you can get is some git merge conflicts, nothing too big I hope (it depends on how big and invasive the changes required by the module were).
3. Custom database changes.
The golden rule is: always store your custom SQL queries somewhere, in a way that they can easily be re-applied. For example, always use DELETE before INSERT, prefer UPDATE when possible, etc. (see the sketch at the end of this list).
So all you need to have is a file (or a bunch of files) containing all your SQL code corresponding to the custom changes you made. If you don't have it, you can still extract it from your DB.
Then you can always re-apply them after you update your core, if you feel it's needed. It might also be the case that you don't need to re-apply them at all. Or maybe you want to start from a fresh AzerothCore world database and re-apply your changes. This really depends on the specific case, but anyway you will be fine (as long as you keep your changes in SQL files).
You can use Keira3 to edit your database, or just extract your changes in case you need to. For example, you can open an entity and copy its "full query".
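As a sketch of the golden rule above: a re-runnable custom change first deletes any previous copy of the rows and then inserts them again. The entry id, names, and column list here are hypothetical, and real AzerothCore tables may require more columns:
-- Remove any earlier copy so the script can be applied repeatedly.
DELETE FROM creature_template WHERE entry = 900000;
INSERT INTO creature_template (entry, name, subname) VALUES (900000, 'Custom Vendor', 'Reagent Supplies');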
Backup first
Before starting the upgrade procedure, create a backup of:
your DB
the source files that you have modified (e.g. bosses, etc...)
Update frequently!
However I never updated the server since I first installed it last year (December 2019).
This is not recommended at all! You are supposed to update your AzerothCore frequently (at least once a week). There are a lot of good reasons to do so, one of them being that updating is much easier if you do it often.
How to update AzerothCore when using Docker
A generic question about updating AC with Docker has already been asked here: How to update azerothcore-wotlk docker container

What's the appropriate way to test code that uses MySQL-specific queries internally

I am collecting data and store this data in a MySQL database using Java. Additionally, I use Maven for building the project, TestNG as a test framework, and Spring-Jdbc for accessing the database. I've implemented a DAO layer that encapsulates the access to the database. Besides adding data using the DAO classes I want to execute some queries which aggregate the data and store the results in some other tables (like materialized views).
Now, I would like to write some test cases which check whether the DAO classes are working as they should. For that, I thought of using an in-memory database populated with some test data. Since I am also using MySQL-specific SQL queries for aggregating data, I ran into some trouble:
First, I thought of simply using the embedded-database functionality provided by Spring-Jdbc to instantiate an embedded database, and I decided to use the H2 implementation. There I ran into trouble because of the aggregation queries, which use MySQL-specific features (e.g. time-manipulation functions like DATE()). Another disadvantage of this approach is that I would need to maintain two DDL files: the actual DDL file defining the tables in MySQL (where I define the encoding and add comments to tables and columns, both of which are MySQL-specific), and a test DDL file that defines the same tables but without comments etc., since H2 does not support comments.
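To give a sense of the kind of query at stake, an aggregation like the following relies on MySQL date functions (the table and column names are hypothetical); whether H2 accepts it depends on its compatibility mode:
-- Hypothetical daily roll-up into a summary table.
INSERT INTO daily_measurements (measure_date, total_value)
SELECT DATE(created_at), SUM(value) FROM measurements GROUP BY DATE(created_at);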
I've found a description of using MySQL as an embedded database within test cases (http://literatitech.blogspot.de/2011/04/embedded-mysql-server-for-junit-testing.html). That sounded really promising to me. Unfortunately, it didn't work: a MissingResourceException occurred: "Resource '5-0-21/Linux-amd64/mysqld' not found". It seems that the driver is not able to find the database daemon on my local machine, but I don't know what I have to look for to find a solution to that issue.
Now, I am a little bit stuck, and I am wondering whether I should have designed the architecture differently. Does anyone have tips on how I should set up an appropriate system? I have two other options in mind:
Instead of using an embedded database, I could go with a native MySQL instance and set up a database that is only used for the test cases. This option sounds slow; also, I might want to set up a CI server later on, and I thought that using an embedded database would be more appropriate since the tests run faster.
I could remove all the MySQL-specific parts from the SQL queries and use H2 as an embedded database for testing. If this option is the right choice, I would need to find another way to test the SQL queries that aggregate the data into materialized views.
Or is there a third option that I haven't thought of?
I would appreciate any hints.
Thanks,
XComp
I've created a Maven plugin exactly for this purpose: jcabi-mysql-maven-plugin. It starts a local MySQL server in the pre-integration-test phase and shuts it down in post-integration-test.
If it is not possible to get the in-memory MySQL database to work I suggest using the H2 database for the "simple" tests and a dedicated MySQL instance to test MySQL-specific queries.
Additionally, the tests for the real MySQL database can be configured as integration tests in a separate maven profile so that they are not part of the regular maven build. On the CI server you can create an additional job that runs the MySQL tests periodically, e.g. daily or every few hours. With such a setup you can keep and test your product-specific queries while your regular build will not slow down. You can also run a normal build even if the test database is not available.
There is a nice Maven plugin for integration tests called maven-failsafe-plugin. It provides pre- and post-integration-test steps that can be used to set up the test data before the tests and to clean up the database afterwards.

How do I manage database evolutions when I'm using JPA?

I learned Play by following the tutorial on their website for building a little blogging engine.
It uses JPA, and in its bootstrap it calls Fixtures.deleteModels() (or something like that).
It basically nukes all the tables every time it runs, and I lose all the data.
I've deployed a production system like this (sans the nuke statement).
Now I need to deploy a large update to the production system. Lots of classes have changed, been added, or been removed. In my local testing, without nuking the tables on every run, I ran into sync issues: when I would try to write to or read from the tables, Play would throw errors. I opened up MySQL and, sure enough, the tables had only been partially modified, and modified incorrectly in some cases. Even with the DDL mode set to "create" in my config, JPA can't seem to "figure out" how to reconcile the changes and modify my schema accordingly.
So I have to put back in the bootstrap statement that nukes all my tables.
So I started looking into database evolutions within Play and read an article on the play framework website about database evolutions. The article talked about version scripts, but it said, "If you work with JPA, Hibernate can handle database evolutions for you automatically. Evolutions are useful if you don’t use JPA".
So if JPA is supposed to be taking care of this for me, how do I deploy large updates to a large Play app? So far JPA has not been able to make the schema changes correctly, and the app throws errors. I'm not interested in losing all my data, so the dev fix (Fixtures.deleteModels()) can't really be used in prod.
Thanks in advance,
Josh
No, JPA should not take care of it for you. It's not a magic tool. If you decide to rename the table "client" to "customer", the column "street" to "line1", and to switch the values of the customer type column from 1, 2, 3 to "bronze", "silver", "gold", there is no way for JPA to read your mind and figure out all the changes to make automagically.
To migrate from one schema to another, you use the same tools as if you didn't use JPA: SQL scripts, more advanced schema and data migration tools, or even custom migration JDBC code.
Have a look at Flyway. You can trigger database migrations from your code or from Maven.
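As a rough sketch under Flyway's conventions, the rename example above could become a versioned SQL migration like the following (the file name follows Flyway's V<version>__<description>.sql pattern; the column types and lengths are assumptions):
-- V2__rename_client_to_customer.sql
RENAME TABLE client TO customer;
ALTER TABLE customer CHANGE COLUMN street line1 VARCHAR(255);
ALTER TABLE customer MODIFY COLUMN customer_type VARCHAR(10);
UPDATE customer SET customer_type = CASE customer_type WHEN '1' THEN 'bronze' WHEN '2' THEN 'silver' WHEN '3' THEN 'gold' ELSE customer_type END;
Flyway applies such a script exactly once per database and records it in its schema history table.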
There is a property called hbm2ddl.auto=update which will update your schema.
I would STRONGLY suggest not using this setting in production, as it introduces a whole new level of problems if something goes wrong.
It's perfectly fine for development though.
When a JPA container starts (say, EclipseLink or any other), it expects to find a database that matches the @Entity classes you've defined in your code. If the database has already been migrated, everything will work smoothly; otherwise, it will probably fail.
So, long story short, you need to perform database migrations (or evolutions, if you prefer) before the JPA container starts. Apparently, Play performs migrations for you before it kicks off the database manager you configured. So, in theory, regardless of the ORM you are using, Play decides when it's time for the ORM to start its work, so conceptually it should work.
For a good presentation about this subject, please have a look at the second video at: http://flywaydb.org/documentation/videos.html