Play Framework: update / delete MySQL tables?

Right now Play! automatically adds new tables to my MySQL database if I manually delete them. I remember reading a while back that it was possible to make Play update the tables (without me needing to delete them first) when the models are changed.
I wasn't able to find anything with Google; does anyone know how I can activate this? My biggest problems are the constraints that JPA is adding - they make it quite difficult to delete tables.

The way Hibernate/Play manages the database on model changes is via the jpa.ddl property in your application.conf. If you read the file, it states:
# Specify the ddl generation pattern to use. Set to none to disable it
# (default to update in DEV mode, and none in PROD mode):
# jpa.ddl=update
The options that I know about are:
jpa.ddl=update - This simply updates the tables when a model changes
jpa.ddl=create-drop - This deletes the tables and recreates them on model changes
jpa.ddl=validate - This just checks the schema, but does not make any changes
jpa.ddl=none - This does nothing
You can read more about this on the Hibernate site under the hibernate.hbm2ddl.auto property.
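For example, Play 1.x lets you scope the setting per framework ID in application.conf, so development can keep automatic updates while production disables DDL generation (a minimal sketch):
# development: let Hibernate update tables when models change
jpa.ddl=update
# production: never touch the schema automatically
%prod.jpa.ddl=none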

Related

How to properly wipe a database, and re-import?

I am unsure about the best way to do this. As I'm getting ready to put a new database into production, I need to import data from the old database that accumulated while I was working on the new one. The new database also contains a lot of fake data that was used for testing, which I have to get rid of, so a fresh, complete re-import seems reasonable.
Now, truncating all the tables in the new database doesn't go through, because the foreign keys prevent it. Simply deleting the data instead would solve that problem, but it leaves the AUTO_INCREMENT counters at their old values, so it's not a "proper" wipe. There could be more leftover properties like that one, but this is the only one I'm aware of.
So my question is: how much of a problem could these "leftover" pieces of data pose to performance, if I were to go with the simple DELETE solution?
And also: is there a way that would be more thorough in cleaning it out, while still keeping the defined constraints?
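To illustrate the problem with two hypothetical tables, where invoices.customer_id references customers.id:
TRUNCATE TABLE customers;  -- fails: the table is referenced by a foreign key
DELETE FROM customers;     -- works, but the counter survives...
INSERT INTO customers (name) VALUES ('test');
SELECT id FROM customers;  -- ...so the new id continues from the old AUTO_INCREMENT value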
First I would use some GUI tool to create the dump for the old DB (like MySQL Workbench, or whatever you prefer). Check the option "Export to self-contained file", and check "Dump stored procedures and functions", "Dump events" and "Dump triggers".
Then get create scripts for all tables not included in the old DB.
You can do this via the "reverse engineer" option.
If you have trouble with this part, this post will help:
How to get a table creation script in MySQL Workbench?
When you have the old DB dump and the create scripts for the new SQL tables, combine them into a single SQL file.
On the first row add:
SET FOREIGN_KEY_CHECKS = 0;
On the last row add:
SET FOREIGN_KEY_CHECKS = 1;
Run the script. As a result you should have all the tables (the new ones without data and the old ones with data), with all relations set properly. Hope it works for you.
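A minimal sketch of what the combined file could look like (table names are illustrative):
SET FOREIGN_KEY_CHECKS = 0;

-- 1) old-database dump (schema + data, procedures, events, triggers)
DROP TABLE IF EXISTS customers;
CREATE TABLE customers (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
INSERT INTO customers (id, name) VALUES (1, 'Acme');

-- 2) create scripts for tables that exist only in the new database
CREATE TABLE invoices (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT NOT NULL,
    CONSTRAINT fk_invoices_customer
        FOREIGN KEY (customer_id) REFERENCES customers (id)
);

SET FOREIGN_KEY_CHECKS = 1;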

If I update a SQL table schema, do I have to update all users' DBs' linked tables?

I updated the schema of a live table in MySQL for use in my multi-user database. Each user has their own DB and links to the production tables through ODBC.
I have been receiving a write error while trying to test my schema updates, and I cannot find the core reason. My hypothesis is that because the other users are in the production table but have not been relinked to pick up the new schema, a conflicting write error occurs on my relinked table.
I added a TINYINT column with no NULLs and a default value of 0.
I double-checked all data types for incompatibilities, and I have tested the "non-relinked" tables in an older version of the DB and confirmed they work as intended with no errors.
I expect/want to be able to edit records without a write error, but I am hesitant to move the other users to the new table while it is producing write errors.
After changing the schema of a linked table, you are required to refresh the link in every Access database connected to it.
You can do this from the ribbon via External Data -> Linked Table Manager.
Unfortunately, every user that has a database copy needs to do this manually, unless you automate the task on startup through VBA.
You have two separate issues. To "see" new columns, then yes, you must re-link the tables.
(So the above is a separate question and a separate issue.) As a general rule, you can add new columns to the database, even while it is in use. However, the client-side linked tables will not see the new columns until you re-link. This approach (adding new columns, but not yet re-linking from Access) is certainly OK and fine - the only downside is that end users can't see or use the new columns until you re-link. From a developer's point of view this is good, since your users will not see or find the new columns until you roll out a new front end to each workstation.
OK, now problem and issue number two.
Adding a new column, re-linking, and THEN having some issue is really a separate problem. In most cases, if you are attempting to use a tiny int as a Boolean (and I think that is your case), then you need to ensure several things (see the sketch after this list):
Do not allow nulls (you seem to have this OK).
Make sure you set a default of 0 (server side) for this column. (You might have disallowed nulls, but without a default Access will likely still complain. The default is also important at creation time, since the new column needs to be "filled" with zeros.)
Make sure the table has a PK defined.
Consider adding a row-version column (I think MySQL has these; not sure, but they can help immensely).
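A sketch of the server-side DDL for the default and the row version (table and column names are hypothetical; in MySQL a TIMESTAMP column with ON UPDATE CURRENT_TIMESTAMP can play the row-version role):
-- Boolean-style flag the way Access expects it: no nulls, default 0
ALTER TABLE orders
    ADD COLUMN is_archived TINYINT NOT NULL DEFAULT 0;

-- row-version column so Access can detect concurrent edits
ALTER TABLE orders
    ADD COLUMN row_version TIMESTAMP NOT NULL
        DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;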

Add a global rule to modify dates for all database inserts & updates

I have a requirement to add a rule to a legacy MySQL 5.1.73 database for whenever a specified date is about to be inserted or updated in any of the tables. (Each field already has a default date setting, so using that in some way is not a viable solution.)
For example:
IF NewDate = *TheSpecifiedDate* THEN SET NewDate = *ConstantDate*
The logic for this will be identical for all tables that have one or more DateTime fields.
My only solution at the moment is to manually add triggers to each and every table. That would be a lot of work to do, and a lot of hassle to maintain if the requirement ever changes.
I therefore wondered if I can somehow make this a global rule/trigger for the entire database, applied whenever an insert or update is attempted on any DATETIME field.
Or is there a more elegant/preferred way of implementing this kind of global rule that may not even involve using triggers?
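To illustrate the per-table approach (table name, column name and date values are placeholders; each table would also need a matching BEFORE UPDATE trigger):
DELIMITER $$
CREATE TRIGGER orders_dates_bi BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
    -- replace the "magic" incoming date with the constant date
    IF NEW.created_at = '2099-12-31' THEN
        SET NEW.created_at = '2000-01-01';
    END IF;
END$$
DELIMITER ;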
Just to close this off, for anyone trying to do something similar...
The main restriction in my scenario was that I am slowly migrating 200+ legacy applications and could therefore not manipulate the legacy database behavior or structure until all applications are converted. I also needed to replicate some existing behavior, whether I liked it or not!
When I posted this question I was using EF6 for my data access layer. Having changed to EF Core 1.1.0, I can now utilise the 'HasDefaultValueSql' setting in my table mappings, which resolves my particular issue by allowing me to set a value whenever the DB is updated.

django - default value migration - will this touch the database?

I want to make a new migration which contains only new default values for some fields. But the database table has >10,000,000 rows (MySQL) and ~200,000 users online.
So what I generally don't know is: does a default value migration (or a migration like a choice-field change) touch the database?
I know that migrations like adding and deleting fields do touch the database to create/delete things.
I would be grateful for some useful tips and links if possible.
No, it doesn't touch the database.
It does not affect the behaviour of defaults in the database directly - Django never sets database-level defaults and always applies them in the Django ORM code.
(In fact, applying such a migration only marks it as applied and that's all; no real work is performed.)

Entity Framework 4.1 Custom Database Initializer strategy

I would like to implement a custom database initialization strategy so that I can:
generate the database if it does not exist
if the model changes, create only the new tables
if the model changes, add only the new fields without dropping the table and losing the data.
Thanks in advance
You need to implement the IDatabaseInitializer interface.
E.g.:
public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
    public void InitializeDatabase(MyDbContext context)
    {
        // your logic here
    }
}
And then set your initializer at your application startup:
Database.SetInitializer<MyDbContext>(new MyInitializer());
Here's an example
You will have to manually execute commands to alter the database.
((IObjectContextAdapter)context).ObjectContext.ExecuteStoreCommand("ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
You can use a tool like SQL Compare to script changes.
There is a reason why this doesn't exist yet. It is very complex, and moreover the IDatabaseInitializer interface is not well prepared for it (there is no way to make such initialization database agnostic). Your question is "too broad" to be answered to your satisfaction. Judging by your reaction to @Eranga's correct answer, you simply expect that somebody will tell you step by step how to do it, but we will not - that would mean writing the initializer for you.
What would you need to do to achieve what you want?
You must have very good knowledge of SQL Server. You must know how SQL Server stores information about databases, tables, columns and relations - you must understand the sys views and you must know how to query them to get data about the current database structure.
You must have very good knowledge of EF. You must know how EF stores mapping information, and you must be able to explore the metadata to get information about the expected tables, columns and relations.
Once you have the old database description and the new database description, you must be able to write code which will correctly detect the changes and create SQL DDL commands for changing your database. Even though this looks like the simplest part of the whole process, it is actually the hardest one, because there are many internal rules in SQL Server which your commands cannot violate. Sometimes you really need to drop a table to make your changes, and if you don't want to lose data you must first push it to a temporary table and push it back after recreating the table. Sometimes changes to constraints require temporarily turning constraints off, etc. There is a good reason why the tools which do this on the SQL level (comparing two databases) are probably all commercial.
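As a small taste of the first point, the current table and column structure can be read from the SQL Server catalog views with a query along these lines:
-- enumerate every user table with its columns and types
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS data_type,
       c.max_length,
       c.is_nullable
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;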
Even the ADO.NET team hasn't implemented this, and they will not implement it in the future. Instead they are working on something called migrations.
Edit:
It is true that ObjectContext can return a script for database creation - that is exactly what the default initializers use. But how could it help you? Are you going to parse that script to see what changed? Are you going to execute that script on another connection and use the same code as for the current database to inspect its structure?
Yes, you can create a new database, move the data from the old database to the new one, delete the old one and rename the new one, but that is the most stupid solution you can ever imagine and no database administrator will ever allow it. Even this solution still requires an analysis of the changes to create correct data-transfer scripts.
Automatic upgrade is the wrong way. You should always prepare the upgrade script manually with the help of some tool, test it, and after that execute it manually or as part of an installation script/package. You must also back up your database before making any changes.
The best way to achieve this is probably with migrations:
http://nuget.org/List/Packages/EntityFramework.SqlMigrations
Good blog posts here and here.