In my MVC application I imported a stored procedure as a function import (in the EDMX file).
The stored procedure changed (new parameter) but I don't know how to update it.
For now I just delete and re-add it manually, but I would like to know the best way to achieve this.
UPDATE:
I found an option in the Update Model from Database wizard; there is a Refresh tab there, but when attempting to refresh, it does not pick up the new parameter.
First, to understand your problem, what you need to know is that the EDMX file is just an XML file that contains 3 different sections:
CSDL: Conceptual schema definition language
SSDL: Store schema definition language
MSL: Mapping specification language
The CSDL contains the entities and relationships that make up your conceptual model. The SSDL describes your DB model, and the MSL is the mapping between the two.
The "Update Model From DB" process will update the SSDL (change everything that is inconsistent with the current DB schema); it will only modify the CSDL if you've added new things to your DB schema.
This is quite normal behavior, since your conceptual schema may (and often should) differ from your DB schema (unless you want your domain model to look exactly like a DB model, which obviously does not sound like an OOP/DDD best practice).
The Function Import mechanism works the same way. As soon as you import a new Stored Procedure, a new FunctionImport Element will be added in the CSDL. This new element will describe the SP including its parameters. As I said, only new things will be added in the CSDL if you run the Update Wizard, that's why if you change any SP parameter in your DB, it won't be changed in the conceptual model.
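For reference, here is roughly what such a FunctionImport element looks like inside the CSDL section of the EDMX (the names below are purely illustrative):

<FunctionImport Name="GetCarsByOwner" EntitySet="Cars" ReturnType="Collection(Model.Car)">
  <Parameter Name="OwnerId" Mode="In" Type="Int32" />
</FunctionImport>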
To force the conceptual model to change, open your EDMX file, go to the Model Browser and expand the Function Imports entry:
If you want everything to be refreshed, simply remove the function and import it again.
If you want to change input parameters, expand the function in question, remove the parameters and update the function.
If you want to update only the return type, right-click the function in question and select Update.
I am using TypeORM with MySQL and am setting up automatic auditing of all columns and database tables via MySQL Triggers - not TypeORM's "Logger" feature (unless you have some extra info)...
Without getting bogged down, the MySQL Triggers approach works very well and means no app-side code is required.
The problem: I cannot provide MySQL queries with the logged-in app user's ID in a way that does not require us to apply it in every query created in this app. We do have a central "CRUD" class, but that is for generic CRUD, so our more "specialist" queries would require special treatment - undesired.
Each of our tables has an int field "editedBy" where we would like to update with the user ID who edited the row (by using our app).
Question: Is there a way to intercept all non-read queries in TypeORM (regardless of whether it's Active Record or the query builder) and update a column in the affected tables (the 'editedBy' int field)?
This would allow our Triggers solution to be complete.
P.S. I tried out TypeORM's custom logging function:
import { createConnection, Logger, QueryRunner } from "typeorm";

class MyCustomLogger implements Logger { // 'extends Logger' had an issue - Logger is an interface, so implement it
    logQuery(query: string, parameters?: any[], queryRunner?: QueryRunner) { // WORKS
        // ...
    }
    // ...the remaining Logger interface methods...
}

createConnection({
    // ...connection options...
    logger: new MyCustomLogger(),
});
logQuery does appear to fire before the query (I think) is sent to MySQL, but I cannot find a way to extract the JSON-like JavaScript object from it in order to modify each table's "editedBy". It would be great if there were a way to find all affected tables within this function and adjust editedBy. Happy to try other options... that don't entail updating the many files we have containing database calls.
Thanks
IMHO it is not correct to use the logging feature of TypeORM to modify your queries; it is very dangerous even if it could be made to work with a bit of effort.
If you want to manage the way the upsert queries are done in TypeORM, the best practice is to use custom repositories and then always call them (not spawning vanilla repositories afterwards as in entityManager.getRepository(Specialist); instead use yours with entityManager.getCustomRepository(SpecialistRepository)).
The official documentation on the subject should help you a lot: https://github.com/typeorm/typeorm/blob/master/docs/custom-repository.md
Then in your custom repository you can override the save method and add whatever you want. Your code will be explicit, and a good advantage is that it does not apply to every entity, so if you have other cases where you want to save differently you are not stuck (you can also add custom save methods).
If you want to generalize the processing of the save methods, you can create an abstract repository that extends TypeORM's Repository and that your custom repositories in turn extend; put your custom code there so that you don't end up copying it into every custom repository:
SpecialistRepository<Specialist> -> CustomSaveRepository<T> -> Repository<T>
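A minimal sketch of that shape, assuming TypeORM 0.2.x-style custom repositories, a Specialist entity that has an editedBy column, and a hypothetical getCurrentUserId() helper for resolving the logged-in user (the saveWithAudit name is mine; it sidesteps save()'s overloads):

import { EntityRepository, Repository, DeepPartial, SaveOptions } from "typeorm";
import { Specialist } from "./entity/Specialist"; // hypothetical path

declare function getCurrentUserId(): number; // placeholder: however you resolve the current user

// Generic base: stamps editedBy, then delegates to TypeORM's normal save().
export abstract class CustomSaveRepository<T extends { editedBy?: number }> extends Repository<T> {
    async saveWithAudit(entity: DeepPartial<T> & { editedBy?: number }, options?: SaveOptions): Promise<T> {
        entity.editedBy = getCurrentUserId();
        return this.save(entity, options);
    }
}

@EntityRepository(Specialist)
export class SpecialistRepository extends CustomSaveRepository<Specialist> {}

// Usage - always resolve the custom repository, never the vanilla one:
// const repo = entityManager.getCustomRepository(SpecialistRepository);
// await repo.saveWithAudit(specialist);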
I used a combination of the https://github.com/skonves/express-http-context node module (to pass the user ID) and TypeORM's Event Subscribers feature to update the data about to be submitted to the DB: https://github.com/typeorm/typeorm/blob/master/sample/sample5-subscribers/subscriber/EverythingSubscriber.ts
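For concreteness, a sketch of what that combination can look like, assuming the express-http-context middleware has stored the user ID under a "userId" key earlier in the request, and that every audited entity has an editedBy column:

import * as httpContext from "express-http-context";
import { EventSubscriber, EntitySubscriberInterface, InsertEvent, UpdateEvent } from "typeorm";

// Untargeted subscriber: fires for every entity before INSERT / UPDATE.
@EventSubscriber()
export class EditedBySubscriber implements EntitySubscriberInterface {
    beforeInsert(event: InsertEvent<any>) {
        if (event.entity) {
            event.entity.editedBy = httpContext.get("userId");
        }
    }

    beforeUpdate(event: UpdateEvent<any>) {
        if (event.entity) {
            event.entity.editedBy = httpContext.get("userId");
        }
    }
}

// Remember to register it in the connection options, e.g. subscribers: [EditedBySubscriber].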
I'm setting up alembic for our project, which is already really big, and has a lot of tables. The thing is that our project's DB has been managed via SQL for a long time, and our Alchemy models are almost all reflected like so (I obscured the name, but the rest is all directly from our code):
class SomeModel(Base, BaseModelMixin):
    """
    Model docstring
    """

    # Reflect the table from the existing database
    __table__ = Table('some_models', metadata, autoload=True)
What's happening is that when I create an automatic migration, a lot of drop table (and a lot of create table) operations are created for some reason. I assumed it's because the model class doesn't explicitly define the tables, but I don't see why that would drop the tables as well.
I'm making sure all model definitions are processed before setting the target_metadata variable in env.py:
# this imports every model in our app
from shiphero_app.utils.sql_dependencies import import_dependencies
import_dependencies()
from shiphero_app.core.database import Base
target_metadata = Base.metadata
Any ideas of what I might be missing here?
This is probably what you are looking for - this makes Alembic ignore predefined tables:
https://alembic.sqlalchemy.org/en/latest/cookbook.html#don-t-generate-any-drop-table-directives-with-autogenerate
Unfortunately this also prevents Alembic from dropping tables that are within scope.
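Concretely, the recipe from that cookbook entry amounts to an include_object hook in env.py; a minimal sketch:

def include_object(object, name, type_, reflected, compare_to):
    # A table that exists in the database (reflected=True) but has no
    # counterpart in the model metadata (compare_to is None) would be
    # dropped by autogenerate; returning False leaves it alone instead.
    if type_ == "table" and reflected and compare_to is None:
        return False
    return True

context.configure(
    # ...connection and other options as before...
    target_metadata=target_metadata,
    include_object=include_object,
)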
I have two sites: one administration site, where I can edit all the values in a database, and a public site that only reads the data from the database.
Both sites have identical dbml files to work with the database.
When I insert a new car (the site is about cars, so all values are car related) and its data, the inserted values display on the public site immediately...
When I update data for a car, the values in the database are changed immediately, but the public site keeps displaying the old values...
I read that I can use a new instance of the dbml file for each query to force the dbml file to go read the values in the database...
I do this with the following code in a code file where I put all my queries... but this doesn't work...
Public Shared AixamReader As FrontstoreAdministrationDataClassesDataContext = New FrontstoreAdministrationDataClassesDataContext
Then I call AixamReader in each query...
Is there a better way to force the dbml file to get the updated values from a database?
The cause of your problem is not the DBML file. The DBML does not read or query your data, it is just a base for the generation of the classes that do the actual work.
So updating/inserting data in the actual database has zero effect on your DBML file.
I think what you are calling a DBML file here is actually the DataContext.
And yes, if you use an "old" DataContext to do your queries and in the meantime your data is updated by another process, it will not show up in the queries you do with that DataContext. This is also known as a "stale datacontext".
So, use a new DataContext for each request and your problem is solved. Anyway, DataContexts are pretty lightweight and are designed to be used this way (a new one for each unit of work).
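In your case, that means dropping the shared AixamReader and creating a DataContext per unit of work instead; a small sketch (the Cars table name is illustrative):

Public Shared Function GetCars() As List(Of Car)
    ' Fresh DataContext per unit of work; disposed as soon as we're done.
    Using db As New FrontstoreAdministrationDataClassesDataContext()
        Return db.Cars.ToList()
    End Using
End Function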
I know EF checks the EdmMetadata table to determine if the version of the model classes is the same as the database tables.
I want to know exactly how EF can tell if the version of a model has changed. In other words, I want to know what EF compares to the model hash in the database.
Have a look at this blog post about the EdmMetadata table.
For your question, this is the relevant parts:
The EdmMetadata table is a simple way for Code First to tell if the model used to create a database is the same model that is now being used to access the database. As of EF 4.1 the only thing stored in the table is a single row containing a hash of the SSDL part of the model used to create the database.
(Geek details: when you look in an EDMX file, the SSDL is the part of that file that represents the database (store) schema. This means that the EdmMetadata model hash only changes if the database schema that would be generated changes; changes to the conceptual model (CSDL) or the mapping between the conceptual model and the database (MSL) will not affect the hash.)
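To see this in action, EF 4.1 exposes the hash computation through the EdmMetadata class; a rough sketch (reading the stored row via the context assumes the default IncludeMetadataConvention is active, so EdmMetadata is part of the model):

using System.Data.Entity.Infrastructure;
using System.Linq;

// Hash EF computes for the model as currently compiled:
string currentHash = EdmMetadata.TryGetModelHash(context);

// Hash persisted when the database was created (a single row, per the quote above):
string storedHash = context.Set<EdmMetadata>().Select(m => m.ModelHash).FirstOrDefault();

bool modelChanged = currentHash != storedHash;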
I would like to implement a custom database initialization strategy so that I can:
generate the database if it does not exist
if the model changed, create only the new tables
if the model changed, create only the new fields, without dropping the table and losing the data.
Thanks in advance
You need to implement the IDatabaseInitializer interface.
E.g.
public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
    public void InitializeDatabase(MyDbContext context)
    {
        // your logic here
    }
}
And then set your initializer at your application startup
Database.SetInitializer(new MyInitializer());
Here's an example
You will have to manually execute commands to alter the database.
((IObjectContextAdapter)context).ObjectContext.ExecuteStoreCommand("ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
You can use a tool like SQL Compare to script changes.
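Putting the pieces together, a minimal sketch of such an initializer (the ALTER script, table and column names are only placeholders for your own upgrade logic):

public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
    public void InitializeDatabase(MyDbContext context)
    {
        if (!context.Database.Exists())
        {
            // Generate the database if it does not exist.
            context.Database.Create();
        }
        else
        {
            // Idempotent, hand-written upgrade step: add the column only if it is missing.
            context.Database.ExecuteSqlCommand(
                "IF COL_LENGTH('dbo.MyTable', 'NewColumn') IS NULL " +
                "ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
        }
    }
}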
There is a reason why this doesn't exist yet. It is very complex, and moreover the IDatabaseInitializer interface is not really prepared for it (there is no way to make such initialization database agnostic). Your question is too broad to be answered to your satisfaction. Judging by your reaction to @Eranga's correct answer, you simply expect that somebody will tell you step by step how to do it, but we will not - that would mean writing the initializer for you.
What do you need to do what you want?
You must have very good knowledge of SQL Server. You must know how SQL Server stores information about databases, tables, columns and relations = you must understand the sys views and know how to query them to get data about the current database structure.
You must have very good knowledge of EF. You must know how EF stores mapping information, and you must be able to explore the metadata to get information about the expected tables, columns and relations.
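As an illustration only (EF 4.x; not a complete diff tool, and the SSDL metadata is loaded lazily, so the context must have been initialized), both sides can be reached from the ObjectContext:

using System.Data.Entity.Infrastructure;
using System.Data.Metadata.Edm;
using System.Linq;

var objectContext = ((IObjectContextAdapter)context).ObjectContext;

// Current structure, straight from SQL Server's catalog (sys) views:
var dbColumns = objectContext.ExecuteStoreQuery<string>(
    "SELECT t.name + '.' + c.name FROM sys.tables t " +
    "JOIN sys.columns c ON c.object_id = t.object_id").ToList();

// Expected structure, from the store (SSDL) part of EF's metadata:
var expectedTables = objectContext.MetadataWorkspace
    .GetItems<EntityType>(DataSpace.SSpace)
    .Select(t => new { Table = t.Name, Columns = t.Properties.Select(p => p.Name).ToList() });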
Once you have the old database description and the new database description, you must be able to write code which will correctly work out the changes and create SQL DDL commands for changing your database. Even though this looks like the simplest part of the whole process, it is actually the hardest one, because there are many internal rules in SQL Server which your commands cannot violate. Sometimes you really need to drop a table to make your changes, and if you don't want to lose data you must first push it to a temporary table and, after recreating the table, push it back. Sometimes you are making changes to constraints which can require temporarily turning constraints off, etc. There is a good reason why tools which do this on the SQL level (comparing two databases) are probably all commercial.
Even the ADO.NET team hasn't implemented this, and they will not implement it in the future. Instead they are working on something called migrations.
Edit:
It is true that the ObjectContext can give you the script for database creation - that is exactly what the default initializers use. But how would that help you? Are you going to parse the script to see what changed? Are you going to execute it on another connection and use the same code as for the current database to inspect its structure?
Yes, you can create a new database, move the data from the old database to the new one, delete the old one and rename the new one, but that is the most stupid solution you can imagine, and no database administrator will ever allow it. Even this solution still requires an analysis of the changes to create correct data-transfer scripts.
Automatic upgrade is the wrong way. You should always prepare the upgrade script manually with the help of some tools, test it, and after that execute it manually or as part of an installation script / package. You must also back up your database before making any changes.
The best way to achieve this is probably with migrations:
http://nuget.org/List/Packages/EntityFramework.SqlMigrations
Good blog posts here and here.