Entity Framework Code First - MySQL - error can't find table

I'm new to EF, EF Code First, and EF with MySQL. When would EF Code First create your tables within an ASP.NET MVC web project?
I created a Person model, then generated the controller and standard views.
When I hit the Index method of the Person controller, it tries to pull back a list of all People. Then I get the error:
An error occurred while executing the command definition. See the inner exception for details.
The inner exception:
Table 'testmvc.people' doesn't exist
So I've made it past the connection. But the table wasn't created. How do I create the tables? Also how do I prevent the pluralization of Person to People in the naming scheme?

The simplest way to generate the database schema (people table and others) is to set a database initializing strategy like this:
Database.SetInitializer<SomeContext>(new DropCreateDatabaseAlways<SomeContext>());
This code needs to run before you attempt to load any data, so the Application_Start() method in Global.asax is a good place for it. There are several initialization strategies, so you may want to look at them before choosing one: see http://msdn.microsoft.com/en-us/library/system.data.entity%28v=vs.103%29.aspx and look at the classes that implement IDatabaseInitializer. Officially there is a default strategy, although I have never quite gotten it to work for me.
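For example, a minimal sketch of wiring this up in Global.asax.cs, assuming your context class is called SomeContext (the MVC registration calls will vary with your project template):
// using System.Data.Entity;
protected void Application_Start()
{
    // Recreate the database on every application start - development/prototyping only.
    Database.SetInitializer(new DropCreateDatabaseAlways<SomeContext>());

    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);
}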
You should also be aware that while this method is great for prototyping and development, you can't use it on a production database with live data, since the database is first dropped and then recreated. There are other ways of handling this at that point - see Database migrations for Entity Framework 4 for possibilities.
Regarding your other question of using non-pluralized table names, there are several ways to do this. One way is to annotate the Person class like this:
[Table("Person")]
class Person
{
// some field attributes
}
To set this for all tables at once, you can use the fluent API, like this:
// using System.Data.Entity.ModelConfiguration.Conventions;
class SomeContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}

MySQL with Entity Framework needs a few small tweaks. You need to create three classes (see https://learn.microsoft.com/en-us/aspnet/identity/overview/getting-started/aspnet-identity-using-mysql-storage-with-an-entityframework-mysql-provider for more details). First, create a MySqlHistoryContext class:
// HistoryContext and HistoryRow are in System.Data.Entity.Migrations.History; DbConnection is in System.Data.Common
public class MySqlHistoryContext : HistoryContext
{
    public MySqlHistoryContext(DbConnection existingConnection, string defaultSchema)
        : base(existingConnection, defaultSchema)
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Entity<HistoryRow>().Property(h => h.MigrationId).HasMaxLength(100).IsRequired();
        modelBuilder.Entity<HistoryRow>().Property(h => h.ContextKey).HasMaxLength(200).IsRequired();
    }
}
Next, create a MySqlConfiguration class:
public class MySqlConfiguration : DbConfiguration
{
    public MySqlConfiguration()
    {
        SetHistoryContext(
            "MySql.Data.MySqlClient",
            (conn, schema) => new MySqlHistoryContext(conn, schema));
    }
}
Then create a MySqlInitializer class:
public class MySqlInitializer : IDatabaseInitializer<ApplicationDbContext>
{
    public void InitializeDatabase(ApplicationDbContext context)
    {
        if (!context.Database.Exists())
        {
            // if the database did not exist before - create it
            context.Database.Create();
        }
        else
        {
            // query to check if the MigrationHistory table is present in the database
            var migrationHistoryTableExists =
                ((IObjectContextAdapter)context).ObjectContext.ExecuteStoreQuery<int>(
                    "SELECT COUNT(*) FROM information_schema.tables " +
                    "WHERE table_schema = 'IdentityMySQLDatabase' AND table_name = '__MigrationHistory'");

            // if the MigrationHistory table is not there (which is the case the first time we run) - create it
            if (migrationHistoryTableExists.FirstOrDefault() == 0)
            {
                context.Database.Delete();
                context.Database.Create();
            }
        }
    }
}
Open IdentityModels.cs in the Models folder and add this static constructor to the ApplicationDbContext : IdentityDbContext class:
static ApplicationDbContext()
{
    Database.SetInitializer(new MySqlInitializer());
}
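Putting the Identity pieces together, the context ends up looking roughly like this (a sketch based on the default ASP.NET Identity template; ApplicationUser is the template's user class, and "DefaultConnection" stands in for whatever MySQL connection string name your Web.config uses):
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    static ApplicationDbContext()
    {
        // Runs once, before the context is first used.
        Database.SetInitializer(new MySqlInitializer());
    }

    public ApplicationDbContext()
        : base("DefaultConnection") // name of the MySQL connection string in Web.config
    {
    }

    public static ApplicationDbContext Create()
    {
        return new ApplicationDbContext();
    }
}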

Related

Implementing with both Adapter Design Pattern and Facade Design pattern

I'm new to design patterns.
I'm implementing a tool which can connect to different databases as the user needs.
This is my code structure:
In the controllers I have my API calls. Below is the POST API call to get all databases on the server.
@PostMapping("/allDatabases")
public List<String> getDatabases(@RequestBody DatabaseModel db)
        throws IOException, SQLException {
    return migrationInterface.getAllDatabases(db);
}
For now I'm getting the response by calling a method on an interface inside the service package.
But when the database server changes (e.g. Postgres, MySQL) I have to use different queries.
For example:
public class PostgresPreparedStatements {
    public PreparedStatement getAllDbs(Connection con) throws SQLException {
        return con.prepareStatement(
            "SELECT datname FROM pg_database WHERE datistemplate = false;");
    }
}
This query does not work on a MySQL database, so I'll keep different prepared statements for different databases. My idea is to call a BaseAdapter from the controller and check the server type like below.
public class BaseAdapter {
    public void checkServerType(String server) {
        switch (server) {
            case "postgres":
                // postgres functions
                break;
            case "mysql":
                // mysql functions
                break;
            default:
                break;
        }
    }
}
I want to call PostgresConnector.java if the server is Postgres, and from the connector I want to call a Facade that runs the functions and related queries.
Any idea how to do this?
Please note: for now I'm implementing this for Postgres and MySQL, but in the future it should work with any database.
The Adapter pattern is not used when you want to add new behaviour, such as new databases in your case. The goal of an adapter class is to allow another class to access existing functionality; an adapter converts the interface of one class into something that may be used by another class.
It looks like BaseAdapter has the responsibility of choosing the SQL statement for different databases. We can paraphrase this responsibility as: we want SQL queries generated based on the database. So it looks like we can replace this switch statement with a HashTable (Java) or Dictionary (C#), and that HashTable or Dictionary can be a simple factory that creates SQL queries. Our generated SQL queries can then be strategies for the concrete databases.
So let's dive into the code.
It looks like this is a place where Strategy pattern can be used:
Strategy pattern is a behavioral software design pattern that enables
selecting an algorithm at runtime. Instead of implementing a single
algorithm directly, code receives run-time instructions as to which in
a family of algorithms to use.
Let me show an example in C#. I am sorry, I am not a Java guy, but I have provided comments about how the code could look in Java.
We need some common behaviour that will be shared across all strategies. In our case, it is just one GetAllDbs() method for the different data providers:
public interface IDatabaseStatement
{
    IEnumerable<string> GetAllDbs();
}
And its concrete implementations. These are the exchangeable strategies:
public class PostgresDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "PostgresDatabaseStatement" };
    }
}

public class MySQLDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "MySQLDatabaseStatement" };
    }
}

public class SqlServerDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "SqlServerDatabaseStatement" };
    }
}
We need a place where all strategies can be stored, and we should be able to get the necessary strategy from this store. So this is a place where a simple factory can be used. A simple factory is not the Factory Method pattern and not Abstract Factory.
public enum DatabaseName
{
    SqlServer, Postgres, MySql
}

public class DatabaseStatementFactory
{
    private Dictionary<DatabaseName, IDatabaseStatement> _statementByDatabaseName
        = new Dictionary<DatabaseName, IDatabaseStatement>()
    {
        { DatabaseName.SqlServer, new SqlServerDatabaseStatement() },
        { DatabaseName.Postgres, new PostgresDatabaseStatement() },
        { DatabaseName.MySql, new MySQLDatabaseStatement() },
    };

    public IDatabaseStatement GetInstanceByType(DatabaseName databaseName) =>
        _statementByDatabaseName[databaseName];
}
and then you can get an instance of the desired statement more easily:
DatabaseStatementFactory databaseStatementFactory = new();
IDatabaseStatement databaseStatement = databaseStatementFactory
    .GetInstanceByType(DatabaseName.MySql);
IEnumerable<string> allDatabases = databaseStatement.GetAllDbs(); // OUTPUT: MySQLDatabaseStatement
This design is compliant with the open/closed principle.
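In the asker's scenario, each strategy would run the real query for its database rather than return a placeholder string. For instance, the Postgres strategy might look like this over plain ADO.NET (a sketch; the connection handling and open/close responsibility are assumptions):
// using System.Collections.Generic;
// using System.Data;
public class PostgresDatabaseStatement : IDatabaseStatement
{
    private readonly IDbConnection _connection; // assumed to be opened by the caller

    public PostgresDatabaseStatement(IDbConnection connection)
    {
        _connection = connection;
    }

    public IEnumerable<string> GetAllDbs()
    {
        // Same query as the asker's PostgresPreparedStatements class.
        using (var command = _connection.CreateCommand())
        {
            command.CommandText = "SELECT datname FROM pg_database WHERE datistemplate = false;";
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    yield return reader.GetString(0);
                }
            }
        }
    }
}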

Entity Framework Code First Deleting By ID Without Fetching (Generic Style)

Please tell me if this is a decent approach to deleting an Entity without fetching it given I have the ID.
I have a generic store with the following interface (I'll only show Delete):
public interface IStore : IReadOnlyStore
{
void Delete<TEntity>(TEntity entity) where TEntity : class, IEntity, new();
void SaveChanges();
}
And in the concrete Store class of that interface, here's my delete method:
public void Delete<TEntity>(TEntity entity) where TEntity : class, IEntity, new()
{
    var obj = Ctx.Entry(entity);
    if (obj.State == System.Data.EntityState.Detached)
    {
        Ctx.Set(typeof(TEntity)).Attach(obj.Entity);
    }
    Ctx.Set(typeof(TEntity)).Remove(obj.Entity);
}
I have tested both newing up an Entity:
Store.Delete(new Foo() { Id = request.Entity.Id });
as well as fetching an entity and then calling delete.
Through debugging, I see the desired effect in both scenarios.
I just want to make sure this is a good design and that there are no side effects to this approach.
For reference, Ctx is just the DbContext itself.
Thanks.
It's good design and doesn't have side effects :) (IMHO)
Two remarks:
I'm wondering if you could simplify your Delete method by:
public void Delete<TEntity>(TEntity entity)
    where TEntity : class, IEntity, new()
{
    Ctx.Entry(entity).State = EntityState.Deleted;
}
I would hope that setting the state to Deleted attaches the entity automatically if it isn't already attached, but I'm not sure that it works. If you test this, let me know whether it works for both the attached and detached scenarios.
If you have performance optimization in mind (avoiding loading the entities), don't forget that if there are multiple entities to delete in the context, SaveChanges will still send one DELETE statement per entity to the database. Bulk deletes with EF perform quite poorly, and this is a terrain where dropping back to a SQL statement (DELETE ... WHERE ... IN ... many IDs ...) sometimes makes sense, if performance matters.
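For illustration, a minimal sketch of such a fallback using DbContext.Database.ExecuteSqlCommand; the table and column names ("Foos", "Id") are assumptions, and the ID list here is hard-coded integers under our control, so string concatenation is acceptable:
var idsToDelete = new[] { 1, 2, 3 };
var idList = string.Join(", ", idsToDelete); // only safe because these are ints we control
Ctx.Database.ExecuteSqlCommand("DELETE FROM Foos WHERE Id IN (" + idList + ")");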

Entity Framework Code First Update Does Not Update Foreign Key

I'm using EF 4.1 Code First. I have an entity defined with a property like this:
public class Publication
{
// other stuff
public virtual MailoutTemplate Template { get; set; }
}
I've configured this foreign key using fluent style like so:
modelBuilder.Entity<Publication>()
.HasOptional(p => p.Template)
.WithMany()
.Map(p => p.MapKey("MailoutTemplateID"));
I have an MVC form handler with some code in it that looks like this:
public void Handle(PublicationEditViewModel publicationEditViewModel)
{
Publication publication = Mapper.Map<PublicationEditViewModel, Publication>(publicationEditViewModel);
publication.Template = _mailoutTemplateRepository.Get(publicationEditViewModel.Template.Id);
if (publication.Id == 0)
{
_publicationRepository.Add(publication);
}
else
{
_publicationRepository.Update(publication);
}
_unitOfWork.Commit();
}
In this case, we're updating an existing Publication entity, so we're going through the else path. When the _unitOfWork.Commit() fires, an UPDATE is sent to the database that I can see in SQL Profiler and Intellitrace, but it does NOT include the MailoutTemplateID in the update.
What's the trick to get it to actually update the Template?
Repository Code:
public virtual void Update(TEntity entity)
{
_dataContext.Entry(entity).State = EntityState.Modified;
}
public virtual TEntity Get(int id)
{
return _dbSet.Find(id);
}
UnitOfWork Code:
public void Commit()
{
_dbContext.SaveChanges();
}
It depends on your repository code. :) If you were setting publication.Template while the Publication was being tracked by the context, I would expect it to work. When you are disconnected and then attach (in the scenario where you have a navigation property but no explicit FK property), I'm guessing the context just doesn't have enough info to work out the details when SaveChanges is called. I'd do some experiments: 1) do an integration test where you query the pub and keep it attached to the context, then add the template, then save; 2) stick a MailoutTemplateId property on the Publication class and see if it works. I'm not suggesting #2 as a solution, just as a way of grokking the behavior. I'm tempted to do this experiment, but I've got some other work I need to do. ;)
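A sketch of what experiment #2 might look like: exposing the FK as a scalar property and mapping it with HasForeignKey instead of MapKey (property and mapping names follow the question; treat this as an experiment, not a confirmed fix):
public class Publication
{
    // other stuff
    public int? MailoutTemplateID { get; set; }           // explicit FK property (experiment #2)
    public virtual MailoutTemplate Template { get; set; }
}

// and in OnModelCreating:
modelBuilder.Entity<Publication>()
    .HasOptional(p => p.Template)
    .WithMany()
    .HasForeignKey(p => p.MailoutTemplateID);             // replaces Map(p => p.MapKey("MailoutTemplateID"))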
I found a way to make it work. The reason I didn't initially want to do a Get() (aside from the extra DB hit) was that I then couldn't use this bit of AutoMapper magic to get the values:
Publication publication = Mapper.Map<PublicationEditViewModel, Publication>(publicationEditViewModel);
However, I found another way to do the same thing that doesn't use a return value, so I updated my method like so and this works:
public void Handle(PublicationEditViewModel publicationEditViewModel)
{
    Publication publication = _publicationRepository.Get(publicationEditViewModel.Id);
    _mappingEngine.Map(publicationEditViewModel, publication);
    // publication = Mapper.Map<PublicationEditViewModel, Publication>(publicationEditViewModel);
    publication.Template = _mailoutTemplateRepository.Get(publicationEditViewModel.Template.Id);
    if (publication.Id == 0)
    {
        _publicationRepository.Add(publication);
    }
    else
    {
        _publicationRepository.Update(publication);
    }
    _unitOfWork.Commit();
}
I'm injecting an IMappingEngine now into the class, and have wired it up via StructureMap like so:
For<IMappingEngine>().Use(() => Mapper.Engine);
For more on this, check out Jimmy's AutoMapper and IOC post.

Entity Framework/Linq to sql model to business model

I'm coming from a stored-procedure, hand-rolled data access layer approach. I am trying to understand where I should fit LINQ to SQL or Entity Framework into my normal planning. I normally separate the business layer from the DAL and use a repository in between.
It seems that people will either use the generated classes from LINQ to SQL, extend them with partial classes, or do a full separation and map the generated LINQ classes to separate business entities. I am partial to the separate business entities. However, this seems counterintuitive.
One of my last projects used DDD and the Entity Framework. When it needed to update an object, it moved the business entity to the repository layer, which, when going to the DAL, would create a context and then requery the object. It would then update the values and resubmit.
I didn't see the point, as the data context wasn't saved and it required an extra query to grab the object before updating. Normally I would just do the update (if concurrency wasn't an issue).
So my questions come down to:
Does it make sense to separate LINQ to SQL generated classes into business entities?
Should the data context be saved, or is that impractical?
Thanks for your time; I'm just trying to make sure I understand. I normally like to separate things out, as it makes things cleaner to understand, even in smaller projects.
I currently hand-roll my own Dto classes and DataContext instead of using the auto-generated code files from LINQ to SQL. To give some background on my solution architecture/modeling, I have a "Contract" project and a "Dal" project (also a "Model" project, but I'll try to stay focused on the Dal here). Hand-rolling my own Dtos and DataContext makes everything a lot smaller and simpler; I'll give a few examples of how I do that here.
I never return a Dto object outside of the Dal; in fact I make sure to declare them as internal. The way I return them is to cast them to an interface (the interfaces are located in my "Contract" layer). We'll make a simple "PersonRepository" that implements the "IPersonRetriever" and "IPersonSaver" interfaces.
Contracts:
public interface IPersonRetriever
{
    IPerson GetPersonById(Guid personId);
}

public interface IPersonSaver
{
    void SavePerson(IPerson person);
}
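The IPerson contract itself isn't shown in the original; a minimal sketch inferred from the properties the Dto exposes further down (the real definition may differ):
public interface IPerson
{
    Guid Id { get; }
    string FirstName { get; }
    int Age { get; }
}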
Dal:
public class PersonRepository : IPersonSaver, IPersonRetriever
{
    private string _connectionString;

    public PersonRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    IPerson IPersonRetriever.GetPersonById(Guid id)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            // the Dto exposes Id, which is mapped to the PersonIdentityId column below
            return dc.PersonDtos.FirstOrDefault(p => p.Id == id);
        }
    }

    void IPersonSaver.SavePerson(IPerson person)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            var personDto = new PersonDto
            {
                Id = person.Id,
                FirstName = person.FirstName,
                Age = person.Age
            };
            dc.PersonDtos.InsertOnSubmit(personDto);
            dc.SubmitChanges();
        }
    }
}
PersonDataContext:
internal class PersonDataContext : System.Data.Linq.DataContext
{
    static MappingSource _mappingSource = new AttributeMappingSource(); // necessary for pre-compiled linq queries in .Net 4.0+

    internal PersonDataContext(string connectionString) : base(connectionString, _mappingSource) { }

    internal Table<PersonDto> PersonDtos { get { return GetTable<PersonDto>(); } }
}

[Table(Name = "dbo.Persons")]
internal class PersonDto : IPerson
{
    [Column(Name = "PersonIdentityId", IsPrimaryKey = true, IsDbGenerated = false)]
    internal Guid Id { get; set; }

    [Column]
    internal string FirstName { get; set; }

    [Column]
    internal int Age { get; set; }

    #region IPerson implementation
    Guid IPerson.Id { get { return this.Id; } }
    string IPerson.FirstName { get { return this.FirstName; } }
    int IPerson.Age { get { return this.Age; } }
    #endregion
}
You will need to add the "Column" attribute to all of your Dto properties, but notice that when there is a one-to-one correlation between the name you want exposed on the interface and the name of the actual table column, you won't need any of the named parameters. In this example my person ID is stored in the database as "PersonIdentityId", yet I only want my interface to expose the field as "Id".
That's how I do my Dal layer. I believe this layer should be dumb, real dumb - dumb in the sense that it is only there for CRUD (Create, Retrieve, Update and Delete) operations. All of the business logic goes into my "Model" project, which consumes and utilizes the IPersonSaver and IPersonRetriever interfaces.
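For illustration, a rough sketch of how the Model layer might consume these contracts without ever touching the Dal types (PersonService and the age check are hypothetical):
public class PersonService
{
    private readonly IPersonRetriever _retriever;

    public PersonService(IPersonRetriever retriever)
    {
        _retriever = retriever;
    }

    public bool IsAdult(Guid personId)
    {
        // Business logic only sees IPerson; PersonDto never leaves the Dal.
        IPerson person = _retriever.GetPersonById(personId);
        return person != null && person.Age >= 18;
    }
}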
Hope this helps!

Fluent NHibernate DuplicateMappingException with AutoMapping

Summary:
I want to save two classes with the same name in different namespaces using the Fluent NHibernate AutoMapper.
Context:
I'm having to import a lot of different objects into the database for testing. I'll eventually write mappers to a proper model.
I've been using code gen and Fluent NHibernate to take these DTOs and dump them straight to the db.
The exception does say to try using auto-import="false".
Code
public class ClassConvention : IClassConvention
{
    public void Apply(IClassInstance instance)
    {
        instance.Table(instance.EntityType.Namespace.Replace(".", "_"));
    }
}

namespace Sample.Models.Test1
{
    public class Test
    {
        public virtual int Id { get; set; }
        public virtual string Something { get; set; }
    }
}

namespace Sample.Models.Test2
{
    public class Test
    {
        public virtual int Id { get; set; }
        public virtual string SomethingElse { get; set; }
    }
}
And here's the actual app code
var model = AutoMap.AssemblyOf<Service1>()
    .Where(t => t.Namespace.StartsWith("Sample.Models"))
    .Conventions.AddFromAssemblyOf<Service1>();

var cfg = Fluently.Configure()
    .Database(
        MySQLConfiguration.Standard.ConnectionString(
            c => c.Is("database=test;server=localhost;user id=root;Password=;")))
    .Mappings(m => m.AutoMappings.Add(model))
    .BuildConfiguration();

new SchemaExport(cfg).Execute(false, true, false);
Thanks I really appreciate any help
Update, using Fluent NHibernate RC1
Solution from the fluent-nhibernate forums, by James Gregory:
Got around to having a proper look at this tonight. Basically, it is down to the AutoImport stuff the exception mentioned; when NHibernate is given the first mapping it sees that the entity is named with the full assembly-qualified name and creates an import for the short name (being helpful!), and then when you add the second one it complains that this import is now going to conflict. So the solution is to turn off the auto importing; unfortunately, we don't have a way to do that in the RC... I've just committed a fix that adds the ability to change this in a convention. So if you get the latest binaries or source, you should be able to change the Conventions line in your attached project to do this:
.Conventions.Setup(x =>
{
    x.AddFromAssemblyOf<Program>();
    x.Add(AutoImport.Never());
});
Which adds all the conventions you've defined in your assembly, then uses one of the helper conventions to turn off auto importing.
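Applied to the configuration in the question, that would look something like this (a sketch; Service1 is the marker type the asker already uses):
var model = AutoMap.AssemblyOf<Service1>()
    .Where(t => t.Namespace.StartsWith("Sample.Models"))
    .Conventions.Setup(x =>
    {
        x.AddFromAssemblyOf<Service1>();
        x.Add(AutoImport.Never());
    });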
I was not able to get this to work using Conventions for FluentMappings (in contrast to AutoMappings). However, the following works for me, though it must be added to each ClassMap where needed.
public class AMap : ClassMap<A>
{
    public AMap()
    {
        HibernateMapping.Not.AutoImport();
        Map(x => x.Item, "item");
        ...
    }
}
I am having a real problem with this, and the example above (and its variants) does not help.
var cfg = new NotifyFluentNhibernateConfiguration();
return Fluently.Configure()
.Database(
FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2005
.ConnectionString("Server=10.2.65.227\\SOSDBSERVER;Database=NotifyTest;User ID=NHibernateTester;Password=test;Trusted_Connection=False;")
)
.Mappings(m => {
m.AutoMappings
.Add(AutoMap.AssemblyOf<SubscriptionManagerRP>(cfg));
m.FluentMappings.Conventions.Setup(x =>
{
x.AddFromAssemblyOf<Program>();
x.Add(AutoImport.Never());
});
} )
.BuildSessionFactory();
I can't find the reference for Program.
In desperation I've also tried putting down a separate XML file to configure Fluent NHibernate's mapping with auto-import = false, with no success.
Can I please have a more extensive example of how to do this?
Edit: I got the latest trunk just weeks ago.
Edit: Solved this by removing all duplicates.
I have had the same problem. I solved it like this:
Fluently.Configure()
.Database(MsSqlConfiguration.MsSql2008
.ConnectionString(...)
.AdoNetBatchSize(500))
.Mappings(m => m.FluentMappings
.Conventions.Setup(x => x.Add(AutoImport.Never()))
.AddFromAssembly(...)
.AddFromAssembly(...)
.AddFromAssembly(...)
.AddFromAssembly(...))
;
The important part is .Conventions.Setup(x => x.Add(AutoImport.Never())). Everything seems to be working fine with this configuration.
Use the BeforeBindMapping event to gain access to the object representation of the .HBM XML files.
This event allows you to modify any properties at runtime before the NHibernate session factory is created. This also makes the FluentNHibernate-equivalent convention unnecessary. Unfortunately, there is currently no official documentation for this really great feature.
Here's a global solution to duplicate mapping problems (just remember that all HQL queries will now need to use fully qualified type names instead of just the class names):
var configuration = new NHibernate.Cfg.Configuration();
configuration.BeforeBindMapping += (sender, args) => args.Mapping.autoimport = false;
I had to play around with where to add the AutoImport.Never() convention. I have my persistence mapping separated into different projects, and the models for each application can also be found in different projects, all using Fluent NHibernate with auto mapping.
There are occasions when domains (well, mappings really) have to be combined - whenever I need access to all domains. The POCO classes used will sometimes have the same name in different namespaces, just as in the examples above.
Here is what my combined mapping looks like:
internal static class NHIbernateUtility
{
public static ISessionFactory CreateSessionFactory(string connectionString)
{
return Fluently.Configure()
.Database(
MsSqlConfiguration
.MsSql2008
.ConnectionString(connectionString))
.Mappings(m => m.AutoMappings
.Add(ProjectA.NHibernate.PersistenceMapper.CreatePersistenceModel()))
.Mappings(m => m.AutoMappings
.Add(ProjectB.NHibernate.PersistenceMapper.CreatePersistenceModel()))
.Mappings(m => m.AutoMappings
.Add(ProjectC.NHibernate.PersistenceMapper.CreatePersistenceModel())).BuildSessionFactory();
}
}
And one of the persistence mappers:
public static class PersistenceMapper
{
public static AutoPersistenceModel CreatePersistenceModel()
{
return
AutoMap.AssemblyOf<Credential>(new AutoMapConfiguration())
.IgnoreBase<BaseEntity>()
.Conventions.Add(AutoImport.Never())
.Conventions.Add<TableNameConvention>()
.Conventions.Add<StandardForeignKeyConvention>()
.Conventions.Add<CascadeAllConvention>()
.Conventions.Add<StandardManyToManyTableNameConvention>()
.Conventions.Add<PropertyConvention>();
}
}
The persistence mappers are very similar for each POCO namespace - some have overrides. I had to add .Conventions.Add(AutoImport.Never()) to each persistence mapper and it works like a charm.
Just wanted to share this in case anyone else is doing it this way.