Is it possible to use the guid.comb strategy for identity generation with a MySQL database using NHibernate?
When I use it as
mapping.Id(x => x.Id)
    .Column("row_guid")
    .CustomType(typeof(string))
    .GeneratedBy.GuidComb()
    .Length(36);
I end up with:
System.InvalidOperationException: Identity type must be Guid
Is there a way to overcome this obstacle in the MySQL scenario?
Edit:
I don't have a choice between guid and int. This is a port of a legacy database from MS SQL Server.
This is a common problem, especially when porting MS SQL applications to MySQL.
As David said, implement a simple custom ID generator: a wrapper over GuidCombGenerator that gives you the Guid as a string.
using NHibernate.Engine;
using NHibernate.Id;

namespace NHibernateMaps
{
    public class GuidStringGenerator : IIdentifierGenerator
    {
        public object Generate(ISessionImplementor session, object obj)
        {
            return new GuidCombGenerator().Generate(session, obj).ToString();
        }
    }
}
And in the mapping specify it as
mapping.Id(x => x.Id)
    .Column("row_id")
    .CustomType(typeof(string))
    .GeneratedBy.Custom(typeof(GuidStringGenerator))
    .Length(36);
I'm not convinced of the wisdom of using an unsupported data type for your primary key, but if you really do need to do this then you could try writing an NHibernate user type that exposes its property as a Guid but persists to the database as a string. The problem in this case seems to be that the property itself is defined as a data type other than System.Guid, which the guid.comb strategy expects.
I can't guarantee that it still won't error, but it's the only way you'll get it to work if it is possible. If you're new to NHibernate user types, there is an abstract base class that takes care of some of the drudge work for you here, with an example implementation class. You should be able to follow this example.
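For reference, here is a minimal sketch of what such a user type could look like, assuming the classic IUserType shape (the exact member signatures vary between NHibernate versions, so treat this as an outline rather than drop-in code; GuidAsStringUserType is a made-up name):

using System;
using System.Data;
using NHibernate;
using NHibernate.SqlTypes;
using NHibernate.UserTypes;

public class GuidAsStringUserType : IUserType
{
    // The column is a 36-character string...
    public SqlType[] SqlTypes
    {
        get { return new[] { SqlTypeFactory.GetString(36) }; }
    }

    // ...but the entity property is a System.Guid, which guid.comb expects.
    public Type ReturnedType { get { return typeof(Guid); } }

    public bool IsMutable { get { return false; } }

    public object NullSafeGet(IDataReader rs, string[] names, object owner)
    {
        var value = (string)NHibernateUtil.String.NullSafeGet(rs, names[0]);
        return value == null ? (object)null : new Guid(value);
    }

    public void NullSafeSet(IDbCommand cmd, object value, int index)
    {
        NHibernateUtil.String.NullSafeSet(cmd, value == null ? null : value.ToString(), index);
    }

    // Guid is an immutable value type, so copying and caching are trivial.
    public object DeepCopy(object value) { return value; }
    public object Replace(object original, object target, object owner) { return original; }
    public object Assemble(object cached, object owner) { return cached; }
    public object Disassemble(object value) { return value; }

    public new bool Equals(object x, object y) { return object.Equals(x, y); }
    public int GetHashCode(object x) { return x == null ? 0 : x.GetHashCode(); }
}

You would then map the id with .CustomType(typeof(GuidAsStringUserType)) and keep .GeneratedBy.GuidComb(); in principle the generator is satisfied because the property type it sees is System.Guid, though, as said above, I can't guarantee it won't hit another check further down.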
You can just use System.Guid as the type of the property.
O/RM is about mapping, so even though your database doesn't support a given type natively, you can still use it in your domain model. The underlying type of your column should be BINARY(16) for MySQL compatibility.
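A minimal sketch of that mapping, assuming your Fluent NHibernate version exposes CustomSqlType on the identity part (if it doesn't, the same effect can be had with the sql-type attribute in hbm.xml):

// The property stays a System.Guid, so GuidComb is happy; only the
// column's SQL type is overridden to MySQL-friendly BINARY(16).
mapping.Id(x => x.Id)
    .Column("row_guid")
    .GeneratedBy.GuidComb()
    .CustomSqlType("BINARY(16)");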
In my PostgreSQL database I have:
CREATE TABLE category (
    -- ...
    category_name_localization JSON not null
);
In Java, I have a JDO class like so:
@javax.jdo.annotations.PersistenceCapable(table = "category")
public class Category extends _BlueEntity implements Serializable {
    //...

    private org.json.simple.JSONObject category_name_localization;

    @javax.jdo.annotations.Column(name = "category_name_localization")
    public org.json.simple.JSONObject getCategoryNameLocalization() {
        return category_name_localization;
    }
}
When I use this class, DataNucleus gives the following exception:
org.datanucleus.exceptions.NucleusUserException: Field "com.advantagegroup.blue.ui.entity.Category.category_name_localization" is a map that has been specified without a join table and neither the key nor the value has a mapped-by specified. This is invalid!
at org.datanucleus.store.rdbms.RDBMSStoreManager.newJoinTable(RDBMSStoreManager.java:2720)
at org.datanucleus.store.rdbms.mapping.java.AbstractContainerMapping.initialize(AbstractContainerMapping.java:82)
at org.datanucleus.store.rdbms.mapping.MappingManagerImpl.getMapping(MappingManagerImpl.java:680)
at org.datanucleus.store.rdbms.table.ClassTable.manageMembers(ClassTable.java:518)
at org.datanucleus.store.rdbms.table.ClassTable.manageClass(ClassTable.java:424)
at org.datanucleus.store.rdbms.table.ClassTable.initializeForClass(ClassTable.java:1250)
at org.datanucleus.store.rdbms.table.ClassTable.initialize(ClassTable.java:271)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.initializeClassTables(RDBMSStoreManager.java:3288)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2897)
at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:118)
at org.datanucleus.store.rdbms.RDBMSStoreManager.manageClasses(RDBMSStoreManager.java:1637)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:665)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getPropertiesForGenerator(RDBMSStoreManager.java:2098)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1278)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3668)
at org.datanucleus.state.StateManagerImpl.setIdentity(StateManagerImpl.java:2276)
at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:482)
at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:122)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:218)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:1986)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1830)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1685)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:712)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:738)
at com.advantagegroup.blue.ui.jdo._BlueJdo.insert(_BlueJdo.java:40)
at ...
This error makes sense in a way, because org.json.simple.JSONObject extends Map. However, this field is not part of any relationship: it is of type JSON, and therefore it is natural to back it with a JSONObject.
How do I tell JDO / DataNucleus to chill and treat org.json.simple.JSONObject the same way it would a String or a Date?
Thanks!
DC
My understanding is that your current mapping tries to persist a normal Map (while DataNucleus doesn't know what a JSONObject is, it does know what a Map is), and for an RDBMS that requires a join table.
Since you presumably want the JSONObject persisted into a single column, you need to create a JDO AttributeConverter. I've done similar things with my own types and it works fine (I'm on v5.0.5, IIRC).
I also found this in their docs, for when you have your own Map class that DataNucleus doesn't know how to handle by default in terms of replacing it with a proxy (to intercept the calls to put, putAll, etc.). If you add that line, it will not try to wrap this field with a proxy (which it doesn't know how to do for that type unless you tell it). If you want to auto-detect the JSONObject becoming "dirty", you would need to write a proxy wrapper, as per this page.
This doesn't answer how to map the column for that converter to use a "json" type in PostgreSQL, but I'd guess that if you set the sqlType you may get success in that respect.
I have a Curriculum entity which is as follows:
@Entity
public class Curriculum {

    @ManyToMany
    private Set<Language> languages;

    ...
I am trying to persist a Curriculum instance, but the constraint is that the Language instances are already in the database.
How can I make sure that when persist is called, a row is inserted into the curriculum table and into the curriculum_languages join table, but not into the language table, which is a reference table that is already populated?
edit 1:
- Here is the error I get if I don't specify the Cascade.ALL attribute:
org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: com.bignibou.domain.Language
- If I do specify Cascade.ALL, new rows with new IDs are inserted into the language table, which is obviously not what I want...
edit 2: Note that I use Spring Data JPA in order to persist my instances and the data coming from the browser is a JSON object as follows:
{"curriculum":{"languages":[{"id":46,"description":"Français"},{"id":30,"description":"Chinois"}],"firstName":"Julianito","dateOfBirth":"1975-01-06","telephoneNumber":"0608965874","workExperienceInYears":3,"maxNumberChildren":1,"drivingLicense":true}}
Probably the simplest way to achieve this is to load the needed languages from the DB and assign them to the curriculum just before the persist operation.
The logic behind this is the following:
When a language object comes into your application and is deserialized from JSON, it has an ID field already assigned to it.
When you try to persist the curriculum (with languages deserialized from JSON), JPA gets confused about those IDs, because on the one hand it 'should' know about these objects (IDs are set), but on the other hand the active JPA session does not know anything about them (they came from the outer world of JSON).
So fetching languages by ID tells JPA who's who.
What should happen when a nonexistent Language comes with a Curriculum instance? Should the Language instance simply be ignored, or should an exception be thrown?
If an exception should be thrown, just wrap the code that persists the entity in a try-catch block and throw your own exception. If the language should simply be ignored, just remove it from the List<Language> before persisting, e.g. if the Language's ID is null.
I forgot to mention that I use optimistic locking. Including the version field in the JSON sorted the issue:
"languages":[{"id":46,"description":"Français","version":0}],...
It would be great if anyone would kindly provide an explanation to this though...
edit: version field:
@Version
@Column(name = "version")
private Integer Curriculum.version;
I normally write all parts of the code in C#, and when writing protocols that are serialized I use FastSerializer, which serializes/deserializes classes fast and efficiently. It is also very easy to use, and fairly straightforward to do "versioning", i.e. to handle different versions of the serialization. What I normally use looks like this:
public override void DeserializeOwnedData(SerializationReader reader, object context)
{
    base.DeserializeOwnedData(reader, context);

    byte serializeVersion = reader.ReadByte(); // used to keep what version we are using

    this.CustomerNumber = reader.ReadString();
    this.HomeAddress = reader.ReadString();
    this.ZipCode = reader.ReadString();
    this.HomeCity = reader.ReadString();

    if (serializeVersion > 0)
        this.HomeAddressObj = reader.ReadUInt32();
    if (serializeVersion > 1)
        this.County = reader.ReadString();
    if (serializeVersion > 2)
        this.Muni = reader.ReadString();
    if (serializeVersion > 3)
        this._AvailableCustomers = reader.ReadList<uint>();
}
and
public override void SerializeOwnedData(SerializationWriter writer, object context)
{
    base.SerializeOwnedData(writer, context);

    byte serializeVersion = 4;
    writer.Write(serializeVersion);

    writer.Write(CustomerNumber);
    writer.Write(PopulationRegistryNumber);
    writer.Write(HomeAddress);
    writer.Write(ZipCode);
    writer.Write(HomeCity);

    if (CustomerCards == null)
        CustomerCards = new List<uint>();
    writer.Write(CustomerCards);

    writer.Write(HomeAddressObj);
    writer.Write(County);

    // v 2
    writer.Write(Muni);

    // v 4
    if (_AvailableCustomers == null)
        _AvailableCustomers = new List<uint>();
    writer.Write(_AvailableCustomers);
}
So it's easy to add new things, or to change the serialization completely if one chooses to.
However, I now want to use JSON, for reasons not relevant right here =). I am currently using DataContractJsonSerializer, and I am now looking for a way to have the same flexibility I have with the FastSerializer above.
So the question is: what is the best way to create a JSON protocol/serialization and be able to detail the serialization as above, so that I do not break the serialization just because another machine hasn't yet updated its version?
The key to versioning JSON is to always add new properties, and never remove or rename existing properties. This is similar to how protocol buffers handle versioning.
For example, if you started with the following JSON:
{
    "version": "1.0",
    "foo": true
}
and you wanted to rename the "foo" property to "bar", don't just rename it. Instead, add a new property:
{
    "version": "1.1",
    "foo": true,
    "bar": true
}
Since you are never removing properties, clients based on older versions will continue to work. The downside of this method is that the JSON can get bloated over time, and you have to continue maintaining old properties.
It is also important to clearly define your "edge" cases to your clients. Suppose you have an array property called "fooList". The "fooList" property could take on the following possible values: does not exist/undefined (the property is not physically present in the JSON object, or it exists and is set to "undefined"), null, empty list or a list with one or more values. It is important that clients understand how to behave, especially in the undefined/null/empty cases.
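To illustrate, here is a small consumer-side sketch using Json.NET's JObject; the ClassifyFooList helper and the payload shape are made up for the example:

using Newtonsoft.Json.Linq;

static class EdgeCases
{
    // Classifies the "fooList" property into the four cases described above:
    // absent, explicitly null, an empty list, or a populated list.
    public static string ClassifyFooList(string json)
    {
        JObject obj = JObject.Parse(json);

        JToken fooList;
        if (!obj.TryGetValue("fooList", out fooList))
            return "absent";              // property not physically present
        if (fooList.Type == JTokenType.Null)
            return "null";                // present but explicitly null
        return ((JArray)fooList).Count > 0 ? "populated" : "empty";
    }
}

For example, ClassifyFooList("{}") returns "absent", while ClassifyFooList("{\"fooList\":[]}") returns "empty".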
I would also recommend reading up on how semantic versioning works. If you introduce a semantic versioning scheme to your version numbers, then backwards compatible changes can be made on a minor version boundary, while breaking changes can be made on a major version boundary (both clients and servers would have to agree on the same major version). While this isn't a property of the JSON itself, this is useful for communicating the types of changes a client should expect when the version changes.
Google's Java-based Gson library has excellent versioning support for JSON. It could prove very handy if you are thinking of going the Java way.
There is a nice and easy tutorial here.
It doesn't matter what serializing protocol you use, the techniques to version APIs are generally the same.
Generally you need:
- a way for the consumer to communicate to the producer the API version it accepts (though this is not always possible)
- a way for the producer to embed versioning information into the serialized data
- a backward-compatible strategy for handling unknown fields
In a web API, the version that the consumer accepts is generally embedded in the Accept header (e.g. Accept: application/vnd.myapp-v1+json application/vnd.myapp-v2+json means the consumer can handle either version 1 or version 2 of your API) or, less commonly, in the URL (e.g. https://api.twitter.com/1/statuses/user_timeline.json). This is generally used for major versions (i.e. backward-incompatible changes). If the server and the client do not have a matching Accept header, the communication fails (or proceeds on a best-effort basis or falls back to a default baseline protocol, depending on the nature of the application).
The producer then generates serialized data in one of the requested versions, and embeds this version info into the serialized data (e.g. as a field named version). The consumer should use the version information embedded in the data to determine how to parse it. The version information should also contain the minor version (i.e. for backward-compatible changes); generally consumers should be able to ignore the minor version and still process the data correctly, although understanding the minor version may allow the client to make additional assumptions about how the data should be processed.
A common strategy for handling unknown fields is the way HTML and CSS are parsed: when the consumer sees an unknown field it should ignore it, and when the data is missing a field the client expects, the client should use a default value. Depending on the nature of the communication, you may also want to specify some fields as mandatory (i.e. a missing field is considered a fatal error). Fields added within a minor version should always be optional; a minor version can add optional fields or change field semantics as long as it is backward compatible, while a major version can delete fields, add mandatory fields, or change field semantics in a backward-incompatible manner.
In an extensible serialization format (like JSON or XML), the data should be self-descriptive; in other words, field names should always be stored together with the data, and you should not rely on specific data being available at specific positions.
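Putting these pieces together, a consumer-side sketch might look like this (using Json.NET; the Customer shape and the version policy are purely illustrative):

using System;
using Newtonsoft.Json.Linq;

class Customer
{
    public string Name;
    public int Age;
}

static class CustomerParser
{
    // Reads the embedded version first, rejects unknown major versions,
    // silently ignores unknown fields, and defaults fields that older
    // payloads lack.
    public static Customer Parse(string json)
    {
        JObject data = JObject.Parse(json);
        Version version = Version.Parse((string)data["version"] ?? "1.0");

        if (version.Major > 2)
            throw new NotSupportedException("Unsupported major version: " + version);

        return new Customer
        {
            Name = (string)data["name"],
            Age = (int?)data["age"] ?? 0  // assumed added in a later minor version
        };
    }
}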
Don't use DataContractJsonSerializer; as the name says, objects processed through this class will have to:
a) be marked with [DataContract] and [DataMember] attributes, and
b) be strictly compliant with the defined "contract", that is, nothing less and nothing more than what is defined. Any extra or missing [DataMember] will make the deserialization throw an exception.
If you want to be flexible enough, then use JavaScriptSerializer if you want to go for the cheap option... or use this library:
http://json.codeplex.com/
This will give you enough control over your JSON serialization.
Imagine you have an object in its early days.
public class Customer
{
    public string Name;
    public string LastName;
}
Once serialized it will look like this:
{ Name: "John", LastName: "Doe" }
If you change your object definition to add or remove fields, deserialization will occur smoothly if you use, for example, JavaScriptSerializer.
public class Customer
{
    public string Name;
    public string LastName;
    public int Age;
}
If you try to deserialize the last JSON into this new class, no error will be thrown. The thing is that your new fields will be set to their defaults; in this example, "Age" will be set to zero.
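As a quick sanity check, here is a sketch of that behaviour, assuming the two Customer definitions above (JavaScriptSerializer lives in System.Web.Extensions):

using System;
using System.Web.Script.Serialization;

class Program
{
    static void Main()
    {
        var serializer = new JavaScriptSerializer();

        // JSON produced by the v1 class; there is no "Age" field yet.
        string v1Json = "{ \"Name\": \"John\", \"LastName\": \"Doe\" }";

        // Deserializing into the v2 class works; Age falls back to default(int).
        Customer c = serializer.Deserialize<Customer>(v1Json);
        Console.WriteLine(c.Age); // prints 0
    }
}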
You can include, as part of your own conventions, a field present in all your objects that contains the version number. In this case you can tell the difference between an empty field and a version inconsistency.
So let's say you have your Customer v1 serialized:
{ "Version": 1, "LastName": "Doe", "Name": "John" }
When you deserialize it into a Customer v2 instance, you will have:
{ "Version": 1, "LastName": "Doe", "Name": "John", "Age": 0 }
This way you can detect which fields in your object are reliable and which are not. In this case you know that your v2 object instance is coming from a v1 object instance, so the field Age should not be trusted.
You could also use a custom attribute, e.g. "MinVersion", and mark each field with the minimum supported version number, so you get something like this:
public class Customer
{
    [MinVersion(1)]
    public int Version;

    [MinVersion(1)]
    public string Name;

    [MinVersion(1)]
    public string LastName;

    [MinVersion(2)]
    public int Age;
}
Then later you can access this meta-data and do whatever you might need with that.
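A sketch of what that hypothetical attribute and the version check could look like (MinVersionAttribute and IsReliable are made up for illustration):

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Field)]
public class MinVersionAttribute : Attribute
{
    public int Version { get; private set; }
    public MinVersionAttribute(int version) { Version = version; }
}

public static class VersionCheck
{
    // A field is trustworthy only if the payload's version is at least
    // the version in which the field was introduced.
    public static bool IsReliable(FieldInfo field, int payloadVersion)
    {
        var attr = (MinVersionAttribute)Attribute.GetCustomAttribute(
            field, typeof(MinVersionAttribute));
        return attr == null || attr.Version <= payloadVersion;
    }
}

With the class above, deserializing a payload whose Version is 1 into the v2 Customer means IsReliable(typeof(Customer).GetField("Age"), 1) returns false, so you know to ignore Age.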
I have this problem:
The Vehicle type derives from the EntityObject type which has the property "ID".
I think I get why L2S can't translate this into SQL: it does not know that the WHERE clause should include WHERE VehicleId == value. VehicleId, by the way, is the PK on the table, whereas the property in the object model, as above, is "ID".
Can I even win on this with an Expression tree? Because it seems easy enough to create an Expression to pass to the SingleOrDefault method but will L2S still fail to translate it?
I'm trying to be DDD friendly so I don't want to decorate my domain model objects with ColumnAttributes etc. I am happy however to customize my L2S dbml file and add Expression helpers/whatever in my "data layer" in the hope of keeping this ORM-business far from my domain model.
Update:
I'm not using the object initializer syntax in my select statement, i.e. not this:
private IQueryable<Vehicle> Vehicles()
{
    return from vehicle in _dc
           select new Vehicle() { ID = vehicle.VehicleId };
}
I'm actually using a constructor, and from what I've read this will cause the above problem. This is what I'm doing:
private IQueryable<Vehicle> Vehicles()
{
    return from vehicle in _dc
           select new Vehicle(vehicle.VehicleId);
}
I understand that L2S can't translate the expression tree from the screen grab above because it does not know the mappings, which it would usually infer from the object initializer syntax. How can I get around this? Do I need to build an Expression with the attribute bindings?
From further experience, I have decided that this is not possible.
L2S simply cannot create the correct WHERE clause when a parameterized constructor is used in the mapping projection. It's the initializer syntax in conventional L2S mapping projections that gives L2S the context it needs.
Short answer - use NHibernate.
Short answer: Don't.
I once tried to apply IQueryable<IEntity> to LINQ to SQL. I got burned badly.
As you said, L2S (and EF too, in this regard) doesn't know that ID is mapped to the column VehicleId. You could get around this by renaming Vehicle.ID to Vehicle.VehicleId (yes, it works if the names match). However, I still don't recommend it.
Use L2S with the objects it provides. Masking an extra layer over it while working with IQueryable ... is bad IMO (from my experience).
Another way is to call .ToList() after the select statement. This loads all the vehicles into memory; the .Where statement then runs against LINQ to Objects collections. Of course this won't be as efficient as letting L2S handle the whole query, and it uses more memory.
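A quick sketch of that workaround, reusing the hypothetical Vehicles() method from the question (someId stands in for whatever key you are looking up):

// Materialize first (the SQL executes at ToList), then filter in memory
// with LINQ to Objects, where the parameterized constructor is no problem.
Vehicle match = Vehicles()
    .ToList()
    .SingleOrDefault(v => v.ID == someId);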
Long story short: don't use the L2S IQueryable with objects other than the ones it was originally designed for. It just doesn't work (well).
I am having trouble with inheritance mapping in LINQ to SQL. I am using MSDN as a reference, and as a basis it sounds good. However, the example it gives uses single-table inheritance mapping, whereas I am trying to do multiple-table inheritance to save on table space. Is this possible? So far I have:
[Table(Name="writing_objs")]
[InheritanceMapping(Code="T",Type=typeof(ObjectTypeA), IsDefault=true)] // Only default because a default is required
[InheritanceMapping(Code="N",Type=typeof(ObjectTypeb))]
public abstract class WritingObject
{
/* ... */
[Column(Name="obj_tp", IsDiscriminator=true)]
[StringLength(1)]
public string ObjectType { get; set; }
}
I then have the different object types defined like so:
[Table(Name="obj_type_a")]
public class ObjectTypeA: WritingObject
{
/* Object Type A fields */
}
The issue seems to be that I am defining a table attribute in the 2nd type, as I get the following exception:
The inheritance subtype 'ObjectTypeA' is also declared as a root type.
Is it possible to keep these fields in separate tables with LINQ to SQL, or am I going to have to consolidate them all into a single table? Is it necessarily bad to have some extra fields in one table as long as there aren't too many (some object types might even be able to share some fields)?
LINQ to SQL does not support multiple-table inheritance using a discriminator, even though that is the best design in many cases (it's the most normalized).
You'll have to implement it using associations instead. If you use a mapping layer that converts it to an inheritance-based domain model, it will be easier to manage at higher layers.
Well, I know this problem has already been resolved, but as I just encountered the same issue, I'd like to share what I did:
Just remove the [Table] attribute from your inherited classes. It's quite logical: the root class already defines how all subtypes are stored (via the discriminator attribute).
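Applied to the example above, the subtype simply becomes:

// No [Table] attribute here: the subtype shares writing_objs, declared on
// the root, and is selected via the "obj_tp" discriminator column.
public class ObjectTypeA : WritingObject
{
    /* Object Type A fields */
}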
Maybe this will help someone in the future.