I have exposed my Entity Framework DbContext's POCO entities to WCF (without tracking enabled), and when I try to update a combination of a parent and its related entities through my client, only the parent entity gets updated; the child entities are not.
I am trying something like this: a Customer has one or more CustomerAddress entities. In my client, I have added a new CustomerAddress, deleted an existing one, updated another, and also modified something on the Customer object itself. Now I want to save all of these changes in one shot.
Currently, it updates only the Customer and ignores the rest.
I was under the impression that with EF 4.1 the change-tracking capability had been improved and we could achieve this without STEs (self-tracking entities). Is my assumption correct?
Is this possible with DbContext? Any help or directions?
No. EF change tracking only tracks changes for attached entities. If you serialize an entity and send it elsewhere, it is no longer tracked; it becomes a detached scenario, and you are responsible for telling EF what changes happened on your WCF client. If you just attach the entity and set the parent's state to Modified, it will do exactly that: it will update only the parent, because you didn't tell it that anything else had changed.
So either send additional information from the client about which entities were modified, and set every entity or relation to the correct state before saving changes, or load the current state (the current parent and children) from the database and merge it with the state received from the client.
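The merge approach can be sketched as follows (TypeScript rather than C#, and all shapes and names here are hypothetical): the server reloads the current children from the database and classifies the incoming collection, so that each group can then be attached with the corresponding state (Added, Modified, Deleted).

```typescript
// Hypothetical child shape; your real entities will differ.
interface Address { id: number; street: string; }

interface ChildDiff {
  added: Address[];      // in the client graph but not in the DB
  modified: Address[];   // in both, but with different values
  deleted: Address[];    // in the DB but missing from the client graph
}

// Classify the client's child collection against the current DB state,
// so each child can be attached with the right state afterwards.
function diffChildren(current: Address[], incoming: Address[]): ChildDiff {
  const currentById = new Map(current.map(a => [a.id, a]));
  const incomingById = new Map(incoming.map(a => [a.id, a]));

  const added = incoming.filter(a => !currentById.has(a.id));
  const modified = incoming.filter(a => {
    const match = currentById.get(a.id);
    return match !== undefined && match.street !== a.street;
  });
  const deleted = current.filter(a => !incomingById.has(a.id));

  return { added, modified, deleted };
}
```

The same classification logic applies whatever the stack; in EF you would then set each entry's state accordingly before calling SaveChanges.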
Related
This should be a pretty common issue: let's say I'm updating a users table as well as a users_organizations table. From the UI perspective, there is only one button "Save".
I can either:
1) Create a single API route
2) Create one API route for each resource (one for users, one for users_organizations)
And then, suppose I choose 1). Should I update both tables in a single database call or should I split it up into 2 database calls?
In general I'm never sure how to approach these problems. Sometimes there's an action that affects more than 2 database tables at once. How do I ensure robustness, proper error handling, and keep my code sane all at once?
Definitely a problem I struggle with as well.
From what I've seen in the past, most operations that go along with a UI action are related, and can be given a common action name like update-user when clicking "Save". I'd have a single API endpoint to update the user, such as PUT /api/users/123 in a REST API. The body of that request would contain updated fields and new organizations the user belongs to.
Then on the server side I would make 2 database calls, one to update the user table and one to update the user_organization table.
If you feel 2 operations are so different that it's difficult to come up with a common API endpoint name, or if they need to be called independently in other parts of the app, I would argue that they should be 2 different API endpoints.
At the end of the day I try to ask, if a new developer were to try to understand this code, what would be the simplest approach?
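A rough sketch of that single-endpoint approach (all names hypothetical; in-memory maps stand in for the two tables, and a real implementation would use the database client's transaction API instead of the simulated rollback):

```typescript
// In-memory stand-ins for the two tables.
const users = new Map<number, { name: string }>();
const userOrganizations = new Map<number, number[]>(); // userId -> orgIds

interface UpdateUserRequest {
  name: string;
  organizationIds: number[];
}

// Handler body for PUT /api/users/:id — one endpoint, two table writes.
// Both writes succeed or neither does.
function updateUser(userId: number, body: UpdateUserRequest): void {
  const prevUser = users.get(userId);
  const prevOrgs = userOrganizations.get(userId);
  if (prevUser === undefined) throw new Error(`user ${userId} not found`);

  try {
    users.set(userId, { name: body.name });                   // "UPDATE users ..."
    userOrganizations.set(userId, [...body.organizationIds]); // rewrite "user_organizations ..."
  } catch (err) {
    // Roll back both writes so the two tables never disagree.
    users.set(userId, prevUser);
    if (prevOrgs !== undefined) userOrganizations.set(userId, prevOrgs);
    else userOrganizations.delete(userId);
    throw err;
  }
}
```

The point of the single route is that the caller never has to reason about a half-applied save; error handling lives in one place.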
I've been doing some experiments on my data storage, and for that reason I created a handful of fake ACLs. Now I want to delete them, so I queried the data storage with the following:
select * from dm_acl enable (row_based)
But then I realized there is no attribute such as creation date, modification date, or anything else date-related whatsoever. Then (with some doubt) I thought ACLs might be treated as dm_sysobject, but when I queried a specific ACL name I had in mind, there was no result. Is there any approach that would let me meet my objective?
I don't think you should delete ACLs based on their creation date (moreover, that is not possible), as there might still be objects referencing an ACL.
So I think what you really need is to delete orphaned ACL objects (those not referenced by any object).
There is a Documentum job, dm_DMClean, which does exactly this.
However, I'm not sure whether it deletes orphaned custom dm_acl objects or only the automatically created ones whose names start with dm_45.. (I haven't worked with DCTM in a long time), but it is easy to check: make sure you have an orphaned ACL, run the job, and see whether your ACL was deleted.
Sergi's answer is pretty much right, but I once had an issue with deliberately deleted ACLs in a production environment. The whole issue was fixed by simply creating new ACLs. There seems to be no additional link between an object's ACL property and the ACL object itself, so in case of a problem it should be easy to fix.
Since you say this is your development environment, you can go ahead and delete the ACLs you don't want to have in your environment. In this situation it's wise to run the ConsistencyChecker job from time to time.
Check for orphaned ACLs; if there are no orphaned objects, then try to query the objects you created during your development period and join their ACL properties to the dm_acl table.
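As a rough DQL sketch of that orphan check (untested; the acl_name/acl_domain pairing follows the standard dm_sysobject schema, and the dm_45 prefix is the usual convention for auto-generated ACLs; verify both against your docbase):

```sql
-- Custom ACLs that no sysobject version currently references (orphan candidates).
-- Note: matching on object_name alone ignores acl_domain; refine as needed.
select r_object_id, object_name, owner_name
  from dm_acl
 where object_name not like 'dm_45%'
   and object_name not in (select acl_name from dm_sysobject (all))
```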
We are developing an enterprise application which provides cloud storage services to its clients (via software as a service). Some of our clients want to track users coming from their web portal (landing page) by adding custom fields specific to that client (for example: license type, key, serial number, etc.). At this point we have only one client using our application and providing services to its end users, but we are negotiating with a new client which will use our service in the near future.
Currently, our relational database (MySQL) has a table specific to that first client which we use to handle its custom fields, and we don't want to create tables specific to the new client's custom fields, as that would require changing the database schema every time we add a new client.
Our research indicates there are a few methods we could use in our relational database to solve this problem (source: http://www.percona.com/live/mysql-conference-2013/sites/default/files/slides/Extensible%20Data%20Modeling.pdf):
Adding Extra columns (in a custom field table)
Entity - Attribute - Value Model
Class Table Inheritance Model
Serialized LOB & Inverted Indexes
Our question is: how do mature companies (specifically Salesforce, NetSuite, etc.) store custom fields architecturally, and how should we store ours?
Any help / pointers in the right direction would be helpful. Thanks.
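To make the Entity-Attribute-Value option from the list above concrete: an EAV table stores one row per custom field value, so each client can define its own fields without schema changes. A minimal read-side sketch (hypothetical table and names):

```typescript
// One row per custom field value, as in an EAV table like:
//   custom_field_values(entity_id, field_name, field_value)
interface EavRow {
  entityId: number;
  fieldName: string;
  fieldValue: string;
}

// Pivot the EAV rows for one entity into a plain object, so client-specific
// fields (license_type, serial_number, ...) come back as ordinary properties.
function pivot(rows: EavRow[], entityId: number): Record<string, string> {
  const result: Record<string, string> = {};
  for (const row of rows) {
    if (row.entityId === entityId) result[row.fieldName] = row.fieldValue;
  }
  return result;
}
```

The trade-off, as the linked slides discuss, is that querying and type-checking EAV data is harder than with real columns.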
PROBLEM
I am developing an app where the data model will be very similar to JSFiddle's. A user will create a new entry that will be assigned a GUID in the database. My question is how to handle when other users want to modify/fork/version the original entry. JSFiddle handles this by versioning the entry (so the URL becomes something like jsfiddle.net/GUID/1).
What is the benefit to JSFiddle's method over assigning a new GUID to the modified version and just recording a relationship to the original entry in the database?
It seems like no matter what I will have to create a new entry in the database that will essentially be a modified copy of the original.
Also, there will be both registered and anonymous users just like JSFiddle. The registered users should be able to log in and see all of their own entries and possibly the versions/forks that exist off of their own entries (though this isn't currently a requirement).
Am I missing something? Is there a right and wrong way to do this?
TECH
Using parse.com's RESTful API for data CRUD; node on the server.
What is the benefit to JSFiddle's method over assigning a new GUID to the modified version and just recording a relationship to the original entry in the database?
I would imagine none; both would require the same copy operation and the same double query (in MongoDB) to get the parent.
The only difference is what field you go by.
Am I missing something?
Not that I can see.
Is there a right and wrong way to do this?
It seems as though you have this pretty well covered frankly.
MVCC does seem the right way to do this in some respects; however, you don't have to go the whole hog. If you were to, there might be cause to switch to a database that has it built in, like CouchDB, because MongoDB's implementation would sit on top of its existing lock mechanisms; it's like adding a lock on a lock.
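To make the GUID-plus-version scheme concrete, here is a minimal in-memory sketch (shapes and names hypothetical; a real app would store these as documents in MongoDB or parse.com):

```typescript
interface Entry {
  guid: string;    // shared id of the entry across all its versions
  version: number; // 0 for the original, 1, 2, ... for later versions
  content: string;
}

const entries: Entry[] = [];

// Save a new version of an entry, JSFiddle-style: same GUID,
// next version number.
function saveVersion(guid: string, content: string): Entry {
  const versions = entries.filter(e => e.guid === guid);
  const next =
    versions.length === 0 ? 0 : Math.max(...versions.map(e => e.version)) + 1;
  const entry = { guid, version: next, content };
  entries.push(entry);
  return entry;
}

// Resolve a URL like /GUID/1 to its entry; no version means latest.
function getEntry(guid: string, version?: number): Entry | undefined {
  const versions = entries.filter(e => e.guid === guid);
  if (versions.length === 0) return undefined;
  if (version === undefined) {
    return versions.reduce((a, b) => (a.version > b.version ? a : b));
  }
  return versions.find(e => e.version === version);
}
```

The alternative (new GUID plus a parent reference) needs the same two lookups, just keyed on a parent field instead of a version field, which is why the two designs come out about even.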
I have this scenario in my 3-tier app, where a Service Layer serves an MVC presentation layer:
I have an operation that creates, for example, an Employee whose email must be unique in the set of Employees. This operation is executed in the MVC presentation layer through a service.
How do I handle an attempt to create an Employee whose email is already registered in the database for another Employee?
I am thinking of 2 options:
1) Have another operation that queries if there's an Employee with the same email given for the new Employee.
2) Throw an exception in the service CreateEmployee for the duplicate email.
I think it is a matter of deciding which option is best or most suitable for the problem.
I lean toward option 1) because I think this is a matter of validation.
But option 2) needs only one call to the service, so it's (?) more efficient.
What do you think?
Thanks!
I would definitely go with second option:
as you mentioned, it avoids an extra call to the service
it keeps your service interface clean, with just one method for employee creation
it is consistent from a transactional point of view (an exception meaning "transaction failed"). Keep in mind that validation is only one of the many reasons a transaction can fail.
imagine your validation constraints evolve (e.g. other employee attributes...); you won't want your whole implementation to evolve just for this.
Something to keep in mind: make sure your exception is verbose enough to clearly identify the cause of the failure.
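A minimal sketch of the second option (names hypothetical; an in-memory set stands in for the unique email column, and a real service would translate the database's unique-constraint violation instead of pre-checking):

```typescript
class DuplicateEmailError extends Error {
  constructor(email: string) {
    super(`'${email}' is already registered.`);
    this.name = "DuplicateEmailError";
  }
}

// Stand-in for the Employees table's unique email column.
const registeredEmails = new Set<string>();

// Single service method: creation and the uniqueness rule live together,
// so callers make one call and catch one well-named failure.
function createEmployee(email: string): void {
  if (registeredEmails.has(email)) {
    throw new DuplicateEmailError(email);
  }
  registeredEmails.add(email);
}
```

The presentation layer then catches DuplicateEmailError and renders its message, rather than issuing a separate existence-check query first.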
If by 'Presentation' layer you really mean presentation, you should not be creating a new employee in that layer. You should only be preparing any data to be cleanly presented in the HTTP response object.
Generally a good way to think of this sort of problem is to consider what your service objects should do if called by a command-line program:
> createEmployee allison.brie#awesome.com
Error! 'allison.brie#awesome.com' is already registered.
In this there is some terminal management layer that calls the service. The service does something to realize there is another user with the same email and throws an appropriate exception (i.e. DuplicateUserException). Then the terminal management layer interprets that exception and prints out "Error! " + exception.getMessage();
That said, note that your two options are actually the same option: your service must still query the database for the duplicate. While this is 'validation', it is not input validation. Input validation means checking whether the input is a valid email address (is there an '@' symbol and a valid TLD?).