Linq-to-Sql vs CRUD object?

I have a linq-to-sql object from a table. I want to store similar data in the profile collection. Technically, I don't need all of the data in the linq2sql table, and I don't really like the legacy naming construct. My first thought was to create a collection of CRUD objects.
But if I choose this solution I will have double the classes, with a lot of overlapping functionality. If I use the linq2sql objects as-is, then I'll be dealing with abstractions that contain more data than necessary.
To give you a clearer picture, here is a similar example.
This goes into the database, and a linq2sql abstraction is created for it:
NoteSaved
    Date
    Id
    UserId
    Text
    ....
    Custom methods
This goes into the user profile:
[Serializable]
SavedSearchText
    Text
    ...
    Custom methods
SavedSearchText doesn't need the junk like Id, UserId, and Date, and that data doesn't even make sense for it. Yet the custom functionality will overlap for both classes.
I see two straightforward approaches:
Create a whole new set of CRUD objects just for this purpose
Use the linq2sql objects as a proxy for the CRUD objects, i.e. rely on my own "wisdom" that these objects are not really "those objects"
I was going to take route 1, but I see a lot of duplication; it is not very DRY. What are some solutions that keep things as DRY as possible while also maintaining a clear architecture? In other words, I want to avoid having to duplicate the same methods for every object, AND I want to avoid storing unneeded data in the profile, such as Ids or DateStored, which are not required.
I thought this would be obvious, but the two classes (SavedSearchText and the linq2sql NoteSaved) share a common data field, Text, and common functionality, e.g. SomeFunction1, SomeFunction2, such as FindText.
Edit/Update:
Typically this would be handled with inheritance. We'd have a base business class Text and then two derived types, SavedText and UserText. But with linq2sql I do not see a way to do this while keeping with the DRY principle.
One might also solve this via containment (a has-a relationship), but that doesn't really feel "right" for this context.
Obviously one could create one's own business layer, but that also isn't DRY, as linq2sql already provides much of the required functionality.
Perhaps the cleanest solution would be to create CRUD objects that only read from and dump back into the linq2sql objects. Unfortunately, those linq2sql objects won't actually correspond to "tables", and some fields won't make sense. Still, that appears to be the DRYest solution.
It may even be possible in this case to use extension methods that extend both classes, but I prefer not to use extension methods unless required.
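To make the sort of thing I'm picturing concrete, here is a rough sketch (all the names, ITextEntry, TextEntryHelper, FromNote, are made up for illustration, and I've used a plain static helper rather than extension methods): the generated linq2sql class is partial, so it can implement a small shared interface, the profile class implements the same interface, and the shared functionality such as FindText is written once against that interface.

using System;

// Shared contract for anything that carries searchable text.
public interface ITextEntry
{
    string Text { get; set; }
}

// The shared functionality, written once against the interface.
public static class TextEntryHelper
{
    public static bool FindText(ITextEntry entry, string term)
    {
        return entry.Text != null &&
               entry.Text.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0;
    }
}

// The linq2sql class is generated as partial, so it can opt in to the
// contract without touching the designer file. (Members shown here are
// stand-ins for the generated ones.)
public partial class NoteSaved : ITextEntry
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    public string Text { get; set; }
}

// Lightweight profile object: only the data the profile actually needs.
[Serializable]
public class SavedSearchText : ITextEntry
{
    public string Text { get; set; }

    // Map from the heavier linq2sql entity when saving to the profile.
    public static SavedSearchText FromNote(NoteSaved note)
    {
        return new SavedSearchText { Text = note.Text };
    }
}

Whether that counts as DRY enough I'm not sure, but at least FindText and friends would only live in one place.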

Related

Get Map value like plain old Javascript objects

I'm new to Immutable.js, so this is a very trivial question.
It looks like I can't get a Map value the way I would with plain old JavaScript objects, e.g. myMap.myKey. Apparently I have to write myMap.get('myKey').
I am very surprised by this behavior. Is there a reason for that? Is there any extension to Immutable.js which would allow me to type myMap.myKey?
Came back to elaborate on my comment, but SO doesn't allow that after a certain time, so I'm converting it into an answer.
The question you have asked has come up several times from people who are new to Immutable, yours truly included. It's in one of the rants I wrote a while ago.
It starts to make sense when you look at it from an immutability perspective: if the values were exposed as regular properties, they could be assigned to, and the structure would no longer be immutable.
Nonetheless, it's frustrating to spread these getters all across your components/views. If you can afford it, you should try to use the Record type. It offers traditional access to members (except in IE 8). Better still, you can extend this type and add helper getters/setters (e.g. user.getName() and user.setName('thebat') instead of user.get('name')/set('name', 'thebat')) to abstract your model's internal structure from your views. However, there are challenges to overcome, such as nested structures and de-serialization of objects.
If the above is not your cup of tea, I'd recommend swallowing the bitter pill :).
I think you are missing the concept Immutable was built on:
Immutable data cannot be changed once created, leading to much simpler application development, no defensive copying, and enabling advanced memoization and change detection techniques with simple logic. Persistent data presents a mutative API which does not update the data in-place, but instead always yields new updated data.
One way or another, you can transform Immutable data structures into plain old JS objects with myMap.toJS().

Restructuring to avoid accessing components in models

I'm continuing to work on my port of a CakePHP 1.3 app to 3.0, and I've run into another issue. I have a number of areas where functionality varies based on certain settings, and I have previously used a modular component approach. For example, Leagues can have round-robin, ladder or tournament scheduling. This affects the scheduling algorithm itself, such that there are different settings required to configure each type, but it also dictates the way standings are rendered, ties are broken, etc. (This is just one of 10 areas where I have something similar, though not all of these suffer from the problem below.)
My solution to this in the past was to create a LeagueComponent with a base implementation, and then extend that class as LeagueRoundRobinComponent, LeagueLadderComponent and LeagueTournamentComponent. When controllers need to do anything algorithm-specific, they check the schedule_type field in the leagues table, create the appropriate component, and call functions in it. This still works just fine.
I mentioned that this also affects views. The old solution for this was to pass the league component object from the controller to the view via $this->set. The view can then query it for various functionality. This is admittedly a bit kludgy, but the obvious alternative seems to be extracting all the info the view might require and setting it all individually, which doesn't seem to me to be a lot better. If there's a better option, I'm open to it, but I'm not overly concerned about this at the moment.
The problem I've encountered is when tables need to get some of that component info. The issue at hand is when I am saving my add/edit form and need to deal with the custom settings. In order to be as flexible as possible for the future, I don't have all of these possible setting fields represented in the database, but rather serialize them into a single "custom" column. (Reading this all works quite nicely with a custom constructor and getters.) I had previously done this by loading the component from the beforeSave function in the League model, calling the function that returns the list of schedule-specific settings, extracting those values and serializing them. But with the changes to component access in 3.0, it seems I can no longer create the component in my new beforeMarshal function.
I suppose the controller could "pass" the component to the table by setting it as a property, but that feels like a major kludge, and there must be a better way. It doesn't seem like extending the table class is a good solution, because that would horribly complicate associations. I don't think that custom types are the solution, as I don't see how they'd access a component either. I'm leaning towards passing just the list of fields from the controller to the model; that's more of a "configuration" method. Speaking of configuration, I suppose it could all just go into the central Configure data store, but that's always felt to me like somewhere that you only put "small" data. I'm wondering if there's a better design pattern I could follow that would let the table continue to take care of these implementation details on its own without the controller needing to get involved; if at some point I decide to change from the serialized method to adding all of the possible columns, it would be nice to have those changes restricted to the table class.
Oh, and keep in mind that this list of custom settings is needed in both a view and the table, so whatever solution is proposed will ideally provide a way for both of them to access it, rather than requiring duplication of code.

Class diagram for xpages project

I am working on a project with XPages. I would like to know how to create a class diagram representation for my project. Notes is a document database, so it is not relational. How could I represent my entities?
In Domino, documents are merely evidence of the existence of people, processes, and physical entities (products, offices, inventory, etc.). Ideally, your classes should model those things.
For instance, you might have classes like Employee, with properties like firstName, lastName, hireDate; maybe Asset, with properties like category, model, serialNumber; or perhaps Request, with properties like status, requester, dateApproved. Eventually the values of each of these properties might be stored as item values in Domino documents, but defining these first as attributes of classes allows you to follow a simple pattern to develop your application:
1. Use your class structure to rapidly define the nature of each "thing" your application interacts with, without worrying yet what each must look like or how and where the data will ultimately be stored.
2. Once you have these classes defined, you can bind visual components on an XPage (such as input fields like edit boxes and radio button groups) very easily using the #{dataSource.propertyName} syntax.
3. When these two steps are done, all you have left to do is to add two methods to each of these entity classes: one to write the data, and another to retrieve it.
Following this approach makes it very easy to rapidly build the application, but also protects your user interface from changes in how you wish the data to be stored. Initially, each object might represent a single document. As the application grows in either complexity or adoption, however, you may decide to segregate the data such that many documents are created to represent a single entity. Or at some point you might even decide to store some, or all, of the data outside of Domino (DB2, SQL, etc.). If your XPage components are bound to properties of these entity classes, all you need to do to change how or where the data is stored is to update the two methods you created in step 3 of the above list: alter how you write and retrieve the data. Your actual XPage design elements don't need to change at all.
It depends how you look at it. You can always think of the following relation: Notes Form <-> Java POJO and Notes View <-> Java Collection.
See http://www.pipalia.co.uk/notes-development/rethinking-xpages-part-two/ for some tips on using Java world standards when working with xPages.

Look-up tables with Linq-to-Sql

I'm working on a car dealer website at the moment, and I've been working on a 'Vehicle' model in my database. I've been making extensive use of lookup tables with FK relationships for things like Colour, Make, Model etc.
So a basic version could be something like:
Vehicle {
    Id
    MakeId
    ModelId
    ColourId
    Price
    Year
    Odometer
}
Each of these then uses a simple two-column look-up table; for example, Colour would have a ColourId column and a ColourText column. Nothing unusual.
This is all well and good; however, I've found my generated Linq-to-Sql classes become more complex when you start using look-up tables. To get the colour I now have to do things like Vehicle.Colour.ColourText. Adding new vehicles requires looking up all the various tables to ensure the values are there, etc. So I don't really want to be passing this Linq-to-Sql model around the rest of my application code.
So my current approach implements a couple of methods to convert this linq model into a pure domain model, which is a nearly identical object, but with the Id fields replaced by their actual textual values (strings). This is all wrapped up in a repository, so the rest of the app is only aware of these straight 'domain' objects, and not the data access objects. If the app needs to insert a new record, I pass this domain model back into the repository, which then converts it back to the Linq-to-Sql variant, ensuring all the lookup values are in fact valid.
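To illustrate, the conversion looks roughly like this (a sketch only; the DataContext and property names here, DealerDataContext, MakeText, ModelText, etc., are placeholders for my real generated ones):

using System.Linq;

// Assumes the Linq-to-Sql designer has generated DealerDataContext,
// Vehicle, Colour, Make and Model from the tables described above.

// Plain domain model: look-up ids replaced with their textual values.
public class VehicleModel
{
    public int Id { get; set; }
    public string Make { get; set; }
    public string Model { get; set; }
    public string Colour { get; set; }
    public decimal Price { get; set; }
    public int Year { get; set; }
    public int Odometer { get; set; }
}

public class VehicleRepository
{
    private readonly DealerDataContext db = new DealerDataContext();

    // L2S entity -> domain object: flatten the look-ups to their text values.
    public VehicleModel Get(int id)
    {
        var v = db.Vehicles.Single(x => x.Id == id);
        return new VehicleModel
        {
            Id = v.Id,
            Make = v.Make.MakeText,
            Model = v.Model.ModelText,
            Colour = v.Colour.ColourText,
            Price = v.Price,
            Year = v.Year,
            Odometer = v.Odometer
        };
    }

    // Domain object -> L2S entity: resolve each text value back to its id.
    // Single() throws if a look-up value doesn't exist, which is the validation.
    public void Add(VehicleModel m)
    {
        var v = new Vehicle
        {
            MakeId = db.Makes.Single(k => k.MakeText == m.Make).MakeId,
            ModelId = db.Models.Single(d => d.ModelText == m.Model).ModelId,
            ColourId = db.Colours.Single(c => c.ColourText == m.Colour).ColourId,
            Price = m.Price,
            Year = m.Year,
            Odometer = m.Odometer
        };
        db.Vehicles.InsertOnSubmit(v);
        db.SubmitChanges();
    }
}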
Is this a decent idea? I feel a little dirty doing this conversion; it seems to go against one of the reasons for using Linq-to-Sql in the first place. Then again, I guess it would be even worse passing around objects exposing look-ups and the like to the rest of the app. Is this why more fully-fledged O/RMs are more widely used?
Using the domain objects instead of the L2S ones also makes JSON serialisation easier, for use with AJAX and the like.
I guess I'm just asking if the approach I've taken is reasonable? Cheers.
What you have done is create low-level objects from LINQ and then build your own Business Objects (or ViewModel) on top of them.
There is nothing wrong with this: in fact, it can help isolate the application from the relational model and bring it more fully into the Object realm. You see this done explicitly when people build a ViewModel to bind to the UI, where the ViewModel actually loads and saves through the low level entities.
The downside is more coding. The upside is that your object collection actually reflects your application use-cases better. I recommend continuing to explore this avenue. Perhaps a look here will help you along: http://blogs.msdn.com/dphill/archive/2009/01/31/the-viewmodel-pattern.aspx

Creating lightweight Linq2Sql proxy objects

I'm trying to find the most efficient way to send my Linq2Sql objects to my jQuery plugins via JSON, preferably without additional code for each class.
The EntitySets are the main issue: not only do they cause recursion, but even when recursion is ignored (using JSON.NET's ReferenceLoopHandling feature) a silly amount of data can be retrieved, when I only really need one or two levels. This gets really bad when you're talking about Users, Roles and Permissions, as you get the User's Role, the User's Permissions, the Role's Permissions, and the Role's Users all up in your JSON before it hits recursion and stops. Compare this to what I actually want, which is just the RoleId.
My initial approach was to send a "simplified" version of the object, where I reflect the entity and set any EntitySets to null, but of course in the above example Roles gets set to null and so RoleId is null. Setting only the second-level properties to null kind of works, but there's still too much data, as the EntitySets that weren't killed (the first-level ones) repopulate their associated tables when the JsonSerializer does its reflection, and I still get all those Permission objects that I just don't need.
I definitely don't want to get into the situation of creating a lightweight version of every class and implementing "From" and "To" style methods on them, as this is a lot of work and seems wasteful.
Another option is to put a JsonIgnoreAttribute on the relevant properties, but this is going to cause a nightmare scenario whenever classes need to be re-generated.
My current favourite solution, which I like and hate at the same time, is to put the classes into opt-in serialization mode, but because I can't add attributes to the real properties, I'd have to create JSON-only properties in a partial class. Again, this seems wasteful, but I think it's the best so far.
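Roughly what I have in mind (a sketch assuming JSON.NET; the User/UserName/RoleId members here are just stand-ins for my real generated ones):

using Newtonsoft.Json;

// Stand-in for the designer-generated half of the class.
public partial class User
{
    public int UserId { get; set; }
    public string UserName { get; set; }
    public int RoleId { get; set; }
    // ... plus the EntitySet/EntityRef members I don't want serialized.
}

// Opt-in: nothing is serialized unless explicitly marked, so the association
// properties on the generated half are skipped automatically.
[JsonObject(MemberSerialization.OptIn)]
public partial class User
{
    [JsonProperty("id")]
    public int JsonId { get { return UserId; } }

    [JsonProperty("name")]
    public string JsonName { get { return UserName; } }

    // Just the foreign key, not the whole Role/Permissions graph.
    [JsonProperty("roleId")]
    public int JsonRoleId { get { return RoleId; } }
}

// Usage: string json = JsonConvert.SerializeObject(user);

Since the attribute sits on the partial declaration, it applies to the combined class, so the generated EntitySets stay out of the JSON unless I explicitly expose them.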
Any suggestions gratefully received!
Have you tried setting the Serialization Mode in the dbml file?
It's a standard property under code generation, and when you set it to Unidirectional it won't generate all the additional levels of your table structure. I've used this with Silverlight and WCF to send data, because the data contracts don't allow additional levels to be sent (Silverlight is very limited in what you can and can't do).
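If I remember right, with that mode set the generated entities end up decorated roughly like this (a simplified sketch, not the exact designer output, which uses backing fields and change-tracking plumbing):

using System.Data.Linq;
using System.Runtime.Serialization;

// Simplified sketch of what Unidirectional serialization mode produces.
[DataContract]
public partial class Role
{
    [DataMember(Order = 1)]
    public int RoleId { get; set; }

    // The "forward" side of the association is serialized...
    [DataMember(Order = 2, EmitDefaultValue = false)]
    public EntitySet<User> Users { get; set; }
}

[DataContract]
public partial class User
{
    [DataMember(Order = 1)]
    public int UserId { get; set; }

    [DataMember(Order = 2)]
    public int RoleId { get; set; }

    // ...but the back-reference is left unmarked, which is what stops
    // the Role -> Users -> Role -> ... cycle.
    public Role Role { get; set; }
}

As far as I know, JSON.NET honours [DataContract]/[DataMember] too, so this effectively gives you opt-in serialization without hand-written partial classes.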
Hope this helps!