I am working on a project with XPages. I want to know how to create a class diagram representation for my project. Notes is a document database, not a relational one, so how could I represent my entities?
In Domino, documents are merely evidence of the existence of people, processes, and physical entities (products, offices, inventory, etc.). Ideally, your classes should model those things.
For instance, you might have classes like Employee, with properties like firstName, lastName, hireDate; maybe Asset, with properties like category, model, serialNumber; or perhaps Request, with properties like status, requester, dateApproved. Eventually the values of each of these properties might be stored as item values in Domino documents, but defining these first as attributes of classes allows you to follow a simple pattern to develop your application:
1. Use your class structure to rapidly define the nature of each "thing" your application interacts with, without worrying yet what each must look like or how and where the data will ultimately be stored.
2. Once you have these classes defined, you can bind visual components on an XPage (such as input fields like edit boxes and radio button groups) very easily using the #{dataSource.propertyName} syntax.
3. When these two steps are done, all you have left to do is add two methods to each of these entity classes: one to write the data, and another to retrieve it.
Following this approach makes it very easy to rapidly build the application, but also protects your user interface from changes in how you wish the data to be stored. Initially, each object might represent a single document. As the application grows in either complexity or adoption, however, you may decide to segregate the data such that many documents are created to represent a single entity. Or at some point you might even decide to store some, or all, of the data outside of Domino (DB2, SQL, etc.). If your XPage components are bound to properties of these entity classes, all you need to do to change how or where the data is stored is to update the two methods you created in step 3 of the above list: alter how you write and retrieve the data. Your actual XPage design elements don't need to change at all.
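As a rough sketch of what one of those entity classes might look like (illustrative only: the names come from the Employee example above, the Domino calls are simplified, and in a real application the class would typically be registered as a managed bean or exposed through an object data source so the #{dataSource.propertyName} bindings work):

import java.io.Serializable;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

// Hypothetical entity class; an XPage field could bind to #{employee.firstName}, etc.
public class Employee implements Serializable {
    private static final long serialVersionUID = 1L;

    private String unid;      // UNID of the backing document, if one exists yet
    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    // Retrieve the data: today, one Domino document per entity.
    public void load(Database db, String unid) throws NotesException {
        Document doc = db.getDocumentByUNID(unid);
        this.unid = unid;
        this.firstName = doc.getItemValueString("FirstName");
        this.lastName = doc.getItemValueString("LastName");
    }

    // Write the data: only this method changes if the storage strategy changes.
    public void save(Database db) throws NotesException {
        Document doc = (unid == null) ? db.createDocument() : db.getDocumentByUNID(unid);
        doc.replaceItemValue("Form", "Employee");
        doc.replaceItemValue("FirstName", firstName);
        doc.replaceItemValue("LastName", lastName);
        doc.save();
        this.unid = doc.getUniversalID();
    }
}

If the storage later moves to many documents per entity, or to DB2 or SQL, only load() and save() change; the XPage bindings stay the same.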
It depends how you look at it. You can always think of the following relations: Notes Form <-> Java POJO and Notes View <-> Java Collection.
See http://www.pipalia.co.uk/notes-development/rethinking-xpages-part-two/ for some tips on using Java-world standards when working with XPages.
We are now in the process of evaluating integration solutions and comparing Mule and Boomi.
The use case is to read an Excel file, map the columns to a predefined set of JSON attributes, and then use the JSON to insert records into a database. The mapping may vary from one Excel template to another; the column names in one Excel file may differ from those in another.
How do I inject the mapping information (source vs. target) from outside the integration flow?
Note: In Mule, I'm able to do that using a mapping variable (whose value is JSON) that I inject using the Mule DataWeave language.
Boomi's mapping component is static in terms of structure, but more versatile solutions are certainly possible.
The data processor component opens up Groovy, JavaScript, and XSLT 3.0 as options. These are Turing-complete languages that can be used to bend Boomi to almost any outcome.
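As a rough, platform-neutral illustration (plain Java, deliberately using no Boomi- or Mule-specific API; Jackson is assumed for JSON parsing, and all names are made up), the kind of work such a script could do is apply an externally supplied JSON mapping of source columns to target attributes:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.LinkedHashMap;
import java.util.Map;

public class ColumnMapper {
    // mappingJson is injected from outside the flow,
    // e.g. {"Cust Name":"customerName","Amt":"amount"}
    public static Map<String, Object> remap(String mappingJson,
                                            Map<String, Object> excelRow) throws Exception {
        ObjectMapper om = new ObjectMapper();
        Map<String, String> mapping =
                om.readValue(mappingJson, new TypeReference<Map<String, String>>() {});

        Map<String, Object> target = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : mapping.entrySet()) {
            // key = source Excel column, value = target JSON attribute
            target.put(e.getValue(), excelRow.get(e.getKey()));
        }
        return target; // serialize this map to JSON and hand it to the database insert step
    }
}

The mapping JSON itself can then be supplied from outside the flow (for example from a configuration store), which is the part that varies per template.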
You could make the Boomi UI available to those who need to write the maps in JSON. It's a pretty simple interface to learn. By using a route component, there could be one "parent" process that routes to a separate process for each template, with a map for each template. Such a solution would be pretty easy to build and run, and it would allow the template-specific processes to be deployed independently of the "parent".
You could map to a generic columnar structure and then write a SQL procedure that dynamically alters the target columns.
I've come across attempts to do what you're describing (using neither Boomi nor MuleSoft) that were tragic failures: https://www.zdnet.com/article/uk-rural-payments-agency-rpa-it-failure-and-gross-incompetence-screws-farmers/
I draw your attention to the NAO's points:
ensure the system specifications retain a realistic level of flexibility
and
bespoke software is costly to develop, needs to be thoroughly tested, and takes more time to implement
The general goal behind a requirement like yours is usually to make transformation/ETL available to "non-programmers", which denies the reality that delivering an outcome takes many more skills than just "programming".
I'm continuing to work on my port of a CakePHP 1.3 app to 3.0 and have run into another issue. I have a number of areas where functionality varies based on certain settings, and I have previously used a modular component approach. For example, Leagues can have round-robin, ladder or tournament scheduling. This impacts the scheduling algorithm itself, such that different settings are required to configure each type, but it also dictates the way standings are rendered, ties are broken, etc. (This is just one of 10 areas where I have something similar, though not all of them suffer from the problem below.)
My solution to this in the past was to create a LeagueComponent with a base implementation, and then extend that class as LeagueRoundRobinComponent, LeagueLadderComponent and LeagueTournamentComponent. When controllers need to do anything algorithm-specific, they check the schedule_type field in the leagues table, create the appropriate component, and call functions in it. This still works just fine.
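For readers unfamiliar with CakePHP, the shape of that arrangement is roughly the following (a language-neutral sketch shown in Java for brevity rather than the actual CakePHP components; the setting field names are invented):

// Base component with the shared/default implementation.
abstract class LeagueComponent {
    abstract void schedule();                  // scheduling algorithm
    abstract String[] customSettingFields();   // schedule-type-specific settings
}

class LeagueRoundRobinComponent extends LeagueComponent {
    void schedule() { /* round-robin scheduling */ }
    String[] customSettingFields() { return new String[] { "games_per_week" }; }
}

class LeagueLadderComponent extends LeagueComponent {
    void schedule() { /* ladder scheduling */ }
    String[] customSettingFields() { return new String[] { "rungs", "challenge_window" }; }
}

class LeagueTournamentComponent extends LeagueComponent {
    void schedule() { /* tournament scheduling */ }
    String[] customSettingFields() { return new String[] { "bracket_size" }; }
}

class LeagueComponentFactory {
    // The controller inspects leagues.schedule_type and builds the matching component.
    static LeagueComponent forScheduleType(String scheduleType) {
        switch (scheduleType) {
            case "roundrobin": return new LeagueRoundRobinComponent();
            case "ladder":     return new LeagueLadderComponent();
            case "tournament": return new LeagueTournamentComponent();
            default: throw new IllegalArgumentException("Unknown type: " + scheduleType);
        }
    }
}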
I mentioned that this also affects views. The old solution for this was to pass the league component object from the controller to the view via $this->set. The view can then query it for various functionality. This is admittedly a bit kludgy, but the obvious alternative seems to be extracting all the info the view might require and setting it all individually, which doesn't seem to me to be a lot better. If there's a better option, I'm open to it, but I'm not overly concerned about this at the moment.
The problem I've encountered is when tables need to get some of that component info. The issue at hand is when I am saving my add/edit form and need to deal with the custom settings. In order to be as flexible as possible for the future, I don't have all of these possible setting fields represented in the database, but rather serialize them into a single "custom" column. (Reading this all works quite nicely with a custom constructor and getters.) I had previously done this by loading the component from the beforeSave function in the League model, calling the function that returns the list of schedule-specific settings, extracting those values and serializing them. But with the changes to component access in 3.0, it seems I can no longer create the component in my new beforeMarshal function.
I suppose the controller could "pass" the component to the table by setting it as a property, but that feels like a major kludge, and there must be a better way. It doesn't seem like extending the table class is a good solution, because that would horribly complicate associations. I don't think that custom types are the solution, as I don't see how they'd access a component either. I'm leaning towards passing just the list of fields from the controller to the model, as that's more of a "configuration" approach. Speaking of configuration, I suppose it could all just go into the central Configure data store, but that's always felt to me like somewhere you only put "small" data. I'm wondering if there's a better design pattern I could follow that would let the table continue to take care of these implementation details on its own, without the controller needing to get involved; if at some point I decide to change from the serialized method to adding all of the possible columns, it would be nice to have those changes restricted to the table class.
Oh, and keep in mind that this list of custom settings is needed in both a view and the table, so whatever solution is proposed will ideally provide a way for both of them to access it, rather than requiring duplication of code.
I am currently refactoring a project where so far a lot of data was kept as constants and arrays in the code. Also there are a lot of redundancies. Now I want to move all that data into the db, but I am not sure how I would do the mapping. The data is rarely dynamically selected based on user input but rather specifically selected in the code. It is used at a very core level of the application, but it is actually not THE core. Also a database is already being used, so there would be no real extra effort.
My idea would be to use a Mapping class in which I have constants pointing to the IDs of the respective rows. Is that a good idea?
Another idea would be to index the name column and just query for the names directly.
The database would probably have the following columns: id, name, polynom and params.
So, basically we are talking about math data. For example: 1, "Price approximation", "20x^3 - 5x^2 + 11x", "non-cumulated".
I think this question is language-agnostic but since there might be a language-specific (or even framework-specific) best practice, here is what I use: PHP5 with the Yii Framework.
I don't have much experience with PHP or Yii, but here are my two cents...
If these are constants and collections of constants that technically define your application (application architecture constants) and that the end user shouldn't have control over, I would put them in a configuration file instead of your database, unless you've built a module to easily access and modify them. Whether you implement a mapping class (or a configuration class) to retrieve them is not important, but be consistent in how you retrieve them. If you have too many to manage in a configuration file, then storing them in the database would be appropriate, but make sure you provide an easy way to modify them. To make your source code readable, I'd use descriptors that a human can understand and map those descriptors to the respective rows, as you mentioned.
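To illustrate that last point, here is a small sketch (in Java rather than PHP/Yii, purely to show the shape; everything is invented apart from the "Price approximation" row from the question) of mapping readable descriptors to row IDs:

// Hypothetical mapping class: readable descriptors bound to the IDs of the respective rows.
public enum Formula {
    PRICE_APPROXIMATION(1),   // row: 1, "Price approximation", "20x^3 - 5x^2 + 11x", "non-cumulated"
    DEMAND_ESTIMATE(2);       // invented second entry, for illustration only

    private final int rowId;

    Formula(int rowId) {
        this.rowId = rowId;
    }

    public int rowId() {
        return rowId;
    }
}

// Usage with some hypothetical data-access layer:
//   FormulaRow row = formulas.findById(Formula.PRICE_APPROXIMATION.rowId());
// The alternative from the question is to index the name column and query by name instead:
//   FormulaRow row = formulas.findByName("Price approximation");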
If these are user-defined constants, then you should definitely provide an interface. But keep the same architecture as for the application architecture constants.
In a perfect program/application (or even better--an application framework), nothing is hard coded, and everything is controlled by constants (switches). If you're able to achieve this successfully without the need to maintain your source code, you will win the Nobel Peace Prize.
I have a linq-to-sql object from a table. I want to store similar data in the profile collection. Technically, I don't need all of the data in the linq2sql table, and I don't really like the legacy naming convention. My first thought would be to create a collection of CRUD objects.
But if I choose this solution, I will have double the classes with much overlapping functionality. If I use the linq2sql objects as-is, then I'll be dealing with abstractions that contain more data than necessary.
To give a clearer picture, here is a similar example:
This goes into the database, and a LINQ-to-SQL abstraction is created:

NoteSaved
    Date
    Id
    UserId
    Text
    ....
    Custom Methods

This goes into the user profile:

[Serializable]
SavedSearchText
    Text
    ...
    Custom Methods
SavedSearchText doesn't need fields like Id, UserId, and Date; that data doesn't even make sense for it. And yet the custom functionality overlaps between both classes.
I see 2 trivial approaches:
Create a whole new set of CRUD objects just for this purpose
Use the linq2sql objects as a proxy for the CRUD, i.e. relying on my own "wisdom" that these objects are not really "those objects"
I was going with route 1 but see a lot of duplication. It is not very DRY. What are some solutions that keep things as DRY as possible while also maintaining a clear architecture? In other words, I want to avoid having to duplicate the same methods for every object, AND I want to avoid storing unneeded data in the profile, such as Ids or DateStored, which are not required.
I thought this would be obvious, but SavedSearchText and SearchText share a common data field, Text, and common functionality, e.g. SomeFunction1 and SomeFunction2 (such as FindText).
Edit/Update:
Typically this would be handled with inheritance. We'd have a base business class Text and then two derived types, SavedText and UserText. But with linq2sql I do not see a way to do this while keeping with the DRY principle.
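A bare sketch of that inheritance idea (shown in Java rather than C#, just to illustrate the shape; everything beyond the Text, SavedText, UserText, and FindText names mentioned above is hypothetical):

// Shared data field and shared functionality live in the base class.
abstract class Text {
    protected String text;

    public String getText() { return text; }
    public void setText(String text) { this.text = text; }

    // Common functionality shared by both derived types, e.g. FindText.
    public boolean findText(String needle) {
        return text != null && text.contains(needle);
    }
}

// The database-backed variant carries the extra persistence fields.
class UserText extends Text {
    String id;
    String userId;
    java.util.Date date;
}

// The profile-backed variant needs none of those fields.
class SavedText extends Text implements java.io.Serializable {
    private static final long serialVersionUID = 1L;
}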
One might also choose to solve this via containment (a has-a relationship), but that is not really "right" for this context.
Obviously one could create one's own business layer, but that also doesn't keep with DRY, as linq2sql already provides much of the required functionality.
Perhaps the cleanest solution would be to create CRUD objects that only read from and dump back into the linq2sql objects. Unfortunately, these linq2sql objects will not actually be "tables", and some fields won't make sense. Still, that appears to be the DRYest solution.
It may even be possible in this case to use extension methods that extend both classes, but I prefer not to use extension methods unless required.
I'm working on a car dealer website at the moment, and I've been working on a 'Vehicle' model in my database. I've been making extensive use of lookup tables with FK relationships for things like Colour, Make, Model etc.
So a basic version could be something like:
Vehicle {
    Id
    MakeId
    ModelId
    ColourId
    Price
    Year
    Odometer
}
Each of these then uses a simple two-column lookup table; for example, Colour would have a ColourId column and a ColourText column. Nothing unusual.
This is all good and well, however I've found my generated Linq-to-Sql classes become more complex when you start using look-up tables. To get the colour I now have to do things like Vehicle.Colour.ColourText. Adding new vehicles requires looking up all the various tables to ensure the values are there, etc. So I don't really want to be passing this Linq-to-Sql model around the rest of my application code.
So my current approach implements a couple of methods to convert this linq model into a pure domain model, which is nearly an identical object, but just replaces the Id fields with their actual textual values (strings). This is all wrapped up in a repository, so the rest of the app is only aware of these straight 'domain' objects, and not the data access objects. If the app needs to insert a new record, I pass this domain model back in to the repository, which then converts it back to the Linq-to-Sql variant, ensuring all the lookup values are in fact valid.
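As a minimal sketch of that conversion (shown in Java rather than C#, with the lookup access simplified to plain maps; all names are illustrative):

import java.util.Map;

// Pure domain model: lookup IDs replaced by their textual values.
class Vehicle {
    String make;
    String model;
    String colour;
    int price;
    int year;
    int odometer;
}

// Hypothetical repository: the rest of the app only ever sees Vehicle, never the
// generated data-access classes and their lookup navigation properties.
class VehicleRepository {
    // Stand-ins for the lookup tables (e.g. ColourId -> ColourText).
    private final Map<Integer, String> makes, models, colours;

    VehicleRepository(Map<Integer, String> makes,
                      Map<Integer, String> models,
                      Map<Integer, String> colours) {
        this.makes = makes;
        this.models = models;
        this.colours = colours;
    }

    // Data-access row -> domain model: swap each foreign key for its text value.
    Vehicle toDomain(int makeId, int modelId, int colourId,
                     int price, int year, int odometer) {
        Vehicle v = new Vehicle();
        v.make = makes.get(makeId);
        v.model = models.get(modelId);
        v.colour = colours.get(colourId);
        v.price = price;
        v.year = year;
        v.odometer = odometer;
        return v;
    }
    // Going the other way, the repository resolves (or validates) each text value
    // back to its ID before building the data-access entity for insertion.
}

The rest of the application, and any JSON serialisation, only ever touches the plain domain object.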
Is this a decent idea? I feel a little dirty doing this conversion, it seems to go against one of the reasons for using Linq-to-Sql in the first place. Then again, I guess it would be even worse passing around objects exposing look-ups and the like to the rest of the app. Is this why more fully-fledged O/RMs are more widely used?
Using the domain objects over the L2S ones also makes it easier for JSON serialisation for use with AJAX and the like too.
I guess I'm just asking if the approach I've taken is reasonable? Cheers.
What you have done is create low-level objects from LINQ and then build your own business objects (or view model) on top of them.
There is nothing wrong with this: in fact, it can help isolate the application from the relational model and bring it more fully into the Object realm. You see this done explicitly when people build a ViewModel to bind to the UI, where the ViewModel actually loads and saves through the low level entities.
The downside is more coding. The upside is that your object collection actually reflects your application use cases better. I recommend continuing to explore this avenue. Perhaps a look here will help you along: http://blogs.msdn.com/dphill/archive/2009/01/31/the-viewmodel-pattern.aspx