Downside of having string properties in service contracts that can contain a full JSON model

We are working with a DDD framework in our company. We are changing a lot of core things in our API because we are still growing and we are still in the infancy of designing a good API.
The problem is that there are already a lot of flows in the same API which are not compatible with each other.
We have an order service and a product service.
Normally when the product model radically changes, we have a major impact in the order model.
Now I'm here listing all kinds of red flags which should never happen, but I simply don't have control over how it needs to be done. That is pretty much management pushing for a fast solution, and leading to bad shortcuts...
The way it has been decided to overcome the fact that Order needs to adapt constantly: they made a property on the order line called productConfiguration. This is in the contract of the service and is directly translated as-is into the DB tables. It contains the product model that can change, in JSON format.
For me it's very clear that this is very dangerous to do, because in the end you need to turn this JSON back into an actual object. So you just move the restrictions from the service contract into code logic, which makes it worse because it will only cause an issue at run time...
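To make the runtime risk concrete, here is a minimal sketch of the kind of code we end up with (the names are made up, not our real contract):

```typescript
// Hypothetical order line as stored: the product model is an opaque string.
interface OrderLine {
  orderLineId: string;
  productConfiguration: string; // raw JSON blob, no contract-level schema
}

// The shape the code *assumes* the blob has today.
interface ProductConfiguration {
  sku: string;
  color: string;
}

function readConfiguration(line: OrderLine): ProductConfiguration {
  // JSON.parse returns `any`; nothing checks that the blob still matches
  // ProductConfiguration. If the product team renames `color`, this compiles
  // fine and only fails (or silently drops fields) at runtime, in the order service.
  const parsed = JSON.parse(line.productConfiguration);
  if (typeof parsed.sku !== "string" || typeof parsed.color !== "string") {
    throw new Error(`Unexpected productConfiguration shape for ${line.orderLineId}`);
  }
  return { sku: parsed.sku, color: parsed.color };
}

// An "old" blob deserializes; a blob written after a product-model change
// blows up here instead of being rejected at the contract boundary.
const oldLine: OrderLine = {
  orderLineId: "A-1",
  productConfiguration: JSON.stringify({ sku: "S-1", color: "red" }),
};
console.log(readConfiguration(oldLine));
```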
Are there other major issues that I just don't know about, so I can bring them to the table to avoid this way of working?

Using strings that are directly converted into DB tables is not just bad design in your opinion; it's an opinion shared by a lot of us.
What do you do when an object changes? For example, the new one requires an attribute that the old one didn't have. How do you manage this situation? I suppose you have to change everything, including the objects stored before, or build a kind of transformation layer where you translate objects from the old design to the new one. A lot of extra work.
Anyway, given that the two domains are separated, what is the information that changes so much that it requires such a design? For most things, you could know at the beginning what you need for your part of the domain. For the rest, I would prefer to have a kind of service that, given an Id, gives you the information from the other domain. You can change this service (it could also return a JSON object, if nothing more than displaying it is required) and adapt it to your/their needs. But it's just a solution that comes from my limited knowledge of your processes.
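To sketch what I mean (all names here are hypothetical, since I don't know your model): the order domain keeps only the product id plus the few fields it really owns, and asks the product domain for the rest when it needs it.

```typescript
// What the order domain stores: just a reference and the fields it owns.
interface OrderLine {
  orderLineId: string;
  productId: string;
  quantity: number;
}

// What the order domain needs to *know* about a product, not the full model.
interface ProductSummary {
  productId: string;
  name: string;
  unitPrice: number;
}

// The "service that, given an Id, gives you the information from the other
// domain". The transport (HTTP call, message, etc.) hides behind it, so the
// product model can change without touching the order tables.
interface ProductLookup {
  getSummary(productId: string): Promise<ProductSummary>;
}

async function describeLine(line: OrderLine, products: ProductLookup): Promise<string> {
  const product = await products.getSummary(line.productId);
  return `${line.quantity} x ${product.name} @ ${product.unitPrice}`;
}

// A toy in-memory implementation, standing in for a real client.
const inMemoryProducts: ProductLookup = {
  async getSummary(productId: string): Promise<ProductSummary> {
    return { productId, name: "Widget", unitPrice: 9.5 };
  },
};

describeLine({ orderLineId: "L1", productId: "P1", quantity: 3 }, inMemoryProducts)
  .then(console.log); // "3 x Widget @ 9.5"
```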
Other ways are also possible, as long as you can always tell which version of the design you are using.


Populating a Maximo field using a db function: Why is this a bad practice?

In a separate SO question, I asked how to populate a Maximo field using a db function:
Take value from FieldA, send to a db function, and return value to FieldB
A Stack Overflow community member was kind enough to answer the question and provided this advice:
And all that said, you should just use the automation script to do what you have the database function doing, if at all possible. To be more blunt, what you are wanting to do is not considered good practice. So, make sure to include in your script's comments your justification for not following good practice.
If we assume that there aren't any out-of-the-box methods for doing what I want (Spatial Query), then why would referencing a database function from Maximo be a bad practice?
(Bear in mind that I'm new to the IT industry. I would benefit from layman's terms.)
I can be a little verbose, so I'll apologize up front for that. And it may seem like I'm wandering, but I'll try to bring it back together at the end.
As I said in my answer to your first question, Take value from FieldA, send to db function, return value to FieldB, calling a stored procedure (or stored function or whatever) from an automation script is not "good" practice. That isn't to say, dogmatically, that it shouldn't ever be done, but to say that, as a rule, it should be avoided. When making an exception to the good practice rule is the best way to solve a particular problem, your code should document why you chose (or were forced) to make an exception. And I stand by that answer to your first question, which made no mention of a special circumstance.
If there are no out-of-the-box configuration options for doing what you want, such as crossovers, relationships, domains, etc. in Maximo, then your next option should be in-product customization options (also known as small "c" customizations), if they exist. It so happens that in the case of Maximo you have "automation scripting" or "autoscripting" in Python or JavaScript, with all (Java) classes in the JVM's / server's classpath at your disposal (possibly including Maximo Spatial's Java class methods), as an in-product customization option. Using examples from Maximo 76 Scripting Features, you can even figure out how to call RESTful APIs, like those exposed by ESRI's ArcGIS, over HTTP or HTTPS.
If in-product (small "c") customizations don't work, or don't work well enough (such as causing performance problems), then it is generally acceptable, though not supportable, to customize the product itself (aka a big "C" customization). (Generally acceptable, as many companies would accept that rationale for developing a big "C" customization, but not supportable, as the vendor will ask you to remove your Customization and reproduce your problem if a problem is found and if it is at all conceivable that your Customization could be contributing to the problem in any way.) In the case of Maximo, writing your own Java classes or stored procedures is generally considered a big "C" customization.
In the case of Maximo, and you could probably generalize that to any COTS product, updating Maximo data from a stored procedure is considered exceptionally bad practice. This is because such updates are not subjected to Maximo's business rules and logic, which can lead to data integrity problems, support problems, and more. In particular, triggers often assume that Maximo has made database updates in a particular order (parent data being inserted before child data, for example) when its documentation explicitly disclaims commitment to such order. (If it doesn't anymore, it used to.)
All that in mind, if out of the box Maximo doesn't provide a configuration for doing what you need, and if you can't use autoscripting to do what you want, even with access to all of Maximo's and Java's libraries (in that order of preference), then it would be acceptable to use an automation script to call a database function to calculate a value for you to store via Maximo. In fact, in that scenario, calling a function from your script would be far better than having a trigger set the value, because, assuming you update Maximo via its API, such as mbo.setValue("attribute","value"), your script will still leave the auditing, security, validation, data integrity, and other business rules in operation. As a bonus, any professional Maximo consultants (like me) you bring on to help with projects will waste less time (read: your money) trying to figure out what you are doing and why, so they don't break it.
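To illustrate the shape of that approach (a sketch only, not a drop-in script): the value the database function computes is written back through the Mbo API so validation, security and auditing still run. The mbo object below is mocked for the illustration, and callSpatialDbFunction is a hypothetical stand-in for however you reach your database function.

```typescript
// Sketch only: in a real Maximo automation script, `mbo` is provided by the
// script context and is a Java Mbo; it is mocked here just to show the flow.
const mbo = {
  values: { FIELDA: "POINT(10 20)", FIELDB: "" } as Record<string, string>,
  getString(attr: string): string {
    return this.values[attr];
  },
  // In Maximo, setting a value through the Mbo keeps validation, security
  // and auditing in play, unlike a direct table update from a trigger.
  setValue(attr: string, value: string): void {
    this.values[attr] = value;
  },
};

// Hypothetical stand-in for the database function that does the spatial work.
function callSpatialDbFunction(geometry: string): string {
  return `buffered(${geometry})`;
}

// The pattern itself: read FieldA, let the database function calculate,
// and store the result in FieldB via the Mbo API.
mbo.setValue("FIELDB", callSpatialDbFunction(mbo.getString("FIELDA")));
console.log(mbo.getString("FIELDB"));
```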
I hope that helps.

How can I handle relational data in Meteor?

I am learning Meteor using the Discover Meteor book.
I come from a PHP and MySQL background, and the application I am thinking of doing as a side-project is a real-time Backgammon web application. While Meteor's reactivity is a very, very big plus, I am stumped on how I can handle relational data (e.g. games, users, tournaments, friends, teams, etc).
I have read a lot of answers (ranging from old to very old) on StackOverflow on how one can use MySQL with Meteor. My search has led me to numtel/meteor-mysql. However, when I look at the examples provided in that repository, it is nowhere as clean as Meteor's own implementation of MongoDB.
My options, as I understand them, are the following:
Use MongoDB, and rewrite a lot of the features present in RDBMS in Javascript
Use an RDBMS that is not as well-supported in Meteor as MongoDB
IMO, option two is much less work, and I think it might lead to fewer problems in the future. Take the problem in the epilogue of Why You Should Never Use MongoDB, for example.
We could also model this data as a set of nested hashes. The set of information about a particular TV show is one big nested key/value data structure. Inside a TV show, there’s an array of seasons, each of which is also a hash. Within each season, an array of episodes, each of which is a hash, and so on. This is how MongoDB models the data. Each TV show is a document that contains all the information we need for one show.
But then, how would you query for the TV shows that someone has starred in?
Back to my original question: is there something I'm missing here? Handling relational data is something that a lot of applications will need to do, but I can't seem to find a clean solution.
It will be much less work if you go with option 1 in my opinion.
It won't be difficult to learn to use MongoDB, and since MongoDB uses JSON objects and is supported natively by Meteor and all its packages, it will be much less work.
I advise having a look at the aldeed packages, collection2 and simple-schema, to structure your collections. I also advise using the collection-helpers package to help with joins.
If you have a posts collection with name, authorId and content fields, then to get the author of a post, you'd write Meteor.users.findOne(post.authorId).
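As a rough sketch of how that join tends to look with the collection-helpers package (double-check the package docs for the exact API; the collection and field names are just examples):

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

interface Post {
  _id?: string;
  name: string;
  authorId: string;
  content: string;
}

// Each post stores only the author's _id, much like a foreign key in MySQL.
const Posts = new Mongo.Collection<Post>('posts');

// helpers() comes from the collection-helpers package, so the "join" lives
// in one place instead of being repeated wherever an author is needed.
Posts.helpers({
  author() {
    return Meteor.users.findOne(this.authorId);
  },
});

// Usage: fetch a post, then walk to its author.
const post = Posts.findOne({ name: 'Hello Meteor' });
if (post) {
  console.log(post.author());
}
```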
Hope that clears things up a bit and gets you on your way.

Soft Delete vs. DB Archive

Suggested Reading
Similar: Are soft deletes a good idea?
Good Article: http://weblogs.asp.net/fbouma/archive/2009/02/19/soft-deletes-are-bad-m-kay.aspx
How I ended up here
I strongly believe that when making software, anything done up front to minimize work later on pays off in truckloads. As such, I am trying to make sure, when approaching my database schema and maintenance, that it can maintain relational integrity while not being archaic or overly complex.
This resulted in a sort of shudder when looking at the typical delete approach, CASCADE. Yikes, a little over the top for my current situation. I wanted to maintain relational graph integrity, but I didn't want to remove every graph just because one part of the chain was irrelevant. Therefore I chose to go the way of soft deleting to make sure data integrity would remain while records could be removed from relevance. I accomplished this by adding a "DateDeleted" field to every, sigh, table in the database.
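To give an idea of what that turned into, here is a rough sketch of the kind of code the DateDeleted approach forces everywhere (names are made up):

```typescript
// Hypothetical rows once every table has grown a DateDeleted column.
interface OrderRow {
  id: number;
  customerId: number;
  dateDeleted: Date | null;
}

interface CustomerRow {
  id: number;
  name: string;
  dateDeleted: Date | null;
}

// The filter that now has to appear in (or be remembered by) every read path.
const isLive = <T extends { dateDeleted: Date | null }>(row: T) => row.dateDeleted === null;

function ordersForCustomer(orders: OrderRow[], customerId: number): OrderRow[] {
  // Forgetting isLive() here silently resurrects "deleted" orders.
  return orders.filter((o) => o.customerId === customerId && isLive(o));
}

function customerForOrder(customers: CustomerRow[], order: OrderRow): CustomerRow | undefined {
  const customer = customers.find((c) => c.id === order.customerId);
  // And every traversal has to decide what a live order pointing at a
  // soft-deleted customer even means.
  return customer && isLive(customer) ? customer : undefined;
}
```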
Turning Point
However, this is clearly starting to add too much complexity and work to be worth it. I am including logic where it should not go and do not feel like perpetuating these bad practices throughout my whole application. In short, I am going to roll back this implementation.
When looking up whether or not people like soft deleting, it seems there is a lot of support for it. In fact, the linked "Similar" post up top sports a top-voted answer of "I always soft-delete". Moreover, the majority of answers there and around SO include some sort of "isDeleted" or "isActive" type of approach.
New Implementation Idea
The "Good Article" linked covers some of the issues I actually began encountering. It also suggests an alternative to soft-deleting which I found spot on from a best practices standpoint. The suggestion is to use an "Archiving Database", which I had actually considered when looking at soft deleting. The reason I decided against it was because of the point I made earlier about CASCADE deleting. I am wary to remove entire graphs from the database because one part of the chain is removed. However, this graph would be able to be retained at least from the archive so I am not sure that it would be really that terrible.
Crossroads
So, should I just keep adding logic, logic, logic....logic? Or, should I consider making the archival database where most of the logic would simply sit in a very complex graph management class to store / restore relational object graphs? The latter seems to be best practice to me.
Soft deleting is definitely an easy approach in theory. However, not much attention is really paid to what to do with the data that wasn't actually deleted. In fact, it is glossed over.
In my opinion this is because the wrong issue is in focus: not just "what does deleting mean", but what IS being deleted. When a record is to be removed, what is really being removed is a node in a graph, not just a single record. That whole graph's integrity is the reason people band-aid over the issue with "soft deletes". These band-aid solutions tend to hide the gangrene underneath - a festering problem which only gets worse with time.
What's worse is that, to accompany the soft delete, logic must be included all over the place (many times breaking various conventions and implementing anti-patterns) to account for the possible breaks in the object graph. Moreover, what kind of business logic is "isDeleted"?!
I believe a very strong solution to this problem, the problem of removing an object while retaining the referential integrity of the object graph, is to use an archival pattern. On delete of an object, the object is archived then deleted. The archive database, a mirror database with meta data (temporal database design can be used and is very relevant here), would then receive the object to be archived and restored if necessary.
This makes it very straightforward to avoid listing or including a deleted object, as the relevant database no longer holds it. Now, the same logic which was applied looking for "isDeleted", "isActive" or "DeletedDate" can be applied in the correct place (not all over the place): to foreign keys of retrieved objects. When a foreign key is present but the object is not, there is now a logical explanation and a logical set of options. Display that the containing object was deleted and offer some course of action: "Restore", "Delete Current Containing Object", "View Deleted". These options can be either chosen by the user or explicitly defined in code in a logical manner. Depending on how advanced the archival database is, perhaps more options exist, such as who deleted it, when, why, etc.
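Here is a minimal, in-memory sketch of that flow (hypothetical names; in practice the archive would be its own, possibly temporal, database rather than a Map):

```typescript
// Hypothetical entity; in practice this would be a whole object graph.
interface Product {
  id: string;
  name: string;
}

const live = new Map<string, Product>();
const archive = new Map<string, { entity: Product; deletedAt: Date }>();

// Delete = archive first, then remove from the live store, so the graph
// can still be inspected or restored later.
function archiveAndDelete(id: string): void {
  const entity = live.get(id);
  if (!entity) throw new Error(`No live entity ${id}`);
  archive.set(id, { entity, deletedAt: new Date() });
  live.delete(id);
}

// Resolving a foreign key now has three explicit outcomes instead of a
// row that is "there but not really there".
type Resolution =
  | { kind: 'live'; entity: Product }
  | { kind: 'archived'; entity: Product; deletedAt: Date }
  | { kind: 'missing' };

function resolve(id: string): Resolution {
  const current = live.get(id);
  if (current) return { kind: 'live', entity: current };
  const archived = archive.get(id);
  if (archived) return { kind: 'archived', entity: archived.entity, deletedAt: archived.deletedAt };
  return { kind: 'missing' };
}

function restore(id: string): void {
  const archived = archive.get(id);
  if (!archived) throw new Error(`No archived entity ${id}`);
  live.set(id, archived.entity);
  archive.delete(id);
}

// Usage: an order line still referencing productId "p1" can ask the archive
// what happened and offer "Restore / View Deleted" to the user.
live.set('p1', { id: 'p1', name: 'Widget' });
archiveAndDelete('p1');
console.log(resolve('p1')); // { kind: 'archived', ... }
restore('p1');
console.log(resolve('p1')); // { kind: 'live', ... }
```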

First write code using the API, then the actual API - does this approach have a name, and is it valid for the API design process?

The standard way of working on a new API (library, class, whatever) usually looks like this:
you think about what methods the API user would need
you implement the API you suspect the user will need
So basically you're trying to guess what your API should look like. It very often leads to over-engineering: huge APIs that you think the user will need, and it is very possible that a great part of your code won't be used at all.
Some time ago, maybe even a few years ago, I read an article that promoted writing the client code first. I don't remember where I found it, but the author pointed out several advantages, like a better understanding of how the API will be used, what it should provide, and what is basically unnecessary. I think the idea was that it goes along with the Scrum methodology and user stories, but at the implementation level.
Just out of curiosity, for my latest private project I started not with the actual API (some kind of toolkit library) but with the client code that would use this API. Of course my code is all in red because the classes, methods and properties do not exist, and I can forget about help from IntelliSense, but what I noticed is that after a few days of coding my application "has" all the basic functionality and my library API "is" a lot smaller than I imagined when starting the project.
I'm not saying that if somebody took my library and started using it, it wouldn't lack some features, but I think it helped me realize that my idea of this API was somewhat flawed, because I usually try to cover all bases and provide methods "just in case". And sometimes that bites me badly, because I make stupid mistakes in basic functions while being more focused on code that somebody might possibly need.
So what I would like to ask is: have you ever tried this approach when you needed to create a new API, and did it help you? Is it a recognized technique that has a name?
So basically you're trying to guess what your API should look like.
And that's the biggest problem with designing anything this way: there should be no (well, minimal) guesswork in software design. Designing an API based on assumptions rather than actual information is dangerous, for several reasons:
It's directly counter to the principle of YAGNI: in order to get anything done, you have to assume what the user is going to need, with no information to back up those assumptions.
When you're done, and you finally get around to using your API, you'll invariably find that it sucks to use (poor user experience), because you weren't thinking about how the library is used (UX), you were thinking about what the library must do (features).
An API, by definition, is an interface for users (i.e., developers). Designing as anything else just makes for a bad design, without fail.
Writing sample code is like designing a GUI before writing the backend: a Good Thing. It forces you to think about user experience and practical effects of design decisions without getting bogged down in useless theorising and assumption.
And contrary to Gabriel's answer, this is not bottom-up design: it's top-down. Rather than design the concrete backend of your library and then force an abstract interface on top of it, you first design the interface and then worry about the implementation.
Generally speaking, the idea of designing the concrete first and abstracting from it afterwards is called bottom-up design. Test Driven Development uses a principle similar to what you describe to support better design. First you write a test, which is a use of the code you are going to write afterwards. It is important to proceed stepwise, because you have to prove that the API is implementable. An important part of each step is refactoring - this allows you to design a more concise API and reuse parts of your code.
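A tiny sketch of that order of work, with hypothetical names: the calling code is written first and dictates the API surface; the implementation comes second and contains only what the caller actually demanded.

```typescript
// Step 1: write the calling code / test first. This is the API you *wish*
// existed; at this point ReportBuilder does not compile yet.
function clientCode() {
  const report = new ReportBuilder()
    .addSection('Sales', [120, 80])
    .addSection('Returns', [5])
    .render();
  console.assert(report.includes('Sales: 200'), 'sums a section');
  console.assert(report.includes('Returns: 5'), 'handles single values');
}

// Step 2: implement only what the client code above demanded, with no
// "just in case" methods, because nothing asked for them.
class ReportBuilder {
  private sections: Array<{ title: string; values: number[] }> = [];

  addSection(title: string, values: number[]): this {
    this.sections.push({ title, values });
    return this;
  }

  render(): string {
    return this.sections
      .map((s) => `${s.title}: ${s.values.reduce((a, b) => a + b, 0)}`)
      .join('\n');
  }
}

clientCode();
```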

The best way to familiarize yourself with an inherited codebase

Stacker Nobody asked about the most shocking thing new programmers find as they enter the field.
Very high on the list is the impact of inheriting a codebase with which one must rapidly become acquainted. It can be quite a shock to suddenly find yourself charged with maintaining N lines of code that have been cobbled together for who knows how long, and to have a short time in which to start contributing to it.
How do you efficiently absorb all this new data? What eases this transition? Is the only real solution to have already contributed to enough open-source projects that the shock wears off?
This also applies to veteran programmers. What techniques do you use to ease the transition into a new codebase?
I added the Community-Building tag to this because I'd also like to hear some war-stories about these transitions. Feel free to share how you handled a particularly stressful learning curve.
Pencil & Notebook ( don't get distracted trying to create a unrequested solution)
Make notes as you go and take an hour every monday to read thru and arrange the notes from previous weeks
with large codebases first impressions can be deceiving and issues tend to rearrange themselves rapidly while you are familiarizing yourself.
Remember the issues from your last work environment aren't necessarily valid or germane in your new environment. Beware of preconceived notions.
The notes/observations you make will help you learn quickly what questions to ask and of whom.
Hopefully you've been gathering the names of all the official (and unofficial) stakeholders.
One of the best ways to familiarize yourself with inherited code is to get your hands dirty. Start with fixing a few simple bugs and work your way into more complex ones. That will warm you up to the code better than trying to systematically review the code.
If there's a requirements or functional specification document (which is hopefully up-to-date), you must read it.
If there's a high-level or detailed design document (which is hopefully up-to-date), you probably should read it.
Another good way is to arrange a "transfer of information" session with the people who are familiar with the code, where they provide a presentation of the high level design and also do a walk-through of important/tricky parts of the code.
Write unit tests. You'll find the warts quicker, and you'll be more confident when the time comes to change the code.
Try to understand the business logic behind the code. Once you know why the code was written in the first place and what it is supposed to do, you can start reading through it, or, as someone said, probably fixing a few bugs here and there.
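To make the "write unit tests" suggestion above concrete, a common first move on inherited code is a characterization test: pin down what the code does today before changing anything. A minimal sketch, with a made-up function under test:

```typescript
// Inherited function whose intent is unclear; treat its current behaviour
// as the specification until you know better.
function legacyPriceCode(total: number, customerType: string): number {
  if (customerType === 'G') return total * 0.9;
  if (total > 1000) return total * 0.95;
  return total;
}

// Characterization tests: they document today's behaviour (warts included),
// so any later refactoring that changes a result fails loudly.
function check(name: string, actual: number, expected: number) {
  console.assert(actual === expected, `${name}: got ${actual}, expected ${expected}`);
}

check("'G' customers get 10% off", legacyPriceCode(100, 'G'), 90);
check('large orders get 5% off', legacyPriceCode(2000, 'X'), 1900);
check('small orders are unchanged', legacyPriceCode(100, 'X'), 100);
check("'G' beats the large-order rule", legacyPriceCode(2000, 'G'), 1800);
```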
My steps would be:
1.) Set up a Source Insight (or any good source code browser you use) workspace/project with all the source and header files in the code base. Browse at a higher level, from the topmost function (main) to the lowermost function. During this code browsing, keep making notes on paper or in a Word document, tracing the flow of the function calls. Do not get into the nitty-gritty of function implementations in this step; keep that for later iterations. In this step, keep track of what arguments are passed to functions, the return values, how the arguments passed to functions are initialized, how their values get modified, and how the return values are used.
2.) After one iteration of step 1, when you have some grasp of the code and the data structures used in the code base, set up an MSVC project (or any other compiler project relevant to the programming language of the code base), compile the code, execute it with a valid test case, and single-step through the code again from main down to the lowest level of function. In between the function calls, keep noting the values of variables passed and returned, the various code paths taken, the various code paths avoided, etc.
3.) Keep repeating steps 1 and 2 iteratively until you are comfortable enough that you can change some code, add some code, or find a bug in existing code and fix it!
-AD
I don't know about this being "the best way", but something I did at a recent job was to write a code spider/parser (in Ruby) that went through and built a call tree (and a reverse call tree) which I could later query. This was slightly non-trivial because we had PHP which called Perl which called SQL functions/procedures. Any other code-crawling tools would help in a similar fashion (i.e. javadoc, rdoc, perldoc, Doxygen etc.).
Reading any unit tests or specs can be quite enlightening.
Documenting things helps (either for yourself, or for other teammates, current and future). Read any existing documentation.
Of course, don't underestimate the power of simply asking a fellow teammate (or your boss!) questions. Early on, I asked as often as necessary "do we have a function/script/foo that does X?"
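For what it's worth, the valuable part of that spider was really the pair of call maps it produced, which any parser or code-crawling tool can feed. A rough sketch of that structure (the language and names here are illustrative, not the original Ruby):

```typescript
// Forward and reverse call graphs, built from whatever parser or tool
// (javadoc, Doxygen, a home-grown spider) you have available.
class CallGraph {
  private callees = new Map<string, Set<string>>(); // who does X call?
  private callers = new Map<string, Set<string>>(); // who calls X?

  addCall(caller: string, callee: string): void {
    if (!this.callees.has(caller)) this.callees.set(caller, new Set());
    if (!this.callers.has(callee)) this.callers.set(callee, new Set());
    this.callees.get(caller)!.add(callee);
    this.callers.get(callee)!.add(caller);
  }

  calleesOf(fn: string): string[] {
    return [...(this.callees.get(fn) ?? [])];
  }

  callersOf(fn: string): string[] {
    return [...(this.callers.get(fn) ?? [])];
  }
}

// Usage: once populated, "what breaks if I touch updateOrderTotal?" becomes
// a one-line query instead of an afternoon of grepping.
const graph = new CallGraph();
graph.addCall('checkout.php::placeOrder', 'order.pm::updateOrderTotal');
graph.addCall('admin.php::recalc', 'order.pm::updateOrderTotal');
console.log(graph.callersOf('order.pm::updateOrderTotal'));
```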
Go over the core libraries and read the function declarations. If it's C/C++, this means only the headers. Document whatever you don't understand.
The last time I did this, one of the comments I inserted was "This class is never used".
Do try to understand the code by fixing bugs in it. Do correct or maintain documentation. Don't modify comments in the code itself; that risks introducing new bugs.
In our line of work, generally speaking we do no changes to production code without good reason. This includes cosmetic changes; even these can introduce bugs.
No matter how disgusting a section of code seems, don't be tempted to rewrite it unless you have a bugfix or other change to do. If you spot a bug (or possible bug) when reading the code trying to learn it, record the bug for later triage, but don't attempt to fix it.
Another Procedure...
After reading Andy Hunt's "Pragmatic Thinking and Learning - Refactor Your Wetware" (which doesn't address this directly), I picked up a few tips that may be worth mentioning:
Observe Behavior:
If there's a UI, all the better. Use the app and get a mental map of relationships (e.g. links, modals, etc). Look at HTTP requests if it helps, but don't put too much emphasis on it -- you just want a light, friendly acquaintance with the app.
Acknowledge the Folder Structure:
Once again, this is light. Just see what belongs where, and hope that the structure is semantic enough -- you can always get some top-level information from here.
Analyze Call-Stacks, Top-Down:
Go through and list, on paper or some other medium (but try not to type it -- this gets different parts of your brain engaged; build it out of Legos if you have to), the function calls, objects, and variables that are closest to the top level first. Look at constants and modules, and make sure you don't dive into fine-grained features if you can help it.
MindMap It!:
Maybe the most important step. Create a very rough draft mapping of your current understanding of the code. Make sure you run through the mindmap quickly. This allows different parts of your brain (mostly R-mode) to have a say in the map.
Create clouds, boxes, etc. wherever you initially think they should go on the paper. Feel free to denote boxes with syntactic symbols (e.g. 'F'-Function, 'f'-closure, 'C'-Constant, 'V'-Global Var, 'v'-low-level var, etc). Use arrows: incoming arrows for arguments, outgoing for returns, or whatever comes more naturally to you.
Start drawing connections to denote relationships. It's OK if it looks messy - this is a first draft.
Make a quick rough revision. If it's too hard to read, do another quick organization of it, but don't do more than one revision.
Open the Debugger:
Validate or invalidate any notions you had after the mapping. Track variables, arguments, returns, etc.
Track HTTP requests etc to get an idea of where the data is coming from. Look at the headers themselves but don't dive into the details of the request body.
MindMap Again!:
Now you should have a decent idea of most of the top-level functionality.
Create a new MindMap that has anything you missed in the first one. You can take more time with this one and even add some relatively small details -- but don't be afraid of what previous notions they may conflict with.
Compare this map with your last one and eliminate any question you had before, jot down new questions, and jot down conflicting perspectives.
Revise this map if it's too hazy. Revise as much as you want, but keep revisions to a minimum.
Pretend It's Not Code:
If you can put it into mechanical terms, do so. The most important part of this is to come up with a metaphor for the app's behavior and/or smaller parts of the code. Think of ridiculous things, seriously. If it was an animal, a monster, a star, a robot: what kind would it be? If it was in Star Trek, what would they use it for? Think of many things to weigh it against.
Synthesis over Analysis:
Now you want to see not 'what' but 'how'. Any low-level parts that threw you for a loop can be taken out and put into a sterile environment (you control their inputs). What sort of outputs are you getting? Is the system more complex than you originally thought? Simpler? Does it need improvements?
Contribute Something, Dude!:
Write a test, fix a bug, comment it, abstract it. You should have enough ability to start making minor contributions, and FAILING IS OK :)! Make a note of any changes you made in commits, chat, or email. If you did something dastardly, your teammates can catch it before it goes to production -- and if something is wrong, it's a great way to get a teammate to clear things up for you. Usually listening to a teammate talk will clear up a lot of what made your MindMaps clash.
In a nutshell, the most important thing to do is use a top-down fashion of getting as many different parts of your brain engaged as possible. It may even help to close your laptop and face your seat out the window if possible. Studies have shown that enforcing a deadline creates a "Pressure Hangover" for ~2.5 days after the deadline, which is why deadlines are often best to have on a Friday. So, BE RELAXED, THERE'S NO TIMECRUNCH, AND NOW PROVIDE YOURSELF WITH AN ENVIRONMENT THAT'S SAFE TO FAIL IN. Most of this can be fairly rushed through until you get down to details. Make sure that you don't bypass understanding of high-level topics.
Hope this helps you as well :)
All really good answers here. Just wanted to add a few more things:
One can pair architectural understanding with flash cards, and revisiting those can solidify understanding. I find questions such as "Which part of the code implements functionality X?" useful, where X could be a piece of functionality in your code base.
I also like to open a buffer in emacs and start re-writing some parts of the code base that I want to familiarize myself with and add my own comments etc.
One thing vi and emacs users can do is use tags. Tags are contained in a file (usually called TAGS). You generate one or more tags files with a command (etags for emacs, ctags for vi). Then, when you edit source code and see a confusing function or variable, you load the tags file and it will take you to where the function is declared (not perfect, but good enough). I've actually written some macros that let you navigate source using Alt-cursor, sort of like popd and pushd in many flavors of UNIX.
BubbaT
The first thing I do before going down into code is to use the application (as several different users, if necessary) to understand all the functionalities and see how they connect (how information flows inside the application).
After that I examine the framework in which the application was built, so that I can make a direct relationship between all the interfaces I have just seen with some View or UI code.
Then I look at the database and any database command handling layer (if applicable), to understand how that information (which users manipulate) is stored and how it goes to and comes from the application.
Finally, after learning where data comes from and how it is displayed, I look at the business logic layer to see how data gets transformed.
I believe every application architecture can be divided like this, and knowing the overall function (a who-is-who in your application) might be beneficial before really debugging it or adding new stuff - that is, if you have enough time to do so.
And yes, it also helps a lot to talk with someone who developed the current version of the software. However, if he/she is going to leave the company soon, keep a note of his/her wish list (what they wanted to do for the project but were unable to because of budget constraints).
Create documentation for each thing you figure out from the codebase.
Find out how it works by experimentation - changing a few lines here and there and seeing what happens.
Use Geany, as it speeds up the searching of commonly used variables and functions in the program and adds them to autocomplete.
Find out if you can contact the original developers of the code base, through Facebook or by Googling for them.
Find out the original purpose of the code and see if the code still fits that purpose or should be rewritten from scratch, in fulfillment of the intended purpose.
Find out what frameworks the code uses and what editors were used to produce it.
The easiest way to deduce how the code works is to replicate how a certain part would have been done by you, and then recheck the code to see if there is such a part.
It's reverse engineering - figuring out something by just trying to re-engineer the solution.
Most computer programmers have experience in coding, and there are certain patterns you can look for in the code.
There are two types of code, object-oriented and structurally oriented.
If you know how to do both, you're good to go, but if you aren't familiar with one or the other, you'd have to relearn how to program in that fashion to understand why it was coded that way.
In object-oriented code, you can easily create diagrams documenting the behaviors and methods of each object class.
If it's structurally oriented, meaning organized by function, create a function list documenting what each function does and where it appears in the code.
I haven't done either of the above myself; as I'm a web developer, it is relatively easy to figure out how something works by starting from index.php and following it to the rest of the pages.
Good luck.