The Implications of Modern Day Software Development Abstractions [closed] - language-agnostic

I am currently writing a dissertation about the implications or dangers that today's software development practices and teaching may have on programming in the long term.
Just to make it clear: I am not attacking the use of abstractions in programming. Every programmer knows that abstractions are the basis of modularity.
What I want to investigate with this dissertation are the positive and negative effects abstractions can have on software development. As regards the positive effects, I am sure I can find many sources to confirm them. But what about the negative effects of abstractions? Do you have any stories to share about times when certain abstractions failed on you?
The main concern is that many programmers today are programming against abstractions without having the faintest idea of what the abstraction is doing under the covers. This may very well lead to bugs and bad design. So, in your opinion, how important is it that programmers actually know what is going on below the abstractions?
Taking a simple example from Joel's Back to Basics, C's strcat:
void strcat( char* dest, char* src )
{
    while (*dest) dest++;       /* walk dest until its null terminator */
    while (*dest++ = *src++);   /* copy src, including its null terminator */
}
The above function has the issue that, when you are concatenating repeatedly, it always starts from the beginning of the dest pointer to find the null terminator. If you write the function as follows instead, it returns a pointer to the end of the concatenated string, which you can then pass back in as the dest parameter of the next concatenation:
char* mystrcat( char* dest, char* src )
{
    while (*dest) dest++;       /* walk dest until its null terminator */
    while (*dest++ = *src++);   /* copy src, including its null terminator */
    return --dest;              /* point at the new null terminator, ready for the next call */
}
Now this is obviously a very simple example as regards abstractions, but it is the same concept I shall be investigating.
Finally, what do you think about the issue that schools are preferring to teach Java instead of C and Lisp?
Can you please give your opinions and views on this subject?
Thank you for your time and I appreciate every comment.

First of all, abstractions are inevitable because they help us to deal with the mind-blowing complexity of things.
Abstractions are also inevitable because individuals are increasingly required to undertake more tasks, or even complete projects. To address the problem, one uses libraries which wrap lower-level concepts and expose more complex behavior.
Naturally, a developer has less and less time to know the internals of things. The latest concern I heard about on SO pages is people starting to learn JavaScript with the jQuery library while ignoring raw JavaScript altogether.
The issue is about the balance between:
Knowing the tiniest details of some technology and being a master of it, but at the same time being unable to work with anything else.
Having a superficial knowledge of a wide variety of technologies and tools, which nevertheless proves sufficient for common everyday tasks and allows an individual to perform in multiple areas, possibly covering all sides of some (moderately big) project.
Take your pick.
Some work requires the one, another position requires the other.
So, in your opinion, how important is it that programmers actually know what is going on below the abstractions?
It would be nice if people knew what is happening behind the scenes. This knowledge comes with time and practice, up to a certain degree, and depends on what kind of tasks you have. You certainly shouldn't blame people for not knowing everything. If you wish a person to be able to perform in a variety of fields, it is inevitable that he won't have time to cover each of them down to the last bit.
What is essential, is the knowledge of the basic building blocks. Data structures, algorithms, complexity. That should provide a basis for everything else.
Knowing the tiniest details of some particular technology is good, but not essential. Anyway, you can't learn them all. There are too many of them, and they keep coming.
Finally, what do you think about the issue that schools are preferring to teach Java instead of C and Lisp?
Schools shouldn't be teaching programming languages at all. They're there to teach the basics of theoretical and practical CS, social skills, communication, and team work; to cover a vast variety of topics and problems and provide a wide-angle view for their graduates. This will help them find their way. Whatever they need to know in detail, they'll learn on their own.

An example where abstraction has failed:
In this case, a piece of software was needed to communicate to many different third party data processors. The communication was done through various messaging protocols; the transport method/protocol is not important in this case. Just assume everyone communicated through messaging.
The idea was to abstract the features of each of these third parties into a single, unified message format. It seemed relatively straightforward because each of the third parties performed a similar service. The problem was that some third parties used different terms to explain similar features. It was also found that some third parties had additional features that other third parties did not have.
The designers of the abstraction did not see past the differences in third-party terminology, nor did they think it was reasonable to limit the scope of the unified format to only the features the third parties had in common. Instead, a single, monolithic message schema was developed to support any and all features of the third parties considered at the time. In what was probably considered a future-proofing move, they added a means of also passing an arbitrary number of name/value pairs along with the monolithic message, in case there were future data elements that the monolithic message could not handle.
Early on, it became clear that changing the monolithic message was going to be difficult because so many people were using it in mission-critical systems. The use of the name/value pairs increased. Each name that could be used was documented inside a large spreadsheet, and developers were required to consult the spreadsheet to avoid duplicating the purpose of an existing name. The list got so large, however, that collisions between the purposes of different names became frequent.
The majority of the monolithic message's fields now have no purpose and are kept mainly for backwards compatibility. There are name/values that can be used to replace fields in the monolithic message. The majority of the interfacing is now done through the name/value pairs. In cases where the client intends to communicate with more than one third party, each client needs to reconcile the name/values available for each third party. It would almost be simpler to interface directly with the third parties themselves.
I believe this illustrates that, from the perspective of a consumer of the monolithic message, it is important that developers of the consuming code should not need to know what is happening under the covers. If the designers had considered that the consumers of the monolithic message should not have to understand the abstraction in great detail, the monolithic message and its associated name/value pairs might never have happened. Documenting the abstraction with assertions regarding input and expected output would make life so much simpler.
As for colleges not teaching C and Lisp....they are cheating the students. You get a better understanding of what is going on with the machine and OS with C. You get a bit of a different perspective on processing data and approaching problems with Lisp. I have used some of the ideas I learned using Lisp in programs written in C, C++, .Net, and Java. Learning Java after knowing even just C is not very difficult. The OO part is really not programming language specific, so perhaps using Java for that is acceptable.

An understanding of fundamentals of algorithms (e.g. time complexity) and some knowledge about the metal is essential to designing/writing smells-good code.
I would suggest, though, that just as important is education in modern abstractions and profiling. I feel that modern abstractions make me so much more productive than I would be without them that they are at least as important as good fundamentals, if not more so.
An important element that lacked in my education was the use of profilers. When used routinely and correctly, profilers can help mitigate problems with poor fundamentals.

Since you quote Joel Spolsky, I take it you're aware of his "Law of Leaky Abstractions"? I'll mention it for future readers. http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Green & Blackwell's Ironies of Abstractions talks a bit about the effort of learning the abstraction. http://homepage.ntlworld.com/greenery/workStuff/Papers/index.html
The term "astronaut architecture" is a reaction to over-abstraction.
I know I certainly curse abstraction when I haven't touched Java or C# in a while and I want to write to a file, but have to instance a Stream...Writer...Adaptor....Handler....
Also, Patterns, as in Gang Of Four. Seemed great when I first read about them in the mid-90's, but can never remember factory, facade, interface, helper, worker, flyweight....


What is the best Pro OOP argument? [closed]

I am trying to get a couple of team members onto the OOP mindset, who currently think in terms of procedural programming.
However, I am having a hard time putting into words why all of this is good, and why they should want to benefit from it.
They use a different language than I do, and I am lacking the communication skills to explain this to them in a way that makes them "want" to learn the OOP way of doing things.
What are some good language independent books, articles, or arguments anyone can give or point to?
OOP is good for a multi-developer team because it easily allows abstraction, encapsulation, inheritance and polymorphism. These are the big buzz words of OOP and they are the big buzz words for good reasons.
Abstraction: Allows other members of your team to use code that you write without having to understand the implementation details. This reduces the amount of necessary communication. Think of The Mythical Man Month wherein it is detailed that communication is one of the highest costs facing a development team.
Encapsulation: Allows you to change your implementation details without impacting users of your code. As such, it reduces code maintenance costs.
Inheritance: Allows your team to reuse and extend your implementations with reduced costs.
Polymorphism: Allows your team to use different implementations of a given abstraction. If your team is writing code to read and parse data from a Stream, because of polymorphism it can now work with FileStreams, MemoryStreams and PigeonStreams seamlessly and with significantly reduced costs.
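To make the polymorphism point concrete, here is a minimal Java sketch; the interface and class names are invented for illustration and are not a real library API. The parsing code is written once against the abstraction and works with whatever implementation it is handed:

    interface DataStream {
        String readAll();
    }

    class FileStream implements DataStream {
        private final String path;
        FileStream(String path) { this.path = path; }
        public String readAll() { return "contents of " + path; }   // stub standing in for real file I/O
    }

    class MemoryStream implements DataStream {
        private final String buffer;
        MemoryStream(String buffer) { this.buffer = buffer; }
        public String readAll() { return buffer; }
    }

    class Parser {
        // Written once against the abstraction; works with any DataStream implementation.
        static int countLines(DataStream stream) {
            return stream.readAll().split("\n").length;
        }
    }

Parser.countLines(new FileStream("report.txt")) and Parser.countLines(new MemoryStream("a\nb")) behave identically from the caller's point of view, which is exactly the cost reduction described above.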
OOP is not a holy grail. It is inappropriate for some teams because the costs of using it could be higher than the costs of not using it. For example, if you try to design for polymorphism but never have multiple implementations of a given abstraction then you have probably increased your costs.
Always give examples.
Take a bit of their code you think is bad. Re-write it to be better. Explain why it is better. Your colleagues will either agree or disagree.
Nobody uses (or should use) techniques because they're good techniques, they (should) use them because they produce good results. The advantages of very simple use of classes and objects are usually fairly easy to see, for instance when you have an array of objects with n properties instead of n arrays, one for each field you care about, and so on.
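As a hedged little sketch of that "array of objects versus n arrays" point (names invented, Java used purely for illustration):

    class Employee {
        final String name;
        final double salary;
        Employee(String name, double salary) { this.name = name; this.salary = salary; }
    }

    class Payroll {
        // With objects: one list, each element keeps its related fields together.
        java.util.List<Employee> employees = new java.util.ArrayList<>();

        // Without objects: parallel lists where index 7 in one list must always
        // refer to the same person as index 7 in the other, by convention only.
        java.util.List<String> names = new java.util.ArrayList<>();
        java.util.List<Double> salaries = new java.util.ArrayList<>();
    }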
Comparing procedural to OOP, the biggest winner by far is encapsulation. OOP doesn't mean that you get encapsulation automatically, but the process of doing it is free compared with procedural code.
Abstraction helps manage the complexity of an application: only the information that's required is exposed.
There are many ways to go about this: OOP is not the only discipline to promote this strategy.
Of course, merely claiming to do OOP does not mean one builds an application without abundant "abstraction leaks", thereby defeating the strategy...
I have a somewhat strange thought. There probably exist some areas where OOP is unnecessary or even bad (very, very IMHO: JavaScript programming).
You and your team probably work in one of these areas. Otherwise you would have failed many years ago, because teams which use OOP and all its benefits (like different frameworks, UML and so on) would simply do their job more efficiently.
I mean that if you still work well without OOP then, maybe, just leave it.
The killer phrase: With OOP you can model the world "as it is" *cough*.
OOP didn't make sense to me until I was working on an application that connected to two different databases. I needed a function called getEmployeeName() for both databases. I decided to create two objects, one for each database, to encapsulate the functions that ran against each one (there were no functions that ran against both simultaneously). Not the epitome of OOP, but a good start for me.
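A loose Java sketch of that first step, with invented class names and stubbed-out queries, might look like this; each object hides its own connection details and query logic:

    class PayrollDatabase {
        private final String connectionString;
        PayrollDatabase(String connectionString) { this.connectionString = connectionString; }
        String getEmployeeName(int id) {
            // run the payroll-specific query using connectionString ...
            return "payroll employee " + id;   // stub
        }
    }

    class HrDatabase {
        private final String connectionString;
        HrDatabase(String connectionString) { this.connectionString = connectionString; }
        String getEmployeeName(int id) {
            // run the HR-specific query using connectionString ...
            return "hr employee " + id;   // stub
        }
    }

Calling code simply asks whichever object it was given, without caring which database sits behind it.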
Most of my code is still procedural, but I'm more aware of situations where objects would make sense in my code. I'm not of the mindset that everything needs to be one way or the other.
Re-use of existing code through hierarchies.
The killer argument is IMHO that it takes much less time to re-design your code. Here is a similar question explaining why.
Having the ability to pass an entire object around that has a bunch of methods/functions you can call using that object. For example, let's say you want to pass a message around: you only need to pass one object, and everyone who gets that object will have access to all its functions.
Also, you can declare some objects' functions as public and some as private. There is also the concept of a friend function where only objects that are related through OO hierarchies have access to their friend's functions.
Objects help keep functions near the data they use and encapsulate it all into one entity that can be easily passed around.

So was that Data Structures & Algorithms course really useful after all?

I remember when I was in DSA I was like wtf O(n), wondering where I would use it other than in grad school or if you're not a PhD like Bloch. Somehow uses for it do pop up in business analysis, so I was wondering when you guys have had to call up your Big O skills to see how to write an algorithm, which data structure you used to fit, or whether you actually had to create a new DS (like your own implementation of a splay tree or trie).
Understanding Data Structures has been fundamental to many of the projects I've worked on, and that goes beyond the ten minute song 'n dance one does when asked such a question in an interview situation.
Granted that modern environments with all sorts of collection classes can make light work of storing and accessing large amounts of data, but having an understanding that a particular problem is best solved with a particular data structure can be a great timesaver. And by "timesaver" I mean "the difference between something working and not working".
Honestly, being able to answer that stuff is my biggest criterion for taking interviewees seriously in an interview. Knowing how basic data structures work, basic O(n) analysis, and some light theory is really crucial to being able to write large applications successfully.
It's important in the interview because it's important in the job. I've worked with techs in the past that were self taught, without taking the data structures course or reading a data structures book, and their code is occasionally bad in ways they should have seen coming.
If you don't know that n^2 is going to run slowly compared to n log n, you've got more to learn.
As far as the later half of the data structures courses, it isn't generally applicable to most tech jobs, but if you ever do wind up needing it, you'll wish you had paid more attention.
Big-O notation is one of the basic notations used when describing algorithms implemented by a particular library. For example, all documentation on STL that I've seen describes various operations in terms of big-O, so naturally you have to e.g. understand the difference between O(1), O(log n) and O(n) to understand the implications of your choice of STL containers and algorithms. MSDN also does that for .NET classes, and IIRC Java documentation does that for standard Java classes. So, I'd say that knowing the notation is pretty much a requirement for understanding documentation of most popular frameworks out there.
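As a small illustration in Java (the same reasoning applies to the STL or .NET collections mentioned above), the documented complexities are exactly what drive the choice of container; the numbers here are arbitrary:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class MembershipCheck {
        public static void main(String[] args) {
            int n = 50_000;
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

            // ArrayList documents its search operations as linear time: O(n) per lookup,
            // so this loop is O(n^2) overall and becomes noticeably slow as n grows.
            int found = 0;
            for (int i = 0; i < n; i++) if (list.contains(i)) found++;

            // HashSet documents contains as expected constant time: O(n) overall.
            int foundFast = 0;
            for (int i = 0; i < n; i++) if (set.contains(i)) foundFast++;

            System.out.println(found + " " + foundFast);
        }
    }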
Sure (even though I'm a humble MS in EE -- no PhD, no CS, differently from my colleague Joshua Bloch), I write a lot of stuff that needs to be highly scalable (or components that may need to be reused in highly scalable apps), so big-O considerations are almost always at work in my design (and it's not hard to take them into account). The data structures I use are almost always from Python's simple but rich supply (which I did lend a hand developing;-), rarely is a totally custom one needed (rather than building on top of list, dict, etc); but when it does happen (e.g. the bitvectors in my open source project gmpy), no big deal.
I was able to use B-Trees right when I learned about them in algorithms class (that was about 15 years ago, when there were far fewer open source implementations available). And even later, the knowledge about the differences between e.g. container classes came in handy...
Absolutely: even though stacks, queues, etc. are pretty straightforward, it helps to have been introduced to them in a disciplined fashion.
B-Trees and more advanced sorting are a bit more difficult, so learning them early was a big benefit, and I have indeed had to implement each of them at various points.
Finally, I created an algorithm for single-connected components a few years back that was significantly better than the one our signal-processing team was using, but I couldn't convince them that it was better until I could show that it was O(n) complexity rather than O(n log n).
...just to name a few examples.
Of course, if you are content to remain a CRUD-system hacker with no real desire to do more than collect a paycheck, then it may not be necessary...
I found my knowledge of data structures very useful when I needed to implement a customizable event-driven system about ten years ago. That's the biggie, but I use that sort of knowledge fairly frequently in lesser ways.
For me, knowing the exact algorithms has been... nice as background knowledge. However, the thing that's been the most useful is the more general background of having to pay attention to how different pieces of an algorithm interact. For instance, there can be places in code where moving one piece of code (e.g., outside a loop) can make a huge difference in both time and space.
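A tiny, hedged example of that "move it outside the loop" point, in Java with invented names: the pattern never changes inside the loop, so compiling it on every iteration is pure waste.

    import java.util.List;
    import java.util.regex.Pattern;

    class LineFilter {
        // Before: Pattern.compile runs on every iteration even though the
        // pattern never changes inside the loop.
        static long countMatchesSlow(List<String> lines, String regex) {
            long count = 0;
            for (String line : lines) {
                if (Pattern.compile(regex).matcher(line).find()) count++;
            }
            return count;
        }

        // After: the invariant work is hoisted out of the loop; only the
        // per-line matching remains inside.
        static long countMatchesFast(List<String> lines, String regex) {
            Pattern pattern = Pattern.compile(regex);
            long count = 0;
            for (String line : lines) {
                if (pattern.matcher(line).find()) count++;
            }
            return count;
        }
    }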
It's less about the specific knowledge the course taught and more that it acted like several years of experience. The course took all the variations of something that might take years to encounter (and have drilled into you) through pure "real world experience" and condensed them.
The title of your question asks about data structures and algorithms, but the body of your question focuses on complexity analysis, so I'll focus on that too:
There are lots of programming jobs where being able to do complexity analysis is at least occasionally useful. See What career can I hope for if I like algorithms? for some examples of these.
I can think of several instances in my career where either I or a co-worker have discovered a piece of code where the (usually time, sometimes space) complexity was higher than it should have been, e.g. something that was quadratic or cubic when it could have been linear or n log n. Such code would work fine when given small inputs, but on larger inputs would quickly become really slow or consume all available memory. Knowing alternative algorithms and data structures, their complexities, and also how to analyze the complexity to build new algorithms is vital in being able to correct these problems (or avoid them in the first place).
Networking is all I've used it for: in an implementation of traveling salesman.
Unfortunately I do a lot of "line of business" and "forms over data" apps, so most problems I work on can be solved by hammering together arrays, linked lists, and hash tables. However, I've had the chance to work my data structures magic here and there:
Due to weird complex business rules, I worked on an application which used a custom thread pool implemented as a leftist-heap.
My dev team struggled to write a complex multithreaded app. It was plagued with race conditions, deadlocks, and lousy performance due to very fine-grained locking. We re-worked how the code shared state between threads, opting to write a very lightweight wrapper to facilitate message passing. Once we had converted our linked lists and hash tables to immutable stacks and immutable red-black trees, we had no more problems with thread safety or performance. The resulting code was immaculate and surprisingly readable.
Frequently, a business rules engine requires you to roll your own state machine, which is very naturally modelled as a graph where vertices are states and edges are transitions between states, as sketched below.
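Here is a minimal Java sketch of such a state machine; the states and events are invented for illustration. The map of maps is the graph: outer keys are states, inner keys are the edge labels.

    import java.util.HashMap;
    import java.util.Map;

    class OrderStateMachine {
        private final Map<String, Map<String, String>> edges = new HashMap<>();
        private String current = "NEW";

        OrderStateMachine() {
            addTransition("NEW", "approve", "APPROVED");
            addTransition("NEW", "reject", "REJECTED");
            addTransition("APPROVED", "ship", "SHIPPED");
        }

        private void addTransition(String from, String event, String to) {
            edges.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
        }

        void fire(String event) {
            String next = edges.getOrDefault(current, Map.of()).get(event);
            if (next == null) throw new IllegalStateException(current + " cannot handle " + event);
            current = next;
        }

        String state() { return current; }
    }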
If for no other reason, I'm glad I took the time to read about data structures and algorithms simply to be able to picture novel problems a little differently, especially combinatorial problems and graph problems. Graph theory is no longer a synonym for "scary".

Is OOP abused in universities? [closed]

I started college two years ago, and since then I keep hearing "design your classes first". I really ask myself sometimes: should my solution be a bunch of objects in the first place? Some say that you don't see its benefits because your codebase is very small - university projects. The project-size excuse just doesn't go down well with me. If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
I am not saying OOP is bad, I just feel it is abused in classrooms where students like me are told day and night that OOP is the right way.
IMHO, the proper answer shouldn't come from a professor, I prefer to hear it from real engineers in the field.
Is OOP the right approach always?
When is OOP the best approach?
When is OOP a bad approach?
This is a very general question. I am not asking for definite answers, just some real design experience from the field.
I don't care about performance. I am asking about design. I know it is engineering in real life.
==================================================================================
Thankful for all contributions. I chose Nosredna's answer because she addressed my questions in general and convinced me that I was wrong about the following:
If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.
Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.
"If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project."
That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.
Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.
But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.
That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects written by one or two students on homework time are not a realistic setting for large projects like this, so the examples feel (and are) toy examples.
Also, it is important to realize that not everyone really sees OOP the same way. Some see it about encapsulation, and make huge classes that are very complex, but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing and make a lot of small classes. Some seek an object model that closely mirrors real world abstractions that the program is trying to relate to, others see the object model as about how to organize the technical architecture of the problem, rather than the real world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs more maintainable over time.
OOP is the right approach when your data can be well structured into objects.
For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
Also in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce costs that might be nontrivial to compute.
In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!
Industry generally resolves, eventually, for the most part, to using the right tool for the job, and you can see OOP in many many places. Exceptions are often made for high-performance and low-level. Of course, there are no hard and fast rules.
You can hammer in a screw if you stick at it long enough...
My 5 cents:
OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can fully maintain a complete mental picture of at a time. Some of the more successful design paradigms in software engineering capitalize on the notion of dealing with complexity. Whether it's breaking your architecture into layers, your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. Those are all tools for taming the beast that is complexity.
OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade; they are trying to convey that there are patterns and tools available for you to use when you are attempting to model real world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but by far not the only one.
No...OOP is not always the best approach.
(A true) OOP design is the best approach when your problem can best be modeled as a set of objects that can accomplish your goals by communicating/using one another.
Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.
...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.
Is OOP the right approach always?
Nope.
When OOP is the best approach?
When it helps you.
When OOP is a bad approach?
When it impedes you.
That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.
I will say this though, when it comes to technique and best practices continue to double check what your professors tell you. Just because they're teachers doesn't mean they're experts.
It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior, and which maintains a number of invariants. Then I essentially just use that as yet another datatype to use with generic or functional programming.
Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.
OOP is useful --- when combined with other approaches.
But ultimately, every program is about doing, not about being. And OOP is about "being". About expressing that "this is a car. The car has 4 wheels. The car is green".
It's not interesting to model a car in your application. It's interesting to model the car doing stuff. Processes are what's interesting, and in a nutshell, they are what your program should be organized around. Individual classes are there to help you express what your processes should do (if you want to talk about car things, it's easier to have a car object than having to talk about all the individual components it is made up of, but the only reason you want to talk about the car at all is because of what is happening to it. The user is driving it, or selling it, or you are modelling what happens to it if someone hits it with a hammer).
So I prefer to think in terms of functions. Those functions might operate on objects, sure, but the functions are the ones my program is about. And they don't have to "belong" to any particular class.
Like most questions of this nature, the answer is "it depends."
Frederick P. Brooks said it best in "The Mythical Man-Month": "there is no single strategy, technique or trick that will exponentially raise the productivity of programmers." You wouldn't use a broadsword to make a surgical incision, and you wouldn't use a scalpel in a sword fight.
There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.
I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.
I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what are the best (maybe cheapest?) solutions for them.
OOP is not always the best approach. However it is the best approach in the majority of applications.
OOP is the best approach in any system that lends itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.
Learning OOA, OOD & OOP skills will benefit most programmers, so it is definitely useful for universities to teach it.
The relevance and history of OOP runs back to the Simula languages of the 1960s as a way to engineer software conceptually, where the developed code defines both the structure of the source and the general permissible interactions with it. Obvious advantages are that a well-defined and well-created object is self-justifying and consistently repeatable as well as reliable; ideally it is also able to be extended and overridden.
The only time I know of that OOP is a "bad approach" is during embedded system programming efforts where resource availability is restricted; of course, that's assuming your environment gives you access to it at all (as was already stated).
The title asks one question, and the post asks another. What do you want to know?
OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.
I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."
OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.
If I have to do serious planning, I'm going to use OOP. If not, I won't.
This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.
My opinion, freely offered, worth as much...
OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.
The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

What is Functional Decomposition? [closed]

Functional Decomposition, what is it useful for and what are its pros/cons? Where are there some worked examples of how it is used?
Functional Decomposition is the process of taking a complex process and breaking it down into its smaller, simpler parts.
For instance, think about using an ATM. You could decompose the process into:
Walk up to the ATM
Insert your bank card
Enter your pin
well...you get the point.
You can think of programming the same way. Think of the software running that ATM:
Code for reading the card
PIN verification
Transfer Processing
Each of which can be broken down further. Once you've reached the most decomposed pieces of a subsystem, you can think about how to start coding those pieces. You then compose those small parts into the greater whole. Check out this Wikipedia Article:
Decomposition (programming)
The benefit of functional decomposition is that once you start coding, you are working on the simplest components you can possibly work with for your application. Therefore developing and testing those components becomes much easier (not to mention you are better able to architect your code and project to fit your needs).
The obvious downside is the time investment. To perform functional decomposition on a complex system takes more than a trivial amount of time BEFORE coding begins.
Personally, I think that amount of time is well worth it.
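To tie this back to the ATM example, here is a loose Java sketch (every name is hypothetical) of how the decomposed pieces compose back into the whole, and how each small piece can be written and tested on its own:

    class AtmSession {
        void run() {
            String card = readCard();
            if (!verifyPin(card, promptForPin())) {
                return;   // each small piece below can be developed and tested in isolation
            }
            processTransaction(card);
        }

        private String readCard() { return "card-data"; }                     // stub
        private String promptForPin() { return "1234"; }                      // stub
        private boolean verifyPin(String card, String pin) { return true; }   // stub
        private void processTransaction(String card) { /* ... */ }
    }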
It's the same as WorkBreakDown Structures (WBS), mindMapping and top down development - basically breaking a large problem into smaller, more comprehensible sub-parts.
Pros
allows for a proactive approach to programming (resisting the urge to code)
helps identify the complex and/or risk areas of a project (in the ATM example, security is probably the more complex component)
helps identify ALL components of a project - the #1 cause of project/code failure (via Capers Jones) is missing pieces - things not thought of until late in the project (gee, I didn't realize I had to check the person's balance prior to handing out the $)
allows for decoupling of components for better programming, sharing of code and distribution of work
Cons - there are no real CONS in doing a decomposition, however there are some common mistakes
not breaking down far enough or breaking down too far - each person needs to determine the happy level of detail needed to provide them with insight into the component without overdoing it (don't break down into individual lines of code...)
not taking pre-existing patterns/code modules into consideration (rework)
not reviewing with clients to ensure the scope is correct
not using the breakdown when actually coding (like designing a house, then forgetting about the plan and just starting to nail some boards together)
Here's an example: your C compiler.
First there's the preprocessor: it handles #include and #define and all the macros. You give it a file name and some options and it returns a really long string. Let's call this function preprocess(filename).
Then there's the lexical analyzer. It takes a string and breaks it into tokens. Call it lex(string). The parser takes tokens and turns them into a tree, call it parse(tokens). Then there's a function for converting a tree to a DAG of blocks, call it dag(tree). Call the code emitter emit(dag), which takes a DAG of blocks and spits out assembler.
The compiler is then:
emit(dag(parse(lex(preprocess(filename)))));
We've decomposed a big, difficult to understand function (the compile function) into a bunch of smaller, easier to understand functions. You don't have to do it as a pipeline, you could write your program as:
process_data(parse_input(), parse_config())
This is more typical; compilers are fairly deep programs, most programs are broad by comparison.
Functional decomposition is a way of breaking down a complex problem into simpler problems based on the tasks that need to be performed rather than the data relationships. This term is usually associated with the older procedure-oriented design.
A short description about the difference between procedure-oriented and object-oriented design.
Functional decomposition is helpful prior to creating functional requirements documents. If you need software for something, functional decomposition answers the question "What are the functions this software must provide". Decomposing is needed to define fine-grain functions. "I need software for energy efficiency measurement" is too general. That's why we break this into smaller pieces until the point where we clearly understand all the functions the systems need to provide. This can be later used as a checklist for completeness of a system.
A functional requirements document (FD) is basically a textual representation of functional decomposition. Coding directly from the FD may be ok for procedural languages, but it is not good enough for object-oriented solutions, because it doesn't identify objects. Neither is good for usability planning and testing.
My opinion is that you should take some time to create a FD, but not spend too much of your time on it. Consult every person that knows the process you are following with your system to find all the functions needed.
I have a lot of experience in software design, development, and selling, and I use functional decomposition as the first step of development. I use it as a base for the contract, so the client knows what they will get and I know what I must provide.

What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear the googling of answers to problems being made the subject of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thoughts should preempt any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than debugging, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
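A small Java sketch of that "turn them into logging statements" suggestion, using the standard java.util.logging package (the class and method here are invented for illustration):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    class CheckoutService {
        private static final Logger LOG = Logger.getLogger(CheckoutService.class.getName());

        double applyDiscount(double total, double percent) {
            // Instead of System.out.println("total=" + total), log at a fine level;
            // it can be switched off in production without touching the code.
            LOG.log(Level.FINE, "applyDiscount called with total={0}, percent={1}",
                    new Object[]{total, percent});
            double result = total * (1 - percent / 100.0);
            LOG.log(Level.FINE, "applyDiscount returning {0}", result);
            return result;
        }
    }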
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs where half of those are just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
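To illustrate the point with a small, hypothetical Java example: the first class is barely better than public fields, since the representation leaks straight through the accessors, while the second exposes behaviour and keeps its invariant in one place.

    // Barely better than public fields: any caller can put the object into a bad state.
    class AccountData {
        private double balance;
        public double getBalance() { return balance; }
        public void setBalance(double balance) { this.balance = balance; }
    }

    // Encapsulated: callers work with behaviour, and the invariant
    // ("balance never goes negative") lives in one place.
    class Account {
        private double balance;

        public void deposit(double amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            balance += amount;
        }

        public void withdraw(double amount) {
            if (amount <= 0 || amount > balance) throw new IllegalArgumentException("invalid withdrawal");
            balance -= amount;
        }

        public double balance() { return balance; }
    }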
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why oh why don't you scream when you see free-formatted SQL or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
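For what it's worth, here is one possible shape of an answer, sketched in Java rather than the C# mentioned above (this is only an illustration, not the answer the interviewer had in mind): for an alternating series the error is bounded by the first omitted term, so you keep adding terms until they drop below the accuracy you need.

    class InterviewAnswers {
        // Leibniz series: 4 * (1 - 1/3 + 1/5 - 1/7 + ...). Stop once the next
        // term is smaller than the requested accuracy.
        static double estimatePi(double accuracy) {
            double sum = 0;
            double sign = 1;
            for (long k = 0; 4.0 / (2 * k + 1) >= accuracy; k++) {
                sum += sign * 4.0 / (2 * k + 1);
                sign = -sign;
            }
            return sum;
        }

        static double circleArea(double radius) {
            return Math.PI * radius * radius;
        }

        public static void main(String[] args) {
            System.out.printf("%.5f%n", estimatePi(0.000001));
            System.out.println(circleArea(2.0));
        }
    }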
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline sql (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control.. if there is a good reason for it. If I have an assembly I don't have the source for, and might not necessarily be in the same place on each devs machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.