I am quite new to programming, and what haunts me about it is not really the coding itself (well, at least not so far!) but some words and concepts that are really important to understand. My trouble is with the word "ABSTRACTION". I have already searched dictionaries and watched videos of people giving very clear explanations of the word. So I know that abstraction is when you take into consideration only the things that are important and leave out everything else (to put it in very simple and direct language). For instance, if you are going to change a light bulb, you do not need to know the manufacturer of the bulb or of the light socket, nor the materials used to manufacture them.
The problem is that when I read texts or listen to people using the word, it sometimes does not seem to fit that meaning, and then I start to wonder whether they misused the word (which I think is very unlikely), whether there is some other, more obscure meaning I have not found yet, or whether I simply fail to understand it. Below I have put excerpts from the articles I was reading, with the part where the word appears bolded and capitalized, so you have the context and can see where my problem is. Thank you.
"A paradigm programming provides and determines the view that the programmer has on the structuring and execution of the programme. For example, in object-oriented programming, programmers MAY ABSTRACT A PROGRAMME AS A COLLECTION OF OBJECTS that interact with each other, while in functional programming, programmers ABSTRACT THE PROGRAMME as a sequence of functions executed in a stacked fashion."
"A tuple space has the function of creating a SHARED MEMORY ABSTRACTION over a distributed system, where everyone can read and write to it."
It's easy to understand if you replace abstract/abstraction with one of its synonyms: conceptualize/conceptualization. In your two examples, "abstract a programme" means "think of a programme as..." or "conceptualize a programme as...". When we make an abstraction, we forget about some details and think about the thing in other terms.
Side advice from a fellow beginner:
As someone who started learning computer science independently less than a year ago, I can tell you right now that there will be lots of tricky terms like this. Try not to get too caught up in them. Oftentimes, if you just keep learning, you'll experience firsthand what these terms mean without even realizing it. Bits and pieces will add up. The takeaway: don't let what you don't know slow you down. Sometimes it's OK to keep going and just not know for a while.
These seem to fit the definition you put up earlier. For object-oriented programming, the mindset is to consider "objects" as the essential (important) aspect of a program and to abstract all other considerations away. The same goes for functional programming, where "functions" are the defining aspect and other considerations are abstracted away as secondary.
The tuple space may be a little trickier, but if you consider that variations in memory storage models are abstracted away in favour of a higher-level concept focusing on a collection of values, then you see what the abstraction relates to.
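To make the "shared memory abstraction" concrete, here is a minimal single-process sketch of a tuple space in Java (the class name, the varargs API, and the matching rule are my own inventions; real systems such as Linda or JavaSpaces spread the bag of tuples across machines, which is exactly the detail the abstraction hides from callers):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Toy tuple space: callers see one shared bag of tuples and communicate
    // only by writing, reading, and taking tuples that match a template.
    class TupleSpace {
        private final List<Object[]> tuples = new ArrayList<>();

        // Add a tuple to the space and wake up any waiting readers.
        public synchronized void write(Object... tuple) {
            tuples.add(tuple);
            notifyAll();
        }

        // Block until a tuple matches the template, then return a copy
        // without removing it. A null field in the template matches anything.
        public synchronized Object[] read(Object... template) throws InterruptedException {
            while (true) {
                for (Object[] t : tuples) {
                    if (matches(t, template)) return Arrays.copyOf(t, t.length);
                }
                wait();
            }
        }

        // Like read, but removes the matched tuple from the space.
        public synchronized Object[] take(Object... template) throws InterruptedException {
            while (true) {
                for (int i = 0; i < tuples.size(); i++) {
                    if (matches(tuples.get(i), template)) return tuples.remove(i);
                }
                wait();
            }
        }

        private static boolean matches(Object[] tuple, Object[] template) {
            if (tuple.length != template.length) return false;
            for (int i = 0; i < tuple.length; i++) {
                if (template[i] != null && !template[i].equals(tuple[i])) return false;
            }
            return true;
        }
    }

Every participant that holds a reference to the space can write("temp", 21) or take("temp", null) without knowing where the data physically lives; that ignorance is the abstraction.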
Abstract
adjective
existing in thought or as an idea but not having a physical or concrete existence.
relating to or denoting art that does not attempt to represent external reality, but rather seeks to achieve its effect using shapes, colours, and textures.
verb
consider (something) theoretically or separately from (something else).
extract or remove (something).
noun
a summary of the contents of a book, article, or speech.
an abstract work of art.
There you have your answer. Ask 100 people what an abstract painting is, and you will get at least 100 answers. Why should programmers behave differently?
Let's see what Oracle has to say about abstract classes:
Abstract classes are similar to interfaces. You cannot instantiate them, and they may contain a mix of methods declared with or without an implementation. However, with abstract classes, you can declare fields that are not static and final, and define public, protected, and private concrete methods.
Consider using abstract classes if any of these statements apply to your situation:
You want to share code among several closely related classes.
You expect that classes that extend your abstract class have many common methods or fields, or require access modifiers other than public (such as protected and private).
You want to declare non-static or non-final fields. This enables you to define methods that can access and modify the state of the object to which they belong.
Compare that with the definition of abstract in the above section. I think you get a pretty good idea of abstractness in computer programming.
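As a minimal Java sketch along the lines of that guidance (the Shape/Circle names are invented for illustration):

    // Shares code and state among closely related classes, per the Oracle
    // guidelines quoted above. Cannot be instantiated directly.
    abstract class Shape {
        private String name; // a non-static, non-final field

        protected Shape(String name) { // non-public access modifier
            this.name = name;
        }

        // Concrete method shared by every subclass.
        public String describe() {
            return name + " with area " + area();
        }

        // Declared without an implementation; each subclass must supply one.
        public abstract double area();
    }

    class Circle extends Shape {
        private final double radius;

        Circle(double radius) {
            super("circle");
            this.radius = radius;
        }

        @Override
        public double area() {
            return Math.PI * radius * radius;
        }
    }

The abstraction here is close to the dictionary sense: Shape exists in thought rather than as a concrete object, and only its subclasses can be instantiated.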
There is a lot of confusion about this, and I'd like to know what exactly the difference is between depreciated, deprecated, and obsolete, in a programming context but also in general.
I know I could just look in an online dictionary, and I have, in many of them, but they don't all agree, or they differ in what they say. So I decided to just ask here, since I also want an answer in a programming context.
If I understand right, deprecated means it shouldn't be used anymore, because it has been replaced by a better alternative or simply because it has been abandoned. Obsolete means it doesn't work anymore, was removed, or no longer works as it should. And depreciated, if I understand right once more, has nothing at all to do with programming and just means that something has lost value or been made worse.
Am I right, or am I wrong, and if I am wrong, what exactly do each of these mean?
You are correct.
Deprecated means that it is still in use, but only for historical purposes, and it will probably be removed in the next big release. It is recommended that you not use deprecated functions or features, even if they are still present in the current library, for example.
Obsolete means that it is already out of use.
Depreciated means the monetary value of something has decreased over time. E.g., cars typically depreciate in value.
Also for more precise definitions of the terms in the context of the English language I recommend using https://english.stackexchange.com/.
Records are obsolete, CDs are deprecated, and the music industry is depreciated.
In the context of describing APIs and such, "depreciated" is a misreading, misspelling, and mispronunciation of "deprecated".
I'm thinking people have just seen "depreciated" so often in other contexts, and "deprecated" so rarely, that they don't even register the "i" or lack thereof. It doesn't exactly help that their definitions are similar either.
Obsolete: should not be used any more
Deprecated: should be avoided in new code, and likely to become obsolete in a later version of the API
Depreciated: usually a typo for deprecated (depreciation is where the value of goods goes down over time, e.g. if you buy a new computer its resale value goes down month by month)
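In Java, for instance, the deprecated-but-still-working stage is visible in the language itself. A small sketch with invented names:

    class PricingApi {
        /**
         * @deprecated Use {@link #priceInCents()} instead; this method
         * loses precision and is likely to be removed in a future release.
         */
        @Deprecated
        double priceInDollars() {
            return priceInCents() / 100.0;
        }

        long priceInCents() {
            return 1999;
        }
    }

Calling priceInDollars() still works but draws a compiler warning; once the method is actually removed in a later release, it is obsolete and the call simply stops compiling.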
With all due respect, this is a slight pet peeve of mine and the selected answer for this is actually wrong.
Granted language evolves, e.g., "google" is now a verb, apparently. Through what's known as "common use", it has earned its way into official dictionaries. However, "google" was a new word representing something heretofore non-existent in our speech.
Common use does not cover blatantly changing the meaning of a word just because we didn't understand its definition in the first place, no matter how many people keep repeating it.
The entire English-speaking computer industry seems to use "deprecate" to mean that some feature is being phased out or is no longer relevant. Not bad, just not recommended, usually because there is a new and better replacement.
The actual definition of deprecate is to put down, or speak negatively about, or to express disapproval, or make fun of someone or something through degradation.
It comes from Latin de- (against) precari (to pray). To "pray against" to a 21st century person probably conjures up thoughts of warding off evil spirits or something, which is probably where the disconnect occurs with people. In fact, to pray or to pray for something meant to wish good upon, to speak about in a positive way. To pray against would be to speak ill of or to put down or denigrate. See this excerpt from the Oxford English Dictionary.
Express disapproval of:
(as adjective deprecating) he sniffed in a deprecating way
another term for depreciate (sense 2).
he deprecates the value of children’s television
What people generally mean to convey when using deprecate, in the IT industry anyway, and perhaps others, is that something has lost value. Something has lost relevance. Something has fallen out of favor. Not that it has no value; it is just not as valuable as before (probably due to being replaced by something new). We have two words that deal with this concept in English, and the first is "depreciate". See this excerpt from the Oxford English Dictionary.
Diminish in value over a period of time:
the pound is expected to depreciate against the dollar
Disparage or belittle (something):
Notice that definition 2 sounds like deprecate. So, ironically, deprecate can mean depreciate in some contexts, just not the one commonly used by IT folk.
Also, just because currency depreciation is a nice common use of the word depreciate, and therefore easy to cite as an example, doesn't mean it's the only context in which the word is relevant. It's just an example. ONE example.
The correct transitive verb for this is "obsolete". You obsolete something because its value has depreciated.
See this excerpt from the Oxford English Dictionary.
Verb - Cause something to be or become obsolete by replacing it with something new.
It bugs me, it just bugs me. I don't know why. Maybe because I see it everywhere. In every computer book I read, every lecture I attend, and on every technical site on the internet, someone invariably drops the d-bomb sooner or later. If this one ends up in the dictionary at some point, I will concede, but conclude that the gatekeepers of the English lexicon have become weak and have lost their way... or at the very least, lost their nerve. Even Wikipedia espouses this misuse, and indeed, defends it. I've already edited the page thrice, and they keep removing my edits.
Something is depreciated until it is obsolete. Deprecate, in the context of IT, makes no sense at all, unless you're putting down someone's performance or work or product or the fact that they still wear parachute pants.
Conclusion: the entire IT industry uses deprecate incorrectly. It may be common use. It may be some huge misunderstanding. But it is still, completely, wrong.
In computer software standards and documentation, the term deprecation is used to indicate discouragement of usage of a particular software feature, usually because it has been superseded by a newer/better version. The deprecated feature still works in the current version of the software, but it may raise error messages or warnings recommending an alternate practice.
The Obsolete attribute marks a program entity as one that is no longer recommended for use. Each use of an entity marked obsolete subsequently generates a warning or an error, since the entity is no longer in use or no longer exists.
EDIT:
depreciated: not sure how this relates to programming.
I wouldn't say obsolete means it doesn't work anymore. In my mind obsolete just means there are better alternatives. A thing becomes obsolete because of something else. Deprecated means you shouldn't use it, although there might not be any alternatives. A thing becomes deprecated because someone says it is -- it is prescriptive.
"Obsolete" means "has been replaced".
"Depreciated" means "has less value than its original value".
"Deprecated" means to expressly disprove of and was popularised due to misspellings in two technical articles where the authors used deprecated without an "i". One in 1999 and the other in 2002 referenced in the dictionaries as origins.
Prior to that time frame we were reading comments like // depreciated in API documentation including the MS MSDN.
The use of deprecated in the tech industry is therefore completely incorrect and evidence of how a technical writer can produce a bug that can live in a language and someone should finally put the bug to rest.
In some sports, certain techniques or elements are named after the athlete who invented or first performed them, for example the Biellmann spin.
Is there widespread use of such names for programming techniques and idioms? What are they? To be clear, I am explicitly not asking about algorithms, which are quite often named after their creators.
One example is the Schwartzian transform, but I can't recall any more.
The functional programming technique currying is named after its (re)-inventor, Haskell Curry.
Boolean logic is named after George Boole
Duff's device is pretty famous and seems to me to qualify as technique/idiom.
I used to do a "Carmack" which was referring to the "fast inverse square root" but according to the Wikipedia entry the technique was probably found by the smarties at SGI in 1990 or so.
Even if it doesn't fit your description, it's still a pretty amazing read :)
Kleene closure: it's the * operator in regular expressions. It means "0 or more of what precedes it".
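A quick illustration in Java (the pattern and strings are arbitrary):

    import java.util.regex.Pattern;

    public class KleeneDemo {
        public static void main(String[] args) {
            // "b*" is the Kleene closure of "b": zero or more repetitions.
            Pattern p = Pattern.compile("ab*c");
            System.out.println(p.matcher("ac").matches());    // true (zero b's)
            System.out.println(p.matcher("abbbc").matches()); // true (three b's)
            System.out.println(p.matcher("axc").matches());   // false
        }
    }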
At one point in time, the Karnaugh Map could have been considered a technique to facilitate programming (albeit at a low level).
Markov chains are named after Andrey Markov and are used in programming to generate the following (a toy sketch follows the list):
Google PageRank
Generating Spam-Mail Texts
Mnemonic Codewords to replace IDs/Hashvalues
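To give a flavour of the text-generation use, here is a toy first-order word chain in Java (the corpus and names are placeholders):

    import java.util.*;

    // Toy Markov text generator: the next word depends only on the current
    // word, with transitions learned from a training corpus.
    public class MarkovBabbler {
        public static void main(String[] args) {
            String corpus = "the cat sat on the mat the cat ate the rat";
            Map<String, List<String>> successors = new HashMap<>();
            String[] words = corpus.split(" ");
            for (int i = 0; i + 1 < words.length; i++) {
                successors.computeIfAbsent(words[i], k -> new ArrayList<>())
                          .add(words[i + 1]);
            }

            Random rng = new Random();
            String word = "the";
            StringBuilder out = new StringBuilder(word);
            for (int i = 0; i < 10; i++) {
                List<String> next = successors.get(word);
                if (next == null) break; // no observed successor: stop
                word = next.get(rng.nextInt(next.size()));
                out.append(' ').append(word);
            }
            System.out.println(out); // e.g. "the cat ate the mat the cat sat on the rat"
        }
    }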
The graphics world is full of eponymous techniques:
Bresenham's line algorithm
Bézier curve
Gouraud shading
Phong shading
Fisher-Yates shuffle, the standard way to implement an in-place random shuffle on an array.
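A sketch in Java; as far as I know, java.util.Collections.shuffle uses the same algorithm internally:

    import java.util.Arrays;
    import java.util.Random;

    public class Shuffle {
        // Fisher-Yates (Durstenfeld variant): walk the array from the end,
        // swapping each element with a uniformly chosen element at or before
        // it. Runs in O(n) and yields every permutation with equal
        // probability, assuming an unbiased random source.
        static void shuffle(int[] a, Random rng) {
            for (int i = a.length - 1; i > 0; i--) {
                int j = rng.nextInt(i + 1); // 0 <= j <= i
                int tmp = a[i];
                a[i] = a[j];
                a[j] = tmp;
            }
        }

        public static void main(String[] args) {
            int[] a = {1, 2, 3, 4, 5};
            shuffle(a, new Random());
            System.out.println(Arrays.toString(a));
        }
    }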
Please edit to add more if found...
In Standard ML and other functional programming languages which use tuple and record literals, I sometimes see literals written thus:
( first
, second
, third
)
or
{ name = "Atwood"
, age = 37
, position = "founder"
, reports_to = NONE
}
This highly idiomatic layout, as opposed to layout where the commas or semicolons appear at the end of the line, is something that I have always heard referred to as MacQueen style, after Dave MacQueen (formerly of Bell Labs, now at the University of Chicago).
K&R (Kernighan and Ritchie) and Allman indentation styles.
I think Timsort would qualify. It's used in Python and OpenJDK 7.
How about anything related to Bayes: Bayesian filtering, Bayesian inference, Bayesian classification. While rooted in statistics, these techniques have found their ways into plenty of programming-related applications.
Carmack's Reverse:
Depth fail
Around 2000, several people discovered that Heidmann's method can be made to work for all camera positions by reversing the depth. Instead of counting the shadow surfaces in front of the object's surface, the surfaces behind it can be counted just as easily, with the same end result. This solves the problem of the eye being in shadow, since shadow volumes between the eye and the object are not counted, but introduces the condition that the rear end of the shadow volume must be capped, or shadows will end up missing where the volume points backward to infinity.
Disable writes to the depth and colour buffers.
Use front-face culling.
Set the stencil operation to increment on depth fail (only count shadows behind the object).
Render the shadow volumes.
Use back-face culling.
Set the stencil operation to decrement on depth fail.
Render the shadow volumes.
The depth fail method has the same considerations regarding the stencil buffer's precision as the depth pass method. Also, similar to depth pass, it is sometimes referred to as the z-fail method.
William Bilodeau and Michael Songy discovered this technique in October 1998 and presented it at Creativity, a Creative Labs developer's conference, in 1999 [1]. Sim Dietrich presented the technique at a Creative Labs developer's forum in 1999 [2]. A few months later that same year, Bilodeau and Songy filed a US patent application for the technique, US patent 6384822, entitled "Method for rendering shadows using a shadow volume and a stencil buffer", issued in 2002. John Carmack of id Software independently discovered the algorithm in 2000 during the development of Doom 3 [3]. Since he advertised the technique to the larger public, it is often known as Carmack's Reverse.
ADL (Argument Dependent Lookup) is also known as Koenig lookup (after Andrew Koenig, although I don't think he appreciates it, as it didn't turn out the way he originally planned).
Exception guarantees are often called Abrahams guarantees (Dave Abrahams); see http://en.wikipedia.org/wiki/Abrahams_guarantees
Liskov substitution principle (http://en.wikipedia.org/wiki/Liskov_substitution_principle) - Barbara Liskov
I am shocked that no one has mentioned Backus–Naur Form (BNF), named after John Backus and Peter Naur.
The method of constructing programs by computing weakest preconditions, as expounded in Edsger Dijkstra's book A Discipline of Programming, is usually referred to as Dijkstra's Method. It's more of a programming methodology than a technique, but it might qualify.
Several hard-to-fix or unusual kinds of software bugs have been named after famous scientists. The heisenbug could be the best-known example.
Boyer-Moore string search algorithm: it can find a string inside a string of length N with fewer than N operations.
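The sublinear behaviour comes from the bad-character rule: on a mismatch, the pattern can skip several positions at once. Here is a sketch of the simplified Boyer-Moore-Horspool variant, which keeps only that rule (full Boyer-Moore adds a second, "good suffix" rule):

    public class Horspool {
        // Returns the index of the first occurrence of needle in haystack,
        // or -1. The shift table covers bytes only, so this sketch assumes
        // ASCII-ish text.
        static int indexOf(String haystack, String needle) {
            int n = haystack.length(), m = needle.length();
            if (m == 0) return 0;

            // For each character, how far the window may shift when that
            // character sits under the pattern's last position.
            int[] shift = new int[256];
            java.util.Arrays.fill(shift, m);
            for (int i = 0; i < m - 1; i++) {
                shift[needle.charAt(i) & 0xFF] = m - 1 - i;
            }

            int pos = 0;
            while (pos <= n - m) {
                int i = m - 1; // compare from the end of the window
                while (i >= 0 && haystack.charAt(pos + i) == needle.charAt(i)) i--;
                if (i < 0) return pos; // full match
                pos += shift[haystack.charAt(pos + m - 1) & 0xFF];
            }
            return -1;
        }

        public static void main(String[] args) {
            System.out.println(indexOf("here is a simple example", "example")); // 17
        }
    }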
Seriously shocked to see that no one has yet mentioned Hindley-Milner type inference.
In C++, the Barton-Nackman trick.
The BWT (Burrows-Wheeler transform) is pretty important in data compression.
Jensen's Device
How about Ada, named after Ada Lovelace, the first computer programmer?
Perhaps Hungarian notation might qualify? It was invented by Charles Simonyi (who was Hungarian).
In C++, the Schwartz counter (aka Nifty Counter) idiom is used to prevent multiple, static initialization of shared resources. It's named after Jerry Schwartz, original creator of the C++ iostreams at AT&T.
I know the rule of thumb is that a noun used by the user is potentially a class. Similarly, a verb may be made into an action class, e.g. a predicate.
Given a description from the user, how do you identify what is not to be made into a class?
The only real answer is experience. However, some things fairly obviously (to me, anyway) should not be modelled in your design. For example, if the use case says:
"and then the parcel is put on the UPS van"
There is no need to model the van. You can make decisions of this kind by considering the system boundaries - you don't and can't control the van. However,
"we make a request to UPS for pickup"
might well result in a UPSPickup object.
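Hypothetically, that second use case might yield something like this (all names invented):

    // The van stays outside the system boundary; we model only our own
    // interaction with UPS, not UPS's internals.
    class UPSPickup {
        private final Parcel parcel;
        private PickupStatus status = PickupStatus.REQUESTED;

        UPSPickup(Parcel parcel) {
            this.parcel = parcel;
        }

        void confirm() { status = PickupStatus.CONFIRMED; }

        Parcel parcel() { return parcel; }

        PickupStatus status() { return status; }
    }

    enum PickupStatus { REQUESTED, CONFIRMED, COLLECTED }

    class Parcel { /* weight, dimensions, destination, ... */ }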
The rules are simple.
Everything is an object.
All objects belong to classes.
In rare (very rare) circumstances you have some specialized class/object confusion:
A "library" of all static methods. This is an implementation choice, and no user can see this.
A Singleton where there can only be one object of the given class. This does happen sometimes.
In an OO language it is not a question of what should be made into a class, but rather 'what class does this data/functionality go into?'
Like other software architecture aspects there are rules, but ultimately it is an art that requires experience. There are lots of books on software design, but a simple reference is Coupling and Cohesion.
Cohesion of a single module/component is the degree to which its responsibilities form a meaningful unit; higher cohesion is better.
Coupling between modules/components is their degree of mutual interdependence; lower coupling is better.
Along the same lines as database normalization: is there an approach to object normalization? Not a design pattern, but a similar mathematics-like approach to normalizing object creation. For example, first normal form: no repeating fields...
Here are some links to DB normalization:
http://en.wikipedia.org/wiki/Database_normalization
http://databases.about.com/od/specificproducts/a/normalization.htm
Would this make object creation and self-documentation better?
Here's a link to a book about class normalization (I guess we're really talking about classes):
http://www.agiledata.org/essays/classNormalization.html
Normalization has a mathematical foundation in predicate logic, and a clear and specific goal: that the same piece of information never be represented twice in a single model. The purpose of this goal is to eliminate the possibility of inconsistent information in a data model. It can be shown via mathematical proof that if a data model has certain specific properties (it passes the tests for first normal form (1NF), 2NF, 3NF, etc.), then it is free from redundant data representation, i.e. it is normalized.
Object orientation has no such underlying mathematical basis, and indeed, no clear and specific goal. It is simply a design idea for introducing more abstraction. The DRY principle, Command-Query Separation, Liskov Substitution Principle, Open-Closed Principle, Tell-Don't-Ask, Dependency Inversion Principle, and other heuristics for improving quality of code (many of which apply to code in general, not just object oriented programs) are not absolute in nature; they are guidelines that programmers have found useful in improving understandability, maintainability, and testability of their code.
With a relational data model, you can say with absolute certainty whether it is "normalized" or not, because it must pass ALL the tests for normal form, and they are quite specific. With an object model, on the other hand, because the goal of "understandable, maintainable, testable, etc" is rather vague, you cannot say with any certainty whether you have met that goal. With many of the design heuristics, you cannot even say for sure whether you have followed them. Have you followed the DRY principle if you're applying patterns to your design? Surely repeated use of a pattern isn't DRY? Furthermore, some of these heuristics or principles aren't always even necessarily good advice all the time. I do try to follow Command-Query Separation, but such useful things as a Stack or a Queue violate that concept in order to give us a rather elegant and useful result.
I guess the Single Responsibility Principle is at least related to this. Or at least, violation of the SRP is similar to a lack of normalization in some ways.
(It's possible I'm talking rubbish. I'm pretty tired.)
Interesting.
You may also be interested in looking at the Law of Demeter.
Another thing you may be interested in is c2's FearOfAddingClasses, as, arguably, the same reasoning that led programmers to denormalise databases also leads to god classes and other code smells. For both OO and DB normalisation, we want to decompose everything. For databases this means more tables; for OO, more classes.
Now, it is worth bearing in mind the object relational impedance mismatch, that is, probably not everything will translate cleanly.
Object-relational models, or 'persistence layers', usually have one-to-one mappings between object attributes and database fields. So, can we normalise? Say we have a department object with employee1, employee2, ... attributes. Obviously that should be replaced with a list of employees. So we can say 1NF works.
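In code, that 1NF step might look like this (names are illustrative):

    import java.util.List;

    // Repeating-group version: a fixed set of numbered fields,
    // the object analogue of a 1NF violation.
    class DepartmentUnnormalized {
        Employee employee1;
        Employee employee2;
        Employee employee3; // ...and what if there is a fourth?
    }

    // "1NF" version: the repeating group becomes a collection.
    class Department {
        List<Employee> employees;
    }

    class Employee {
        String name;
    }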
With that in mind, let's go straight for the kill and look at 6NF database design, a good example is Anchor Modeling, (ignore the naming convention). Anchor Modeling/6NF provides highly decomposed and flexible database schemas; how does this translate to OO 'normalisation'?
Anchor Modeling has these kinds of relationships:
Anchors - unique object IDs.
Attributes - object attributes, which translate to: (Anchor, value, metadata).
Ties - relationships between two or more objects (themselves anchors): (Anchor, Anchor, ..., metadata).
Knots - attributed Ties.
Attribute metadata can be anything - who changed an attribute, when, why, etc.
The OO translation of this looks extremely flexible (a rough sketch follows the list):
Anchors suggest attribute-less placeholders, like a proxy which knows how to deal with the attribute composition.
Attributes suggest classes representing attributes and what they belong to. This suggests applying reuse to how attributes are looked up and dealt with, e.g. automatic constraint checking, etc. From this we have a basis to generically implement the GoF-style structural patterns.
Ties and Knots suggest classes representing relationships between objects. A basis for generic implementation of the Behavioural design patterns?
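A very rough sketch of that translation (all names are invented; a real design would add generic lookup and constraint checking on top):

    import java.time.Instant;
    import java.util.List;

    // Anchor: identity only, no attributes of its own.
    class Anchor {
        final long id;
        Anchor(long id) { this.id = id; }
    }

    // Attribute: knows its owner, not the other way around.
    class Attribute<T> {
        final Anchor owner;
        final T value;
        final Metadata meta; // who changed it, when, why
        Attribute(Anchor owner, T value, Metadata meta) {
            this.owner = owner; this.value = value; this.meta = meta;
        }
    }

    // Tie: a relationship between two or more anchors.
    class Tie {
        final List<Anchor> members;
        final Metadata meta;
        Tie(List<Anchor> members, Metadata meta) {
            this.members = members; this.meta = meta;
        }
    }

    class Metadata {
        final String changedBy;
        final Instant changedAt;
        Metadata(String changedBy, Instant changedAt) {
            this.changedBy = changedBy; this.changedAt = changedAt;
        }
    }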
Interesting and desirable properties of Anchor Modeling that also translate across are:
All this requires replacing inheritance with composition (good) in the exposed objects.
Attributes have owners rather than owners having attributes. Although this makes attribute lookup more complex, it neatly solves certain aliasing problems, as there can only ever be one owner.
No need for NULL. This translates to clearer NULL handling. Empty-case attribute classes could provide methods for handling the lack of a particular attribute, instead of performing NULL-checking everywhere.
Attribute metadata. Attribute-level full historisation and blaming: 'play' objects back in time, see what changed, when and why, etc. (if required - metadata is entirely optional)
There would probably be a lot of very simple classes (which is good), and a very declarative programming style (also good).
Thanks for such a thought-provoking question; I hope this is useful for you.
Perhaps you're taking this from a relational point of view, but I would posit that the principles of interfaces and inheritance correspond to normalization in the world of OOP.
For example, a Person abstract class containing FirstName, LastName, Gender and BirthDate can be used by classes such as Employee, User, Member etc. as a valid base class, without a need to repeat the definitions of those attributes in such subclasses.
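A quick sketch of that in Java (the field types are my guesses):

    import java.time.LocalDate;

    // The shared attributes live in exactly one place, much as
    // normalization puts each fact in exactly one table.
    abstract class Person {
        String firstName;
        String lastName;
        String gender;
        LocalDate birthDate;
    }

    class Employee extends Person { String employeeId; }
    class User extends Person { String login; }
    class Member extends Person { LocalDate memberSince; }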
The principle of DRY, (a core principle of Andy Hunt and Dave Thomas's book The Pragmatic Programmer), and the constant emphasis of object-oriented programming on re-use, also correspond to the efficiencies offered by Normalization in relational databases.
At first glance, I'd say that the objectives of Code Refactoring are similar in an abstract way to the objectives of normalization. But that's pretty abstract.
Update: I almost wrote earlier that "we need to get Jon Skeet in on this one." I posted my answer and who beat me? You guessed it...
Object Role Modeling (not to be confused with Object Relational Mapping) is the closest thing I know of to normalization for objects. It doesn't have as mathematical a foundation as normalization, but it's a start.
In a fairly ad hoc and untutored fashion that will probably cause purists to scoff, and perhaps rightly so, I think of a database table as a set of objects of a particular type, and vice versa. Then I take my thoughts from there. Viewed this way, it doesn't seem to me that there's anything particularly special you have to do to use normal form in your everyday programming. Each object's identity will do for starters as its primary key, and references (pointers, etc.) will do by way of foreign keys. Then just follow the same rules.
(My objects usually end up in 3NF, or some approximation thereof. I treat this all more as guidelines, and, like I said, "untutored".)
If the rules are followed properly, each bit of information then ends up in one place, the interrelationships are clear, and everything is structured such that introducing inconsistencies takes some work. One could say that this approach produces good results on this basis, and I would agree.
One downside is that the end result can feel a bit like a tangle of spaghetti, particularly after some time away, and it's hard to shake the constant lingering sensation, even though it's usually false, that surely a few of all these links could be removed...
Object oriented design is rational but it does not have the same mathematically well-defined basis as the Relational Model. There is nothing exactly equivalent to the well-defined normal forms of database design.
Whether this is a strength or a weakness of Object oriented design is a matter of interpretation.
I second the SRP. The Open-Closed Principle applies to "normalization" as well, although I might be stretching the meaning of the word: it should be possible to extend the system by adding new implementations without modifying the existing code. See Object Mentor's article on the OCP.
Good question; sorry I can't answer in depth.
I've been working on object normalization off and on for over 20 years. It's deep and complicated and beautiful, and is the subject of my second planned book, Object Mechanics II. ONF = Object Normal Form, you heard it here first! ;-)
Since potentially patentable technology lurks within, I am not at liberty to say more, except that normalizing the data is the really easy part ;-)
ADDENDUM: change of plans; see https://softwareengineering.stackexchange.com/questions/84598/object-oriented-normalization