Possible to create a problem set for Solidity? - ethereum

This question is for all those that are decently familiar with the Solidity programming language (used for creating smart contracts on Ethereum & other blockchains).
I was wondering whether it was possible to create a problem set for Solidity, specifically. From what I've observed, the only "equivalent" that exists out there are exercises that ask someone to identify a vulnerability / weakness in a smart contract or to create code capable of exploiting a given contract.
However, I'm looking for something that more closely resembles the problems classically created for other programming languages, like "Write a Python/C++ program to check whether a given number is even or not" (I won't provide an answer, for brevity's sake). I'm also wondering whether creating a functional equivalent for Solidity is even practical (or possible).
I'm not knowledgeable enough about the field of programming to know whether any other languages are used in the same way Solidity is. What I mean is that Solidity isn't like other programming languages that are built for general purposes. For example, that example programming problem could not be retrofitted to ask, "Write a Solidity program to check whether the given number is even or not" (if you disagree, however, please correct me on this, because that would profoundly change the trajectory of my current research).
I stated this assumption to help provide context for why I'm making this request in the first place.
I tried scouring the internet for "solidity problem sets" and each time I thought I found a potential solution, I only came across exercises that require you to find a vulnerability / exploit or CTF-based challenges, which essentially address the same issue. Frankly, I haven't even seen another conversation online where someone is either looking for or making this same request. If I'm wrong on that, please feel free to correct me.


The Implications of Modern Day Software Development Abstractions [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I am currently doing a dissertation about the implications and dangers that today's software development practices and teachings may have on the long-term development of programming.
Just to make it clear: I am not attacking the use of abstractions in programming. Every programmer knows that abstractions are the basis for modularity.
What I want to investigate with this dissertation are the positive and negative effects abstractions can have on software development. As regards the positive, I am sure I can find many sources to confirm them. But what about the negative effects of abstractions? Do you have any stories to share about times when certain abstractions failed on you?
The main concern is that many programmers today program against abstractions without having the faintest idea of what the abstraction is doing under the covers. This may very well lead to bugs and bad design. So, in your opinion, how important is it that programmers actually know what is going on below the abstractions?
Taking a simple example from Joel's Back to Basics, C's strcat:
void strcat( char* dest, char* src )
{
    while (*dest) dest++;       /* scan dest to find its null terminator */
    while (*dest++ = *src++);   /* copy src, including its terminator */
}
The above function has the issue that, when you are doing repeated string concatenation, it always scans from the beginning of the dest pointer to find the null terminator. If you instead write the function as follows, it returns a pointer to the end of the concatenated string, which you can then pass as the dest parameter of the next concatenation call:
char* mystrcat( char* dest, char* src )
{
    while (*dest) dest++;       /* scan dest to find its null terminator */
    while (*dest++ = *src++);   /* copy src, including its terminator */
    return --dest;              /* point at the new terminator, ready for the next call */
}
Now, this is obviously a very simple example as abstractions go, but it is the same concept I shall be investigating.
Finally, what do you think about the fact that schools prefer to teach Java instead of C and Lisp?
Can you please give your opinions and views on this subject?
Thank you for your time and I appreciate every comment.
First of all, abstractions are inevitable because they help us to deal with the mind-blowing complexity of things.
Abstractions are also inevitable because individuals are increasingly required to undertake more tasks, or even complete projects, alone. To cope, one uses libraries which wrap lower-level concepts and expose more complex behavior.
Naturally, a developer has less and less time to know the internals of things. The latest concern I heard about on SO pages is people starting to learn JavaScript with the jQuery library, ignoring raw JavaScript altogether.
The issue is about the balance between:
Knowing the tiniest details of some technology and being a master of it, but at the same time being unable to work with anything else.
Superficial knowledge of a wide variety of technologies and tools, which nevertheless proves sufficient for common everyday tasks and allows an individual to perform in multiple areas, possibly covering all sides of some (moderately big) project.
Take your pick.
Some work requires the one, another position requires the other.
So, in your opinion, how important is it that programmers actually know what is going on below the abstractions?
It would be nice if people knew what is happening behind the scenes. This knowledge comes with time and practice, up to a certain degree. Depends on what kind of tasks you have. You certainly shouldn't blame people for not knowing everything. If you wish a person to be able to perform in a variety of fields, it is inevitable he won't have time to cover each up to the last bit.
What is essential is knowledge of the basic building blocks: data structures, algorithms, complexity. That should provide a basis for everything else.
Knowing the tiniest details of some particular technology is good, but not essential. Anyway, you can't learn them all; there are too many, and they keep coming.
Finally, what do you think about the fact that schools prefer to teach Java instead of C and Lisp?
Schools shouldn't be teaching programming languages at all. They're there to teach the basics of theoretical and practical CS, social skills, communication, and teamwork; to cover a vast variety of topics and problems and provide a wide-angle view for their graduates. This will help them find their way. Whatever they need to know in detail, they'll learn on their own.
An example where abstraction has failed:
In this case, a piece of software was needed to communicate to many different third party data processors. The communication was done through various messaging protocols; the transport method/protocol is not important in this case. Just assume everyone communicated through messaging.
The idea was to abstract the features of each of these third parties into a single, unified message format. It seemed relatively straightforward because each of the third parties performed a similar service. The problem was that some third parties used different terms to explain similar features. It was also found that some third parties had additional features that other third parties did not have.
The designers of the abstraction did not see past the differences in third-party terms, nor did they think it reasonable to limit the scope of the unified features to only the features the third parties had in common. Instead, a single, monolithic message schema was developed to support any and all features of the third parties considered at the time. In what was probably considered a future-proofing move, they added a means of passing an unbounded number of name/value pairs along with the monolithic message, in case there were future data elements the monolithic message could not handle.
Early on, it became clear that changing the monolithic message was going to be difficult, because so many people were using it in mission-critical systems. The use of the name/value pairs increased. Each name that could be used was documented in a large spreadsheet, and developers were required to consult the spreadsheet to avoid duplicating the purpose of an existing name. The list got so large, however, that collisions between the purposes of name values became frequent.
The majority of the monolithic message's fields now have no purpose and are kept mainly for backwards compatibility. There are name values that can be used to replace fields in the monolithic message. The majority of the interfacing is now done through the name/value pairs. In cases where the client intends to communicate with more than one third party, each client needs to reconcile the name values available for each third party. It would almost be simpler to interface directly with the third parties themselves.
I believe this illustrates that, from the perspective of a consumer of the monolithic message, it is important that developers of the consuming code should not need to know what is happening under the covers. If the designers had considered that the consumers of the monolithic message should not have to understand the abstraction in great detail, the monolithic message and its associated name/value pairs might never have happened. Documenting the abstraction with assertions regarding input and expected output would make life so much simpler.
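To picture the design being described, here is a tiny sketch in Java; it is entirely my own reconstruction for illustration, not the actual schema:

import java.util.HashMap;
import java.util.Map;

// One schema tries to carry every third party's features at once.
class MonolithicMessage {
    String partnerName;   // one third party's term
    String providerId;    // another third party's term for a similar concept
    String legacyField;   // kept only for backwards compatibility
    // ...dozens more fields, most now unused...

    // The open-ended escape hatch that gradually replaced the fields:
    Map<String, String> nameValuePairs = new HashMap<>();
}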
As for colleges not teaching C and Lisp... they are cheating the students. You get a better understanding of what is going on with the machine and the OS with C. You get a somewhat different perspective on processing data and approaching problems with Lisp. I have used some of the ideas I learned using Lisp in programs written in C, C++, .NET, and Java. Learning Java after knowing even just C is not very difficult. The OO part is really not programming-language specific, so perhaps using Java for that is acceptable.
An understanding of fundamentals of algorithms (e.g. time complexity) and some knowledge about the metal is essential to designing/writing smells-good code.
I would suggest, though, that just as important is education in modern abstractions and profiling. I feel that modern abstractions make me so much more productive than I would be without them that they are at least as important as good fundamentals, if not more so.
An important element that lacked in my education was the use of profilers. When used routinely and correctly, profilers can help mitigate problems with poor fundamentals.
Since you quote Joel Spolsky, I take it you're aware of his "Law of Leaky Abstractions"? I'll mention it for future readers. http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Green & Blackwell's Ironies of Abstractions talks a bit about the effort of learning the abstraction. http://homepage.ntlworld.com/greenery/workStuff/Papers/index.html
The term "astronaut architecture" is a reaction to over-abstraction.
I know I certainly curse abstraction when I haven't touched Java or C# in a while and I want to write to a file, but have to instantiate a Stream...Writer...Adaptor...Handler...
Also, patterns, as in Gang of Four. They seemed great when I first read about them in the mid-90s, but I can never remember factory, facade, interface, helper, worker, flyweight...

What are some authoritative/respected "best known implementation" websites/resources?

EDIT:
Wow, the initial response to this question was quite negative. I think I might have triggered some pretty strong emotions by using the word "best"; it seems like a few people latched onto that word and decided to dismiss my question right away.
Obviously, there are many, many situations in which no single approach is "best", or at least, what ends up being the best solution to one problem will often not be the best solution for other, even similar, problems. I get that. But now let me try to elaborate on the reasoning behind what I'm actually asking.
I tend to find it easiest to explain myself using analogies, so here goes. In my current job I work almost exclusively in .NET. .NET has a lot of functionality built into the framework. A prime example is the System.Collections.Generic namespace, which has a bunch of collection classes that (almost) no .NET developer in his/her right mind would bother re-developing from scratch, because very good implementations are already there. If I am working on a problem that requires a doubly linked list, I'm not going to decide, "Okay, time to write a doubly linked list class"; I'm just going to use the LinkedList<T> that's already there, or, at most, extend it or wrap it with my own class that adds some extra functionality.
Am I saying the "best" version of a doubly linked list is LinkedList<T> from .NET? Of course not. That would be absurd. But I highly doubt .NET's implementation of LinkedList<T> is drastically different from most other established libraries' implementations of collections that are intended to serve the same purpose (that of a doubly linked list). On the other hand, I am relatively confident that if I were to write my own implementation from scratch, there'd be a considerable number of issues with it, in terms of robustness, performance, flexibility, etc. for one simple reason: not that I'm stupid, or lazy, or don't care about good code--simply that I'm one person, and I'm not an expert on linked lists, and I haven't thought of everything that needs to be taken into consideration when designing one.
But I happen to be a developer who does take an interest in how things are implemented internally. And so it would be nice if I could check out a page where some variant of a well thought-out design for a linked list--or for any fairly established concept for which robust, efficient implementations have been written--were available to view. (By the way, yes I am aware that the source code for .NET's LinkedList<T> is available. I'm just using that as an example; really I am talking about all problems with solutions for which good, working implementations exist.)
Now, I talked about this being something that is open; let me elaborate on that. I am not talking about sites like SourceForge.net, or CodePlex, or Google Code. These are all sites for hosting projects, i.e., applications or libraries tailored for some specific industry or field or otherwise categorizable purpose. What I'm talking about is something like this:
http://en.wikibooks.org/wiki/Category:Algorithms_and_data_structures
Maybe I should have just provided that link in the first place, as it probably illustrates what I'm getting at better than anything I've written so far. But I think the main point that differentiates what I'm asking about from any other site I've seen is that I was specifically wondering if there could be some way to work on a new problem--so, something for which there aren't necessarily any well-known, established implementations, again as in my linked list example--collaboratively, in a wiki-esque fashion, but not tied to any specific open-source project.
So, as a conclusion of sorts, I was kind of envisioning a situation like the following: I find myself faced with a new problem. Maybe it isn't common enough to be something that is addressed in a framework like .NET. But it's common enough that some developers here and there are independently working on it. If a website exists like what I'm imagining, maybe at some point one of those developers working on the problem could post an idea on that website, and over time others might discover it and suggest improvements/modifications, and given enough time and participation, a pretty darn good implementation might result from all this collaboration. And from there, eventually, something like this implementation might be considered fairly "standard", just like a linked list implementation, or a quicksort implementation, or, I don't know, some well-known pseudo-random number generator.
Does this make any more sense to anyone now? I feel quite confident that what I'm talking about is not absurd, but hey, if that's what people think, then maybe it is.
Open source projects are very popular. Some of these are libraries suited for specific purposes, the best of which include some very well-written code.
However, if you're interested in contributing to an open source project, finding a project that is well-suited to your skills can be quite a task. At the same time, if you're interested in using an open source project in your own work, finding a project that is well-suited to your needs can also be difficult, especially when, for example, open-source library X has a lot of functionality you could use, as does library Y, and these two libraries' capabilities overlap so that integrating both into your code could be messy.
We've all seen questions, here on Stack Overflow and elsewhere on the web, posted by one developer: "How would I implement this idea?" and answered by others, often accompanied by a plethora of example code. Sometimes these answers link to an open source project/library that provides functionality similar to what the poster is asking about.
My question is: are there any well-known websites or other sources that are open in nature and provide "best-known implementations" for common (or even not-so-common) programming problems, but not associated with any particular open source project?
As a generic example, suppose I have a need for some algorithm that does X. I post a question on SO or some other site requesting ideas, asking for suggestions on how best to implement it. One person points me to project P1, which contains some code that performs something very similar to this algorithm. Another person points me to project P2. Someone else writes some sample code and says, "maybe you could do it like this."
It seems to me, if there are all these different versions of this idea floating around out in the world, it would make sense for there to be a site, somewhat in the vein of Wikipedia, where a quasi-"official" implementation ("official" is not the right word; I'm just having trouble thinking of a better one right now) could be published and modified as improvements are developed/discovered.
I feel like I have stumbled across a few different sites like this in the past, but I'm interested to know if anyone else has found any resources like what I'm describing.
The very idea is absurd. It means that there's one, single opinion on "best-known implementations" with no changes based on other people having better ideas.
It implies that best practices are static and can be accumulated into a single repository.
If they could be collected, then Google would have them and would simply charge for access.
Interestingly, they don't have all the best practices. Interestingly, they have to expend mountains of computing power looking for more information. Then people (like you) have to read and think and judge and decide.
The read-think-judge-decide is really hard to eliminate. Unless, of course, you want someone to think for you. In which case, there are many companies who have a single solution that requires less thinking. Call Microsoft or Oracle or IBM. They have solutions that are all in one place, unified best practices, no reading, no thinking, no judging, no deciding required.
Open -- by definition -- means it's impossible to have a single authoritative source.
Here is something; maybe not the best implementations, but the book Design Patterns contains what many programmers consider some of the best patterns to follow!

What are the best practices for Design by Contract programming

What are the best practices for Design by Contract programming?
At college I learned the design-by-contract paradigm (in an OO environment).
We learned three ways to tackle the problem:
1) Total Programming: covers all possible exceptional cases in its effect (cf. mathematics)
2) Nominal Programming: only 'promises' the right effects when the preconditions are met (otherwise the effect is undefined)
3) Defensive Programming: uses exceptions to signal illegal invocations of methods
Now, we focused in different OO scenarios on the correct use in each situation, but we never learned WHEN to use WHICH...
(Mostly the tactics were enforced by the exercise...)
Now I think it's very, very strange that I never asked my teacher (but then again, during lectures, no one did).
Personally, I never use nominal now, and I tend to replace preconditions with exceptions (so I'd rather use "throws IllegalDivisionByZero" than state "precondition: divisor should differ from zero"), and I only program totally where it makes sense (so I wouldn't return a conventional value on division by zero). But this method is just based on personal findings and likes.
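To make the distinction concrete, here is a minimal Java sketch (my own example, not from any course material):

final class Division {
    // Nominal style: the effect is only promised when the precondition holds.
    // Precondition: divisor != 0 (behaviour undefined otherwise).
    static int divideNominal(int dividend, int divisor) {
        return dividend / divisor; // the caller is trusted to respect the precondition
    }

    // Defensive style: the method itself signals illegal invocations.
    static int divideDefensive(int dividend, int divisor) {
        if (divisor == 0) {
            throw new ArithmeticException("divisor must differ from zero");
        }
        return dividend / divisor;
    }
}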
So I am asking you guys: are there any best practices?
I didn't know about this division, and it doesn't really reflect my experience.
Total Programming is virtually impossible. You cannot guarantee that you have covered all exceptional cases. So basically you should limit your scope and reject the situations that are out of scope (that's the role of preconditions).
Nominal Programming is not desired. Undefined effect should be banned.
Defensive Programming is a must. You should always signal illegal invocations of methods.
I'm in favour of implementing the complete set of Design-by-Contract elements, which is, in my opinion, a practical and affordable version of Total Programming:
Preconditions (a kind of Defensive Programming) to signal illegal invocations of the method. Try to limit your scope as much as you can so that you can simplify the code. Avoid complex implementations if possible by narrowing the scope a little.
Postconditions to raise an error if the desired effect is not obtained. Even if it is your fault, you should notify the caller that you missed your goal.
Invariants to check that the object consistency is preserved.
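A minimal Java sketch of those three elements together (the class and its names are my own illustration):

// A bounded counter with runtime precondition, postcondition and invariant checks.
final class BoundedCounter {
    private final int max;
    private int value;

    BoundedCounter(int max) {
        if (max <= 0) throw new IllegalArgumentException("max must be positive"); // precondition
        this.max = max;
        checkInvariant();
    }

    void increment() {
        if (value >= max) throw new IllegalStateException("counter is full"); // precondition
        int old = value;
        value++;
        assert value == old + 1 : "postcondition violated"; // postcondition (enable with -ea)
        checkInvariant();
    }

    private void checkInvariant() {
        assert 0 <= value && value <= max : "invariant violated"; // object consistency
    }
}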
It all boils down to what responsibilities do you wish to assign to the client and the implementer of the contract.
In defensive programming you force the implementer to check for error conditions, which can be costly or even impossible in some cases. Imagine the contract specified by binarySearch, for example: your input array has to be sorted. You can't detect this while running the algorithm; you would have to do a separate check for it, which would raise the running time from O(log n) to O(n). Backing my opinion up is the contract stated in the method's Javadoc.
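To illustrate (my own sketch; Arrays.binarySearch really does document the sorted-input precondition without checking it):

import java.util.Arrays;

final class SortedPrecondition {
    public static void main(String[] args) {
        int[] a = {1, 3, 5, 9};
        // The Javadoc contract: the array must be sorted ascending, otherwise
        // the result is undefined. The method itself never verifies this.
        System.out.println(Arrays.binarySearch(a, 5)); // prints 2

        // A defensive variant would have to verify the precondition itself,
        // turning the O(log n) search into an O(n) scan:
        System.out.println(isSorted(a)); // prints true
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) {
            if (a[i - 1] > a[i]) return false;
        }
        return true;
    }
}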
Another point is that people and frameworks now tend to implement exception-translation mechanisms, used mainly to translate checked exceptions (defensive style) into runtime exceptions that just pop up if something goes wrong. This way the client and the implementer of the contract have less to worry about when dealing with each other.
Again this is my personal opinion backed only with what limited experience I have, I'd love to hear more about this subject.
...but we haven't learned WHEN to use WHICH...
I think the best practice is to be "as defensive as possible". Do your runtime checks if you can. As @MahdeTo has mentioned, sometimes that's impossible for performance reasons; in such cases fall back on undefined or unsatisfactory behavior.
That said, be explicit in your documentation as to what has runtime checks and what does not.
Like much of computing "it depends" is probably the best answer.
Design by contract/programming by contract can help development greatly by explicitly documenting the conditions for a function. Just the documentation can be a help without even making it into (compiled) code.
Where feasible I recommend defensive - checking every condition. BUT only for development and debug builds. In this way most invalid assumptions are caught when the conditions are broken. A good build system would allow you to turn the different condition types on and off at a module or file level - as well as globally.
The actions taken in release versions of software then depend upon the system and how the condition is triggered (the usual distinction between external and internal interfaces). The release version could be 'total programming': all conditions give a defined result (which can include errors or NaN).
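Java's built-in assert mechanism behaves roughly this way, so here is a hedged sketch (the class and names are mine): the checks run only when assertions are enabled, and they can be switched on per package or class.

final class Account {
    private long balanceCents;

    void withdraw(long amountCents) {
        // Evaluated only when assertions are enabled (development/debug runs):
        assert amountCents > 0 : "precondition: amount must be positive";
        balanceCents -= amountCents;
        assert balanceCents >= 0 : "invariant: balance may not go negative";
    }
}
// Assertions are disabled by default and toggled at launch, e.g.:
//   java -ea:com.example.banking... Main   (one package and its subpackages)
//   java -ea Main                          (globally)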
To me "nominal programming" is a dead end in the real world. You assume that if you passed the right values in (which of course you did) then the value you receive is good. If your assumption was wrong - you break down.
I think that test-driven programming is the answer. Before actually implementing the module, you first create a unit test (call it a contract). Then you gradually implement the functionality and make sure the contract is still valid as you go. Usually I start with plain stubs and mock-ups, then gradually fill out the rest, replacing the stubs with real stuff. Keep improving and making the test stronger. At the end you have a robust implementation of said module, plus a fantastic test bed: a coded implementation of the contract. Later on, if someone modifies the module, first you see if it still fits the test bed. If it doesn't, the contract is broken: reject the changes. Or the contract is outdated: fix the unit tests. And so on... the boring cycle of software development :)
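A toy sketch of that cycle using JUnit (the module and names are my own example):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Written first: the test is the coded contract of the module.
class PriceCalculatorTest {
    @Test
    void discountIsNeverNegative() {
        assertTrue(new PriceCalculator().discountFor(100) >= 0);
    }
}

// Starts as a stub that barely satisfies the contract, then grows real logic.
class PriceCalculator {
    int discountFor(int orderTotal) {
        return 0; // stub, to be replaced while keeping the test green
    }
}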

What's your most controversial programming opinion?

Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear googling answers to problems being the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
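A two-line illustration of the difference (my own example; the business reason in the second comment is hypothetical):

class InvoiceExample {
    private int invoiceTotal;

    void adjust() {
        // Duplicates the code and rots the moment the line changes:
        invoiceTotal += 1; // add one to invoiceTotal

        // Explains intent the code cannot express by itself:
        invoiceTotal += 1; // compensate for the rounding applied earlier in this batch
    }
}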
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that a first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience debugging memory leaks in C/C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than using a debugger, and you can compare printed output against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
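A minimal sketch of the "turn them into logging statements" suggestion, using java.util.logging (my choice of API, not the answer's):

import java.util.logging.Level;
import java.util.logging.Logger;

class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    void process(int orderId) {
        // Instead of System.out.println("processing " + orderId):
        LOG.log(Level.FINE, "processing order {0}", orderId);
        // FINE messages are off by default, so nothing leaks into production output.
    }
}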
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs; interfaces that are impossible to recycle; and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
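To make the claim concrete, a small sketch (my own example) of a getter/setter pair that hides nothing, next to an interface that actually encapsulates:

// Effectively a public field: every caller can read and write the raw state.
class Temperature1 {
    private double celsius;
    public double getCelsius() { return celsius; }
    public void setCelsius(double celsius) { this.celsius = celsius; }
}

// Encapsulation: the representation is hidden behind behaviour.
class Temperature2 {
    private double celsius;
    public void raiseBy(double degrees) { celsius += degrees; }
    public boolean isFreezing() { return celsius <= 0.0; }
}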
Lastly, let me quote Uncle Bob on this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you also scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
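For reference, hedged sketches of both exercises in Java (the answer mentions about 10 lines of C#; this translation is mine):

final class InterviewAnswers {
    // Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    // Stop once the next term can no longer disturb the 5th decimal place.
    static double estimatePi() {
        double sum = 0.0;
        double sign = 1.0;
        for (long k = 0; ; k++, sign = -sign) {
            double term = sign / (2 * k + 1);
            sum += term;
            if (Math.abs(term) < 1e-7) break; // remaining error is below the last term
        }
        return 4 * sum;
    }

    static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }

    public static void main(String[] args) {
        System.out.printf("pi ~= %.5f%n", estimatePi());
        System.out.println(circleArea(2.0)); // 12.566...
    }
}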
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

Does Design By Contract Work For You? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 11 years ago.
Do you use Design by Contract professionally? Is it something you have to do from the beginning of a project, or can you change gears and start to incorporate it into your software development lifecycle? What have you found to be the pros/cons of the design approach?
I came across the Design by Contract approach in a grad school course. In the academic setting, it seemed to be a pretty useful technique. But I don't currently use Design by Contract professionally, and I don't know any other developers that are using it. It would be good to hear about its actual usage from the SO crowd.
I can't recommend it highly enough. It's particularly nice if you have a suite that takes inline documentation contract specifications, like so:
// @returns null iff x == 0
public Object foo(int x) {
    ...
}
and turns them into generated unit tests, like so:
public void test_foo_returns_null_iff_x_equals_0() {
    assertNull(foo(0));
}
That way, you can actually see the tests you're running, but they're auto-generated. Generated tests shouldn't be checked into source control, by the way.
You really get to appreciate design by contract when you have an interface between two applications that have to talk to each other.
Without contracts this situation quickly becomes a game of blame tennis. The teams keep knocking accusations back and forth and huge amounts of time get wasted.
With a contract, the blame is clear.
Did the caller satisfy the preconditions? If not the client team need to fix it.
Given a valid request, did the receiver satisfy the post conditions? If not the server team need to fix that.
Did both parties adhere to the contract, but the result is unsatisfactory? The contract is insufficient and the issue needs to be escalated.
For this you don't need to have the contracts implemented in the form of assertions, you just need to make sure they are documented and agreed on by all parties.
If you look into the STL, Boost, MFC, ATL and many open source projects, you can see there are many assertion statements, and they make the project progress more safely.
Design by Contract! It really works in real products.
Frank Krueger writes:
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue.
I have two responses to this:
Null was just an example. For square(x), I'd want to test that the square root of the result is (approximately) the absolute value of the parameter. For setters, I'd want to test that the value actually changed. For atomic operations, I'd want to check that all component operations succeeded or all failed (really one test for success and n tests for failure). For factory methods in weakly-typed languages, I want to check that the right kind of object is returned. The list goes on and on. Basically, anything that can be tested in one line of code is a very good candidate for a code contract in a prologue comment.
I disagree that you shouldn't test things because they generate runtime exceptions. If anything, you should test things that might generate runtime exceptions. I like runtime exceptions because they make the system fail fast, which helps debugging. But the null in the example was a result value for some possible input. There's an argument to be made for never returning null, but if you're going to, you should test it.
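Coming back to the prologue-comment idea, a hedged sketch of what such one-line contracts might look like in Java (names are mine; the comment convention is illustrative, not a real tool's syntax):

final class PrologueContracts {
    // @post Math.sqrt(result) ~= Math.abs(x)
    static double square(double x) {
        return x * x;
    }

    private int width;

    // @post getWidth() == w
    void setWidth(int w) {
        width = w;
    }

    int getWidth() {
        return width;
    }
}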
It's absolutely foolish to not design by contract when doing anything in an SOA realm, and it's always helpful if you're working on any sort of modular work, where bits & pieces might be swapped out later on, especially if any black boxen are involved.
In lieu of more expressive type systems, I would absolutely use design by contract on military grade projects.
For weakly typed languages or languages with dynamic scope (PHP, JavaScript), functional contracts are also very handy.
For everything else, I would toss it aside and rely upon beta testers and unit tests.
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue. If you are more interested in documentation, then I would use annotations that can be used with static analyzers and the like (to make sure the code isn't breaking your annotations for example).
A stronger type system coupled with Design by Contract seems to be the way to go. Take a look at Spec# for an example:
The Spec# programming language. Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.
Both unit testing and Design by Contract are valuable test approaches in my experience.
I have tried using Design by Contract in a system automatic testing framework, and my experience is that it gives a flexibility and possibilities not easily obtained by unit testing. For example, it is possible to run longer sequences and verify that the response times are within limits every time an action is executed.
Looking at the presentations at InfoQ, it appears that Design by Contract is a valuable addition to conventional unit tests in the integration phase of components. For example, it is possible to create a mock interface first and then test the component afterwards, or when a new version of the component is released.
I have not found a toolkit covering all my requirements for design-by-contract testing on the .NET/Microsoft platform.
I find it telling that the Go programming language has no constructs that make design by contract possible. panic/defer/recover aren't exactly that, as defer and recover logic make it possible to ignore a panic, in other words to ignore a broken contract. What's needed, at the very least, is some form of unrecoverable panic, which is really assert. Or, at best, direct language support for design-by-contract constructs (pre- and postconditions, implementation and class invariants). But given the strong-headedness of the language purists at the helm of the Go ship, I give little chance of any of this being done.
One can implement assert-like behaviour by checking for a special assert error in the last deferred function in the panicking function and calling runtime.Breakpoint() to dump the stack during recovery. To be assert-like, that behaviour needs to be conditional. Of course, this approach falls apart when a new deferred function is added after the one doing the assert. Which will happen in a large project at exactly the wrong time, resulting in missed bugs.
My point is that assert is useful in so many ways that having to dance around it may be a headache.
I don't actually use Design by Contract, on a daily basis. I do, however know that it has been incorporated into the D language, as part of the language.
Yes, it does! Actually, a few years ago I designed a little framework for argument validation. I was doing an SOA project, in which the different back-end systems did all kinds of validation and checking. But to improve response times (in cases where the input was invalid) and to reduce the load on those back-end systems, we started to validate the input parameters of the provided services ourselves: not only for not-null, but also for string patterns, for values from within sets, and for cases where parameters had dependencies between them.
Now I realize we implemented at that time a small design-by-contract framework :)
Here is the link for those who are interested in the small Java Argument Validation framework, which is implemented as a plain Java solution.
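A hedged sketch of what such argument-validation helpers might look like (my own reconstruction; the actual framework may differ):

import java.util.Set;
import java.util.regex.Pattern;

// Hypothetical helpers in the spirit of the framework described above.
final class Validate {
    static void notNull(Object value, String name) {
        if (value == null) throw new IllegalArgumentException(name + " must not be null");
    }

    static void matches(String value, Pattern pattern, String name) {
        notNull(value, name);
        if (!pattern.matcher(value).matches())
            throw new IllegalArgumentException(name + " does not match " + pattern);
    }

    static <T> void oneOf(T value, Set<T> allowed, String name) {
        if (!allowed.contains(value))
            throw new IllegalArgumentException(name + " must be one of " + allowed);
    }
}

// Usage at a service boundary, before calling the back-end systems:
//   Validate.notNull(customerId, "customerId");
//   Validate.matches(iban, IBAN_PATTERN, "iban");
//   Validate.oneOf(channel, Set.of("WEB", "MOBILE"), "channel");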