I've just read this post about Panic/Recover in Go and I'm not clear on how this differs from try/catch in other mainstream languages.
Panic/Recover are function scoped. It's like saying that you're only allowed one try/catch block in each function, and the try has to cover the whole function. This makes it really annoying to use Panic/Recover the way Java/Python/C# etc. use exceptions. This is intentional.
This also encourages people to use Panic/Recover in the way that it was designed to be used. You're supposed to recover() from a panic() and then return an error value to the caller.
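Roughly, that pattern looks like the sketch below (a minimal example of my own, not code from the linked post; safeDivide is a made-up name): the deferred function recovers from the panic and turns it into an ordinary error return.

package main

import "fmt"

// safeDivide converts a runtime panic (integer division by zero)
// into an ordinary error value for the caller.
func safeDivide(a, b int) (result int, err error) {
    defer func() {
        if r := recover(); r != nil {
            // Turn the panic value into an error return instead of
            // letting it unwind past this function.
            err = fmt.Errorf("divide failed: %v", r)
        }
    }()
    return a / b, nil
}

func main() {
    if _, err := safeDivide(1, 0); err != nil {
        fmt.Println(err) // divide failed: runtime error: integer divide by zero
    }
}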
I keep looking at this question trying to think of the best way to answer it. It's easiest to just point to the idiomatic uses for panic/recover, as opposed to try/catch and/or exceptions in other languages, or to the concepts behind those idioms (which can basically be summed up as "exceptions should only occur in truly exceptional circumstances").
But as to what the actual difference is between them? I'll try to summarize as best I can.
One of the main differences compared to try/catch blocks is the way control flows. In a typical try/catch scenario, the code after the catch block will run unless it propagates the error. This is not so with panic/recover. A panic aborts the current function and begins to unwind the stack, running deferred functions (the only place recover does anything) as it encounters them.
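A small illustration of that control flow (my own example): the panic aborts inner immediately, and the deferred calls run, innermost first, as the stack unwinds towards the recover in main.

package main

import "fmt"

func inner() {
    defer fmt.Println("inner: deferred cleanup runs first")
    panic("boom") // aborts inner immediately; nothing after this line runs
}

func outer() {
    defer fmt.Println("outer: deferred cleanup runs next")
    inner()
    fmt.Println("outer: never reached") // skipped because the panic keeps unwinding
}

func main() {
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("main: recovered:", r) // unwinding stops here
        }
    }()
    outer()
}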
Really I'd take that even further: panic/recover is almost nothing like try/catch in the sense that try and catch are (or at least act like) control structures, and panic/recover are not.
This really stems out of the fact that recover is built around the defer mechanism, which (as far as I can tell) is a fairly unique concept in Go.
There are certainly more, which I'll add if I can articulate my thoughts a bit better.
I think we all agree that panic is throw, recover is catch, and defer is finally.
The big difference seems to be that recover goes inside defer. Going back to traditional terms, it lets you decide exactly at which point of your finally you want to bother catching anything, or not at all.
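As a sketch of that idea (errExpected is a sentinel invented for this illustration): the recover sits inside the deferred "finally", inspects the panic value, and either handles it or re-panics to let it keep unwinding.

package main

import (
    "errors"
    "fmt"
)

// errExpected is a made-up sentinel used only for this illustration.
var errExpected = errors.New("expected failure")

func do() (err error) {
    defer func() {
        // The "catch" lives inside the "finally": inspect the panic
        // value and decide whether to handle it here...
        if r := recover(); r != nil {
            if e, ok := r.(error); ok && errors.Is(e, errExpected) {
                err = e
                return
            }
            // ...or decline and let it keep unwinding.
            panic(r)
        }
    }()
    panic(errExpected)
}

func main() {
    fmt.Println(do()) // expected failure
}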
defer is a mechanism not only for handling errors but also for doing comfortable, controlled cleanup. panic works like raise() in other languages, and with the help of recover() you get the chance to catch that panic as it goes up the call stack. In that sense it's almost similar to try/catch, but while the latter works on blocks, panic/recover works at the function level.
Rob Pike on the reason for this design: "We don't want to encourage the conflation of errors and exceptions that occur in languages such as Java." Instead of having a large number of different exceptions with an even larger number of uses, you should do everything you can to avoid runtime errors, return proper error values once a problem has been detected, and use panic/recover only if there is no other way.
I think panic is the same as throw, and recover is the same as catch. The difference is defer. At first I thought defer was the same as finally, but later I found that defer is more flexible: it can be placed anywhere in your function, it remembers the values of its arguments at the moment it is deferred, and it can even change the function's named return values; the panic can then happen anywhere after the defer (see the sketch below). But because there is no try block, you can't handle the "exception" until the whole function returns. I don't think this is a disadvantage; maybe Go wants your function to do only one thing, and any exception should mean that thing can't go on.
And since the panic must come after the defer, you are forced to set up the handling of the "exception" before it can happen.
This is just my own understanding.
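A small sketch of those two defer behaviours (my own example, not from the answer above): deferred arguments are evaluated at the point of the defer statement, and a deferred closure can modify a named return value.

package main

import "fmt"

func example() (result string) {
    x := 1
    // Arguments to a deferred call are evaluated now, so this prints 1
    // even though x changes afterwards.
    defer fmt.Println("deferred saw x =", x)
    x = 2

    // A deferred closure can also change the named return value.
    defer func() { result = result + " (adjusted by defer)" }()
    return "original"
}

func main() {
    fmt.Println(example()) // original (adjusted by defer)
}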
"go" is unlike other languages a concurrent language, meaning that parts of the program runs in parallel. This means that if the designers want to have a similar mechanism as catch/throw , the mechanism has to meticulously redefined in this context. That explains the differences between the two mechanisms.
I understand the reasons for compiler/interpreter language extensions, but why is behaviour that has no valid definition allowed to fail silently or do weird things rather than triggering a compiler error? Is it because of the extra difficulty (impossible, or simply too time-consuming) for the compiler to catch it?
P.S. Which languages have undefined behaviour and which don't?
P.P.S. Are there instances of undefined behaviour that are not impossible or too expensive to catch at compile time, and if so, are there any good reasons/excuses for those?
The concept of undefined behaviour is required in languages like C and C++, because detecting the conditions that cause it would be impossible or prohibitively expensive. Take for example this code:
int * p = new int(0);
// lots of conditional code, somewhere in which we do
int * q = p;
// lots more conditional code, somewhere in which we do
delete p;
// even more conditional code, somewhere in which we do
delete q;
Here the pointer has been deleted twice, resulting in undefined behaviour. Detecting that error is too hard for a language like C or C++.
Largely because, to accomplish certain purposes, it's necessary. Just for example, C and C++ were originally used to write operating systems, including things like device drivers. To do that, they used (among other things) direct access to specific hardware locations that represented I/O devices. Preventing access to those locations would have prevented C from being used for its intended purpose (and C++ was specifically targeted at allowing all the same capabilities as C).
Another factor is a really basic decision between specifying a language and specifying a platform. To use the same examples, C and C++ are both based on a conscious decision to restrict the definition to the language, and leave the platform surrounding that language separate. Quite a few of the alternatives, with Java and .NET as a couple of the most obvious examples, specify entire platforms instead.
Both of these reflect basic differences in attitude about the design. One of the basic precepts of the design of C (largely retained in C++) was "trust the programmer". Though it was never stated quite so directly, the basic "sandbox" concept of Java was/is based on the idea that you should not trust the programmer.
As far as what languages do/don't have undefined behavior, that's the dirty little secret: for all practical purposes, all of them have undefined behavior. Some languages (again, C and C++ are prime examples) go to considerable effort to point out what behavior is undefined, while many others either try to claim it doesn't exist (e.g., Java) or mostly ignore many of the "dark corners" where it arises (e.g., Pascal, most .NET).
The ones that claim it doesn't exist generally produce the biggest problems. Java, for example, includes quite a few rules that try to guarantee consistent floating point results. In the process, they make it impossible to execute Java efficiently on quite a bit of hardware -- but floating point results still aren't really guaranteed to be consistent. Worse, the floating point model they mandate isn't exactly perfect so under some circumstances it prevents getting the best results you could (or at least makes you do a lot of extra work to get around what it mandates).
To their credit, Sun/Oracle has (finally) started to notice the problem, and is now working on a considerably different floating point model that should be an improvement. I'm not sure if this has been incorporated in Java yet, but I suspect that when/if it is, there will be a fairly substantial "rift" between code for the old model and code for the new model.
Because different operating systems operate differently (...), you can't just say "crash in this case", because it might be something a particular operating system could handle better.
As far as I know I've never encountered a SHOULD construct in a computer language, but then again I don't know that many languages, compared to the hundreds out there.
Anyway, SHOULD and other modal verbs are very common in natural languages, and their meanings are quite clear in documentation and legally binding contracts, so they aren't really grey terms and could theoretically be expressed in programming terms (I guess).
For example, an ASSERT upholds, in a sense, a MUST construct.
Are there any actual examples of this sort of thing? Any research about it?
I'm guessing some Rule-Based systems, and perhaps fuzzy logic algorithms work like this.
I think of try as "should" and catch and finally as "in case it doesn't"
The exact meaning of should in natural language is also not clear cut. When you say "the wheel should fit in the row", what does that mean exactly? It may mean the same as must, but then there is no point in a separate construct for it. Otherwise, how confident do you need to be for the condition to count as satisfied? And what is the action in case the wheel does not fit?
In the senses you referred to, there are some equivalents, though I do not know of a language that uses the word should for them:
Testing/assertion
ASSERT is often a language directive, macro, or testing library function. In the sense that ASSERT corresponds to must, some languages and testing frameworks define macros for "warning assertions" which will spit a warning message if the check fails but not bail out or fail the test - that would correspond to should.
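As a rough Go-flavoured sketch of that distinction, with made-up helpers mustBe and shouldBe (not a real library API): the MUST check aborts on failure, while the SHOULD check only warns and carries on.

package main

import "log"

// mustBe corresponds to MUST/ASSERT: it stops the program on failure.
func mustBe(cond bool, msg string) {
    if !cond {
        log.Fatalf("MUST violated: %s", msg)
    }
}

// shouldBe corresponds to SHOULD: it only warns and lets execution continue.
func shouldBe(cond bool, msg string) {
    if !cond {
        log.Printf("SHOULD violated (continuing): %s", msg)
    }
}

func main() {
    width := 10
    shouldBe(width <= 8, "width fits in the row") // warns, keeps going
    mustBe(width > 0, "width is positive")        // would abort if false
    log.Println("done")
}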
Exception handling
In some sense, you can consider thrown exceptions as analogues: if an exception is caught, the program can handle the case where something is not as it should be. But sometimes the exception describes the failure of something to be as it must be for the program to work, in which case the exception will not be caught, or the handler will make the program fail gracefully. This is not always the case, however: sometimes code is executed to test something that may be (or perhaps is even unlikely to be) true, and an exception is caught with the expectation that it will usually be thrown.
Constraint logic
One common meaning of must and should in various formal natural-language documents is in terms of constraints. Must specifies a constraint that you always have to satisfy; if you cannot, your state is incompatible. Should means that you will satisfy the constraint whenever it is possible given the state and the constraints implied by must, but if it is not possible, that is still valid. In informal constraint logic this occurs when there are "external constraints" in the context, so verifying that the "solution" is satisfactory with respect to the "should constraints" may be possible only with knowledge of the context, and given the context you may be able to satisfy different subsets of the "should constraints", but not all at the same time. For that reason, some constraint-logic specification languages (whether you call them "programming languages" depends on your definition) have the concept of ordering of constraints: the first level of constraints corresponds to must, the next level corresponds to should, and a constraint has to be satisfied if possible given all constraints external to it (in previous levels), even if that conflicts with some constraints in later levels, which will then not be satisfied.
@Simon Perhaps a try/finally is closest to should: anything in the try should run, but not always. A web service should open the socket, but if it doesn't, we don't care.
This sort of modality is used in RSpec, a DSL for writing tests in a behaviour-driven style.
Modal verbs like "should", "may" and "might" can cause confusion, so RFC 2119 gives a definition to point all noses in the same direction:
SHOULD: This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
From this definition it should (no pun intended) be clear that it's meant to be used in specifications, rather than program code, where you want things deterministic. At least I do. I can imagine it's usable in AI, though.
Well, SHOULD might be found in a Prolog-type language, as a softer inference: i.e. the result logically should be x but might not be. You could say the result is probably x, but not unequivocally.
What is the behaviour you expect from the program if the result is not as it SHOULD be? In the ASSERT case, it is an exception (AssertException or similar). SHOULD the program throw an exception or just ignore the result? To me it seems that there is nothing in between: either the result is accepted or it is not.
Otherwise you SHOULD specify what behaviour you expect. :-)
Back to the assertion: if the assertion fails, an exception is thrown. It is up to you what you do with that exception. In Java/C#, for example, you can catch it and then do anything you want, so you decide whether the assert has MUST or SHOULD semantics.
Well, Java2K has similar concepts. It SHOULD be doing what it's told...
SCNR.
Google's Go language has no exceptions as a design choice, and Linus of Linux fame has called exceptions crap. Why?
Exceptions make it really easy to write code where an exception being thrown will break invariants and leave objects in an inconsistent state. They essentially force you to remember that almost every statement you write can potentially throw, and to handle that correctly. Doing so can be tricky and counter-intuitive.
Consider something like this as a simple example:
class Frobber
{
    int m_NumberOfFrobs;
    FrobManager m_FrobManager;

public:
    void Frob()
    {
        m_NumberOfFrobs++;
        m_FrobManager.HandleFrob(new FrobObject());
    }
};
Assuming the FrobManager will delete the FrobObject, this looks OK, right? Or maybe not... Imagine then if either FrobManager::HandleFrob() or operator new throws an exception. In this example, the increment of m_NumberOfFrobs does not get rolled back. Thus, anyone using this instance of Frobber is going to have a possibly corrupted object.
This example may seem stupid (ok, I had to stretch myself a bit to construct one :-)), but, the takeaway is that if a programmer isn't constantly thinking of exceptions, and making sure that every permutation of state gets rolled back whenever there are throws, you get into trouble this way.
As an example, you can think of it like you think of mutexes. Inside a critical section, you rely on several statements to make sure that data structures are not corrupted and that other threads can't see your intermediate values. If any one of those statements just randomly doesn't run, you end up in a world of pain. Now take away locks and concurrency, and think about each method like that. Think of each method as a transaction of permutations on object state, if you will. At the start of your method call, the object should be in a clean state, and at the end there should also be a clean state. In between, variable foo may be inconsistent with bar, but your code will eventually rectify that. What exceptions mean is that any one of your statements can interrupt you at any time. The onus is on you, in each individual method, to get it right and roll back when that happens, or to order your operations so that throws don't affect object state. If you get it wrong (and it's easy to make this kind of mistake), then the caller ends up seeing your intermediate values.
Techniques like RAII, which C++ programmers love to mention as the ultimate solution to this problem, go a long way towards protecting against this, but they aren't a silver bullet. RAII will make sure you release resources on a throw, but it doesn't free you from having to think about corruption of object state and callers seeing intermediate values. So, for a lot of people, it's easier to say, by fiat of coding style, no exceptions. If you restrict the kind of code you write, it's harder to introduce these bugs. If you don't, it's fairly easy to make a mistake.
Entire books have been written about exception safe coding in C++. Lots of experts have gotten it wrong. If it's really that complex and has so many nuances, maybe that's a good sign that you need to ignore that feature. :-)
The reason for Go not having exceptions is explained in the Go language design FAQ:
Exceptions are a similar story. A number of designs for exceptions have been proposed but each adds significant complexity to the language and run-time. By their very nature, exceptions span functions and perhaps even goroutines; they have wide-ranging implications. There is also concern about the effect they would have on the libraries. They are, by definition, exceptional yet experience with other languages that support them show they have profound effect on library and interface specification. It would be nice to find a design that allows them to be truly exceptional without encouraging common errors to turn into special control flow that requires every programmer to compensate.
Like generics, exceptions remain an open issue.
In other words, they haven't yet figured out how to support exceptions in Go in a way that they think is satisfactory. They are not saying that exceptions are bad per se.
UPDATE - May 2012
The Go designers have now climbed down off the fence. Their FAQ now says this:
We believe that coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional.
Go takes a different approach. For plain error handling, Go's multi-value returns make it easy to report an error without overloading the return value. A canonical error type, coupled with Go's other features, makes error handling pleasant but quite different from that in other languages.
Go also has a couple of built-in functions to signal and recover from truly exceptional conditions. The recovery mechanism is executed only as part of a function's state being torn down after an error, which is sufficient to handle catastrophe but requires no extra control structures and, when used well, can result in clean error-handling code.
See the Defer, Panic, and Recover article for details.
So the short answer is that they can do it differently using multi-value return. (And they do have a form of exception handling anyway.)
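For reference, that multi-value return idiom looks roughly like this (a minimal sketch; the file name is arbitrary):

package main

import (
    "fmt"
    "os"
)

func main() {
    // The ordinary case: errors are plain values returned alongside results,
    // checked explicitly at the call site.
    f, err := os.Open("config.txt")
    if err != nil {
        fmt.Println("could not open file:", err)
        return
    }
    defer f.Close()
    fmt.Println("opened", f.Name())
}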
... and Linus of Linux fame has called exceptions crap.
If you want to know why Linus thinks exceptions are crap, the best thing is to look for his writings on the topic. The only thing I've tracked down so far is this quote that is embedded in a couple of emails on C++:
"The whole C++ exception handling thing is fundamentally broken. It's especially broken for kernels."
You'll note that he's talking about C++ exceptions in particular, and not exceptions in general. (And C++ exceptions do apparently have some issues that make them tricky to use correctly.)
My conclusion is that Linus hasn't called exceptions (in general) "crap" at all!
Exceptions are not bad per se, but if you know they are going to happen a lot, they can be expensive in terms of performance.
The rule of thumb is that exceptions should flag exceptional conditions, and that you should not use them for control of program flow.
I disagree with "only throw exceptions in an exceptional situation." While generally true, it's misleading. Exceptions are for error conditions (execution failures).
Regardless of the language you use, pick up a copy of Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition). The chapter on exception throwing is without peer. Some quotes from the first edition (the 2nd's at my work):
DO NOT return error codes.
Error codes can be easily ignored, and often are.
Exceptions are the primary means of reporting errors in frameworks.
A good rule of thumb is that if a method does not do what its name suggests, it should be considered a method-level failure, resulting in an exception.
DO NOT use exceptions for the normal flow of control, if possible.
There are pages of notes on the benefits of exceptions (API consistency, choice of location of error handling code, improved robustness, etc.) There's a section on performance that includes several patterns (Tester-Doer, Try-Parse).
Exceptions and exception handling are not bad. Like any other feature, they can be misused.
From the perspective of golang, I guess not having exception handling keeps the compiling process simple and safe.
From the perspective of Linus, I understand that kernel code is ALL about corner cases. So it makes sense to refuse exceptions.
Exceptions make sense in code where it's okay to drop the current task on the floor, and where common-case code matters more than error handling. But they require code generation from the compiler.
For example, they are fine in most high-level, user-facing code, such as web and desktop application code.
Exceptions in and of themselves are not "bad", it's the way that exceptions are sometimes handled that tends to be bad. There are several guidelines that can be applied when handling exceptions to help alleviate some of these issues. Some of these include (but are surely not limited to):
Do not use exceptions to control program flow - i.e. do not rely on "catch" statements to change the flow of logic. Not only does this tend to hide various details around the logic, it can lead to poor performance.
Do not throw exceptions from within a function when a returned "status" would make more sense - only throw exceptions in an exceptional situation. Creating and throwing exceptions is an expensive operation. For example, if you call a method to open a file and that file does not exist, throw a "FileNotFound" exception. If you call a method that determines whether a customer account exists, return a boolean value; do not throw a "CustomerNotFound" exception (a sketch of this distinction follows these guidelines).
When determining whether or not to handle an exception, do not use a "try...catch" clause unless you can do something useful with the exception. If you are not able to handle the exception, you should just let it bubble up the call stack. Otherwise, exceptions may get "swallowed" by the handler and the details will get lost (unless you rethrow the exception).
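Since the surrounding question is about Go, here is a rough sketch of that boolean-versus-exception guideline in Go terms (accountExists and loadAccount are made-up names, and the returned error plays the role of the thrown exception):

package main

import (
    "errors"
    "fmt"
)

// accountExists answers a yes/no question, so a boolean is enough:
// "customer not found" is an ordinary outcome, not a failure.
func accountExists(customers map[string]bool, id string) bool {
    return customers[id]
}

// loadAccount is expected to succeed; failing to do so is a failure the
// caller must notice, so it is reported as an error value (the Go
// analogue of throwing in the "file not found" case).
func loadAccount(customers map[string]bool, id string) (string, error) {
    if !customers[id] {
        return "", errors.New("account " + id + " does not exist")
    }
    return "account data for " + id, nil
}

func main() {
    customers := map[string]bool{"alice": true}

    fmt.Println(accountExists(customers, "bob")) // false, no error machinery involved

    if _, err := loadAccount(customers, "bob"); err != nil {
        fmt.Println("error:", err)
    }
}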
Typical arguments are that there's no way to tell what exceptions will come out of a particular piece of code (depending on language) and that they are too much like gotos, making it difficult to mentally trace execution.
http://www.joelonsoftware.com/items/2003/10/13.html
There is definitely no consensus on this issue. I would say that from the point of view of a hard-core C programmer like Linus, exceptions are definitely a bad idea. A typical Java programmer is in a vastly different situation, though.
Exceptions aren't bad. They fit in well with C++'s RAII model, which is the most elegant thing about C++. If you already have a bunch of code that's not exception safe, then they're bad in that context. If you're writing really low-level software, like the Linux kernel, then they're bad. If you like littering your code with a bunch of error-return checks, then they're not helpful. If you don't have a plan for resource control when an exception is thrown (which C++ destructors provide), then they're bad.
A great use-case for exceptions is thus....
Say you are on a project where every controller (around 20 different major ones) extends a single superclass controller with an action method. Every controller then does something different from the others, calling objects B, C, D in one case and F, G, D in another. Exceptions came to the rescue here: there used to be tons of return-code handling, and EVERY controller handled it differently. I whacked all that code, threw the proper exception from "D", caught it in the superclass controller's action method, and now all our controllers are consistent. Previously D was returning null for MULTIPLE different error cases that we wanted to tell the end user about but couldn't, and I didn't want to turn the StreamResponse into a nasty ErrorOrStreamResponse object (mixing a data structure with errors is, in my opinion, a bad smell, and I see lots of code return a "Stream" or other entity with error info embedded in it; it should really be "the function returns either the success structure OR the error structure", which I can do with exceptions but not with return codes). The C# style of multiple return values is something I might consider sometimes, though in many cases the exception can skip a whole lot of layers (layers where I don't need to clean up resources either).
Yes, we have to worry about each level and any resource cleanup/leaks, but in general none of our controllers had any resources to clean up.
Thank god we had exceptions, or I would have been in for a huge refactor and wasted too much time on something that should be a simple programming problem.
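A rough Go-flavoured analogue of that "throw deep, catch once in the superclass" flow might look like the sketch below (all names made up, and panic/recover is used purely to mirror the exception flow, not as a claim that this is idiomatic Go):

package main

import (
    "errors"
    "fmt"
)

// errNotFound stands in for the "proper exception" thrown from the deep helper D.
var errNotFound = errors.New("record not found")

// deepHelper is several layers below the controllers; instead of returning
// null or a return code, it panics with a descriptive error.
func deepHelper(id int) string {
    if id <= 0 {
        panic(errNotFound)
    }
    return fmt.Sprintf("stream for %d", id)
}

// Two controllers that do different work but share the same failure path.
func controllerA(id int) string { return deepHelper(id) }
func controllerB(id int) string { return deepHelper(id * 2) }

// runAction plays the role of the superclass action method: it is the one
// place that "catches" and turns any failure into a consistent response.
func runAction(name string, action func(int) string, id int) {
    defer func() {
        if r := recover(); r != nil {
            fmt.Printf("%s: sorry, request failed: %v\n", name, r)
        }
    }()
    fmt.Printf("%s: %s\n", name, action(id))
}

func main() {
    runAction("A", controllerA, 0) // A: sorry, request failed: record not found
    runAction("B", controllerB, 3) // B: stream for 6
}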
Theoretically they are really bad. In a perfect mathematical world you cannot get exceptional situations. Look at functional languages: they have no side effects, so they have virtually no source of exceptional situations.
But reality is another story. We always have situations that are "unexpected". This is why we need exceptions.
I think we can think of exceptions as syntactic sugar for an ExceptionSituationObserver. You just get notifications of exceptions. Nothing more.
With Go, I think they will introduce something that deals with "unexpected" situations. I can guess that they will try to make it sound less destructive than exceptions and more like application logic. But this is just my guess.
The exception-handling paradigm of C++, which forms a partial basis for Java's and, in turn, .NET's, introduces some good concepts but also has some severe limitations. One of the key design intentions of exception handling is to allow a method to ensure that it will either satisfy its post-conditions or throw an exception, and also to ensure that any cleanup that needs to happen before the method exits will happen. Unfortunately, the exception-handling paradigms of C++, Java, and .NET all fail to provide any good means of handling the situation where unexpected factors prevent the expected cleanup from being performed. That in turn means one must accept one of the following risks: everything comes to a screeching halt if something unexpected happens (the C++ approach when an exception occurs during stack unwinding); a condition that cannot be resolved, because of a problem during stack-unwinding cleanup, is mistaken for one that can be resolved (and could have been, had the cleanup succeeded); or an unresolvable problem whose stack-unwinding cleanup triggers an exception that would typically be resolvable goes unnoticed, because code handling the latter problem declares it "resolved".
Even if exception handling were generally good, it would not be unreasonable to regard as unacceptable an exception-handling paradigm that fails to provide a good means of handling problems that occur while cleaning up after other problems. That isn't to say that a framework couldn't be designed with an exception-handling paradigm that ensures sensible behaviour even in multiple-failure scenarios, but none of the top languages or frameworks can do so yet.
I haven't read all of the other answers, so this may have already been mentioned, but one criticism is that they cause programs to break in long chains, making it difficult to track down errors when debugging the code. For example, if Foo() calls Bar(), which calls Wah(), which calls ToString(), then accidentally pushing the wrong data into ToString() ends up looking like an error in Foo(), an almost completely unrelated function.
For me the issue is very simple. Many programmers use exception handlers inappropriately. Having more language resources is better, and being able to handle exceptions is good. One example of bad use is a value that must be an integer not being verified, or a divisor not being checked for division by zero... exception handling can be an easy way to avoid more work and hard thinking, where the programmer wants a dirty shortcut and so applies an exception handler. The statement "professional code NEVER fails" can be illusory if some of the issues processed by the algorithm are uncertain by their very nature; it is in those inherently unknown situations that the exception handler rightly comes into play. Good programming practices are a matter of debate.
An exception that is not handled is generally bad.
An exception handled badly is bad (of course).
The 'goodness/badness' of exception handling depends on the context/scope and its appropriateness, not on doing it for its own sake.
Okay, boring answer here. It really depends on the language. Where an exception can leave allocated resources behind, exceptions should be avoided. In scripting languages they simply abandon or jump over parts of the application flow. That's unlikable in itself, yet escaping near-fatal errors via exceptions is an acceptable idea.
For error signalling I generally prefer error signals. It all depends on the API, the use case and the severity, or whether logging suffices. I'm also trying to redefine the behaviour and throw Phonebooks() instead: the idea being that "exceptions" are often dead ends, whereas a "Phonebook" contains helpful information on error recovery or alternative execution routes. (Haven't found a good use case yet, but I keep trying.)
What factors determine which approach is more appropriate?
I think both have their places.
You shouldn't simply use DoSomethingToThing(Thing n) just because you think "Functional programming is good". Likewise you shouldn't simply use Thing.DoSomething() because "Object Oriented programming is good".
I think it comes down to what you are trying to convey. Stop thinking about your code as a series of instructions, and start thinking about it like a paragraph or sentence of a story. Think about which parts are the most important from the point of view of the task at hand.
For example, if the part of the 'sentence' you would like to stress is the object, you should use the OO style.
Example:
fileHandle.close();
Most of the time when you're passing around file handles, the main thing you are thinking about is keeping track of the file it represents.
CounterExample:
string x = "Hello World";
submitHttpRequest( x );
In this case submitting the HTTP request is far more important than the string which forms the body, so submitHttpRequest(x) is preferable to x.submitViaHttp().
Needless to say, these are not mutually exclusive. You'll probably actually have
networkConnection.submitHttpRequest(x)
in which you mix them both. The important thing is that you think about what parts are emphasized, and what you will be conveying to the future reader of the code.
To be object-oriented, tell, don't ask : http://www.pragmaticprogrammer.com/articles/tell-dont-ask.
So, Thing.DoSomething() rather than DoSomethingToThing(Thing n).
If you're dealing with the internal state of a thing, Thing.DoSomething() makes more sense, because even if you change the internal representation of Thing, or how it works, the code talking to it doesn't have to change. If you're dealing with a collection of Things, or writing some utility methods, procedural-style DoSomethingToThing() might make more sense or be more straightforward; but even then it can usually be represented as a method on the object representing that collection, for instance (see also the Go sketch after this example):
GetTotalPriceofThings();
vs
Cart.getTotal();
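A small Go sketch of that contrast (names invented for illustration): the total can live as a method on a type that wraps the collection, or as a free function over the raw data.

package main

import "fmt"

// Cart wraps the collection, so the total can live as a method on it.
type Cart struct {
    prices []float64
}

// Total is the Cart.getTotal() style: callers don't care how prices are stored.
func (c Cart) Total() float64 {
    sum := 0.0
    for _, p := range c.prices {
        sum += p
    }
    return sum
}

// totalPriceOfThings is the procedural GetTotalPriceofThings() style.
func totalPriceOfThings(prices []float64) float64 {
    sum := 0.0
    for _, p := range prices {
        sum += p
    }
    return sum
}

func main() {
    cart := Cart{prices: []float64{1.50, 2.25}}
    fmt.Println(cart.Total())                    // method style
    fmt.Println(totalPriceOfThings(cart.prices)) // procedural style
}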
It really depends on how object oriented your code is.
Thing.DoSomething is appropriate if Thing is the subject of your sentence.
DoSomethingToThing(Thing n) is appropriate if Thing is the object of your sentence.
ThingA.DoSomethingToThingB(ThingB m) is an unavoidable combination, since in all the languages I can think of, functions belong to one class and are not mutually owned. But this makes sense because you can have a subject and an object.
Active voice is more straightforward than passive voice, so make sure your sentence has a subject that isn't just "the computer". This means, use form 1 and form 3 frequently, and use form 2 rarely.
For clarity:
// Form 1: "File handle, close."
fileHandle.close();
// Form 2: "(Computer,) close the file handle."
close(fileHandle);
// Form 3: "File handle, write the contents of another file handle."
fileHandle.writeContentsOf(anotherFileHandle);
I agree with Orion, but I'm going to rephrase the decision process.
You have a noun and a verb / an object and an action.
If many objects of this type will use this action, try to make the action part of the object.
Otherwise, try to group the action separately, but with related actions.
I like the File / string examples. There are many string operations, such as "SendAsHTTPReply", which won't happen for your average string, but do happen often in a certain setting. However, you basically will always close a File (hopefully), so it makes perfect sense to put the Close action in the class interface.
Another way to think of this is as buying part of an entertainment system. It makes sense to bundle a TV remote with a TV, because you always use them together. But it would be strange to bundle a power cable for a specific VCR with a TV, since many customers will never use this. The key idea is how often will this action be used on this object?
Not nearly enough information here. It depends on whether your language even supports the construct "Thing.something" or an equivalent (i.e. it's an OO language). If so, it's far more appropriate because that's the OO paradigm (members should be associated with the object they act on). In a procedural style, of course, DoSomethingToThing() is your only choice... or ThingDoSomething().
DoSomethingToThing(Thing n) would be more of a functional approach whereas Thing.DoSomething() would be more of an object oriented approach.
That is the Object Oriented versus Procedural Programming choice :)
I think the well documented OO advantages apply to the Thing.DoSomething()
This has been asked before: Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?
Here are a couple of factors to consider:
Can you modify or extend the Thing class? If not, use the former.
Can Thing be instantiated? If not, use the latter as a static method.
If Thing actually gets modified (i.e. has properties that change), prefer the latter. If Thing is not modified, the latter is just as acceptable.
Otherwise, as objects are meant to map on to real world object, choose the method that seems more grounded in reality.
Even if you aren't working in an OO language, where you would have Thing.DoSomething(), for the overall readability of your code, having a set of functions like:
ThingDoSomething()
ThingDoAnotherTask()
ThingWeDoSomethingElse()
then
AnotherThingDoSomething()
and so on is far better.
All the code that works on "Thing" is in the one location. Of course, the "DoSomething" and other tasks should be named consistently - so you have a ThingOneRead(), a ThingTwoRead()... by now you should get the point. When you go back to work on the code in twelve months' time, you will appreciate having taken the time to make things logical.
In general, if "something" is an action that "thing" naturally knows how to do, then you should use thing.doSomething(). That's good OO encapsulation, because otherwise DoSomethingToThing(thing) would have to access potential internal information of "thing".
For example invoice.getTotal()
If "something" is not naturally part of "thing's" domain model, then one option is to use a helper method.
For example: Logger.log(invoice)
If doing something to an object is likely to produce a different result in another scenario, then I'd suggest oneThing.DoSomethingToThing(anotherThing).
For example, you may have two ways of saving a thing in your program, so adopting DatabaseObject.Save(thing) and SessionObject.Save(thing) would be more advantageous than thing.Save(), thing.SaveToDatabase(), or thing.SaveToSession().
I rarely pass no parameters to a class, unless I'm retrieving public properties.
To add to Aeon's answer, it depends on the thing and what you want to do to it. So if you are writing Thing, and DoSomething alters the internal state of Thing, then the best approach is Thing.DoSomething. However, if the action does more than change the internal state, then DoSomething(Thing) makes more sense. For example:
Collection.Add(Thing)
is better than
Thing.AddSelfToCollection(Collection)
And if you didn't write Thing, and cannot create a derived class, then you have no choice but to do DoSomething(Thing).
Even in object oriented programming it might be useful to use a function call instead of a method (or for that matter calling a method of an object other than the one we call it on). Imagine a simple database persistence framework where you'd like to just call save() on an object. Instead of including an SQL statement in every class you'd like to have saved, thus complicating code, spreading SQL all across the code and making changing the storage engine a PITA, you could create an Interface defining save(Class1), save(Class2) etc. and its implementation. Then you'd actually be calling databaseSaver.save(class1) and have everything in one place.
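A rough Go translation of that idea (all names made up): the rest of the code talks to a Saver interface, and the databaseSaver implementation keeps the storage details in one place.

package main

import "fmt"

// Invoice and Customer stand in for Class1 and Class2 above.
type Invoice struct{ ID int }
type Customer struct{ Name string }

// Saver is the interface the rest of the code depends on; how persistence
// actually happens is hidden behind it.
type Saver interface {
    SaveInvoice(inv Invoice) error
    SaveCustomer(c Customer) error
}

// databaseSaver keeps all the storage details in one place, so switching
// the storage engine means swapping only this implementation.
type databaseSaver struct{}

func (databaseSaver) SaveInvoice(inv Invoice) error {
    fmt.Println("saving invoice", inv.ID, "to the database")
    return nil
}

func (databaseSaver) SaveCustomer(c Customer) error {
    fmt.Println("saving customer", c.Name, "to the database")
    return nil
}

func main() {
    var s Saver = databaseSaver{}
    _ = s.SaveInvoice(Invoice{ID: 42})
    _ = s.SaveCustomer(Customer{Name: "Ada"})
}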
I have to agree with Kevin Conner
Also keep in mind the caller of either of the 2 forms. The caller is probably a method of some other object that definitely does something to your Thing :)