I am wondering... I read about Go some time ago and tried to program something in it. It seems quite interesting. But I have reached the topic of handling "exceptions" in this language. I have read about its approach, and it seems reasonable. I would like to know: what are the advantages of the standard exception-based approach over Go's style? What are the pros and cons?
Edit: To be clear, I do not want to start any holy war about exceptions. I just wonder whether this style of error handling has any advantages. What are the actual advantages of this style over standard exceptions? Is it worth wondering about at all?
panic/recover is the moral equivalent of try/catch exceptions. There is a superficial difference (syntax) and a subtle, but important, difference of intended use.
The best explanation of the problems with exceptions in general is "Cleaner, more elegant, wrong", which gives a good overview of the pros and cons of exceptions versus returning error codes.
Go's designers decided that returning error codes from functions is the idiomatic Go way to handle errors, and the language supports multiple return values to make this syntactically easy. While panic/recover is provided, the difference is not one of functionality but of intended use.
Other languages that expose exceptions promote their use, and in practice they are used frequently (and sometimes even misused).
Go discourages the use of panic/recover. You can do it but you're only supposed to do it in very limited scenarios.
If you look at Go's own standard library, most uses of panic are for signaling fatal errors, indicating either an internal error (i.e. a bug) in the library code or a call into the library with wrong data (e.g. passing non-JSON data to JSON decoding functions).
But as the article you linked to points out: "The convention in the Go libraries is that even when a package uses panic internally, its external API still presents explicit error return values."
This is different from languages like C#, Java, Python or C++, where a lot of standard library code can throw exceptions to signal errors. Those languages want you to use exceptions. Go discourages the use of panic/recover.
To summarize:
idiomatic Go style is to use error codes to tell the caller about errors
use panic/recover only in rare cases:
to "crash" your program when encountering internal inconsistency indicating bugs in your code. It's basically a debugging aid.
if it dramatically simplifies error handling in your code (but if the code is to be used by others, never expose such panics to callers)
In practice the important thing is to use the language's idiomatic style. In Go that means returning error codes and avoiding panic/recover. In C# it means using exceptions to signal some of the errors.
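To make the contrast concrete, here is a rough sketch of the two styles in Python (not Go; the function names are made up for illustration). The tuple return mimics Go's multiple return values:

import json

# Exception style: errors propagate implicitly until something catches them.
def load_config_exceptions(text):
    return json.loads(text)  # raises ValueError on bad input

# Go-like style: errors are ordinary return values the caller must inspect.
def load_config_errors(text):
    try:
        return json.loads(text), None
    except ValueError as err:
        return None, err

config, err = load_config_errors("not json")
if err is not None:
    print("could not load config:", err)

The explicit version is more verbose at every call site, but the error path is visible in the code rather than hidden in the call stack, which is exactly the trade-off the Go designers chose.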
I often feel that, after iterating over my code a number of times, I am left with functions or classes or other lines of code that made sense in a previous revision but are not very useful in the new one. I know that a profiler can tell you what parts of your code were called when you ran your test cases. But how does one identify the code that never got called, so it can be removed and what's left is more readable? For example, is there a quick way to know which functions in your code are not called from anywhere and can be safely removed? This may sound like a trivial question for a small code base, but when your code base grows over the years, it becomes an important and not-so-trivial question.
To summarize the question: for different languages, what is the best approach to removing dead code? Are there any language-agnostic solutions or strategies, or does each language provide its own tool for identifying dead code?
We normally program in Java or Python or Objective-C.
The term you're looking for is "code coverage" and there are various tools that will generate that information. You would have to make sure that you exercise every possible path through your code in order to be able to detect "dead code" with such a tool though, which is only possible with a really extensive set of tests.
Most compilers have some level of dead-code detection, but that only detects code that cannot possibly be called, not code that will never be called due to program logic.
edit:
for Python specifically: How can you find unused functions in Python code?
for Java: How to find unused/dead code in java projects, Java: Dead code elimination
for Objective-C: Xcode -- finding dead methods in a project, Cleaning up Objective-C code
For functions, try a global search on the function name, and analyze what you get. Dead code inside a function will usually be findable.
If you suspect a function of being unused, you can remove it, or comment it out, and see if what you've got still compiles.
This only works on functions that are unused because they are no longer called. Functionality that is never used because the control path through the code is no longer active is harder to find, and code analysis tools won't do well at finding it either.
You can use a code coverage report to find functions that are never used, or parts of functions that are never executed. Based on that logic, you can treat them as dead/unused code.
Popular code coverage tools that can be used:
C/C++: gcov & lcov
Python: Coverage.py
Java: JCov
Objective-C: xccov
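As a hedged sketch of the coverage approach, here is how Coverage.py can be driven from Python (run_tests is a hypothetical stand-in for however you exercise your code; the tool is more commonly run from the command line as "coverage run" and "coverage report"):

import coverage

cov = coverage.Coverage()
cov.start()
run_tests()                    # hypothetical: execute your test suite here
cov.stop()
cov.save()
cov.report(show_missing=True)  # lines listed as "Missing" never ran

As noted above, lines the report flags are only candidates for dead code: they may simply be paths your tests never exercised.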
At my previous job, providing all methods with javadoc was mandatory, which resulted in things like:
/**
 * Sets the Frobber.
 *
 * @param frobber The frobber
 */
public void setFrobber(Frobber frobber) { ... }
As you can see, the documentation takes up space and effort, but adds little to the code.
Should documenting all methods be mandatory or optional? Is there a rule for which methods to document? What are pros and cons of requiring every method to be documented?
"providing all methods with javadoc was mandatory"
I strongly suspect that documenting all methods was mandatory, but providing javadoc comments was all that could be automatically enforced and hence all that was uniformly done.
Personally I think it's better to have no javadoc than completely useless javadoc - at least you can see from a glance at the HTML which methods are undocumented, because there are no descriptions of the parameters etc.
Documentation is frequently underrated, because it always seems less important and urgent when you're writing the code, than it does when you're using it later. But the style and form of documentation is often overrated - auto-generated XML nonsense is still nonsense. Given the choice, I'd rather have the code comment // Sets this object to use the specified frobber for all future frobbing, than your zero-information javadoc.
For all I know from your docs, the function doesn't actually modify this object at all. It might call the set() function on frobber, or it might be while (!frobber.isset()) { refrigerator.add(frobber); sleep(3600); refrigerator.remove(frobber); } since that, too, "sets the frobber". I'm sure I read somewhere that "set" is the word with the most distinct definitions in the OED. Brief descriptions are ambiguous and hence misleading, and the purpose of documentation is to stop people relying on your source, and hence on details of your current implementation. My comment doesn't take any longer to write than it took to add "Sets the frobber" and "the frobber" to the IDE-generated javadoc stub. It doesn't explain what frobbing is or when this object does it (hopefully that's covered elsewhere in the class docs), but at least it tries to tell you what the function does.
As for when to mandate documentation: I think every interface must be documented. If you're not defining Java interfaces, the "interface" is every public and protected method, and every package-protected method unless the package is tiny. Implementation doesn't have to be documented, although it should be commented if the way it works is non-obvious. Documentation might be as simple as the sentence in my comment above; you don't necessarily need a separate sentence for each parameter if the method description already says what they are.
If you have code review, then IMO the answer is to review comments and documentation at the same time. If you don't have code review, then you need a cone of shame for whichever developer most recently forced someone else to come over and ask what the code actually does.
The same applies to anyone who relied on undocumented behaviour of a function, with the result that an implementation change that didn't change the interface breaks their code. The way you enforce that code be documented is to complain that you can't call it until you know what it guarantees to do. Arbitrary rules like "javadoc comments must exist" become less important, at least for functions that other developers need to call.
For big projects, frameworks/libraries, or even open source projects that you are creating, it is mandatory. For small personal or private projects it is optional. Having said that, it is always a good idea to document your code, so that if you come back to your project after a year, whether small or big, you know what it was doing. This really helps greatly.
You should always document your code, especially if someone else works or will work on it. Maybe you haven't yet had the chance to work on undocumented legacy code, but it can be a real pain!
As for the comment itself, one thing to avoid is writing a comment just because it is mandatory. Think for a few seconds and you'll find something to say about your method: something that's not already in the method name, something that might not be obvious to the next developer. Explain what your method does, what the corner cases are, and what it expects as input.
And remember:

"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live."

It applies to comments too :)
It's much easier to maintain "self-documenting" code. If you choose good function and variable names and keep functions short (e.g. fewer than 10 lines, with only a single idea per function), this will help keep the purpose of the code clear. And you won't have to try to keep the comments up to date; the only thing worse than no comments is comments that are wrong!
There's a good and recent summary of various points of view at InfoQ.
Documentation of code is very important. But Javadoc (and similar tools) is neither the only nor the best method for it. The biggest downside is that Javadoc documentation must be kept up to date: if the method is changed but the description stays the same, this documentation can do more harm than good.
To avoid documentation drifting out of sync with the code, use code to document. Unit tests show how your code is used, and asserts in the code can ensure that parameters and return values are validated. In one project I added asserts to a calculation to check that the probabilities in it were always between 0 and 1. Later one of these asserts triggered in a use case and pointed me directly to a bug.
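Here is a minimal sketch of that idea in Python (the function and its contract are invented for illustration):

def normalize(weights):
    """Scale a list of non-negative weights so they sum to 1."""
    total = sum(weights)
    probabilities = [w / total for w in weights]
    # Executable documentation: this contract is checked on every run,
    # so unlike a prose comment it cannot silently drift out of date.
    assert all(0.0 <= p <= 1.0 for p in probabilities), "probability out of range"
    return probabilities

print(normalize([1, 2, 7]))  # [0.1, 0.2, 0.7]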
The most important documentation is good naming. If you set a Frobber, then setFrobber is a good name, and the Javadoc given in your example adds nothing to it. frobIt would be a less good name; method3 would be very bad. Code reviews should help to establish good naming.
Javadoc and other documentation should be added if these other methods aren't sufficient. But in that case you need to take care that this documentation is always kept up to date.
Q: Should documenting all methods be mandatory or optional?
A: Mandatory.
Q: Is there a rule for which methods to document?
A: All of them.
Q: What are pros and cons of requiring every method to be documented?
A: Pros: Smart people can spend time focusing on code writing, not code figuring-out. Code is well explained. Code can be passed to newbies. Cons: Whining. Stale comments.
A focus on quality commenting obviates the 'code is self-documenting' issues.
In the case of getters and setters, not every get and set is trivial. Sometimes it is; that's great. When it isn't, the comment should capture the non-obvious information. It's better to be conservative and always have comments than to skip them and have to scrap code and waste time figuring it out.
Final example: The Carmack Inverse Square Root code. Self-documenting, eh?
I just came across an idea in The Structure And Interpretation of Computer Programs:
Data is just dumb code, and code is just smart data
I fail to understand what it means. Can someone help me understand it better?
This is one of the fundamental lessons of SICP and one of the most powerful ideas of computer science. It works like this:
What we think of as "code" doesn't actually have the power to do anything by itself. Code defines a program only within a context of interpretation -- outside of that context, it is just a stream of characters. (Really a stream of bits, which is really a stream of electrical impulses. But let's keep it simple.) The meaning of code is defined by the system within which you run it -- and this system just treats your code as data that tells it what you wanted to do. C source code is interpreted by a C compiler as data describing an object file you want it to create. An object file is treated by the loader as data describing some machine instructions you want to queue up for execution. Machine instructions are interpreted by the CPU as data defining the sequence of state transitions it should undergo.
Interpreted languages often contain mechanisms for treating data as code, which means you can pass code into a function in some form and then execute it -- or even generate code at run time:
#!/usr/bin/perl
# Note that the above line explicitly defines the interpretive context for the
# rest of this file. Without the context of a Perl interpreter, this script
# doesn't do anything.

sub foo {
    my ($expression) = @_;
    # $expression is just a string that happens to be valid Perl
    print "$expression = " . eval("$expression") . "\n";
}

foo("1 + 1 + 2 + 3 + 5 + 8");               # sum of the first six Fibonacci numbers
foo(join(' + ', map { $_ * $_ } (1..10)));  # sum of the first ten squares
Some languages, like Scheme, have a concept of "first-class functions", which means that you can treat a function as data and pass it around without evaluating it until you really want to.
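Python has first-class functions as well; a tiny illustration:

# Functions are ordinary values: they can be stored, passed around,
# and called later, just like any other piece of data.
def twice(f):
    return lambda x: f(f(x))

def increment(x):
    return x + 1

add_two = twice(increment)  # a new function built at run time
print(add_two(5))           # 7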
The upshot is that the division between "code" and "data" is pretty much arbitrary, a function of perspective only. The lower the level of abstraction, the "smarter" the code has to be: it has to contain more information about how it should be executed. On the other hand, the more information the interpreter supplies, the more dumb the code can be, until it starts to look like data with no smarts at all.
One of the most powerful ways to write code is as a simple description of what you need: Data which will be turned into code describing how to get you what you need by the interpretive context. We call this "declarative programming".
For a concrete example, consider HTML. HTML does not describe a Turing-complete programming language. It is merely structured data. Its structure contains some smarts that let it control the behavior of its interpretive context -- but not a lot of smarts. On the other hand, it contains more smarts than the paragraphs of text that appear on an average web page: Those are pretty dumb data.
In the context of security: due to buffer overflows, what you thought of as data and thus harmless (such as an image) can end up being executed as code and p0wn your machine.
In the context of software development: many developers are very afraid of "hardcoding" things and very keen on extracting parameters that might have to change into configuration files. This is often based on the idea that config files are just "data" and thus can be changed easily (perhaps by customers) without raising the issues (compilation, deployment, testing) that changing anything in the code would.
What these developers don't realize is that since this "data" influences the behaviour of the program, it really is code; it could break the program and the only reason not to require complete testing after such a change is that, if done correctly, the configurable values have a very specific, well-documented effect and any invalid value or a broken file structure will be caught by the program.
However, what all too often happens is that the config file structure becomes a programming language in its own right, complete with control flow and everything - one that's badly documented, has a quirky syntax and parser and which only the most experienced developers in the team can touch without breaking the application completely.
So, in a language like Scheme, even code is treated as first-class data. You can treat functions and lambda expressions much like you treat other data, say, passing them into other functions and lambda expressions. I recommend continuing with the text, as this will all become quite clear.
This is something you come to understand from writing a compiler.
One common step in compilers is to transform the program into an abstract syntax tree (AST). The representation is often a tree such as [+, 2, 3], where + is the root and 2 and 3 are the children.
Lisp languages simply treat this as data. So there is no separation between data and code, which are both lists that look like AST trees.
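The same idea can be sketched in Python: the tree below is ordinary data until a few lines of interpreter give it meaning as code.

import operator

OPS = {'+': operator.add, '*': operator.mul}

def evaluate(tree):
    # A "program" represented as plain nested lists, shaped like a Lisp AST.
    if isinstance(tree, list):
        op, left, right = tree
        return OPS[op](evaluate(left), evaluate(right))
    return tree  # a leaf is just a number

print(evaluate(['+', 2, ['*', 3, 4]]))  # 14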
Code is definitely data, but data is definitely not always code. Let's take a basic example: a customer name. It has nothing to do with code; it's a functional (essential), as opposed to a technical (accidental), aspect of an application.
You could probably say that any technical/accidental data is code and that functional/essential data is not.
As a programmer, I often look at some features of the language I'm currently using and think to myself "This is pretty hard to do for a programmer, and could be taken care of automatically by the machine".
One example of such a feature is memory management, which has been automatic for a while in a variety of languages. While memory management is not that hard to do manually most of the time, doing it perfectly all the way through your application without leaking memory is extremely hard. Automation has made it easy again so that we programmers could concentrate on more critical questions.
Are there any features that you think programming languages should automate because the reward/difficulty ratio is just too low (say, for example concurrency)?
This question is intended to be a brainstorm about what the future of programming could be like, and what languages could do for us to let us focus on more important tasks, so please post your wishes even if you don't think automation is practical/feasible. Good answers will point to stuff that is genuinely hard to do in many languages, as opposed to single-language pet-peeves.
Whatever the language can do for me automatically, I will want a way of doing it myself.
Concurrent programming/parallelism that is (semi-)automated, as opposed to having to mess around with threads, callbacks, and synchronisation. Being able to parallelise for loops, such as:
Parallel.ForEach(fooList, item =>
{
    item.PerformLongTask();
});
is just made of win.
Certain languages already support such functionality to a degree, however. Notably, F# has asynchronous workflows. Coming with the release of .NET 4.0, the Parallel Extensions library will make concurrency much easier in C# and VB.NET. I believe Python also has some sort of concurrency library, though I personally haven't used it.
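For the record, Python's standard library does have one. A minimal sketch using concurrent.futures (perform_long_task is a made-up stand-in for some expensive, independent per-item work):

from concurrent.futures import ThreadPoolExecutor

def perform_long_task(item):
    return item * item  # stand-in for real work

foo_list = range(10)
with ThreadPoolExecutor() as pool:
    results = list(pool.map(perform_long_task, foo_list))
print(results)

For CPU-bound work, ProcessPoolExecutor offers the same API while sidestepping Python's interpreter lock.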
What would also be cool is fully automated parallelism in purely functional languages, i.e. not having to change your code even slightly and automatically having it run near-optimally across multiple cores. Note that this can only be done with purely functional languages (such as Haskell, but not Caml/F#). Still, constructs such as the example given above would be very handy for automating parallelism in object-oriented and other languages.
I would imagine that libraries, design patterns, and even entire programming languages oriented towards simple and high-level support for parallelism will become increasingly widespread in the near future, as desktop computers start to move from 2 cores to 4 and then 8 cores and the advantage of automated concurrency becomes much more evident.
exec("Build a system to keep the customer happy, based on requirements.txt");
In Java, create beans less verbosely.
For example:
bean Student
{
    String name;
    int id;
    type1 property1;
    type2 property2;
}
and this would create a bean with private fields, default accessors, toString, hashCode, equals, etc.
In Java I would like a keyword that would make the entire class immutable.
E.g.
public immutable class Xyz {
}
And the compiler would warn me if any conditions of immutability were broken.
Concurrency. That was my main idea when asking this question. This is going to get more and more important with time, since current CPUs already have up to 8 logical cores (4 cores + hyperthreading), and 12 logical cores will appear in a few months. In the future, we are going to have a hell of a lot of cores at our disposal, but most programming languages only make it easy for us to use one at a time.
The Threads + Synchronization model that is exposed by most programming languages is extremely low level, and very close to what the CPU does. To me, the current level of concurrency language support is roughly equivalent to the memory management support in C: Not integrated, but some things can be delegated to the OS (malloc, free).
I wish some language would come up with a suitable abstraction that either makes the Threads + Synchronization model easier, or simply hides it from us completely (just as automatic memory management made good old malloc/free obsolete in Java).
Some functional languages such as Erlang have a reputation of having good multithreading support, but the brain-switch required to do functional programming doesn't really make the whole ordeal much easier.
.NET:
A warning when manipulating strings with methods such as Replace without assigning the result (the new string) to a variable, because if you don't know that strings are immutable, this issue will frustrate you.
In C++, type inference for variable declarations, so that I don't need to write
for (vector<some_longwinded_type>::const_iterator i = v.begin(); i != v.end(); ++i) {
...
}
Luckily this is coming in C++11 in the form of auto:
for (auto i = v.begin(); i != v.end(); ++i) {
...
}
Coffee. I mean, the language is called Java, so it should be able to make my coffee! I hate getting up from programming, going to the coffee pot, and finding out someone from marketing has taken the last cup and not made another pot.
Persistence. It seems to me we write far too much code to deal with persistence, when this really should be a configuration problem.
In C++, enum-to-string.
In Ada, the language defines the 'Image attribute of an enumerated type as a function that returns a string corresponding to the textual representation of an enumeration value.
C++ provides no such clean facility. It takes several lines of very arcane preprocessor macro black magic to get a rough equivalent.
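For comparison, here is a sketch of what such a built-in facility can look like, using Python's standard enum module, which gives the textual name for free, much like Ada's 'Image attribute:

from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

print(Color.GREEN.name)  # "GREEN"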
For languages that provide operator overloading, provide automatically generated overloads for symmetric operations when only one operation is defined. For example, if the programmer provides an equality operator but not an inequality operator, the language could easily generate one. In C++, the same could be done for copy constructors and assignment operators.
I think that automatically generating one side of a symmetric operation would be nice. Of course, I would definitely want a way to explicitly say "don't do that" when needed. I guess providing the implementation of both sides, with one of them being private and empty, could do the job.
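An existing instance of this idea, sketched in Python: functools.total_ordering generates the missing comparison operators from a defined pair.

from functools import total_ordering

@total_ordering
class Version:
    # Define __eq__ and one ordering operator; the remaining comparisons
    # (__le__, __gt__, __ge__) are generated automatically.
    def __init__(self, number):
        self.number = number

    def __eq__(self, other):
        return self.number == other.number

    def __lt__(self, other):
        return self.number < other.number

print(Version(2) >= Version(1))  # True, derived from __eq__ and __lt__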
Everything that LINQ does. C# has spoiled me and I now find it hard to do anything with collections in any other language. In Python I use list comprehensions a lot, but they are not quite as powerful as LINQ. I haven't found any other language that makes working with collections as easy as in C#.
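For what it's worth, a small comprehension-style query in Python (data invented for illustration), covering the filter/project/sort part of a typical LINQ expression:

people = [("Ann", 34), ("Bob", 25), ("Cid", 41)]
names = sorted(name for name, age in people if age > 30)
print(names)  # ['Ann', 'Cid']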
In the Visual Studio environment, I want "Remove unused usings" to run across all files in the project. I find it a significant waste of time to have to manually open each individual file and invoke this operation on a per-file basis.
From a dynamic languages perspective, I'd like to see better tool support. Steve Yegge has a great post on this. For instance, there are lots of cases where a tool could look inside various code paths and determine if the methods or functions existed and provide the equivalent of the compiler smoke test. Obviously, if you're using lots and lots of truly dynamic code, this won't work, but the fact is, you probably aren't, so it would be pretty nice if Python, for instance, would tell you that .ToLowerCase() wasn't a valid function at compile time, rather than waiting until you hit the else clause.
s = "a Mixed Case String"
if True:
    s = s.lower()
else:
    s = s.ToLowerCase()
Easy: initialize variables in C/C++ just like C# does. It would have saved me multiple sessions of debugging in other people's code.
Of course there would be a keyword when you specifically do not want to init a var.
noinit float myVal; // undefined
float my2ndVal; // 0.0f