A term that I see every now and then is "Cyclomatic Complexity". Here on SO I saw some Questions about "how to calculate the CC of Language X" or "How do I do Y with the minimum amount of CC", but I'm not sure I really understand what it is.
On the NDepend website, I saw an explanation that basically says "the number of decisions in a method; each if, for, &&, etc. adds +1 to the CC 'score'". Is that really it? If yes, why is this bad? I can see that one might want to keep the number of if-statements fairly low to keep the code easy to understand, but is that really all there is to it?
Or is there some deeper concept to it?
I'm not aware of a deeper concept. I believe it's generally considered in the context of a maintainability index. The more branches there are within a particular method, the more difficult it is to maintain a mental model of that method's operation (generally).
Methods with higher cyclomatic complexity are also more difficult to obtain full code coverage on in unit tests. (Thanks Mark W!)
That brings all the other aspects of maintainability in, of course: likelihood of errors, regressions, and so forth. The core concept is pretty straightforward, though.
Cyclomatic complexity measures the number of times you must execute a block of code with varying parameters in order to execute every path through that block. A higher count is bad because it increases the chances for logical errors escaping your testing strategy.
Cyclomatic complexity = number of decision points + 1
The decision points are your conditional statements: if, if … else, switch, for loops, while loops, etc.
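For example, counting the way tools such as NDepend describe it, in a small made-up Java method (the method and its names are hypothetical, just to show where the +1s come from):

    // Hypothetical example: decision points are marked; CC = 5 + 1 = 6.
    public int classify(int value, boolean verbose) {
        int score = 0;
        if (value > 100) {                  // decision point 1
            score = 3;
        } else if (value > 10) {            // decision point 2 (the extra "if")
            score = 2;
        }
        for (int i = 0; i < value; i++) {   // decision point 3 (the loop condition)
            score++;
        }
        if (verbose && score > 5) {         // decision points 4 and 5 (the && counts too)
            System.out.println("score: " + score);
        }
        return score;
    }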
The following chart describes the type of the application:
Cyclomatic complexity 1 – 10: considered a normal application
Cyclomatic complexity 11 – 20: a moderate application
Cyclomatic complexity 21 – 50: a risky application
Cyclomatic complexity more than 50: an unstable application
Wikipedia may be your friend on this one: Definition of cyclomatic complexity
Basically, you have to imagine your program as a control flow graph, and then:
The complexity is (...) defined as:
M = E − N + 2P
where
M = cyclomatic complexity,
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components
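As a quick sanity check (drawing the graph one common way): a function whose body is a single if/else has 4 nodes (the decision, the two branches, and the exit) and 4 edges, all in one connected component, so M = 4 − 4 + 2×1 = 2, which matches the "number of decision points + 1" rule from the other answers.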
CC is a concept that attempts to capture, in a single number, how complex your program is and how hard it is to test.
Yep, that's really it. The more execution paths your code can take, the more things that must be tested, and the higher probability of error.
Another interesting point I've heard:
The places in your code with the deepest indentation should have the highest CC. These are generally the most important areas to ensure testing coverage, because they're expected to be harder to read and maintain. As other answers note, they are also the regions of code where it's hardest to achieve full coverage.
Cyclomatic Complexity really is just a scary buzzword. In fact, it's a measure of code complexity used in software development to point out the more complex parts of code (which are more likely to be buggy, and therefore have to be tested very carefully and thoroughly). You can calculate it using the E - N + 2P formula, but I would suggest you have it calculated automatically by a plugin. I have heard of a rule of thumb that you should strive to keep the CC below 5 to maintain good readability and maintainability of your code.
I have just recently experimented with the Eclipse Metrics Plugin on my Java projects. It has a really nice and concise help file (which integrates with the regular Eclipse help) where you can read more definitions of various complexity measures, plus tips and tricks on improving your code.
That's it. The idea is that a method with a low CC has fewer forks, loops, etc., which all make a method more complex. Imagine reviewing 500,000 lines of code with an analyzer and seeing a couple of methods that have an order of magnitude higher CC. This lets you focus on refactoring those methods for better understanding (it's also common for high-CC methods to have a high bug rate).
Each decision point in a routine (loop, switch, if, etc.) essentially boils down to an if-statement equivalent. For each if you have 2 code paths that can be taken. So with the 1st branch there are 2 code paths, with the 2nd there are 4 possible paths, with the 3rd there are 8, and so on. With N such branches in sequence there are up to 2**N code paths.
This makes it difficult to understand the behavior of code and to test it when N grows beyond some small number.
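A hypothetical Java sketch of how quickly that multiplies (the method is made up purely for illustration):

    // Three independent, sequential branches: 2 * 2 * 2 = 8 possible execution paths,
    // so exhaustively testing every path already needs 8 input combinations.
    public String describe(boolean a, boolean b, boolean c) {
        StringBuilder sb = new StringBuilder();
        if (a) { sb.append("A"); } else { sb.append("a"); }   // 2 paths so far
        if (b) { sb.append("B"); } else { sb.append("b"); }   // 4 paths so far
        if (c) { sb.append("C"); } else { sb.append("c"); }   // 8 paths in total
        return sb.toString();
    }

Note that the cyclomatic complexity of this method is only 3 + 1 = 4: CC counts branches rather than enumerating every path, which is why it stays usable as a metric even while the path count explodes.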
The answers provided so far do not mention the correlation of software quality with cyclomatic complexity. Research has shown that having a lower cyclomatic complexity metric should help develop software that is of higher quality. It can help with the software quality attributes of readability, maintainability, and portability. In general one should attempt to obtain a cyclomatic complexity metric of between 5 and 10.
One of the reasons for using metrics like cyclomatic complexity is that, in general, a human being can only keep track of about 7 (plus or minus 2) pieces of information simultaneously. Therefore, if your software is overly complex, with many decision paths (i.e. it has a high cyclomatic complexity metric), it is unlikely that you will be able to visualize how it will behave. This would most likely lead to developing erroneous or bug-ridden software. More information about this can be found here and also on Wikipedia.
Cyclomatic complexity is computed using the control flow graph: it is a quantitative measure of the number of linearly independent paths through a program's source code (counting constructs such as if, if/else, for, and while).
Cyclomatic complexity is basically a metric for finding areas of code that need more attention for maintainability; it is essentially an input to refactoring.
It definitely gives an indication of areas for code improvement, in terms of avoiding deeply nested loops, conditions, etc.
That's sort of it. However, each branch of a "case" or "switch" statement tends to count as 1. In effect, this means CC hates case statements, and any code that requires them (command processors, state machines, etc).
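For instance, in a made-up Java dispatcher, most tools count each non-default case label as a decision point:

    // Hypothetical example: four case labels, so roughly CC = 4 + 1 = 5,
    // even though the method is trivial to read.
    public int dispatch(char command) {
        switch (command) {
            case 'a': return 1;   // +1
            case 'b': return 2;   // +1
            case 'c': return 3;   // +1
            case 'd': return 4;   // +1
            default:  return 0;
        }
    }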
Consider the control flow graph of your function, with an additional edge running from the exit to the entrance. The cyclomatic complexity is the maximum number of cuts we can make without separating the graph into two pieces.
For example:
def F(condition1, condition2):
    if condition1:
        ...
    else:
        ...
    if condition2:
        ...
    else:
        ...
Control Flow Graph
You can probably intuitively see why the linked graph has a cyclomatic complexity of 3.
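Counting it out (at least the way I would draw that graph, and using the E − N + 2P formula from the Wikipedia definition above): 7 nodes (the two decisions, their four branch bodies, and the exit) and 8 edges in one connected component, so 8 − 7 + 2 = 3, which agrees with the two decision points + 1.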
Cyclomatic complexity is a measure of how complex a unit of software is. It measures the number of different paths a program might follow through conditional logic constructs (if, while, for, switch/case, etc.). If you would like to learn more about calculating it, here is a good YouTube video you can watch: https://www.youtube.com/watch?v=PlCGomvu-NM
It is important in designing test cases because it reveals the different paths or scenarios a program can take.
"To have good testability and maintainability, McCabe recommends
that no program module should exceed a cyclomatic complexity of 10"(Marsic,2012, p. 232).
Reference:
Marsic, I. (2012, September). Software Engineering. Rutgers University. Retrieved from www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf
Related
The ultimate in optimizing compilers would be one that searched among the space of programs for a program equivalent to the original but faster. This has been done in practice for very small basic blocks: https://en.wikipedia.org/wiki/Superoptimization
It sounds like the hard part is the exponential nature of the search space, but actually it's not; the hard part is, supposing you find what you're looking for, how do you prove that the new, faster program is really equivalent to the original?
Last time I looked into it, some progress had been made on proving certain properties of programs in certain contexts, particularly at a very small scale when you are talking about scalar variables or small fixed bit vectors, but not really on proving equivalence of programs at a larger scale when you are talking about complex data structures.
Has anyone figured out a way to do this yet, even 'modulo solving this NP-hard search problem that we don't know how to solve yet'?
Edit: Yes, we all know about the halting problem. It's defined in terms of the general case. Humans are an existence proof that this can be done for many practical cases of interest.
You're asking a fairly broad question, but let me see if I can get you going.
John Regehr does a really nice job surveying some relevant papers on superoptimizers: https://blog.regehr.org/archives/923
The thing is you don't really need to prove whole program equivalence for these types of optimizations. Instead you just need to prove that given the CPU is in a particular state, 2 sequences of code modify the CPU state in the same way. To prove this across many optimizations (i.e. at scale), typically you might first throw some random inputs at both sequences. If they're not equivalent bits of code then you might get lucky and very quickly show this (proof by contradiction) and you can move on. If you haven't found a contradiction, you can now try to prove equivalence via a computationally expensive SAT solver. (As an aside, the STOKE paper that Regehr mentions is particularly interesting if you're interested in superoptimizers.)
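To make the "random inputs first, expensive proof later" filter concrete, here is a toy Java sketch (entirely made up, not taken from STOKE or any of the papers); the SAT/SMT step is only hinted at in the comments:

    import java.util.Random;
    import java.util.function.IntUnaryOperator;

    public class EquivalenceFilter {
        // Cheap filter: hammer both code sequences with random inputs.
        // One mismatch proves they are NOT equivalent; surviving the filter only
        // means "maybe equivalent", at which point you hand the pair to a SAT/SMT
        // solver for an actual proof.
        static boolean survivesRandomTesting(IntUnaryOperator original,
                                             IntUnaryOperator candidate,
                                             int trials) {
            Random rng = new Random(42);
            for (int i = 0; i < trials; i++) {
                int x = rng.nextInt();
                if (original.applyAsInt(x) != candidate.applyAsInt(x)) {
                    return false; // counterexample found: definitely not equivalent
                }
            }
            return true; // no counterexample found: still needs a formal proof
        }

        public static void main(String[] args) {
            IntUnaryOperator original  = x -> x * 2;
            IntUnaryOperator candidate = x -> x << 1; // a plausible "optimized" sequence
            System.out.println(survivesRandomTesting(original, candidate, 1_000_000));
        }
    }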
Now looking at whole program semantic equivalence, one approach here is the one used by the CompCert compiler. Essentially that compiler is proving this theorem:
If CompCert is able to translate C code X into assembly code Y then X and Y are semantically equivalent.
In addition, CompCert does apply a few compiler optimizations, and indeed these optimizations are often the areas that traditional compilers get wrong. Perhaps something like CompCert is what you're after; in that case, the compiler goes about things via a series of refinement passes, where it proves that if each pass succeeds, the result is semantically equivalent to the previous pass.
I am currently reading / learning Erlang, and it is often noted that it is not (really) suitable for 'heavy number crunching'. Now I often come across this phrase or similar, but never really know what 'heavy' exactly means.
How does one decide if an operation is computationally intensive? Can it be quantified before testing?
Edit:
Is there a difference between the quantity of calculations, the complexity of the algorithm, or the size of the input values?
For example: 1000 computations of 28303 / 4 vs. 100 computations of 239847982628763482 / 238742.
When you are talking about Erlang specifically, I doubt that you in general want to develop applications that require intensive number crunching with it. That is - you don't learn Erlang to code a physics engine in it. So don't worry about Erlang being too slow for you.
Moving from Erlang to the question in general, these things almost always come down to relativity. Let's ignore number crunching and ask a general question about programming: How fast is fast enough?
Well, fast enough depends on:
what you want to do with the application
how often you want to do it
how fast your users expect it to happen
If reading a file in some program takes 1ms or 1000ms - is 1000 ms to be considered "too slow"?
If ten files have to be read in quick succession - yes, probably way too slow. Imagine an XML parser that takes 1 second to simply read an XML file from disk - horrible!
If a file on the other hand only has to be read when a user manually clicks a button every 15 minutes or so then it's not a problem, e.g. in Microsoft Word.
The reason nobody says exactly what too slow is, is because it doesn't really matter. The same goes for your specific question. A language should rarely, if ever, be shunned for being "slow".
And last but not least, if you develop some monstrous project in Erlang and, down the road, realise that dagnabbit! you really need to crunch those numbers - then you do your research, find good libraries and implement algorithms in the language best suited for it, and then interop with that small library.
With this sort of thing you'll know it when you see it! Usually this refers to situations where it matters whether you pick an int, float, or double. Think physical simulations or Monte Carlo methods, where you want to do millions of calculations.
To be honest, in reality you just write those bits in C and use your favourite other language to run them.
I once asked a question about number crunching in CouchDB map/reduce: CouchDB Views: How much processing is acceptable in map reduce?
What's interesting in one of the answers is this:
suppose you had 10,000 documents and they take 1 second each to process (which is way higher than I have ever seen). That is 10,000 seconds or 2.8 hours to completely build the view. However once the view is complete, querying any row (?key=...) or row slice (?startkey=...&endkey=...) takes the same time as querying for documents directly. Lookup time is O(log n) for the document count. In other words, even if it takes 1 second per document to execute the map, it will take a few milliseconds to fetch the result. (Of course, the view must build first, since it is actually an index.)
I think that if you look at your current question in those terms, it's an interesting angle from which to think about it. On the topic of the language's speed/optimization:
How does one decide if an operation is computationally intensive?
Facebook asked this question about PHP, and ended up writing HipHop to solve the problem -- it compiles PHP into C++. They said the reason PHP is much slower than C++ is that the PHP language is all dynamic lookup, and therefore much processing is required to do anything with variables, arrays, dynamic typing (which is a source of slowdown), etc.
So, questions you can ask are: does Erlang use dynamic lookup? Does it have static typing? Is it compiled?
Is there a difference between the quantity of calculations, the complexity of the algorithm, or the size of the input values? For example: 1000 computations of 28303 / 4 vs. 100 computations of 239847982628763482 / 238742.
So, with that said, the fact that you can even give specific types to numbers of different kinds means you SHOULD be using the right types, and that will definitely give a performance increase.
Suitability for number crunching depends on the library support and the inherent nature of the language. For example, a pure functional language will not allow any mutable variables, which makes it extremely interesting to implement any equation-solving type of problem. Erlang probably falls into this category.
What is the purpose of abstraction in coding:
Programmer's efficiency or program's efficiency?
Our professor said that it is used merely to help the programmer comprehend and modify programs faster to suit different scenarios. He also contended that it adds an extra burden on the program's performance. I am not exactly clear on what this means.
Could someone kindly elaborate?
I would say he's about half right.
The biggest purpose is indeed to help the programmer. The computer couldn't care less how abstracted your program is. However, there is a related, but different, benefit - code reuse. This isn't just for readability though, abstraction is what lets us plug various components into our programs that were written by others. If everything were just mixed together in one code file, and with absolutely no abstraction, you would never be able to write anything even moderately complex, because you'd be starting with the bare metal every single time. Just writing text on the screen could be a week long project.
About performance, that's a questionable claim. I'm sure it depends on the type and depth of the abstraction, but in most cases I don't think the system will notice a hit. Especially modern compiled languages, which actually "un-abstract" the code for you (things like loop unrolling and function inlining) sometimes to make it easier on the system.
Your professor is correct; abstraction in coding exists to make it easier to do the coding, and it increases the workload of the computer in running the program. The trick, though, is to make the (hopefully very tiny) increase in computer workload be dwarfed by the increase in programmer efficiency.
For example, at an extremely low level: object-oriented code is an abstraction that helps the programmer, but adds some overhead to the program in the end -- extra 'stuff' in memory and extra function calls.
Since Abstraction is really the process of pulling out common pieces of functionality into re-usable components (be it abstract classes, parent classes, interfaces, etc.) I would say that it is most definitely a Programmer's efficiency.
Saying that abstraction comes at the cost of performance is treading on unstable ground at best, though. With most modern languages, abstraction (and thus enhanced flexibility) can be had at little to no cost to the performance of the application.
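As a small, hypothetical Java sketch of that kind of reuse (all the names are invented): the calling code depends only on the abstraction, so implementations can be swapped without touching it.

    import java.util.HashMap;
    import java.util.Map;

    // The abstraction: callers only know about this interface.
    interface Storage {
        void save(String key, String value);
    }

    // One interchangeable implementation...
    class InMemoryStorage implements Storage {
        private final Map<String, String> data = new HashMap<>();
        public void save(String key, String value) { data.put(key, value); }
    }

    // ...and another; Report never needs to know which one it got.
    class ConsoleStorage implements Storage {
        public void save(String key, String value) {
            System.out.println(key + " = " + value);
        }
    }

    class Report {
        private final Storage storage; // depends on the abstraction only
        Report(Storage storage) { this.storage = storage; }
        void publish() { storage.save("report", "quarterly numbers"); }
    }

Both new Report(new InMemoryStorage()) and new Report(new ConsoleStorage()) work with the same publish() code, which is the reuse being described.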
What abstraction is is effectively outlined in the link Tesserex posted. To your professor's point about adding an additional burden on the program, this is actually fairly true. However, the burden in modern systems is negligible. Think of it in terms of what actually happens when you call a method: each additional method you call requires adding a number of additional data structures to the stack and then handling the return values also placed on the stack. So for instance, calling
c = add(a, b);
which looks something like
public int add(int a, int b){
return a + b;
}
requires pushing two integers onto the stack for the parameters and then pushing an additional one onto the stack for the return value. Compare that to computing a + b directly: if both values are already in registers, no memory interaction is required -- it's a single instruction. Given that memory operations are much slower than register operations, you can see where the notion of a performance hit comes from.
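A hedged illustration (hypothetical Java; on a modern JIT such as HotSpot a call this small is almost always inlined, so both loops typically end up as the same machine code):

    public class InlineDemo {
        static int add(int a, int b) {
            return a + b;
        }

        public static void main(String[] args) {
            int viaCall = 0;
            int direct  = 0;
            for (int i = 0; i < 1_000_000; i++) {
                viaCall = add(viaCall, i); // abstracted: a method call per iteration
                direct  = direct + i;      // "un-abstracted": the addition written out
            }
            System.out.println(viaCall == direct); // true: same result, and usually the same speed
        }
    }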
Ultimately, every method call you make is going to increase the overhead of your program a little bit. However, as @Tesserex points out, it's minute in most modern computer systems, and as @Andrew Barber points out, that compromise is usually totally dwarfed by the increase in programmer efficiency.
Abstraction is a tool to make it easier for the programmer. The abstraction may or may not have an effect on the runtime performance of the system.
For an example of an abstraction that doesn't alter performance, consider assembly. Mnemonics like mov and add are an abstraction that makes opcodes easier to remember, as compared to remembering the numeric opcodes and other instruction-encoding details. However, given the 1-to-1 mapping, I'd suggest it's clear that this abstraction has zero effect on final performance.
It's not a clear-cut situation where abstraction simply makes life easier for the programmer at the expense of more work for the computer.
Although a higher level of abstraction typically adds at least a small amount of overhead to executing a discrete unit of code, it's also what allows the programmer to think about a problem in larger "units", so he can do a better job of understanding an entire problem and avoid executing many (or at least some) of those discrete units of code.
Therefore, a higher level of abstraction will often lead to faster-executing programs as long as you avoid adding too much overhead. The problem, of course, is that there's no easy or simple definition of how much overhead is too much. That stems largely from the fact that the amount of overhead that's acceptable depends heavily on the problem being solved, and the degree to which working at a higher level of abstraction allows the programmer to recognize operations that are truly unnecessary, and eliminate them.
In physics, it's the ability of particles to exist in multiple/parallel dynamic states at a particular point in time. In computing, would it be the ability of a data bit to equal 1 or 0 at the same time, a third value like NULL (unknown), or multiple values? How can this technology be applied to computer processors, programming, security, etc.? Has anyone built a practical quantum computer or developed a quantum programming language where, for example, the program code dynamically changes or is autonomous?
I have done research in quantum computing, and here is what I hope is an informed answer.
It is often said that qubits as you see them in a quantum computer can exist in a "superposition" of 0 and 1. This is true, but in a more subtle way than you might first guess. Even with a classical computer with randomness, a bit can exist in a superposition of 0 and 1, in the sense that it is 0 with some probability and 1 with some probability. Just as when you roll a die and don't look at the outcome, or receive e-mail that you haven't yet read, you can view its state as a superposition of the possibilities. Now, this may sound like just flim-flam, but the fact is that this type of superposition is a kind of parallelism and that algorithms that make use of it can be faster than other algorithms. It is called randomized computation, and instead of superposition you can say that the bit is in a probabilistic state.
The difference between that and a qubit is that a qubit can have a fat set of possible superpositions with more properties. The set of probabilistic states of an ordinary bit is a line segment, because all there is is a probability of 0 or 1. The set of states of a qubit is a round 3-dimensional ball. Now, probabilistic bit strings are more complicated and more interesting than just individual probabilistic bits, and the same is true of strings of qubits. If you can make qubits like this, then actually some computational tasks wouldn't be any easier than before, just as randomized algorithms don't help with all problems. But some computational problems, for example factoring numbers, have new quantum algorithms that are much faster than any known classical algorithm. It is not a matter of clock speed or Moore's law, because the first useful qubits could be fairly slow and expensive. It is only sort-of parallel computation, just as an algorithm that makes random choices is only in a weak sense making all choices in parallel. But it is "randomized algorithms on steroids"; that's my favorite summary for outsiders.
Now the bad news. In order for a classical bit to be in a superposition, it has to be a random choice that is secret from you. Once you look at a flipped coin, the coin "collapses" to either heads for sure or tails for sure. The difference between that and a qubit is that in order for a qubit to work as one, its state has to be secret from the rest of the physical universe, not just from you. It has to be secret from wisps of air, from nearby atoms, etc. On the other hand, for qubits to be useful for a quantum computer, there has to be a way to manipulate them while keeping their state a secret. Otherwise its quantum randomness or quantum coherence is wrecked. Making qubits at all isn't easy, but it is done routinely. Making qubits that you can manipulate with quantum gates, without revealing what is in them to the physical environment, is incredibly difficult.
People don't know how to do that except in very limited toy demonstrations. But if they could do it well enough to make quantum computers, then some hard computational problems would be much easier for these computers. Others wouldn't be easier at all, and a great deal is unknown about which ones can be accelerated and by how much. It would definitely have various effects on cryptography; it would break the widely used forms of public-key cryptography. But other kinds of public-key cryptography have been proposed that could be okay. Moreover, quantum computing is related to the quantum key distribution technique, which looks very safe, and secret-key cryptography would almost certainly still be fairly safe.
The other area where the word "quantum" comes up in computing involves an "entangled pair". Essentially, if you can create an entangled pair of particles which have a physical "spin", quantum physics dictates that the spin of each particle will always be opposite.
If you could create an entangled pair and then separate them, you could use the device to transmit data without interception by changing the spin on one of the particles. You could then create a signal modulated by the particles' information which is theoretically unbreakable, as you cannot know what spin was on the particles at any given time by intercepting the information between the two signal points.
A whole lot of very interested organisations are researching this technique for secure communications.
Yes, there is quantum encryption, by which if someone tries to spy on your communication, it destroys the datastream such that neither they nor you can read it.
However, the real power of quantum computing lies in the fact that a qubit can be in a superposition of 0 and 1. Big deal. However, if you have, say, eight qubits, you can now represent a superposition of all integers from 0 to 255. This lets you do some rather interesting things in polynomial instead of exponential time. Factorization of large numbers (i.e., breaking RSA, etc.) is one of them.
There are a number of applications of quantum computing.
One huge one is the ability to solve NP-hard problems in P-time, by using the indeterminacy of qubits to essentially brute-force the problem in parallel.
(The struck-out sentence is false. Quantum computers do not work by brute-forcing all solutions in parallel, and they are not believed to be able to solve NP-complete problems in polynomial time. See e.g. here.)
Just an update on the quantum computing industry, based on Greg Kuperberg's answer:
The D-Wave 2 system uses quantum annealing.
Superposed quantum states collapse to a single state when an observation happens. Current quantum annealing technology applies a physical force to two quantum bits; the force adds constraints to the qubits, so that when an observation happens, each qubit has a higher probability of collapsing to a result that we want to see.
Reference:
How does a quantum machine work
I monitor recent non-peer-reviewed articles on the subject; this is what I extrapolate from what I have read. A qubit, in addition to what has been said above (namely that it can hold values in superposition), can also hold multiple bits, for example spin up/down and horizontal/vertical polarization (abbreviated +H, -H, +V, -V, LH, LV). Not all of the combinations are valid, and additional values can be encoded depending on the type of qubit.
Each type is used somewhat like RAM vs. ROM, etc.: a photon with a wavelength, an electron with a charge, a photon with a charge, a photon with a spin -- you get the idea. Some combinations are not valid, and some require additional algorithms in order to pass the argument to the next variable (the location where data is stored) or qubit (the location of the superposition of values to be returned), if you will, simply because the use of wires is by necessity limited due to size and space. One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. In November 2011, researchers factorised 143 using 4 qubits. That same year, D-Wave Systems announced the first commercial quantum annealer on the market, by the name D-Wave One; the company claims this system uses a 128-qubit processor chipset. In May 2013, Google Inc. announced that it was launching the Quantum AI Lab, hopefully to boost AI. I really do hope I didn't waste anyone's time with things they already knew. If you learned something, please upvote.
As I cannot yet comment: the number of states really depends on what type of qubit you are working with, for example the UNSW silicon qubit vs. a diamond nitrogen-vacancy qubit, or solid-state NMR phosphorus-in-silicon vs. liquid NMR of the same.
I started working on GPGPU some days ago and successfully implemented Cholesky factorization with good performance. Then I attended a conference on High Performance Computing where some people said that "GPGPU is a hack".
I am still confused about what this means and why they were calling it a hack. One person said it is a hack because you are converting your problem into a matrix and doing operations on it. But I am still confused: do people really think it is a hack, and if so, why?
Can anyone help me understand why they called it a hack when I found nothing wrong with it?
One possible reason for such an opinion is that the GPU was not originally intended for general-purpose computation. Also, programming a GPU is less traditional and more hardcore, and therefore more likely to be perceived as a hack.
The point that "you convert the problem into a matrix" is not a reasonable objection at all. Whatever task you solve by writing code, you choose reasonable data structures. In the case of the GPU, matrices are likely the most reasonable data structures, and using them is not a hack but just a natural choice.
However, I suppose it's only a matter of time before GPGPU becomes widespread. People just have to get used to the idea. After all, who cares which unit of the computer runs the program?
On the GPU, efficient memory access is paramount to achieving optimal performance. This often involves restructuring or even choosing entirely new algorithms and data structures. This is one reason why GPU programming can be perceived as a hack.
Secondly, adapting an existing algorithm to run on the GPU is not in and of itself science. The relatively low scientific contribution of some GPU algorithm-related papers has led to a negative perception of GPU programming as strictly "engineering".
Obviously, only the person who said that can say for certain why he said it, but, here's my take:
A "Hack" is not a bad thing.
It forces people to learn new programming languages and concepts. For people who are just trying to model the weather or protein folding or drug reactions, this is an unwelcome annoyance. They didn't really want to learn FORTRAN (or whatever) in the first place, and now they have to learn another programming system.
The programming tools are NOT very mature yet.
The hardware isn't as reliable as CPUs (yet), so all of the calculations have to be done twice to make sure you've got the right answer. One reason for this is that GPUs don't come with error-correcting memory yet, so if you're trying to build a supercomputer with thousands of processors, the probability of a cosmic ray flipping a bit in your numbers approaches certainty.
As for the comment "you are converting your problem into a matrix and doing operations on it", I think that shows a lot of ignorance. Virtually ALL of high-performance computing fits that description!
One of the major problems in GPGPU for the past few years and probably for the next few is that programming them for arbitrary tasks was not very easy. Up until DX10 there was no integer support among GPUs and branching is still very poor. This is very much a situation where in order to get maximum benefit you have to write your code in a very awkward manner to extract all sorts of efficiency gains from the GPU. This is because you're running on hardware that is still dedicated to processing polygons and textures, rather than abstract parallel tasks.
Obviously, that's my take on it, and YMMV.
GPGPU harks back to the days of the math co-processor. A hack is a shortcut to solving a long-winded problem. GPGPU is a hack, just like NAT on top of IPv4 is a hack. Computational problems, just like networks, are getting bigger as we try to do more, and GPGPU is a useful interim solution; whether it stays outside the core CPU chip with a separate, cranky API, or gets sucked into the CPU (via the API or in manufacturing), is up to the pathfinders.
I suppose he meant that using GPGPU forced you to restructure your implementation, so that it fitted the hardware, not the problem domain. Elegant implementation should fit the latter.
Note, that the word "hack" may have several different meanings:
http://www.urbandictionary.com/define.php?term=hack