So, I've done a bit of programming in my day - Java, C#, C++ - and I've always had a fascination with computers in general. One thing that I would really like to learn, and what I think would really help my programming skills, is how software tells the hardware what to do.
I'm aware that's quite the tall order: I know it differs per language and per OS. I'm not asking for an actual answer so much as a starting point. Also, if this is actually a waste of time - if it wouldn't really help my programming, or wouldn't be worth it because it's a massive amount of stuff to learn that would take years to pay off - saying so would be helpful too.
I can't escape the feeling that I'm asking a stupid question.
What we commonly call hardware can be thought of as a (big) number of electrical devices that function according to some specific rules. Put some electrons on the input(s), and the output(s) will vary according to a fixed rule (similar devices behave the same). The best known such device is the transistor. Transistors can be connected in such a way that they perform logical functions, the most used being NAND (NOT-AND). Using NAND gates, any kind of logic can be (and is) implemented. To sum it up, hardware does logic functions by moving electrons around.
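To make that last point concrete, here is a minimal sketch in C# (the gates are modeled as plain boolean functions rather than real transistors) showing that NOT, AND and OR can all be built out of nothing but NAND:

using System;

class NandDemo
{
    // The only primitive gate: NAND (NOT-AND).
    static bool Nand(bool a, bool b) => !(a && b);

    // Every other gate can be derived from NAND alone.
    static bool Not(bool a) => Nand(a, a);
    static bool And(bool a, bool b) => Not(Nand(a, b));
    static bool Or(bool a, bool b) => Nand(Not(a), Not(b));

    static void Main()
    {
        Console.WriteLine(And(true, false)); // False
        Console.WriteLine(Or(true, false));  // True
        Console.WriteLine(Not(false));       // True
    }
}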
Now comes the interesting question: what is software? People tend to think that because there is thought involved in writing software, it doesn't exist in the real world. Which isn't true. The program is stored in RAM* when you write it, effectively being a pattern of electrons. That pattern then undergoes some transformations (compiler, assembler); during those steps it changes from something that is meaningful to humans into something that can be used as input to the logic functions above.
On a tangent: an RS flip-flop is an interesting device. It uses two NAND blocks to create a memory cell.
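For the curious, here is a rough C# simulation of that memory cell (an SR latch built from two cross-coupled NAND gates, with active-low set/reset inputs); it's only a sketch of the feedback idea, not real electronics:

using System;

class SrLatch
{
    static bool Nand(bool a, bool b) => !(a && b);

    // Outputs of the two cross-coupled NAND gates; together they hold one bit.
    bool q = false, qBar = true;

    // Inputs are active-low: set == false means "set", reset == false means "reset".
    public void Step(bool set, bool reset)
    {
        // Run the feedback loop a few times so it settles.
        for (int i = 0; i < 4; i++)
        {
            bool newQ = Nand(set, qBar);
            bool newQBar = Nand(reset, q);
            q = newQ;
            qBar = newQBar;
        }
    }

    public bool Q => q;

    static void Main()
    {
        var latch = new SrLatch();
        latch.Step(false, true);     // pulse set low  -> the cell stores 1
        Console.WriteLine(latch.Q);  // True
        latch.Step(true, true);      // both inputs idle -> it remembers 1
        Console.WriteLine(latch.Q);  // True
        latch.Step(true, false);     // pulse reset low -> the cell stores 0
        Console.WriteLine(latch.Q);  // False
    }
}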
Have you thought of hardware design? Either study it by reading up, or actually design your own hardware. You could buy yourself a Raspberry Pi, an Arduino, or something else if you don't want to get your hands dirty. Use any of these options to get your hands on hardware, or even use something like VirtualBox and write your own operating system.
Some random thoughts to consider. And no, your question isn't a stupid one at all.
What are the advantages and disadvantages of refactoring tools, in general?
Advantages
You are more likely to do the refactoring if a tool helps you.
A tool is more likely to get a “rename” type refactoring right the first time than you are.
A tool lets you do refactoring on a codebase without unit tests that you could not risk doing by hand.
A tool can save you lots of time.
Both the leading tools (RefactorPro/CodeRush and Resharper) will also highlight most coding errors without you having to do a compile.
Both the leading tools will highlight where you don't keep to their concept of best practices.
Disadvantages
Sometimes the tool will change the meaning of your code without you expecting it, due to bugs in the tool or the use of reflection, etc., in your code base.
A tool may make you feel safe with fewer unit tests…
A tool can be very slow…, so for renaming local vars etc. it can be quicker to do it by hand.
A tool will slow down the development system a lot, as the tool has to keep its database updated while you are editing code.
A tool takes time to learn.
A tool pushes you towards the refactorings it includes, and you may ignore the ones it doesn't, to your disadvantage.
A tool will have a large memory footprint for a large code base; however, memory is cheap these days.
No tool will cope well with very large solution files.
You will have to get your boss to agree to paying for the tool; this may take longer than the time the tool saves.
You may have to get your IT department to agree to you installing the tool.
You will be lost in your next job if they will not let you use the same tool :-)
Advantage: the obvious one: speed.
Disadvantages:
they push you towards the refactorings they include and you may ignore the ones they don't, to your disadvantage;
I've only tried one, with VS, and it slowed the IDE down noticeably. I couldn't decide if it was worth it, but then I had to rebuild the machine and haven't re-installed it since, so I guess that tells you something.
Code improvement suggestions (can be both an advantage and a disadvantage)
Removes code noise (advantage)
Renaming variables, methods (advantage)
I'd say that the speed of making code changes or writing code is the biggest advantage. I have CodeRush and I am lost without it.
I'd say the biggest disadvantage is the memory footprint; if you are tight on memory then it's probably going to hurt more than help. But I've got 4GB and 8GB on each of my dev boxes, so I don't really notice. (Not that they take huge amounts of memory, but if you have 2GB or less then it is going to be noticeable.)
Also, I've noticed that the two big refactoring tools for .NET (RefactorPro/CodeRush and Resharper) both have problems with web site projects (a legacy inheritance, so out of my control) in their code analysis/suggestion engine. They seem to think everything is bad (actually, that's probably a fairly accurate assessment for a web site project, but I don't want to be reminded of it constantly).
A few years ago, we needed a C++ IPC library for making function calls over TCP. We chose one and used it in our application. After a while, it became clear it didn't provide all functionality we needed. In the next version of our software, we threw the third party IPC library out and replaced it by one we wrote ourselves. From then on, I sometimes doubt whether this was a good decision, because it has proven to be quite a lot of work and it obviously felt like reinventing the wheel. So my question is: are there disadvantages to code reuse that justify this reinvention?
I can suggest a few:
The bugs get replicated - if you reuse buggy code :)
Sometimes it may add additional overhead. For example, if you just need to do a simple thing, it is not advisable to use a big, complex library that implements the required feature.
You might face some licensing concerns.
You may need to spend some time to learn/configure the external library. This may not be worthwhile if re-developing the functionality yourself would take much less time.
Reusing a poorly documented library may take more time than expected/estimated.
P.S. The reasons for writing our own library were:
Evaluating external libraries is often very difficult and it takes a lot of time. Also, some problems only become visible after a thorough evaluation.
It made it possible to introduce some features that are specific to our project.
It is easier to do maintenance and to write extensions, as you know the library through and through.
It's pretty much always case by case. You have to look at the suitability and quality of what you're trying to reuse.
The number one issue is: you can only successfully reuse code if that code is GOOD code. If it was designed poorly, has bugs, or is very fragile then you'll run into the same issues you already did run into -- you have to go do it yourself anyway because it's so hard to modify the existing code.
However, if it's a third-party library that you are considering using that you don't have the source code for, it's a little different. You can try and get the source if it's that kind of library. Some commercial library vendors are open to suggestions and feature requests.
The Golden Wisdom :: It Has To Be Usable Before It Can Be Reusable.
The biggest disadvantage (you mention it yourself) of reusing third-party libraries is that you become strongly coupled to and dependent on how that library works and how it's supposed to be used, unless you manage to create a middle interface layer that can take care of that.
But it's hard to create a generic interface, since replacing an existing library with another one more or less requires that the new one works in similar ways. However, you can always rewrite the code that uses it, but that might be very hard and take a long time.
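To sketch what such a middle layer can look like in C# (all names below are made up for illustration; SomeVendorIpcClient just stands in for whatever third-party IPC library was chosen):

using System;

// Stand-in for the hypothetical third-party IPC library.
public class SomeVendorIpcClient
{
    public SomeVendorIpcClient(string host, int port) { }
    public string Invoke(string function, string payload) => function + "(" + payload + ")";
}

// The only interface the rest of the application depends on.
public interface IRemoteCaller
{
    string Call(string function, string payload);
}

// Thin adapter around the vendor library; if the library is ever
// swapped out, only this class has to change.
public class VendorIpcCaller : IRemoteCaller
{
    private readonly SomeVendorIpcClient client;

    public VendorIpcCaller(string host, int port)
    {
        client = new SomeVendorIpcClient(host, port);
    }

    public string Call(string function, string payload)
    {
        return client.Invoke(function, payload);
    }
}

The catch, as noted above, is that the interface tends to mirror the first library you wrapped, so a replacement still has to behave in roughly the same way.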
Another aspect is that if you reinvent the wheel, you have complete control over what's happening and you can make modifications as you see fit. This can be completely impossible if you are depending on a third-party library being alive and constantly providing you with updates and bug fixes. On the other hand, reusing code this way lets you focus on other things in your software, which sometimes might be the right thing to do.
There's always a trade-off.
If your code relies on external resources and those go away, you may be crippling portions of many applications.
Since most reused code comes from the internet, you run into all the issues with the Bathroom Wall of Code Atwood talks about. You can run into issues with insecure or unreliable borrowed code, and the more black boxed it is, the worse.
Disadvantages of code reuse:
Debugging takes a whole lot longer since it's not your code and it's likely that it's somewhat bloated code.
Any specific requirements will also take more work, since you are constrained by the code you're re-using and have to work around its limitations.
Constant code reuse will, in the long run, result in bloated and disorganized applications with hard-to-chase bugs - programming hell.
Re-using code can (depending on the case) reduce the challenge and satisfaction factor for the programmer, and also waste an opportunity to develop new skills.
It depends on the case, the language, and the code you want to re-use or re-write. In general I believe that the higher-level the language is, the more I tend towards code reuse. Bugs in higher-level code can have a bigger impact, and the code is easier to rewrite. High-level code must stay readable, neat and flexible. Of course that could be said of all code, but, somehow, rewriting a C library sounds like less of a good idea than rewriting (or rather re-factoring) PHP model code.
So anyway, these are some of the arguments I'd use to promote "reinventing the wheel".
Sometimes it's just faster, more fun, and better in the long run to rewrite from scratch than to keep working around the bugs and limitations of an existing codebase.
I'm wondering what you are using to maintain this library you reinvented?
The initial time to create reusable code is more expensive and time-consuming.
When the master branch has an update, you need to sync it and deploy again.
The bugs get replicated - if you reuse buggy code.
Reusing poorly documented code may take more time than expected/estimated.
There are a few posts on usability but none of them was useful to me.
I need a quantitative measure of usability of some part of an application.
I need to estimate it in hard numbers to be able to compare it with future versions (e.g. for reporting purposes). The simplest way is to count clicks and keystrokes, but this seems too simple (for example, is the cost of filling in a text field simply the sum of typing all the letters? I guess it is more complicated).
I need some mathematical model for that so I can estimate the numbers.
Does anyone know anything about this?
P.S. I don't need links to resources about designing user interfaces. I already have them. What I need is a mathematical apparatus to measure an existing application's interface usability in hard numbers.
Thanks in advance.
http://www.techsmith.com/morae.asp
This is what Microsoft used in part when they spent millions redesigning Office 2007 with the ribbon toolbar.
Here is how Office 2007 was analyzed:
http://cs.winona.edu/CSConference/2007proceedings/caty.pdf
Be sure to check out the references at the end of the PDF too, there's a ton of good stuff there. Look up how Microsoft did Office 2007 (regardless of how you feel about it), they spent a ton of money on this stuff.
The main ideas to consider here are Effectiveness and Efficiency (and, in some cases, Efficacy). The basic points to remember are outlined on this webpage.
What you really want to look at doing is 'inspection' methods of measuring usability. These are typically more expensive to set up (both in terms of time, and finance), but can yield significant results if done properly. These methods include things like heuristic evaluation, which is simply comparing the system interface, and the usage of the system interface, with your usability heuristics (though, from what you've said above, this probably isn't what you're after).
More suited to your use, however, will be 'testing' methods, whereby you observe users performing tasks on your system. This is partially related to the point of effectiveness and efficiency, but can include various things, such as the "Think Aloud" concept (which works really well in certain circumstances, depending on the software being tested).
Jakob Nielsen has a decent (short) article on his website. There's another one, but it's more related to how to test in order to be representative, rather than how to perform the testing itself.
Consider measuring the time to perform critical tasks (using a new user and an experienced user) and the number of data entry errors for performing those tasks.
First you want to define goals: for example increasing the percentage of users who can complete a certain set of tasks, and reducing the time they need for it.
Then, get two cameras and a few users (5-10), give them a list of tasks to complete, and ask them to think out loud. Half of the users should use the "old" system, the rest should use the new one.
Review the tapes, measure the time it took, measure success rates, discuss endlessly about interpretations.
Alternatively, you can develop a system for bucket-testing -- it works the same way, though it makes it far more difficult to find out something new. On the other hand, it's much cheaper, so you can do many more iterations. Of course that's limited to sites you can open to public testing.
That obviously implies you're trying to get comparative data between two designs. I can't think of a way of expressing usability as a value.
You might want to look into the GOMS model (Goals, Operators, Methods, and Selection rules). It is a very difficult research tool to use in my opinion, but it does provide a "mathematical" basis to measure performance in a strictly controlled environment. It is best used with "expert" users. See this very interesting case study of Project Ernestine for New England Telephone operators.
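If you want a formula you can apply without a lab, the Keystroke-Level Model (KLM - the simplified member of the GOMS family) is probably the closest thing to a ready-made mathematical apparatus: break a task into primitive operators and sum the commonly cited per-operator times from Card, Moran and Newell. A rough C# sketch (the times are published averages, not exact constants):

using System;
using System.Collections.Generic;

class KlmEstimate
{
    // Commonly cited KLM operator times, in seconds.
    static readonly Dictionary<char, double> OperatorTimes = new Dictionary<char, double>
    {
        ['K'] = 0.20,  // press a key or button (average skilled typist)
        ['P'] = 1.10,  // point at a target with the mouse
        ['H'] = 0.40,  // move a hand between keyboard and mouse
        ['M'] = 1.35,  // mental preparation
    };

    static double Estimate(string operators)
    {
        double total = 0;
        foreach (char op in operators)
            total += OperatorTimes[op];
        return total;
    }

    static void Main()
    {
        // Example task: think, type five characters into a field,
        // reach for the mouse, point at OK and click it.
        Console.WriteLine(Estimate("MKKKKKHPK")); // roughly 4 seconds
    }
}

This also speaks to the text-field question in the original post: in KLM, filling a field is not just the sum of the keystrokes; mental-preparation and homing operators are added around them.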
Measuring usability quantitatively is an extremely hard problem. I tackled this as a part of my doctoral work. The short answer is, yes, you can measure it; no, you can't use the results in a vacuum. You have to understand why something took longer or shorter; simply comparing numbers is worse than useless, because it's misleading.
For comparing alternate interfaces it works okay. In a longitudinal study, where users are bringing their past expertise with version 1 into their use of version 2, it's not going to be as useful. You will also need to take into account time to learn the interface, including time to re-understand the interface if the user's been away from it. Finally, if the task is of variable difficulty (and this is the usual case in the real world) then your numbers will be all over the map unless you have some way to factor out this difficulty.
GOMS (mentioned above) is a good method to use during the design phase to get an intuition about whether interface A is better than B at doing a specific task. However, it only addresses error-free performance by expert users, and only measures low-level task execution time. If the user figures out a more efficient way to do their work that you haven't thought of, you won't have a GOMS estimate for it and will have to draft one up.
Some specific measures that you could look into:
Measuring clock time for a standard task is good if you want to know what takes a long time. However, lab tests generally involve test subjects working much harder and concentrating much more than they do in everyday work, so comparing results from the lab to real users is going to be misleading.
Error rate: how often the user makes mistakes or backtracks. Especially if you notice the same sort of error occurring over and over again.
Appearance of workarounds: if your users are working around a feature, or taking a bunch of steps that you think are dumb, it may be a sign that your interface doesn't give them the tools they need to solve their problems.
Don't underestimate simply asking users how well they thought things went. Subjective usability is finicky but can be revealing.
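If you want the subjective side in hard numbers too, one widely used option (not mentioned in the answer above, so take it as a pointer) is the System Usability Scale: ten statements rated 1-5, converted to a 0-100 score. The standard scoring is simple enough to sketch in C#:

using System;

class SusScore
{
    // responses: ten answers on a 1..5 scale; odd-numbered items are
    // positively worded, even-numbered items are negatively worded.
    static double Score(int[] responses)
    {
        if (responses.Length != 10)
            throw new ArgumentException("SUS uses exactly 10 items.");

        double sum = 0;
        for (int i = 0; i < 10; i++)
        {
            // Odd items (index 0, 2, ...): response - 1.
            // Even items (index 1, 3, ...): 5 - response.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5;   // scale to 0..100
    }

    static void Main()
    {
        Console.WriteLine(Score(new[] { 5, 1, 5, 1, 5, 1, 5, 1, 5, 1 })); // 100
        Console.WriteLine(Score(new[] { 4, 2, 4, 2, 4, 2, 4, 2, 4, 2 })); // 75
    }
}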
One of the most common mantras in computer science and programming is to never optimize prematurely, meaning that you should not optimize anything until a problem has been identified, since code readability/maintainability is likely to suffer.
However, sometimes you might know that a particular way of doing things will perform poorly. When is it OK to optimize before identifying a problem? What sorts of optimizations are allowable right from the beginning?
For example, using as few DB connections as possible, and paying close attention to that while developing, rather than using a new connection as needed and worrying about the performance cost later
I think you are missing the point of that dictum. There's nothing wrong with doing something the most efficient way possible right from the start, provided it's also clear, straightforward, etc.
The point is that you should not tie yourself (and worse, your code) in knots trying to solve problems that may not even exist. Save that level of extreme optimization, which is often costly in terms of development, maintenance, technical debt, bug breeding grounds, portability, etc., for cases where you really need it.
I think you're looking at this the wrong way. The point of avoiding premature optimization isn't to avoid optimizing, it's to avoid the mindset you can fall into.
Write your algorithm in the clearest way that you can first. Then make sure it's correct. Then (and only then) worry about performance. But also think about maintenance etc.
If you follow this approach, then your question answers itself. The only "optimizations" that are allowable right from the beginning are those that are at least as clear as the straightforward approach.
The best optimization you can make at any time is to pick the correct algorithm for the problem. It's amazing how often a little thought yields a better approach that will save orders of magnitude, rather than a few percent. It's a complete win.
Things to look for:
Mathematical formulas rather than iteration (see the sketch after this list).
Patterns that are well known and documented.
Existing code / components
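A minimal sketch of the "formula rather than iteration" item, in C#: summing the integers 1..n with the closed-form formula turns an O(N) loop into an O(1) expression.

using System;

class SumExample
{
    // O(N): iterate and add.
    static long SumIterative(long n)
    {
        long sum = 0;
        for (long i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    // O(1): the closed-form formula n * (n + 1) / 2.
    static long SumFormula(long n) => n * (n + 1) / 2;

    static void Main()
    {
        Console.WriteLine(SumIterative(1000000)); // 500000500000
        Console.WriteLine(SumFormula(1000000));   // 500000500000, without the loop
    }
}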
IMHO, none. Write your code without ever thinking about "optimisation". Instead, think "clarity", "correctness", "maintainability" and "testability".
From wikipedia:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
- Donald Knuth
I think that sums it up. The question is knowing whether you are in the 3% and what route to take. Personally, I ignore most optimizations until I at least get my code working, then do a separate pass with a profiler so I can make sure I am optimizing things that actually matter. Often, code simply runs fast enough that anything you do will have little or no effect.
If you don't have a performance problem, then you should not sacrifice readability for performance. However, when choosing a way to implement some functionality, you should avoid using code you know is problematic from a performance point of view. So if there are 2 ways to implement a function, choose the one likely to perform better, but if it's not the most intuitive solution, make sure you put in some comments as to why you coded it that way.
As you develop in your career as a developer, you'll simply grow in awareness of better, more reasonable approaches to various problems. In most cases I can think of, performance enhancement work resulted in code that was actually smaller and simpler than some complex tangle that evolved from working through a problem. As you get better, such simpler, faster solutions just become easier and more natural to generate.
Update: I'm voting +1 for everyone on the thread so far because the answers are so good. In particular, DWC has captured the essence of my position with some wonderful examples.
Documentation
Documenting your code is the #1 optimization (of the development process) that you can do right from the get-go. As a project grows, the more people you interact with and the more people who need to understand what you wrote, the more time you will spend explaining it.
Toolkits
Make sure your toolkit is appropriate for the application you're developing. If you're making a small app, there's no reason to invoke the mighty power of an Eclipse-based GUI system.
Compilers
Let the compiler do the tough work. Most of the time, optimization switches on a compiler will do most of the important things you need.
System Specific Optimizations
Especially in the embedded world, gain an understanding of the underlying architecture of the CPU and the system you're interacting with. For example, on a ColdFire CPU, you can gain large performance improvements by ensuring that your data lies on the proper byte boundary.
Algorithms
Strive to make access algorithms O(1) or O(log N). Strive to make iteration over a list no more than O(N). If you're dealing with large amounts of data, avoid anything more than O(N^2) if at all possible.
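As a small illustration of the access-time point (a toy C# sketch): scanning a List for a value is O(N) per lookup, while a HashSet or Dictionary gives roughly O(1) lookups, which is the difference between O(N^2) and O(N) once you do a lookup per element.

using System;
using System.Collections.Generic;
using System.Linq;

class LookupExample
{
    static void Main()
    {
        List<int> list = Enumerable.Range(0, 1000000).ToList();
        HashSet<int> set = new HashSet<int>(list);

        // O(N) per lookup: walks the list until it finds the value.
        bool inList = list.Contains(999999);

        // Roughly O(1) per lookup: hashes the value and jumps to its bucket.
        bool inSet = set.Contains(999999);

        Console.WriteLine(inList); // True
        Console.WriteLine(inSet);  // True
    }
}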
Code Tricks
Avoid, if possible. This is an optimization in itself - an optimization to make your application more maintainable in the long run.
You should avoid optimizations that are based only on the belief that the code you are optimizing will be slow. The only code you should optimize is code you know is slow (preferably having confirmed it with a profiler).
If you write clear, easy to understand code then odds are it'll be fast enough, and if it isn't then when you go to speed it up it should be easier to do.
That being said, common sense should apply (!). Should you read a file over and over again or should you cache the results? Probably cache the results. So from a high level architecture point of view you should be thinking of optimization.
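A tiny sketch of that caching idea in C# (the file name is just a placeholder for the example):

using System;
using System.IO;

class ConfigCache
{
    private static string cachedText;   // null until the first read

    // Reads the file once, then serves the in-memory copy.
    public static string GetConfig(string path)
    {
        if (cachedText == null)
            cachedText = File.ReadAllText(path);
        return cachedText;
    }

    static void Main()
    {
        Console.WriteLine(GetConfig("settings.txt")); // hits the disk
        Console.WriteLine(GetConfig("settings.txt")); // served from memory
    }
}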
The "evil" part of optimization is the "sins" that are committed in the name of making something faster - those sins generally result in the code being very hard to understand. I am not 100% sure this is one of them.. but look at this question here, this may or may not be an example of optimization (could be the way the person thought to do it), but there are more obvious ways to solve the problem than what was chosen.
Another thing you can do, which I did recently, is this: when you are writing the code and need to decide how to do something, write it both ways and run them through a profiler. Then pick the clearest way to code it unless there is a large difference in speed/memory (depending on what you are after). That way you are not guessing at what is "better", and you can document why you did it that way so that someone doesn't change it later.
The case I had was memory-mapped files vs. stream I/O... the memory-mapped file was significantly faster, so I wouldn't have been concerned if the code had been harder to follow (it wasn't), because the speed-up was significant.
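For reference, a rough sketch of the two approaches being compared, using .NET's FileStream and MemoryMappedFile (the file name is a placeholder, and any real comparison should of course go through a profiler, as described above):

using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class ReadComparison
{
    static void Main()
    {
        string path = "big-data.bin";   // placeholder file name

        // Plain stream I/O: every byte is read through the stream.
        long streamSum = 0;
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            int b;
            while ((b = stream.ReadByte()) != -1)
                streamSum += b;
        }

        // Memory-mapped: the file is mapped into the process's address
        // space and read through a view accessor.
        long mappedSum = 0;
        long length = new FileInfo(path).Length;
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (var view = mmf.CreateViewAccessor(0, length, MemoryMappedFileAccess.Read))
        {
            for (long i = 0; i < length; i++)
                mappedSum += view.ReadByte(i);
        }

        Console.WriteLine(streamSum == mappedSum); // True - same data either way
    }
}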
Another case I had was deciding whether or not to "intern" Strings in Java. Doing so should save space, but at a cost of time. In my case the space savings weren't huge, and the time was double, so I didn't do the interning. Documenting it lets someone else know not to bother interning (or, if they want to see whether a newer version of Java makes it faster, they can try).
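The interning trade-off, sketched in C# (String.intern in Java behaves analogously): interned strings collapse equal values onto one shared instance, which saves memory when the same value recurs a lot, at the cost of the interning call itself.

using System;

class InternExample
{
    static void Main()
    {
        // Two equal strings built at runtime are normally distinct objects.
        string a = new string("repeated-value".ToCharArray());
        string b = new string("repeated-value".ToCharArray());
        Console.WriteLine(ReferenceEquals(a, b));    // False

        // Interning maps equal values onto a single shared instance...
        string ia = string.Intern(a);
        string ib = string.Intern(b);
        Console.WriteLine(ReferenceEquals(ia, ib));  // True

        // ...saving space if the value occurs many times, but each
        // Intern call costs time - exactly the trade-off measured above.
    }
}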
In addition to being clear and straightforward, you also have to take a reasonable amount of time to implement the code correctly. If it takes you a day to get the code to work right, instead of the two hours it would have taken if you'd just written it, then you've quite possibly wasted time you could have spent on fixing the real performance problem (Knuth's 3%).
I agree with Neil's opinion here: doing performance optimizations in code right away is a bad development practice.
IMHO, performance optimization is dependent on your system design. If your system has been designed poorly, from the perspective of performance, no amount of code optimization will get you 'good' performance - you may get relatively better performance, but not good performance.
For instance, if one intends to build an application that accesses a database, a well-designed data model that has been de-normalized just enough is likely to yield better performance characteristics than its opposite - a poorly designed data model that has been optimized/tuned to obtain relatively better performance.
Of course, one must not forget requirements in this mix. There are implicit performance requirements that one must consider during design - designing a public facing web site often requires that you reduce server-side trips to ensure a 'high-performance' feel to the end user. That doesn't mean that you rebuild the DOM on the browser on every action and repaint the same (I've seen this in reality), but that you rebuild a portion of the DOM and let the browser do the rest (which would have been handled by a sensible designer who understood the implicit requirements).
Picking appropriate data structures. I'm not even sure it counts as optimizing but it can affect the structure of your app (thus good to do early on) and greatly increase performance.
Don't call Collection.ElementCount directly in the loop check expression if you know for sure this value will be calculated on each pass.
Instead of:
for (int i = 0; i < myArray.Count; ++i)
{
    // Do something
}
Do:
int elementCount = myArray.Count;
for (int i = 0; i < elementCount; ++i)
{
    // Do something
}
A classical case.
Of course, you have to know what kind of collection it is (actually, how the Count property/method is implemented). It may not necessarily be costly.
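To illustrate that caveat (a hedged C# example): List<T>.Count is a stored property and is essentially free to read, whereas Enumerable.Count() on a lazy sequence has to walk the whole thing each time it is called, so hoisting it out of the loop matters far more in the second case.

using System;
using System.Collections.Generic;
using System.Linq;

class CountCost
{
    static void Main()
    {
        List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // List<T>.Count is a stored value exposed as a property: reading it
        // in the loop condition costs essentially nothing.
        for (int i = 0; i < numbers.Count; i++)
            Console.Write(numbers[i] + " ");
        Console.WriteLine();

        // A lazy LINQ query has no stored count: Enumerable.Count()
        // re-enumerates the sequence every time it is called.
        IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

        int evenCount = evens.Count();   // hoisted: enumerate once, reuse the result
        for (int i = 0; i < evenCount; i++)
            Console.WriteLine("pass " + i);
    }
}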