How do you deal with a client who has different time estimates for the software product than yours?
I am going to describe a scenario that is not mine, but that captures broadly the same problem. I am working as a subcontractor to a large company that has a programming department. The software project we are working on is in an area the department believes it has a handle on, but because their expertise and mine are very different, we tend to get different results.
Example: at the start of the project I suggested one development approach, which they rubbished as unrealistically difficult, and instead suggested integrating a different framework (one they are familiar with) with the programming language we are using (Python) to get more or less the same result.
Their estimate for this integration: less than a week (they haven't done the integration before).
My estimate for the integration: more than two weeks.
Using my suggested approach to get the required result (using matplotlib, among other libraries already used elsewhere in the project): 45 minutes. This is not an estimate; that piece was actually finished in 45 minutes.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use. They provided a broken one; it works with their internal tool, but not with mainstream .NET or Java packages, among other options. They maintain that it is my fault that the integration has taken longer than the time estimated.
The problem is not that they don't know; the problem is that they have just enough knowledge about programming to be dangerous (in my opinion). Are there any guidelines for how to deal with this type of situation? A way to manage expectations? Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
If a client isn't happy with a time estimate, don't do the work. If they think they can do it better or faster, tell them to go ahead.
The one thing I never allow is for my estimates to be modified. That's something that caught me out early on in my career but we learn our lessons.
If clients were so good at doing the work, they wouldn't be hiring me. I'd simply point out that they hired me for my expertise, so why are they disregarding that expertise? Of course, if they were to allow the scope of the project to change (i.e., less work), that would be another matter, and one up for discussion.
If you didn't lock in exactly what they were meant to provide as part of the deal, then it's a "he says, she says" situation and, unfortunately, the customer controls the purse strings. However, often, the greatest power you can have is the ability to just walk away.
No-one says you have to do the job.
Of course, all that advice above is worth every cent you paid for it :-)
I don't know your specific circumstances.
Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
My answer for sure. If you can avoid those projects, do it.
Some signs: people thinking they know how to do things when you can tell they can't. The "oh no, let's not use this perfectly suitable tool because I don't know it" attitude is a major indicator that the person is technically challenged.
First of all, it is no fun to be in such an environment.
So, if you like to have fun at your job, and you do not need to take this job for extenuating financial reasons, then simply do not take the job that is not fun.
Since that is hardly realistic in many cases, you will end up with the job and need to manage the situation as best you can. One way is to make sure there is a paper trail documenting your objections and concerns with the plan. Try not to be overtly negative, but try to be constructive and present valid alternatives. Here you will need to feel out the political landscape, determine if the 'boss' will be appreciative or threatened by your commentary, and act accordingly.
Many times there are other issues that management is dealing with that you are not aware of. Be cautious of this fact, and maybe ask the management team if this is the case, again without being condescending or negative.
Finally, if you have alternatives that take less time than the meetings it would take to discuss them, just try it in a sandbox, and show it off. This would go a long way to 'proving' your points. Caution here is that you could be accused of not being a team player, or of wasting resources, or not following direction. Make sure this is mitigated by doing these types of things on your own time, or after careful consideration of how long you are spending on these things as well as how vested your boss seems to be on the alternatives.
hth
I ran into the same problem with integration.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use... They maintain that it is my fault that the integration has taken longer than the time estimated.
Wow, very similar to what I was experiencing with a client. The best thing I can suggest is to keep good documentation. In the end, that is what saved me. When it came to finger pointing, I had all of the emails and facts in order and was prepared to defend myself.
One thing I would suggest is to separate out a target/goal from an estimate. I would not change my estimate unless it involved actually removing features, or something were revealed that would make the work easier. Tell them you will try to hit the target any way you can and that you care about the business goal. However, your estimate will not change. If it's getting nowhere and they are just dense, then smile, nod, and take it, if it's the only gig around.
I was just writing about this on my blog:
How to estimate the WRONG way
I wish to create shareware software that contains a registration algorithm. I am looking for a programming language, that cannot be easily decompiled into readable code. For example, C# can be decompiled into readable code.
What are my options?
Edit: I'm looking for something that can be only decompiled into assembly. Delphi, for example, cannot be decompiled like C# or Java, but from what I've heard, Delphi is dying.
Delphi is not dying; it is alive and well.
As is the community, at delphifeeds.
You can also see more Delphi projects (freeware, shareware, and commercial) at the Delphi Wikia.
Thus I'd say Delphi is a very good choice for software development, be it freeware, shareware, or commercial.
Update:
On September 1st, 2011, Embarcadero released RAD Studio XE2. This release adds 64-bit compilation, LiveBindings, native Mac OS X compilation, iOS support (via Xcode), and a whole lot more to the already powerful Delphi dev environment.
If your CPU is able to see the code and run it, by definition, a sufficiently talented person can do it too.
You can, however, make it harder by running your code through an obfuscator.
I'd suggest the language of business and economics can protect your program.
If you are targeting consumers, and price it at say $10, almost all people would find it easier to pay you the $10 vs going into your program and reverse engineering it.
If you are targeting corporations, and say pricing it at $10,000, it just has to be easier to get the purchasing department to approve the payment than to reverse engineer your code. For real companies who would purchase your product, it's not worth the audit risk to have unlicensed code running.
Lastly, what are the costs/benefits of protecting your code? If you write your program in assembly instead of C#, you might have far higher production costs while reducing the chance of reverse engineering. However, does that cost outweigh the potential lost sales? Could the time be better spent adding value for people who will buy the product? Generally, trying to sell your product to people who never pay for software is not an economic strategy.
You could write it in Perl.
(I kid, I kid! Put down the pitchforks!)
As the others said, Delphi is NOT dying.
As the others said, there is no bulletproof method.
As the others said, there are tools to make the life of a cracker much harder.
But what the others didn't say is:
Java and .NET obfuscation is rather a toy compared with the obfuscation tools available for, e.g., Delphi and other native solutions (of course, YMMV).
A very good technology to relatively protect your executable can be found here.
Repeat after me: "Obscurity is not security."
You would be better off using a hard encryption algorithm (where "hard" doesn't mean "difficult", but "not bi-directional; not easily reversible").
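To make that concrete, here is a minimal sketch of a one-way registration check, assuming a hypothetical scheme in which the key is simply the SHA-256 digest of an application salt plus the user name (the names and the scheme are invented for illustration, not taken from any particular product):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class RegistrationCheck {
        // Hypothetical application-specific salt; note it still ships in the binary.
        private static final String SALT = "example-salt";

        // Derive the expected key from the user name with a one-way hash:
        // easy to compute forward, hard to invert.
        static String expectedKeyFor(String userName) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest((SALT + userName).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }

        static boolean isRegistered(String userName, String enteredKey) throws Exception {
            return expectedKeyFor(userName).equalsIgnoreCase(enteredKey);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(isRegistered("alice", "not-a-real-key")); // false
        }
    }

Since the derivation still ships inside the executable, a determined cracker can rebuild it; as other answers here point out, this only raises the bar.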
Isn't this logically impossible?
If you can run the code, you can get the instructions being executed by your CPU. At that point, your algorithm is readable, for some definitions of readable.
No language is capable of that, AFAIK, since it's impossible: the code can always be reverse engineered. A good number of developers would cry if you coded it in brainfvck, though.
"I'm looking for something that can be only decompiled into assembly."
Try writing your program in assembly. That is the best possible solution.
If you are really concerned about people disassembling your software, offer your software as a service (SaaS): http://en.wikipedia.org/wiki/Software_as_a_service
Try finding an obfuscator. As the name suggests it obfuscates the code enough that reverse-engineering it will not be trivial.
Or use C/C++. Those can be disassembled, but that's it.
Of course, this is just enough to keep the not-sufficiently-competent from understanding and reverse-engineering the code.
As Dominic said, if you can run it, it can be decompiled.
That said, I believe there are tools that obfuscate the compiled code and make it more difficult for someone to disassemble and reverse or take apart a registration process.
For example, I believe that major companies like Adobe and Microsoft use products like this, in order to make it much more difficult for folks to disassemble and crack their programs.
It's like security or cryptography or even the locks on your car/house - someone with enough time and resources can probably break through anything.
You just need to tilt the curve enough to make it sufficiently unattractive for anyone to really try, so that they'd be more likely to move onto easier targets.
I am going to make the assumption that because you're writing shareware and you mention a registration algorithm you are wanting to protect your software from a keygen or patch that bypasses the restrictions on your trial versions.
Really the most you can do is deter. Like others have mentioned there are obfuscation techniques available, but they are not preventative. There are commercial software packers available which compress the file and make it initially unreadable. But the program has to be decompressed at some point so the machine can run it, so it's still reversible.
And that is pretty much the crux of all the anti-reversing techniques you'll see: the program has to be interpreted by the machine at some point. More modern packers use anti-debugging techniques to deter the more novice reversers. But these techniques end up being documented rather quickly on popular reversing sites. Many of the techniques are bypassed with nothing more than a simple debugger plugin.
The only way I can think of to protect your executable from being arbitrarily reversed is to run the whole thing on a server you control and just pipe the output to users. But that's not always feasible.
As far as your language options go, take a look at this. I can't really speak to how complete it is, but I'm sure some others can add languages they think of.
If you're looking for "something that can be only decompiled into assembly", that essentially means that you want to use a language that gets compiled (or assembled) directly into a native code executable.
The usual prime suspects then are C, C++, Delphi, VB6. Of course, also assembly meets your criteria, although I doubt you'd want to write any project of decent size in it.
Very simple:
No programming language, no program, can protect your software.
A software cracker will reverse engineer your app down to assembler and crack it.
All code can be read back in assembly. Someone can reverse engineer your application and see what the machine is doing.
This is not so much a matter of choosing the right language as it is finding a tool that will do code obfuscation for you. Nothing is bulletproof, but there are efforts to accomplish this sort of thing.
Eg. see this research project about Java code obfuscation.
You can't be 100% sure nobody will be able to read your code, but you can make it very hard. You can encrypt your code and modify it at run time.
For example, I have not heard of any successful attempts to reverse engineer Skype.
You could always write it in APL. You could deliver the source and still no one would be able to understand it.
Any code running client-side can be reverse engineered with enough trial and error. In my opinion, make the client-side code contain only the GUI, while running the actual security-requiring authentication, etc., server-side.
On the other hand, if your app is a service which runs client-side, such as a game, CAD, POS, or anything that needs high-quality code on the client, I'd recommend storing process outputs server-side with an encrypted upload tool, then authenticating the client's key/account data every time before sending their data back. It is overkill for most projects, though.
I work at a company where the rule basically is (as I understand it) that you cannot use any code unless (a) you write the code yourself or (b) there is some explicit indemnification clause guarding your use of any other code (like open source code). I am finding this making my coding difficult.
For example, code samples in books are pretty much "as-is". Microsoft SDK code samples are "as-is". Blog posts about coding are "as-is". There are several sites out there with code samples (including SO) that are use-at-your-own-risk: no warranties implied, no indemnification against intellectual property lawsuits, etc.
Basically, I'm confined to using ASP.NET and the .NET Framework and nothing else, and to averting my eyes so I don't accidentally pick up something that I haven't created (ok... that may be my anal interpretation of the rule ;-)).
I find this difficult because a big part of learning to code I think is reading other code. Reading blogs that have code, reading books that have code, looking at coding samples, using code from SDK samples etc. Also, I would think it is safe to use code that people have shown to be a good solution or pattern for something and freely put up for others to use. I'm not about to think that I can code everything myself. I definitely have to stand on the coding shoulders of others to reach certain heights.
It could be that I don't understand licensing very well either. From the company's perspective (I suppose), they don't want to incur any risk of being sued for IP infringement.
My thought is that you have to weigh risks. Taking a coding snippet from a book is low risk. Incorporating code from an open source library could be high-risk. I say make decisions based on how much risk you are willing to take.
Has anybody had experience working in a situation like this or similar to this? Is this a rare thing or is it common in some sectors? Are there others in the same position like me out there?
Any insight or guidance would be appreciated! Thanks!
Edit:
Thanks for the responses! To clear up some things: I'm not advocating stealing code. I'm talking about code that has some kind of public license that allows it to be used in a defined, legal way. The key is that there is no indemnification in public licenses for using the code. That means you use it at your own legal risk (and other risks). If someone sues an open source project whose code you used, you could be roped into the lawsuit as well, because you are using the code even though it had a public license.
In 2005, Microsoft was using indemnification to compete against open source vendors by promising its partners that Microsoft would protect them against IP lawsuits. http://www.microsoft.com/presspass/press/2005/jun05/06-22PartnerIndemnificationPR.mspx
So, even if the risk of being sued for IP infringement may be extremely low, it is a non-zero probability. Thus, I can't use any of it. Even if it has a public license of some sort. :-(
The "risk of beind sued for IP infringement" isn't really the right way to think about it. This isn't a "risk" thing.
Either
You have a license and can use the source. There's no risk. You have the license. There can't be a lawsuit.
Or
You don't have a license and you're in violation. Effectively, you will be sued. There's no risk here, either. You're in violation of someone's copyrights (or worse).
Companies are averse to Open Source for a variety of strange reasons. Risk of lawsuit is not one of them.
Things I've heard.
What if it has a virus?
What if it doesn't work as advertised?
What if it "crashes" something? Who do we sue?
None of these are "risk" items. They're "due diligence" items. And mostly, they're easy to address: pick products with enough users that someone else vets the code before you; QA open source as if one of your own people typed it in. Except for one.
This leads us to the real reason. [Hint: It's not "risk of lawsuit".]
There's no one to sue when you didn't perform due diligence on open source.
Most shops don't have real solid configuration management or QA policies (the kind that would stand up in court as best practices). Until they have these things in place, they don't dare think about introducing open source for which you really need solid QA and configuration management.
I think what your company is really worried about is you directly copying large segments of code for which there may be licensing issues, presenting a legal problem to the company if they are caught using it. However, you may read blogs or other non-licensed code and discover a solution which works for the particular problem you are working on. In that case, you would be better off rewriting the code (that is, look at the solution and reproduce it) as opposed to just copying the code and making modifications to it. At my company, that is what they generally recommend for using non-proprietary code.
As well, for small amounts of code (e.g. a standard implementation of a cache) where everyone implements this the same way, every time, your company is unlikely to be afraid of using outside code, as long as you are sure to test it carefully.
By "indemnification", I assume they mean assurance that the code is free of copyright or patent or maybe trade secret encumbrance that they don't know about up front, or that somebody's willing to compensate them if something like that turns up. I've never been in a company that worried about this, nor have I heard of one before.
It's not clear what you actually want here, other than sympathy (and I do have sympathy for people trapped in corporate foolishness). It sounds like the policy is quite rigid, if you're worried about sample code in books. This is a bad policy, and will hinder you, but I don't know what you can do about it. Unlike Joel's blog post on getting things done as a grunt, it sounds like you can't just start doing things intelligently without being in clear violation of corporate policy.
Not knowing your situation, my suggestion would be to look for another job. This one will definitely stifle your professional growth, and a company with that policy is unlikely to be reasonable about it.
(It would be nice if you could assure them there was no danger, but that's not true. People have lied about copyrights, although open source projects tend not to, and only a fool would claim definitively that a large chunk of code did not infringe on any patents in the US; even if it was written a year before software patents were first awarded, that would be merely good grounds for a court fight, rather than avoiding a court fight. GPLed software is actually better than BSD software, since it requires some patent licensing downstream, but it can't deal with third-party patents. Of course, if they're that worried about being sued, writing in-house software is no solution. That can infringe on patents.)
You could rename the variables, and how would they find out? Do they check every line of code? Universities tell you that all the time: don't copy code without referencing it. Why don't you try coding something and using parts of code you find on the Internet?
Generally you will use more from communities like Stack Overflow or blogs than from open source projects.
Finally, since the code has no warranties, it's at your own risk... well, that is the same case if you came up with the code yourself: it's at your own risk.
Hope that helps... and good luck.
It could be that I don't understand licensing very well either. From the company's perspective (I suppose) they don't want to incur any risk of being sued for IP infringement.
My thought is that you have to weigh risks. Taking a coding snippet from a book is low risk. Incorporating code from an open source library could be high-risk. I say make decisions based on how much risk you are willing to take.
I'm not sure if I understood correctly. If you are saying that license infringement is fine when you don't get caught, I will have to disagree with you.
You can learn by reading code without breaking laws or getting fired. Just don't copy the code to your company's code base if the license doesn't allow it.
If you're not aware of the "clean room" concept, then there's always that approach. Have a friend look at some open source code and get them to tell you how they think it works. Diagram it out, and then code it yourself.
It worked for IBM, right?
Keep in mind that not all Open Source is GPL. Your company can copy as much BSD-licensed code as they like. BSD-licensed code has made it into OS X (that's probably my biggest understatement of today) and to a lesser extent Windows NT.
I know that there is no way to fully protect our code.
I also know that if a user wants to crack our app, then he or she is not a user that would buy our app.
I also know that it is better to improve our app.. instead of being afraid of anticracking techniques.
I also know that there is no commercial tool that can protect our app...
I also know that....
Ok. Enough. I've heard everything.
I really think that adding a little protection won't hurt.
So... have you ever used Code Virtualizer from Oreans or VMProtect?
I've heard that they are sometimes detected as a virus by some antivirus programs.
Any experiences that I should be aware of before buying it?
I know it creates some virtual machines and obfuscates the code a little to make it harder to find the weaknesses of our registration routines.
Is there any warning I should know?
Thanks.
Any advice would be appreciated.
Jag
In my humble opinion, you should feel lucky, or even eager, to be pirated, because that means your product is successful and popular.
That's plain incorrect. My software that I worked many months on was cracked the moment it was released. There are organised cracking groups that feed off download.com's RSS channel etc and crack each app that appears. It's a piece of cake to extract the keygen code of any app, so my response was to:
a) resort to digital certificate key files, which are impossible to forge as they are signed by a private key and validated by a public one embedded in the app (see: aquaticmac.com - I use the STL C++ implementation, which is cross-platform; a rough sketch of this kind of check appears after this answer), along with
b) The excellent Code Virtualizer™. I will say that the moment I started using Code Virtualizer™ I was getting some complaints from one or two users about app crashes. When I removed it from their build, the crashes ceased. Still, I'm not sure whether it was a problem with CV per se, as it could have been an obscure bug in my code, but I have since reshuffled my code and heard no complaints.
After the above, no more cracks. Some people look at being cracked as a positive thing, as it's a free publicity channel, but those people usually haven't spent months/years on an idea only to find you're being ripped off. Quite hard to take.
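For reference, a rough sketch of the signed-license-file idea from (a), using plain RSA signatures from the JDK rather than the specific library mentioned above (the embedded key constant is a placeholder and the whole scheme is illustrative only):

    import java.nio.charset.StandardCharsets;
    import java.security.KeyFactory;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.security.spec.X509EncodedKeySpec;
    import java.util.Base64;

    public class LicenseVerifier {
        // Base64-encoded X.509 public key embedded in the application.
        // (Placeholder value; a real build would embed the vendor's actual key.)
        private static final String PUBLIC_KEY_B64 = "...";

        // Returns true if signatureB64 is a valid RSA/SHA-256 signature of
        // licenseText made with the vendor's private key.
        static boolean isValidLicense(String licenseText, String signatureB64) {
            try {
                byte[] keyBytes = Base64.getDecoder().decode(PUBLIC_KEY_B64);
                PublicKey publicKey = KeyFactory.getInstance("RSA")
                        .generatePublic(new X509EncodedKeySpec(keyBytes));

                Signature verifier = Signature.getInstance("SHA256withRSA");
                verifier.initVerify(publicKey);
                verifier.update(licenseText.getBytes(StandardCharsets.UTF_8));
                return verifier.verify(Base64.getDecoder().decode(signatureB64));
            } catch (Exception e) {
                return false; // malformed key, license, or signature
            }
        }
    }

The point of the design is that only the public key ships with the app, so producing a valid license requires the vendor's private key; a cracker's remaining option is to patch the check out of the binary, which is why it is paired with a protection layer like Code Virtualizer above.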
Unfortunately, VM-protected software is more likely to be affected by false positives than conventionally packed software. The reason is that since VM protection is so complicated, AV software is often unable to analyze the protected code, and may rely on pattern libraries or issue generic warnings for any file protected by a system it can't analyze. If your priority is to eliminate false positives, I suggest picking a widely used protection solution, e.g. ASProtect (although Oreans' products are becoming quite popular as well).
Software VM protection is quite popular today, especially as it's now available at an accessible price for small companies and independent software developers. It also takes a considerable amount of effort to crack in comparison to non-VM techniques - the wrappers usually have the standard anti-debugging tricks that other protections have, as well as the VM protection. Since the virtual machine is generated randomly on each build, the crackers will need to analyze the VM instruction set and reverse engineer the protected code back to machine code.
The main disadvantage of VM protection is that if it's overused (used to protect excessive parts of the code), it can slow down your application considerably - so you'll need to protect just the critical parts (registration checks, etc). It also doesn't apply to certain application types - it likely won't work on DLLs that are used for injection, as well as device drivers.
I've also heard that StrongBit EXECryptor is a decent protection package at a decent price. (I'm not affiliated with said company, nor do I guarantee any quality whatsoever; it's just word of mouth and worth checking out, IMO.)
This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear people criticised for googling answers to problems, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
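A tiny, made-up illustration of the difference: the first method's comment merely duplicates the code (and will eventually drift out of date), while the second lets the code carry the "what" and reserves the comment for the "why".

    public class InvoiceExample {
        private static final int LATE_PAYMENT_FEE = 1; // hypothetical business rule

        static int applyLateFeeCommented(int invoiceTotal) {
            // add one to invoiceTotal   <-- restates the code; adds nothing
            invoiceTotal = invoiceTotal + 1;
            return invoiceTotal;
        }

        static int applyLateFee(int invoiceTotal) {
            // Fee amount is fixed by the (hypothetical) billing policy.
            return invoiceTotal + LATE_PAYMENT_FEE;
        }

        public static void main(String[] args) {
            System.out.println(applyLateFeeCommented(100)); // 101
            System.out.println(applyLateFee(100));          // 101
        }
    }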
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquistiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through in a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
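A small sketch of that suggestion, using java.util.logging so it needs no extra dependencies (the class and method are invented for illustration):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderProcessor {
        private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

        int applyDiscount(int total, int discountPercent) {
            // Quick-and-dirty print debugging while investigating a problem:
            System.out.println("applyDiscount: total=" + total + " discount=" + discountPercent);

            int result = total - (total * discountPercent / 100);

            // The longer-lived alternative: a log statement that can stay in the
            // code and be switched on or off via logging configuration.
            LOG.log(Level.FINE, "applyDiscount({0}, {1}) -> {2}",
                    new Object[]{total, discountPercent, result});
            return result;
        }

        public static void main(String[] args) {
            System.out.println(new OrderProcessor().applyDiscount(200, 10)); // 180
        }
    }

With the default configuration, FINE messages are not printed, so the logging call can be left in for production and enabled only when needed.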
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs where half of those are just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; it is maybe a bit different if you're using threads (but that generally is not the case) or if your accessors have business/presentation logic (something 'strange', at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob on this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
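A small, hypothetical sketch of the argument: the first class is "encapsulated" only in syntax, since the accessor pair exposes the field just as completely as making it public would; the second exposes operations that enforce the class's rules instead.

    // Effectively a public field, despite the private keyword: callers can
    // read and overwrite 'balance' at will.
    class AccountWithAccessors {
        private int balance;

        public int getBalance() { return balance; }
        public void setBalance(int balance) { this.balance = balance; }
    }

    // Closer to real encapsulation: the field can only change through
    // operations that enforce the class's invariants.
    class Account {
        private int balance;

        public void deposit(int amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            balance += amount;
        }

        public void withdraw(int amount) {
            if (amount <= 0 || amount > balance) throw new IllegalArgumentException("invalid amount");
            balance -= amount;
        }

        public int balance() { return balance; }
    }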
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate it when I see sloppy, free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
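For concreteness, here is one possible answer to both exercises (the answer above mentions roughly 10 lines of C#; this sketch is in Java and is just one way of doing it):

    public class InterviewQuestions {
        // Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
        // Keep adding terms until they can no longer affect the 5th decimal place.
        static double estimatePi() {
            double sum = 0.0;
            double sign = 1.0;
            for (long i = 1; ; i += 2) {
                double term = sign / i;
                sum += term;
                sign = -sign;
                if (Math.abs(4 * term) < 0.5e-6) { // later terms are even smaller
                    break;
                }
            }
            return Math.round(4 * sum * 100000.0) / 100000.0; // round to 5 decimals
        }

        // Area of a circle: pi * r^2.
        static double circleArea(double radius) {
            return Math.PI * radius * radius;
        }

        public static void main(String[] args) {
            System.out.println(estimatePi());    // expected: 3.14159
            System.out.println(circleArea(2.0)); // expected: about 12.566
        }
    }

The stopping condition relies on the alternating-series error bound: once a term can no longer affect the fifth decimal place, neither can any of the terms after it.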
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
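A made-up before/after sketch of that rule: the long version mixes filtering and averaging in one method, while the refactored version turns each nameable step into its own small method.

    import java.util.ArrayList;
    import java.util.List;

    public class ReportExample {
        // Before: one longer method doing several nameable things at once.
        static double averageExcludingOutliersLong(List<Double> samples) {
            double sum = 0;
            int count = 0;
            for (double s : samples) {
                if (s >= 0 && s <= 1000) { // drop values outside the expected range
                    sum += s;
                    count++;
                }
            }
            return count == 0 ? 0 : sum / count;
        }

        // After: each nameable step is its own small method.
        static double averageExcludingOutliers(List<Double> samples) {
            return average(withinExpectedRange(samples));
        }

        static List<Double> withinExpectedRange(List<Double> samples) {
            List<Double> kept = new ArrayList<>();
            for (double s : samples) {
                if (s >= 0 && s <= 1000) {
                    kept.add(s);
                }
            }
            return kept;
        }

        static double average(List<Double> values) {
            if (values.isEmpty()) {
                return 0;
            }
            double sum = 0;
            for (double v : values) {
                sum += v;
            }
            return sum / values.size();
        }

        public static void main(String[] args) {
            List<Double> data = List.of(3.0, 5.0, 5000.0, 7.0);
            System.out.println(averageExcludingOutliersLong(data)); // 5.0
            System.out.println(averageExcludingOutliers(data));     // 5.0
        }
    }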
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.