Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Personally, I think using acronyms is a bad practice in software development (see Clean Code by Robert C. Martin), but I'm curious to know why people still use them.
I'm talking about acronyms in filenames, variable names, class names, function names, etc.
If you answer, please specify the context (e.g., language, big or small company).
EDIT: I'm not talking about technical acronyms that are common knowledge (e.g., SQL, HTML, CSS), but rather acronyms used within a business.
Two examples:
1) Putting two letters representing the company before each class name (SuperCompany: SCNode, SCObject)
2) Prefixing with an abbreviation of a specific module name (Graphic: GRTexture, GRMaterial)
There is no correct answer to this question, but in my opinion you should only use an acronym if another programmer will immediately know its expansion or meaning. Common examples are names like dvdPlayer or cssClass, where the longer version would decrease the readability of your code.
If you are in doubt, don't use acronyms; but don't call your class HypertextTransferProtocolRequest instead of HttpRequest just because of a strict no-acronym policy, either.
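For illustration, a minimal C# sketch of the trade-off (all names are made up, echoing the examples in the question):

    // Widely known acronyms keep names readable:
    class HttpRequest { }                      // fine: every programmer reads "Http"
    class HypertextTransferProtocolRequest { } // a strict no-acronym policy hurts readability

    // Company or module prefixes mostly duplicate what namespaces already provide:
    namespace SuperCompany.Graphics
    {
        class Texture { }   // instead of GRTexture
        class Material { }  // instead of GRMaterial
    }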
Context: Medium Company
Field: Medical Engineering
Languages: Python, JavaScript, C/C++, Perl, etc.
There are lots of reasons NOT to use acronyms in your source code, but in our situation we are gated/regulated by the FDA and several other government agencies that require us to put non-code-relevant comments throughout our entire "system" (for auditing/documentation purposes). I can't see how we could get through this process without using acronyms.
On the flip side: if I were given the choice, I would not add 90% of what they require us to add to our source code, which would effectively eliminate all the esoteric ambiguity (acronyms and regulation tracking numbers) in our code.
So: YES, I use them; NO, I'd prefer not to. But my industry requires it.
Are you sure that Clean Code says anything about acronyms? I think it talks about readability, and acronyms are not always unreadable and meaningless. There are at least two cases where acronyms are necessary.
One is technical language that is well understood by other programmers (CSS, HTML, DAO, DTO, regExp, SQL, etc.). You shouldn't avoid these; they are first-class citizens. Try to replace them and you will have a lot of misunderstandings with other developers.
The second rule is: use the same language your clients use. They won't change the names they use; they have their own acronyms (just as we have SQL, CSS, etc.). If you start changing them in your code, you will quickly have a lot of misunderstandings with the business.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
There is a lot of confusion around the internet about the SRP (Single Responsibility Principle).
Does the SRP require that:
1) classes/functions should do one job? Or that:
2) classes/functions should have only one reason to change (and we do not care how many jobs our classes perform, at least when we think about the SRP)?
For example, let's assume that we have one class that performs a lot of work/jobs (I know this is bad; we should not put everything into one class).
Also, let's assume that this one class serves one feature, and this feature has only one reason to change, i.e., the reason to change can come only from one actor (e.g., our CTO).
Does this code still comply with the SRP?
Additionally, quoting Clean Architecture by Robert C. Martin:

Of all the SOLID principles, the Single Responsibility Principle (SRP) might be the least well understood. That's likely because it has a particularly inappropriate name. It is too easy for programmers to hear the name and then assume that it means that every module should do just one thing.
Make no mistake, there is a principle like that. A function should do one, and only one, thing. We use that principle when we are refactoring large functions into smaller functions; we use it at the lowest levels. But it is not one of the SOLID principles — it is not the SRP.
As always, it depends. "Single Responsibility" means just that: being responsible for one thing.
The "one thing" could be a narrow field or a rather wide one. A simple example:
Imagine a class that calculates a cryptographic signature of a string, and another class that encrypts a string. Both classes respect the SRP because each does just one thing.
If you tie them together into one class with two methods, one for encrypting a string and one for calculating the signature, you're clearly violating the SRP, because encrypting and signing are not related.
But now imagine you have a system which exchanges signed and encrypted strings that conform to some standard. Of course these two functions are now related, and one class has to handle both operations.
A client of this class is not even interested in how the signing and encryption are related. A client just provides a string to be prepared for transmission, and the class signs and encrypts it. So this class respects the SRP even though it does two things, signing and encrypting.
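A minimal C# sketch of that idea (all class and method names are hypothetical, and the bodies are stubs, not real cryptography):

    // Each low-level class does exactly one thing.
    class Signer
    {
        public string Sign(string data) { return "sig(" + data + ")"; }
    }

    class Encryptor
    {
        public string Encrypt(string data) { return "enc(" + data + ")"; }
    }

    // This class still respects the SRP: its single responsibility is
    // "prepare a string for transmission", even though that involves two jobs.
    class TransmissionPreparer
    {
        private readonly Signer signer = new Signer();
        private readonly Encryptor encryptor = new Encryptor();

        public string Prepare(string data)
        {
            // The client never sees how signing and encryption are related.
            return encryptor.Encrypt(data + ":" + signer.Sign(data));
        }
    }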
Back to your (bad) example with the class that performs a lot of work/jobs: when the jobs the class performs are related, there is a chance that the SRP is respected. But when the jobs are not related, the class clearly violates the SRP.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 13 years ago.
I hear a couple of people using the term 'programming' rather than configuring, for example:
"Have you already programmed Apache's Virtual Hosts configuration correctly, with ServerName named FOO?"

"Program your .vimrc first before starting Vim the first time."
The last one is a word-for-word citation from my teacher, but I didn't dare to correct him. Is it OK to use 'programming' instead of 'configuring'?
IMHO this sounds very ugly.
Well... ordinary people "program" their VCR, TiVo, etc., so for ordinary people program == configure. Note that even programmers don't say "program the JavaScript"; instead, people use words like "develop" or "write" for writing programs in the programming sense.
A definition I like for programming is:
creating a sequence of instructions to enable the computer to do something
So, if you configure anything, you are indirectly creating a sequence of instructions, which IMHO would "qualify" configuring as an indirect type of programming.
EDIT:
Also, computer development is far more than computer programming. To develop, you need much more than just writing instructions; you also need:
Requirements definition
Writing specifications
Planning
And a lot more
I generally tend to prefer the term 'coding' and the verb 'to code' rather than 'programming'. It's just that bit less fuzzy and has fewer alternative meanings.
Configuration is just a form of (usually declarative rather than procedural) scripting, i.e., programming against an API.
In most cases, what we call configuration is not sophisticated enough to be worthy of the name "scripting" or "programming", but some systems based on Ruby, Python, or Lisp -- e.g., Emacs -- use the programming language as a configuration language, and then configuration really does blend into programming.
If I told you what kinds of things I've heard... For example, during a network security class we had to generate SSH certificates, and one girl said that the tool that generated the keys "wasn't compiling" (of course it was already compiled and installed; she just had to use it to generate the certificates! I suspect that for her, anything done in the console was "compiling").
So in brief, people will always speak and write badly; just don't follow them.
I completely agree with slebetman, but I'll also add that there might be some age and/or regional issues here.
As a military brat, having lived in the US south, and now working with a bunch of Europeans, I frequently run into words used in different ways than I expected. Some of it might be slang to us, but it's completely normal to the person using it, and frequently, when I look the words up in a dictionary, I'll find an alternate definition that makes perfect sense.
In this particular case, from dictionary.com, the last verb definition for 'program' is:

"to set, regulate, or modify so as to produce a specific response or reaction: Program your eating habits to eliminate sweets."
Other times, I'll find that more recent generations have taken words and used them in more limited ways, even though the term has a more general meaning ('casket' comes to mind, which originally just meant 'small box' but now has death connotations).
I'd say that these are incorrect usages of the term 'programming' - as you say this is simply configuration/setup.
In a sense, configuration is programming. It is a set of instructions for a computing device that has a very limited language - the set of allowable values for the parameters of the device/software.
One could view the Apache server, for example, as a language interpreter, and the parameter values as the source code for that interpreter.
However, such devices are not Turing-equivalent in general (exceptions are things like Emacs, where the configuration language definitely is), and I would personally reserve "programming" for cases where the language is Turing-equivalent.
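To illustrate that view, here is a toy C# sketch (hypothetical, and nothing like Apache's real parser): the set of allowable parameters forms a tiny language, and the device acts as its interpreter. With no loops or branches in the language, it is not Turing-equivalent:

    using System;
    using System.Collections.Generic;

    class ConfigInterpreter
    {
        // The "language" is just the set of allowable parameter names.
        static readonly HashSet<string> AllowedKeys =
            new HashSet<string> { "ServerName", "DocumentRoot" };

        public static Dictionary<string, string> Interpret(string[] lines)
        {
            var settings = new Dictionary<string, string>();
            foreach (var line in lines)
            {
                var parts = line.Split(new[] { ' ' }, 2); // "Key Value"
                if (parts.Length != 2 || !AllowedKeys.Contains(parts[0]))
                    throw new ArgumentException("Not part of the config language: " + line);
                settings[parts[0]] = parts[1];
            }
            return settings;
        }
    }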
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
This may sound like a foolish question or observation, but I have seen that most of the time, when one looks at open-source code, there are no comments, or just one or two lines at the start of a function telling what the function is used for (e.g., register a user, insert data into a table, etc.). There are no comments that actually explain what exactly the function is doing.
Are comments removed intentionally when code is released to the open-source community, to make things difficult for others to understand?
There is a line of thought that says that comments are unnecessary when the code speaks for itself. I don't believe comments would be removed on purpose, though.
I've seen both sides, and frankly code in general is insufficiently documented.
I've been congratulated and thanked for leaving copious breadcrumbs but that's because I've had to sift through too much undocumented code to want to subject anyone else to it.
Call it an ethical obligation.
My reason to document code: my short-term memory is junk. I write comments to remind myself of why I did something. Everyone else benefiting from that is gravy.
I don't think there is a practice or policy of removing comments when releasing software as open source. A sneaky software publisher might think that a good idea (maintaining de facto exclusivity while having released an open-source product, because nobody can understand the code), but this would cripple the open-source project from the start and most likely render it unusable.
The code you are talking about is probably just sparsely documented. As ocdecio says, that can be either a good sign (the code speaks for itself and does not need comments) or a bad one (it is badly documented, bad code). Both cases are entirely possible. :)
What are you comparing it to?
I doubt that closed-source code has better comments.
As for what functions do, there is probably API documentation. No need to duplicate those in comments.
As a rule, functions should be small enough and written in a way that lets you work out how they work just by reading them. A comment on top of a function describing what it does helps to get a quick overview when reading through the whole source file (unless the function name speaks for itself).
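For instance, a minimal C# sketch of that style (all names are made up):

    class User
    {
        public bool IsActivated { get; set; }
        public bool IsLockedOut { get; set; }
    }

    class LoginPolicy
    {
        // Returns whether the user may log in: the account must be
        // activated and not locked out. The name already tells the story;
        // the comment only provides the quick overview when skimming the file.
        public static bool CanLogIn(User user)
        {
            return user.IsActivated && !user.IsLockedOut;
        }
    }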
Many projects are organized in that way, and that is great.
However, what I often miss when trying to find my way around a larger codebase is something describing the big picture, i.e., the general architecture, the principles, what goes where, and similar things.
Not all open source is made the same; the question is a generalization.
If you look at the website Ohloh, which tracks a very large amount of the open source software in existence, it paints a much different picture:
http://www.ohloh.net/languages?query=&sort=code
For instance, in the C language there are 252+ million lines of comments; approximately 1 in every 5 lines of C is a comment. For Java, nearly 1 in 3 lines is a comment. That's not bad.
Open-source software has bad comments and bad documentation most of the time. There are various reasons why, some better than others; usually they relate to laziness or the developers 'being in the moment'. None of the reasons are good ones.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Is there a general rule of thumb as to how many classes, interfaces, etc. should go into a given namespace before the items should be further classified into a new namespace? Is there a best practice or community preference, or is this all personal preference?
namespace MyExample.Namespace
    interface1
    interface2
    interface3
    interface4
    interface5
    interface6
    interface7
    interface8
    interface9

Or

namespace MyExample.Namespace.Group1
    interface1
    interface2
    interface3

namespace MyExample.Namespace.Group2
    interface4
    interface5
    interface6

namespace MyExample.Namespace.Group3
    interface7
    interface8
    interface9
I have not seen a rule of thumb from any reliable source, but there are a few common preferences I have seen while working with most developers. A few things help shape the namespaces:
Domain of the class
Whether it is a class or an interface (I have seen some developers prefer namespaces like ShopApp.Model.Interfaces). This works really well if your interfaces are service or data contracts.
Don't make namespaces too deep; a depth of 3 is enough, and more than that may get annoying.
Be open to reorganizing a namespace if at any time you feel it has become illogical or hard to manage.
Do not create namespaces just for the sake of it.
If you are building a library or a module, it is generally better to use only one namespace, since the primary function of a namespace is to avoid name collisions, and you have control over what names get assigned to classes and interfaces.
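A small C# sketch of the collision-avoidance point (the library names are invented): two libraries can both define a Node class without clashing, because the namespaces keep the full names distinct.

    namespace GraphLib { public class Node { } }
    namespace TreeLib  { public class Node { } }

    class Consumer
    {
        // Fully qualified names (or using-aliases) resolve the ambiguity:
        GraphLib.Node graphNode = new GraphLib.Node();
        TreeLib.Node  treeNode  = new TreeLib.Node();
    }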
I don't know of any rule of thumb for the number of items, but those kinds of rules tend to be over-generalized garbage anyway. Make sure there is a logical connection between items in the same namespace. If a namespace is getting too crowded (unlikely, I hope), or the things in the namespace are only loosely related at best, consider breaking it up into multiple namespaces.
I would argue that the namespace hierarchy should be governed only by considerations of design and the hierarchy of the model/API.
If one namespace sports huge number of unrelated classes, rethink your design.
Contrary to what Andrew said, I would not worry about namespaces containing few classes – although it's of course true that the hierarchy should only be as fine-grained as needed to express the design.
On the other hand, I find it completely reasonable for a namespace to contain only one highly special class, or perhaps just a very small set of types, of which one encodes the task and the others provide an API (exceptions, enums for arguments …).
As an example, take System.Text.RegularExpressions (in .NET). Granted, slightly more than one class, but only just.
It is generally considered bad form to have a small number of classes in a namespace. I have always attributed this to the fact that having many namespaces leads to confusion.
I would suggest that you break the classes into logical namespaces, being as reasonable and practical as possible. However, if you end up with only one or two classes per namespace, you might be fracturing too much and should think about consolidating.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
Design patterns are great in that they distill a potentially complex technique into something idiomatic. Often just the fact that it has a name helps communication and understanding.
The downside is that this makes it easier to use them as a silver bullet, applying them to every situation without thinking about the motivation behind them or taking a second to consider whether a given pattern is really appropriate for the situation.
Unlike this question, I'm not looking for design patterns that are often misused, but I'd love to see some examples of really solid design patterns put to bad use. I'm looking for cases where someone "missed the point" and either applied the wrong pattern, or even implemented it badly.
The point of this is that I'd like to be able to highlight that design patterns aren't an excuse to disable critical analysis. Also, I want to emphasise the need to understand not just what the patterns are, but why they are often a good approach.
I maintain an application that uses a provider pattern for just about everything, with little need, and with multiple levels of inheritance as well. As an example, there's a data provider interface that is implemented by an abstract BaseDataProvider, which is in turn extended by a SqlDataProvider. In each of these hierarchies, there is only one concrete implementation of each type. Apparently the developer got hold of a Microsoft document on implementing enterprise architecture and, because lots of people use this application, decided it needed all the flexibility to support multiple databases, multiple enterprise directories, and multiple calendaring systems, even though we only use MS SQL Server, Active Directory, and Exchange.
To top it all off, configuration items like credentials, URLs, and paths are hard-coded all over the place AND override the data that is passed in via parameters to the more abstract classes. Changing this application is a lot like pulling on a thread in a sweater: the more you pull, the more things get unraveled, and you end up making changes all over the code to do something that should have been simple.
I'm slowly rewriting it -- partly because the code is awful and partly because it's really three applications rolled up into one, one of which isn't really even needed.
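A C# sketch of the shape described above (hypothetical code, echoing the names in the answer): three layers of indirection, yet only one concrete implementation is ever written.

    interface IDataProvider
    {
        object GetData(string query);
    }

    abstract class BaseDataProvider : IDataProvider
    {
        public abstract object GetData(string query);
    }

    class SqlDataProvider : BaseDataProvider
    {
        // The only database the product actually talks to.
        public override object GetData(string query)
        {
            return "result of " + query; // stub for illustration
        }
    }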
Well, to share a bit of experience: in C#, I once had a nicely cool design which used lots of patterns. I really used a lot of them, so to keep the story short I won't name them all. But when I actually tested with real data, the 10^6 objects didn't "run smoothly" with my beautiful design. Profiling showed that all those levels of indirection from the nicely polymorphic classes, proxies, etc. were just too much. I guess I could have rewritten it with better-chosen patterns to make it more efficient, but I had no time, so I practically hacked it procedurally, and so far it works way better. Sigh, sad story.
I have seen an ASP.NET app where the (then junior, now quite capable) developer had managed to effectively make his codebehinds singletons, thinking each page was unique. It worked brilliantly on his local machine, right up to the point where the testers were fighting for control of the login screen.
Purely a misunderstanding of the scope of "unique", and a mind eager to use these design pattern thingies.
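A C# sketch of that misunderstanding (hypothetical names; a real ASP.NET codebehind would derive from System.Web.UI.Page, omitted here): static fields live once per application, not once per visitor, so every request shares and overwrites the same state.

    public class LoginPage
    {
        private static string currentUserName; // shared by ALL users of the site!

        public void OnLoginClicked(string userName)
        {
            currentUserName = userName; // the last visitor wins
        }

        public string CurrentUserName
        {
            get { return currentUserName; } // every user sees the same value
        }
    }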