Is there a general rule of thumb as to how many classes, interfaces, etc. should go into a given namespace before the items should be further classified into a new namespace? Is there a best practice or community preference, or is this all personal preference?
namespace: MyExample.Namespace
interface1
interface2
interface3
interface4
interface5
interface6
interface7
interface8
interface9
Or
namespace: MyExample.Namespace.Group1
interface1
interface2
interface3
namespace: MyExample.Namespace.Group2
interface4
interface5
interface6
namespace: MyExample.Namespace.Group3
interface7
interface8
interface9
I have not seen a rule of thumb from any reliable source, but there are a few common preferences I have seen while working with developers. There are a few things that help you organize namespaces:
The domain of the class.
Whether it is a class or an interface (I have seen some developers prefer namespaces like ShopApp.Model.Interfaces). This works really well if your interfaces are service or data contracts; see the sketch after this list.
Don't make namespaces too deep; three dots is enough. More than that can get annoying.
Be open to reorganizing a namespace if at any time you feel it has become illogical or hard to manage.
Do not create namespaces just for the sake of it.
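As a minimal illustration of the interface point above (all names in this sketch are invented), a layout like this keeps contracts in their own namespace while staying only three levels deep:

namespace ShopApp.Model
{
    // A domain class, grouped by what it is about.
    public class Order
    {
        public int Id { get; set; }
    }
}

namespace ShopApp.Model.Interfaces
{
    // Service/data contracts get their own namespace.
    public interface IOrderRepository
    {
        ShopApp.Model.Order GetById(int id);
    }
}

namespace ShopApp.Data
{
    public class SqlOrderRepository : ShopApp.Model.Interfaces.IOrderRepository
    {
        public ShopApp.Model.Order GetById(int id)
        {
            // Real data access omitted; stub for illustration only.
            return new ShopApp.Model.Order { Id = id };
        }
    }
}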
If building a library or a module, it is generally better to use only one namespace, since the primary function of a namespace is to avoid name collisions, and you have control over what names get assigned to classes and interfaces.
I don't know of any rule of thumb for the number of items, but those kinds of rules tend to be over-generalized garbage anyway. Make sure there is a logical connection between items in the same namespace. If a namespace is getting too crowded (unlikely, I hope), or the things in the namespace are only loosely related at best, consider breaking it up into multiple namespaces.
I would argue that the namespace hierarchy should be governed only by considerations of design and the hierarchy of the model/API.
If one namespace sports a huge number of unrelated classes, rethink your design.
Contrary to what Andrew said, I would not worry about namespaces containing few classes – although it's of course true that the hierarchy should only be as fine-grained as needed to express the design.
On the other hand, I find it completely reasonable for a namespace to contain only one highly special class, or perhaps just a very small set of types, of which one encodes the task and the others provide an API (exceptions, enums for arguments …).
As an example, take System.Text.RegularExpressions (in .NET). Granted, slightly more than one class, but only just.
It is generally considered bad form to have only a small number of classes in a namespace. I have always attributed this to the fact that too many namespaces lead to confusion.
I would suggest that you break the classes into logical namespaces, being as reasonable and practical as possible. However, if you end up with only one or two classes per namespace, you might be fracturing too much and should think about consolidating.
There is a lot of confusion around the internet about the SRP.
Does the SRP require that:
classes/functions should do one job?
classes/functions should have only one reason to change (and we do not care how many jobs our class is performing, at least when we think about the SRP)?
For example:
Let's assume that we have one class that performs a lot of work/jobs (I know this is bad; we should not put everything into one class).
Also, let's assume that this one class serves one feature, and this feature has only one reason to change, i.e. the reason for change can come only from one actor (e.g. our CTO).
Does this code still comply with the SRP?
Additionally, quoting Clean Architecture by Robert C. Martin:
Of the SOLID principles, the Single Responsibility Principle (SRP) might be the least well understood. That's likely because it has a particularly inappropriate name. It is too easy for programmers to hear the name and then assume that it means that every module should do just one thing.
Make no mistake, there is a principle like that. A function should do one, and only one, thing. We use that principle when we are refactoring large functions into smaller functions; we use it at the lowest levels. But it is not one of the SOLID principles — it is not the SRP.
As always, it depends. "Single Responsibility" means just that, to be responsible for one thing.
The "one thing" could be a narrow or a some sort of a wide field. An simple example:
Imagine a class that calculates a cryptographic signature of a string and another class for encrypting a string. Both classes respect SRP because they each do just one thing.
If you tie them together in one class with two methods, one for encrypting a string and one for calculating the signature, you're clearly violating the SRP, because encrypting and signing are not inherently related.
But now imagine you have a system which exchanges signed and encrypted strings that conform to some standard. Of course these two functions are now related, and one class has to handle both operations.
A client of this class is not even interested in how the signing and encryption are related. A client just provides a string to be prepared for transmission, and the class signs and encrypts the string. So this class of course respects the SRP despite doing two things, signing and encrypting.
Back to your (bad) example with the class that performs a lot of work/jobs. When the jobs the class performs are related, there is a chance that SRP is respected. But when the jobs are not related, the class clearly violates SRP.
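To make the second scenario concrete, here is a minimal C# sketch (all names invented, and the XOR "encryption" is only a placeholder, not a real cipher): the client asks for a transmission-ready string, and signing and encrypting remain internal details of that single responsibility.

using System;
using System.Security.Cryptography;
using System.Text;

public class SecureMessagePreparer
{
    private readonly byte[] key;

    public SecureMessagePreparer(byte[] key)
    {
        this.key = key;
    }

    // The single responsibility: prepare a message for transmission
    // according to some (hypothetical) standard.
    public string PrepareForTransmission(string message)
    {
        byte[] data = Encoding.UTF8.GetBytes(message);
        byte[] signature = Sign(data);
        byte[] encrypted = Encrypt(data);
        // A real standard would define the wire format; this just concatenates.
        return Convert.ToBase64String(signature) + "." + Convert.ToBase64String(encrypted);
    }

    private byte[] Sign(byte[] data)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(data);
        }
    }

    private byte[] Encrypt(byte[] data)
    {
        // Placeholder only; a real implementation would use AES or similar.
        byte[] result = new byte[data.Length];
        for (int i = 0; i < data.Length; i++)
        {
            result[i] = (byte)(data[i] ^ key[i % key.Length]);
        }
        return result;
    }
}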
Personally I think it's a bad practice in software development (See Clean Code by Robert C. Martin). But I'm curious to know why people still use them.
I'm talking about acronyms in filenames, variable names, class names, function names, etc.
If you answer, please specify the context (Ex: language, big/small company).
EDIT: I'm not talking about technical acronyms that are common knowledge (ex: sql, html, css, etc.), but rather acronyms within a business.
Two examples:
1) Putting two letters that represent the company before each class name
SuperCompany: SCNode, SCObject
2) Putting a prefix based on a specific module name
Graphic: GRTexture, GRMaterial
There is no correct answer to this question, but it is my opinion that you should only use an acronym if another programmer immediately knows its expansion or meaning. Common examples would be names like dvdPlayer or cssClass, where a longer version would decrease the readability of your code.
If you are in doubt, don't use acronyms, but don't call your class HypertextTransferProtocolRequest instead of HttpRequest, because of a strict no-acronym codex.
Context: Medium Company
Field: Medical Engineering
Languages: Python, JavaScript, C/C++, Perl, etc. etc.
There are lots of reasons NOT to use acronyms in your source code, but in our situation we are gated/regulated by the FDA and several other government agencies that require us to put non-code relevant comments throughout our entire "system" (for auditing/documentation purposes) -- I can't see how we could get through this process without using acronyms.
On the flip-side: if I was given the choice, I'd not add 90% of what they require us to add to our source code, which would effectively eliminate all the esoteric ambiguity (acronyms and regulation tracking numbers) in our code.
So, YES, I use them, NO, I'd prefer not - but my industry requires it.
Are you sure that Clean Code says anything about acronyms? I think it talks about readability. Acronyms are not always unreadable and meaningless. There are at least two cases where acronyms are necessary.
One is technical language that is well understood by other programmers (CSS, HTML, DAO, DTO, regexp, SQL, etc.). You shouldn't avoid these; they are first-class citizens. Try to replace them and you will have a lot of misunderstandings with other developers.
The second rule is: use the same language that your clients use. They won't change the names they use; they have their own acronyms (as we have SQL, CSS, etc.). If you start to change them in your code, you will quickly have a lot of misunderstandings with the business.
I am trying to get a couple of team members, who currently think in terms of procedural programming, into the OOP mindset.
However, I am having a hard time putting into words why all this is good and why they should want to benefit from it.
They use a different language than I do, and I am lacking the communication skills to explain this to them in a way that makes them "want" to learn the OOP way of doing things.
What are some good language independent books, articles, or arguments anyone can give or point to?
OOP is good for a multi-developer team because it easily allows abstraction, encapsulation, inheritance and polymorphism. These are the big buzz words of OOP and they are the big buzz words for good reasons.
Abstraction: Allows other members of your team to use code that you write without having to understand the implementation details. This reduces the amount of necessary communication. Think of The Mythical Man Month wherein it is detailed that communication is one of the highest costs facing a development team.
Encapsulation: Allows you to change your implementation details without impacting users of your code. As such, it reduces code maintenance costs.
Inheritance: Allows your team to reuse and extend your implementations with reduced costs.
Polymorphism: Allows your team to use different implementations of a given abstraction. If your team is writing code to read and parse data from a Stream, because of polymorphism it can now work with FileStreams, MemoryStreams and PigeonStreams seamlessly and with significantly reduced costs.
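As a rough sketch of the polymorphism point (the parsing logic and file name are invented for the example), the same method works unchanged with any Stream implementation:

using System;
using System.IO;
using System.Text;

public static class DataParser
{
    // Written once against the abstract Stream type.
    public static string ReadAll(Stream input)
    {
        using (var reader = new StreamReader(input))
        {
            return reader.ReadToEnd();
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var memory = new MemoryStream(Encoding.UTF8.GetBytes("hello"));
        Console.WriteLine(DataParser.ReadAll(memory));   // works with a MemoryStream

        using (var file = File.OpenRead("data.txt"))      // same code works with a FileStream
        {                                                 // (assumes data.txt exists)
            Console.WriteLine(DataParser.ReadAll(file));
        }
    }
}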
OOP is not a holy grail. It is inappropriate for some teams because the costs of using it could be higher than the costs of not using it. For example, if you try to design for polymorphism but never have multiple implementations of a given abstraction then you have probably increased your costs.
Always give examples.
Take a bit of their code you think is bad. Re-write it to be better. Explain why it is better. Your colleagues will either agree or disagree.
Nobody uses (or should use) techniques because they're good techniques; they (should) use them because they produce good results. The advantages of very simple use of classes and objects are usually fairly easy to see, for instance when you have an array of objects with n properties instead of n arrays, one for each field you care about, and so on.
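A tiny sketch of that last point, with invented data: parallel arrays kept in sync by index versus a single array of objects.

class Person
{
    public string Name;
    public int Age;
}

class Example
{
    // Procedural style: one array per field, kept in sync by index.
    static string[] names = { "Ada", "Bob" };
    static int[] ages = { 36, 41 };

    // Object style: one array whose elements carry their own fields.
    static Person[] people =
    {
        new Person { Name = "Ada", Age = 36 },
        new Person { Name = "Bob", Age = 41 },
    };
}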
Comparing procedural to OOP, the biggest winner by far is encapsulation. OOP doesn't mean that you get encapsulation automatically, but the process of doing it is free compared with procedural code.
Abstraction helps manage the complexity of an application: only the information that's required is exposed.
There are many ways to go about this: OOP is not the only discipline to promote this strategy.
Of course, merely claiming to do OOP does not mean one builds an application without abundant "abstraction leaks" that defeat the strategy...
I have a somewhat strange thought. There probably exist some areas where OOP is unnecessary or even bad (very, very IMHO: JavaScript programming).
You and your team probably work in one of these areas. Otherwise you would have failed many years ago, because teams which use OOP and all its benefits (like various frameworks, UML, and so on) would simply do their job more efficiently.
I mean that if you still work well without OOP then maybe you should just leave it.
The killer phrase: With OOP you can model the world "as it is" *cough*.
OOP didn't make sense to me until I was working on an application that connected to two different databases. I needed a function called getEmployeeName() for both databases. I decided to create two objects, one for each database, to encapsulate the functions that ran against each one (there were no functions that ran against both simultaneously). Not the epitome of OOP, but a good start for me.
Most of my code is still procedural, but I'm more aware of situations where objects would make sense in my code. I'm not of the mindset that everything needs to be one way or the other.
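Roughly what that first step might have looked like (a C# sketch with invented names, apart from the getEmployeeName idea): one class per database, each encapsulating the queries that run against it.

public class HrDatabase
{
    private readonly string connectionString;

    public HrDatabase(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public string GetEmployeeName(int employeeId)
    {
        // The query against the HR database would run here, using connectionString.
        return "name from HR database";
    }
}

public class PayrollDatabase
{
    private readonly string connectionString;

    public PayrollDatabase(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public string GetEmployeeName(int employeeId)
    {
        // The query against the payroll database would run here, using connectionString.
        return "name from payroll database";
    }
}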
Re-use of existing code through hierarchies.
The killer argument is IMHO that it takes much less time to re-design your code. Here is a similar question explaining why.
Having the ability to pass around an entire object that has a bunch of methods/functions you can call on it. For example, let's say you want to pass a message around: you only need to pass one object, and everyone who gets that object will have access to all its functions.
Also, you can declare some objects' functions as public and some as private. There is also the concept of a friend function where only objects that are related through OO hierarchies have access to their friend's functions.
Objects help keep functions near the data they use and encapsulates it all into one entity that can be easily passed around.
I hear a couple of people using the term 'programming' rather than configuring, for example:
Have you already programmed Apache's Virtual Hosts configuration correctly, with ServerName named FOO?
Program your .vimrc first before starting Vim the first time.
The last is a word-for-word citation from my teacher, but I didn't dare to correct him. Is it OK to use 'programming' instead of 'configuring'?
IMHO this sounds very ugly.
Well, ordinary people "program" their VCR, TiVo, etc., so for ordinary people, program == configure. Note that even programmers don't say "program the JavaScript"; instead, people use words like "develop" or "write" for writing programs in the programming sense.
A definition I like for programming is:
creating a sequence of instructions to enable the computer to do something
So, if you configure anything, you are indirectly creating a sequence of instructions, which IMHO would "qualify" configuring as an indirect type of programming.
EDIT:
Also, computer development is far more than computer programming. To develop, you need much more than just writing instructions; you also need:
Requirements definition
Specification writing
Planning
A lot more
I generally tend to prefer the term 'coding' and the verb 'to code' rather than 'programming'. It's just that bit less fuzzy and has fewer alternative meanings.
Configuration is just a form of (usually declarative rather than procedural) scripting, i.e., programming against an API.
In most cases, what we call configuration is not sophisticated enough to be worthy of the name "scripting" or "programming", but some systems based on Ruby, Python, or Lisp -- e.g., Emacs -- use the programming language as a configuration language, and then configuration really does blend into programming.
If only I could tell you the kinds of things I've heard... For example, during a network security class we had to generate SSH certificates, and one girl said that the tool that generated the keys "wasn't compiling" (of course it was already compiled and installed; she just had to use it to generate the certificates! I suspect that for her, anything that had to be done in the console was "compiling").
So in brief, people will always speak and write badly, just don't follow them.
I completely agree with slebetman, but I'll also add that there might be some age and/or regional issues here.
As a military brat, having lived in the US south, and now working with a bunch of Europeans, I frequently run into words used in different ways than I expected. Some of it might be slang to us, but it's completely normal to the person using it, and frequently, when I look up the words in a dictionary, I'll find an alternate definition that makes perfect sense.
In this particular case, from dictionary.com, the last verb definition for 'program' is:
to set, regulate, or modify so as to produce a specific response or reaction: Program your eating habits to eliminate sweets.
Other times, I'll find that more recent generations have taken words and used them in more limited ways, but the term has a more general meaning. (casket comes to mind, which originally just meant 'small box', but now has death connotations)
I'd say that these are incorrect usages of the term 'programming' - as you say this is simply configuration/setup.
In a sense, configuration is programming. It is a set of instructions for a computing device that has a very limited language - the set of allowable values for the parameters of the device/software.
One could view the Apache server, for example, as a language interpreter, and the parameter values as the source code for that interpreter.
However, the devices are not Turing-equivalent in general (exceptions are things like Emacs, which definitely is), and I would personally reserve "programming" for cases where the language is Turing-equivalent.
Design patterns are great in that they distill a potentially complex technique into something idiomatic. Often just the fact that it has a name helps communication and understanding.
The downside is that it makes it easier to try to use them as a silver bullet, applying them to every situation without thinking about the motivation behind them, and taking a second to consider whether a given pattern is really appropriate for the situation.
Unlike this question, I'm not looking for design patterns that are often misused, but I'd love to see some examples of really solid design patterns put to bad use. I'm looking for cases where someone "missed the point" and either applied the wrong pattern, or even implemented it badly.
The point of this is that I'd like to be able to highlight that design patterns aren't an excuse to disable critical analysis. Also, I want to emphasise the need to understand not just what the patterns are, but why they are often a good approach.
I have an application that I maintain that uses a provider pattern for just about everything -- with little need. There are multiple levels of inheritance as well. As an example, there's a data provider interface that is implemented by an abstract BaseDataProvider, which is in turn extended by a SqlDataProvider. In each of the hierarchies, there is only one concrete implementation of each type. Apparently the developer got hold of a Microsoft document on implementing Enterprise Architecture and, because lots of people use this application, decided it needed all the flexibility to support multiple databases, multiple enterprise directories, and multiple calendaring systems, even though we only use MS SQL Server, Active Directory, and Exchange.
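A condensed sketch of that hierarchy (simplified, with invented members): an interface, an abstract base, and a single concrete implementation, which is all the "flexibility" that was ever used.

public interface IDataProvider
{
    string GetData(string key);
}

public abstract class BaseDataProvider : IDataProvider
{
    public abstract string GetData(string key);
}

public class SqlDataProvider : BaseDataProvider
{
    public override string GetData(string key)
    {
        // The only concrete implementation that ever existed.
        return "value for " + key;
    }
}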
To top it all off, configuration items like credentials, URLs, and paths are hard-coded all over the place AND override the data that is passed in via parameters to the more abstract classes. Changing this application is a lot like pulling on a thread in a sweater: the more you pull, the more things unravel, and you end up making changes all over the code to do something that should have been simple.
I'm slowly rewriting it -- partly because the code is awful and partly because it's really three applications rolled up into one, one of which isn't really even needed.
Well, to share a bit of experience: in C#, I had a nice, cool design which used lots of patterns. I really used a lot of them, so to make the story short, I won't name them all. But when I actually tested with real data, the 10^6 objects didn't "run smoothly" with my beautiful design. By profiling it, I saw that all those levels of indirection from the nicely polymorphic classes, proxies, etc. were just too much. I guess I could have rewritten it using better patterns to make it more efficient, but I had no time, so I practically hacked it procedurally, and until now, well, it works way better. Sigh, sad story.
I have seen an ASP.NET app where the (then junior, now quite capable) developer had managed to effectively make his code-behinds singletons, thinking each page was unique. This worked brilliantly on his local machine, right up to the point where the testers were fighting for control of the login screen.
Purely a misunderstanding of the scope of "unique" and a mind eager to use these design pattern thingies.
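A minimal sketch of that kind of mistake (not the actual code; names invented): static, application-wide state in a page class, so every request overwrites everyone else's "current user".

public class LoginPage
{
    // "static" means one field shared by the whole application,
    // not one per page instance / per visitor.
    private static string currentUserName;

    public void OnLoginClicked(string userName)
    {
        // Every visitor overwrites everyone else's login.
        currentUserName = userName;
    }

    public string GetWelcomeMessage()
    {
        return "Welcome, " + currentUserName;
    }
}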