What does "ad hoc" mean in programming? - terminology

The term "ad hoc" is used in programming. What exactly does it mean?

"Ad hoc" is a Latin phrase that can apply to anything, not just programming specifically.
It means something that was made up on the fly just to deal with a particular situation, as opposed to some systematic approach to solving problems.
Regarding programming specifically, it's probably similar to what Joel Spolsky recently called duct tape programming.

It basically means writing some quick and dirty code without the intention of reuse. User-entered queries are the usual example. Another common occurrence is a utility to convert data sets from one form to another, which has no further use once the conversion is done.

Formed temporarily for a specific, non-continuing purpose, as an ad hoc committee on ice removal.
Impromptu, not planned, improvised, as an ad hoc attempt to remove the ice with a screw-driver.

Generally meaning improvised / impromptu / made up on the fly, such as ad-hoc reports or queries; not pre-determined / pre-meditated.

The antithesis of "ad hoc" (which means, "specifically for this") might be "commercial off-the-shelf" (COTS) software, which is written to solve a general category of problem (e.g. word-processing or book-keeping) for several possible customers.

Ad hoc means serving one specific purpose, or approaching a solution in an unplanned way. With an ad hoc approach there is no plan, only a deadline to finish the work. Ad hoc work exists in different areas like programming, testing etc. In testing, if the time assigned is very short and the kit has to be delivered within that small amount of time, then we go for ad hoc testing.
In programming, it basically means the developer is not working according to a plan but picking up bits and pieces of the whole codebase as needed. Let me describe it: say there are two developers, 1 and 2, who have to complete three modules, A, B and C. If there is a plan, they can decide up front which module each will work on; working ad hoc, they approach any of the modules in an unplanned manner.

In the context of programming and software applications, ad hoc is typically used to signify that
some coding (or more generically, some definition/specification) is done at run-time,
rather than pre-defined and encapsulated in the application.
Ad hoc items have the characteristic of being done to serve a particular purpose rather than a generic or pre-defined one.
Examples
One may run some ad-hoc queries in SQL to familiarize oneself with the database content (an equivalent expression would be "writing queries on the fly"). This differs from writing queries in the context of a program, where the list of columns to get, the filters to apply etc. are driven by the application's specifications.
In a very similar usage, an end-user may request the ability to run ad-hoc reports (equivalent expression/underlying concept: "a custom report feature"), which indicates the need for the application to allow end users to decide, at run time, which elements of the report they wish to see (possibly in which specific order etc.).
One may also [typically] quickly "whip up" a small program to serve a particular purpose, such as parsing some input for loading a database (possible equivalent: "throw-away code"). Such ad-hoc programs are expected to be used once or a few times, and only in the limited timeframe which surrounds the particular task. The opposite would be to write a generic import utility which may be reused in similar but different contexts (and be used/reused over time).
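For illustration, such a throw-away loader might be nothing more than a few lines of C# (a sketch only; the file path, delimiter and table name are invented):
// One-off, ad-hoc loader: turn a CSV dump into INSERT statements.
// Hard-coded path, no error handling - it only ever has to run once.
using System;
using System.IO;

class LoadCustomers
{
    static void Main()
    {
        foreach (string line in File.ReadAllLines(@"C:\temp\customers.csv"))
        {
            string[] parts = line.Split(',');
            Console.WriteLine("INSERT INTO Customers (Name, City) VALUES ('{0}', '{1}');",
                              parts[0].Replace("'", "''"), parts[1].Replace("'", "''"));
        }
    }
}
Nothing about it is reusable, and that is precisely the point.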

Programming for a specific purpose, usually without any planning. An example would be a macro or something which is designed to do a single task and nothing else.

I've heard it used in reporting, where I take it to mean letting the user choose what columns, grouping, and aggregate functions to put into a report.

my synonym is ad hoc = case study

Related

Reusability, testability, code complexity reduction and showing-off-ability programming importance

There are lots of programming and architecture patterns. Patterns help make code cleaner, more reusable, more maintainable, more testable and, last (but not least), make whoever follows them feel like a really cool developer.
How do you rank these considerations? What appeals to you most when you decide to apply a pattern?
I wonder how often code reusability (especially with the MVP and MVC patterns) has actually been important. For example, a DAL library is often shared between projects (it's reusable), but how often are controllers/views (abstracted via interfaces) reused?
I think you missed the single most important one from your list - more maintainable. Code that is well and consistently structured (as you get with easily reusable code) is much more easily maintained.
And as for reusability, then yes, on a number of occasions, usually something like: create a web page to save/update some record. Some months later - we need to expose this as a service for a third party to consume - if your code is structured well, this should be easy and low risk, as you're only adding a new front end.
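As a rough sketch of that situation (the class and method names here are made up, and both front ends are reduced to stubs): the save/update logic lives in one place, and exposing it to the third party really is just bolting on another thin caller.
// The reusable part: save/update logic that knows nothing about its front end.
public class Record { public int Id; public string Name; }

public class RecordService
{
    public void SaveOrUpdate(Record record)
    {
        // validation and persistence would live here
    }
}

// Front end #1: the original web page handler simply delegates.
public class EditRecordPage
{
    private readonly RecordService service = new RecordService();
    public void OnSaveClicked(Record record) { service.SaveOrUpdate(record); }
}

// Front end #2, added months later for the third party: same delegation, no duplicated logic.
public class RecordEndpoint
{
    private readonly RecordService service = new RecordService();
    public void Put(Record record) { service.SaveOrUpdate(record); }
}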
I hope most people use patterns to learn how to solve design problems in certain contexts. All those non-functional requirements you mention can be really important depending on stakeholder needs for a project.
As for MVC etc. it is not meant only to be reused between projects, that is often not possible or a good idea. The benefits you get from MVC should be important in the project you use that architecture. You can change independently details in view and models, you can reuse views with controllers for different models, you should be able to change persistence details without affecting your controllers and views. All this is imho very important during development of a single project.
"Code reusability" as defined in many books is more or less a myth. Try to focus more on easy to read and easy to maintain. Don't start with "reusability" in mind; it works better if you think first about testability and only then about reusing something. It is important to deliver, to test, to have clean code, to refactor and to not repeat yourself, and less important to build, from the start, components that can be reused between projects. Reuse should come about as a natural process, more like a discovery: you see a repetition, so you build something that can be reused in that specific situation.
Code complexity reduction ranks high, if I keep things simple, I can maintain the project better and work on it faster to add/change features.
Reusability is a tool, one that has its uses, but not in every place. I usually refactor for reusability those components that show a clear history of identical use in more than three places. Otherwise, I risk running into the need of specialized behavior in a place or two, and end up splitting a component in a couple of more specialized ones that share a similar structure, but would be hard to understand if kept together.
Testability is not something I personally put a lot of energy into. However, it derives in many cases from reduced code complexity: if there are not a lot of dependencies and intricate code paths, there is less danger of breaking tests or making them more difficult to perform.
As for showing-off-ability... well... the customer is interested in how well the app performs in terms of what he wants from it, not in terms of how "cool" my code is. 'nuff said

Is there any reason to use one DataContext instance, instead of several?

For example, I have 2 methods that use one DataContext (Linq to sql).
using (DataContext data = new DataContext(connectionString)) {
    // doing something
    another_datamethod(data);
}

void another_datamethod(DataContext data) {
    // doing something else with the same context
}
Should I use this style? Or, with the same result, could I create a separate "using DataContext" in each method? What benefits would I get from using one DataContext? Maybe some caching possibilities?
Recently, I've read numerous articles and blogs that "highly recommend" that you use multiple DataContexts for your applications, due to multiple issues including the creation of records associated with lookup tables. When I was learning LINQ-to-SQL, one of its most attractive qualities for me was the ability to import my complete database schema into one "big" DataContext. So, that's what I did...but a few months later, in comes the contradictory information saying that what I did was a bad thing. What to do, what to do...
Nine months later, here's where I stand. My single large DataContext is still my single large DataContext. I have over thirty data repository classes accessing the sixty-plus tables contained within, and I still haven't seen a valid reason to break up my existing data-dom, or to not handle the next project using a single DataContext. The problems that the article and blog writers experienced were valid problems. However, like most things technical, there's never just one way to do things. The best investment of my time and energy was to learn and truly understand how LINQ-to-SQL does what it does. The best book that I found to help me do exactly that is Pro LINQ: Language Integrated Query in C# 2008 by Joseph C. Rattz, Jr. The LINQ-to-SQL coverage is detailed and clear, and there are plenty of examples to clarify the mystery.
So, in your case, create one big DataContext or create many smaller ones...the choice is up to you. Smaller ones clearly give better opportunity for reuse, while one big one helps increase the time you can focus on business logic and presentation code.
DataContexts track changes and do caching, so yes, caching is a possibility depending on what work you are performing.
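For instance, within the lifetime of a single DataContext, LINQ to SQL keeps an identity map of the entities it has loaded, so fetching the same row twice hands you back the same tracked object. A quick sketch (Customer and connectionString stand in for whatever is in your own model):
// Assumes the usual LINQ to SQL usings (System, System.Linq, System.Data.Linq)
// and a mapped Customer entity with a CustomerId primary key.
using (DataContext data = new DataContext(connectionString))
{
    Table<Customer> customers = data.GetTable<Customer>();

    Customer first  = customers.Single(c => c.CustomerId == 1);
    Customer second = customers.Single(c => c.CustomerId == 1);

    // Same instance: LINQ to SQL resolves the second result against the context's
    // identity map, so changes tracked on 'first' are visible through 'second'.
    Console.WriteLine(object.ReferenceEquals(first, second)); // True
}
With two separate DataContexts you would get two independent copies of the row instead.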

At what point should architecture become layered?

Obviously, "Hello World" doesn't require a separated, modular front-end and back-end. But any sort of Enterprise-grade project does.
Assuming some sort of spectrum between these points, at which stage should an application be (conceptually, or at a design level) multi-layered? When a database, or some external resource, is introduced? When you find that you're anticipating spaghetti code in your methods/functions?
when a database, or some external resource is introduced.
but also:
always (except for the most trivial of apps) separate AT LEAST the presentation tier and the application tier
see:
http://en.wikipedia.org/wiki/Multitier_architecture
Layers are a means to keep a design loosely coupled and highly cohesive.
When you start to have a few classes (either implemented or just sketched with UML), they can be grouped logically into layers - or, more generally, packages or modules. This is called the art of separating the concerns.
The sooner the better: if you do not start layering early enough, you risk never doing it at all, because the effort required becomes too great.
Here are some criteria for when to layer:
Any time you anticipate the need to replace one part of it with a different part.
Any time you find yourself needing to divide work amongst parallel teams.
There is no real answer to this question. It depends largely on your application's needs, and numerous other factors. I'd suggest reading some books on design patterns and enterprise application architecture. These two are invaluable:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Some other books that I highly recommend are:
The Pragmatic Programmer: From Journeyman to Master
Refactoring: Improving the Design of Existing Code
No matter your skill level, reading these will really open your eyes to a world of possibilities.
I'd say in most cases dealing with multiple distinct levels of abstraction in the concepts your code deals with would be a strong signal to mirror this with levels of abstraction in your implementation.
This does not override the scenarios that others have highlighted already though.
I think once you ask yourself "hmm should I layer this" the answer is yes.
I've worked on too many projects that probably started off as a proof of concept/prototype and ended up being full projects used in production, which are horribly written and just reek of "get it done quick, we'll fix it later." Trust me, you won't fix it later.
The Pragmatic Programmer lists this as the Broken Window Theory.
Try and always do it right from the start. Separate your concerns. Build it with modularity in mind.
And of course try and think of the poor maintenance programmer who might take over when you're done!
Thinking of it in terms of layers is a little limiting. It's what you see in whitepapers about a product, but it's not how products really work. They have "boxes" that depend on each other in various ways, and you can make it look like they fit into layers but you can do this in several different configurations, depending on what information you're leaving out of the diagram.
And in a really well-designed application, the boxes get very small. They are down to the level of individual interfaces and classes.
This is important because whenever you change a line of code, you need to have some understanding of the impact your change will have, which means you have to understand exactly what the code currently does, what its responsibilities are, which means it has to be a small chunk that has a single responsibility, implementing an interface that doesn't cause clients to be dependent on things they don't need (the S and the I of SOLID).
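A tiny, made-up illustration of that last point: give each client the narrow interface it actually needs, and keep the implementing class small.
// Narrow, client-specific interfaces (the I in SOLID)...
public interface IOrderReader { Order Find(int id); }
public interface IOrderWriter { void Save(Order order); }

public class Order { public int Id; }

// ...implemented by one small class with a single responsibility (the S in SOLID).
public class OrderRepository : IOrderReader, IOrderWriter
{
    public Order Find(int id) { /* load from storage */ return new Order { Id = id }; }
    public void Save(Order order) { /* persist to storage */ }
}

// A read-only screen depends only on IOrderReader, so changes to how orders are
// saved can never ripple into it.
public class OrderSummaryScreen
{
    private readonly IOrderReader orders;
    public OrderSummaryScreen(IOrderReader orders) { this.orders = orders; }
    public void Show(int id) { Order order = orders.Find(id); /* render it */ }
}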
You may find that your application can look like it has two or three simple layers, if you narrow your eyes, but it may not. That isn't really a problem. Of course, a disastrously badly designed application can look like it has layers or tiers if you squint as hard as you can. So those "high level" diagrams of an "architecture" can hide a multitude of sins.
My generic rule of thumb is to at least separate the problem into a model and view layer, and throw in a controller if there is a possibility of more than one way of handling the model or piping data to the view.
(Or as the first answer, at least the presentation tier and the application tier).
Loose coupling is all about minimising dependencies, so I would say "layer" when a dependency is introduced, e.g. a database, a third-party application, etc.
Although 'layer' is probably the wrong term these days. Most of the time I use Dependency Injection (DI) through an Inversion of Control container such as Castle Windsor. This means that I can code on one part of my system without worrying about the rest. It has the side effect of ensuring loose coupling.
I would recommend DI as a general programming principle all of the time so that you have the choice on how to 'layer' your application later.
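A bare-bones sketch of what that looks like with plain constructor injection (the interface and class names are invented; a container such as Castle Windsor would simply take over the wiring of these constructors for you):
public interface IMessageSender { void Send(string to, string body); }

public class SmtpMessageSender : IMessageSender
{
    public void Send(string to, string body) { /* talk to an SMTP server here */ }
}

// The notifier depends only on the abstraction, so it can be written and unit-tested
// without caring how (or whether) messages actually get delivered.
public class OrderNotifier
{
    private readonly IMessageSender sender;
    public OrderNotifier(IMessageSender sender) { this.sender = sender; }

    public void OrderShipped(string customerEmail)
    {
        sender.Send(customerEmail, "Your order has shipped.");
    }
}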
Give it a look.
R

Redundancy vs dependencies: which is worse?

When developing new software I normally get stuck on the redundancy vs. dependencies problem: either accept a 3rd-party library that I then have a huge dependency on, or code it myself, duplicating all the effort but reducing the dependencies.
Recently I've been trying to come up with a metric for weighing up redundancy against dependencies in the code. For the most part, I've concluded that reducing redundancy increases the dependencies in your code, and reducing the dependencies in your code increases redundancy, so the two very much counter each other.
So my question is:
What's a good metric that you've used in the past, and still use, to weigh up dependencies versus redundancy in your code?
One thing I think is very important: if you choose the dependencies route, you need the tool sets in place so you can quickly examine all the routines and functions that use a specified function. Without those tool sets, it seems like redundancy wins.
P.S. Following on from an article:
Article
I would definitely recommend reading Joel's essay on this:
"In Defense of Not-Invented-Here Syndrome"
For a dependency, the best metric I can think of would be "would the world grind to a halt if this disappeared". For example if the STL of C++ magically went away, tons of programs would stop working. If .Net or Java disappeared, our economy would probably take a beating due to the number of products that stopped working...
I would think in those terms. Of course many things are a shade of gray between "end of world" and "meh" if they disappeared. The closer the dependency is to potentially causing the end of the world if it disappeared, the more likely it is to be stable, have an active user base, have its issues well known, etc. The more users the better.
It's analogous to being a small consumer of some hardware component. Sometimes hardware goes obsolete. Why? Because no one uses it. If you're a small company and need to get a component for your product, you will pick what is commonly available--what the "big players" order in large quantities, hoping this means that (a) the component won't disappear, (b) the problems with the component are well known, (c) there's a large, informed user base and (d) it might cost less and be more readily available.
Sometimes though, you need that special flux-capacitor part and you take the risk that the company selling it to you might not care to keep producing flux-capacitors if you are only ordering 20 a year, and no one seems to care :). In this case, it might be worth developing your own flux capacitor instead of relying on that unreliable Doc Brown Inc. Just don't buy Plutonium from the Libyans.
If you've dealt in manufacturing something (especially when you're making far fewer than millions of units per year), you've had to deal with this problem. Software dependencies, I believe, need to be understood in very similar terms.
How to quantify this into a real metric? Roughly count how many people depend on something. If it's high, the risk of the dependency hurting you is much lower, and you can decide at what point the risk is too much.
I hadn't really considered the criteria I use to decide whether to employ a third party library or not before, but having thought about it, probably the following, in no particular order:
How widespread the library is (am I going to be able to find support if I need it)
How likely am I to need to do unexpected things with it (am I going to end up embarking on a three week mission to add the functionality I need?)
How much of it do I need to use (am I going to spend days learning the ins and outs just to make use of one feature)
How stable it appears to be (more trouble than it's worth?)
How stable the interface is (are things likely to change in the next year or two?)
How interesting is the problem (would I be personally better off implementing it myself? will I learn anything useful?)
A possible alternative is to use the external software if it provides a lot of value to your project, but hide it behind a simplified (and more consistent with your project) interface.
This allows you to leverage the power of a third party library, but with much reduced complexity (and as such redundancy) in calling the library. The interface ensures that you don't let the specific style of the third party library bleed into your project and allows you to easily replace it with an internal implementation as and when you think that might be necessary.
A sign of when this is happening might be that the interface you want to support is hampered by the third party library.
The significant downside to this is that it does require extra development and adds a certain maintenance impact (this increases with the amount of functionality you need from the library), but it allows you to leverage the third party library without too much coupling and without all of your developers needing to understand it.
A good example of this would be the use of an object-relational mapper (Hibernate/NHibernate) behind a set of repositories or data access objects, or factories being implemented with a dependency injection framework.
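To make the idea concrete, here is a sketch where ThirdPartyGeo stands in for whatever external library you adopt (all the names below are made up): the rest of the codebase only ever sees the small interface, so replacing the library later touches a single class.
// The only abstraction the rest of the project ever sees.
public interface IGeocoder
{
    Coordinates Locate(string address);
}

public class Coordinates { public double Latitude; public double Longitude; }

// The single class where the external library's own types and style are allowed to appear.
public class ThirdPartyGeocoder : IGeocoder
{
    private readonly ThirdPartyGeo.Client client = new ThirdPartyGeo.Client("api-key");

    public Coordinates Locate(string address)
    {
        ThirdPartyGeo.Result result = client.Lookup(address); // library-specific call
        return new Coordinates { Latitude = result.Lat, Longitude = result.Lon };
    }
}

// Stand-in stub for the imaginary external library, just so the sketch is self-contained.
namespace ThirdPartyGeo
{
    public class Client
    {
        public Client(string apiKey) { }
        public Result Lookup(string address) { return new Result(); }
    }
    public class Result { public double Lat; public double Lon; }
}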

How to measure usability to get hard data?

There are a few posts on usability but none of them was useful to me.
I need a quantitative measure of usability of some part of an application.
I need to estimate it in hard numbers to be able to compare it with future versions (e.g. for reporting purposes). The simplest way is to count clicks and keystrokes, but this seems too simple (for example, is the cost of filling in a text field simply the sum of typing all the letters? I guess it is more complicated).
I need some mathematical model for that so I can estimate the numbers.
Does anyone know anything about this?
P.S. I don't need links to resources about designing user interfaces. I already have them. What I need is a mathematical apparatus to measure existing applications interface usability in hard numbers.
Thanks in advance.
http://www.techsmith.com/morae.asp
This is what Microsoft used in part when they spent millions redesigning Office 2007 with the ribbon toolbar.
Here is how Office 2007 was analyzed:
http://cs.winona.edu/CSConference/2007proceedings/caty.pdf
Be sure to check out the references at the end of the PDF too, there's a ton of good stuff there. Look up how Microsoft did Office 2007 (regardless of how you feel about it), they spent a ton of money on this stuff.
The main ideas to approach this with are Effectiveness and Efficiency (and, in some cases, Efficacy). The basic points to remember are outlined on this webpage.
What you really want to look at doing is "inspection" methods of measuring usability. These are typically more expensive to set up (both in terms of time and money), but can yield significant results if done properly. These methods include things like heuristic evaluation, which is simply comparing the system interface, and the usage of the system interface, with your usability heuristics (though, from what you've said above, this probably isn't what you're after).
More suited to your use, however, will be 'testing' methods, whereby you observe users performing tasks on your system. This is partially related to the point of effectiveness and efficiency, but can include various things, such as the "Think Aloud" concept (which works really well in certain circumstances, depending on the software being tested).
Jakob Nielsen has a decent (short) article on his website. There's another one, but it's more related to how to test in order to be representative, rather than how to perform the testing itself.
Consider measuring the time to perform critical tasks (using a new user and an experienced user) and the number of data entry errors for performing those tasks.
First you want to define goals: for example increasing the percentage of users who can complete a certain set of tasks, and reducing the time they need for it.
Then, get two cameras and a few users (5-10), give them a list of tasks to complete, and ask them to think out loud. Half of the users should use the "old" system, the rest should use the new one.
Review the tapes, measure the time it took, measure success rates, discuss endlessly about interpretations.
Alternatively, you can develop a system for bucket-testing -- it works the same way, though it makes it far more difficult to find out something new. On the other hand, it's much cheaper, so you can do many more iterations. Of course that's limited to sites you can open to public testing.
That obviously implies you're trying to get comparative data between two designs. I can't think of a way of expressing usability as a value.
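The raw measurements from such a study are easy enough to turn into hard numbers, though, even if a single "usability value" stays out of reach. A sketch with invented session data (the usual System and System.Linq usings are assumed):
// Per-participant observations from the taped sessions: did they finish the task, and in how long.
var sessions = new[]
{
    new { Completed = true,  Seconds = 74.0 },
    new { Completed = true,  Seconds = 91.0 },
    new { Completed = false, Seconds = 180.0 },   // gave up / hit the time limit
    new { Completed = true,  Seconds = 63.0 },
    new { Completed = true,  Seconds = 88.0 },
};

double successRate = sessions.Count(s => s.Completed) / (double)sessions.Length;        // 0.80
double meanTimeOnSuccess = sessions.Where(s => s.Completed).Average(s => s.Seconds);    // 79 s

Console.WriteLine("Success rate: {0:P0}, mean time on success: {1:F0} s",
                  successRate, meanTimeOnSuccess);
Comparing those two numbers between version 1 and version 2 of the same task is usually more honest than trying to collapse them into one score.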
You might want to look into the GOMS model (Goals, Operators, Methods, and Selection rules). It is a very difficult research tool to use in my opinion, but it does provide a "mathematical" basis to measure performance in a strictly controlled environment. It is best used with "expert" users. See this very interesting case study of Project Ernestine for New England Telephone operators.
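To give a flavour of the numbers it produces: the Keystroke-Level Model (the simplest member of the GOMS family) assigns an approximate, commonly cited time to each low-level operator, roughly 0.2-0.28 s per keystroke, 1.1 s to point with the mouse, 0.4 s to move a hand between mouse and keyboard, and 1.35 s per mental preparation step. A task like "click into a text field and type a 5-character code" then works out to something like M (1.35) + P (1.1) + K for the click (0.2) + H back to the keyboard (0.4) + 5 K (5 x 0.28 = 1.4), or roughly 4.5 seconds. The absolute figure matters less than being able to compare two designs of the same task on a consistent basis.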
Measuring usability quantitatively is an extremely hard problem. I tackled this as a part of my doctoral work. The short answer is, yes, you can measure it; no, you can't use the results in a vacuum. You have to understand why something took longer or shorter; simply comparing numbers is worse than useless, because it's misleading.
For comparing alternate interfaces it works okay. In a longitudinal study, where users are bringing their past expertise with version 1 into their use of version 2, it's not going to be as useful. You will also need to take into account time to learn the interface, including time to re-understand the interface if the user's been away from it. Finally, if the task is of variable difficulty (and this is the usual case in the real world) then your numbers will be all over the map unless you have some way to factor out this difficulty.
GOMS (mentioned above) is a good method to use during the design phase to get an intuition about whether interface A is better than B at doing a specific task. However, it only addresses error-free performance by expert users, and only measures low-level task execution time. If the user figures out a more efficient way to do their work that you haven't thought of, you won't have a GOMS estimate for it and will have to draft one up.
Some specific measures that you could look into:
Measuring clock time for a standard task is good if you want to know what takes a long time. However, lab tests generally involve test subjects working much harder and concentrating much more than they do in everyday work, so comparing results from the lab to real users is going to be misleading.
Error rate: how often the user makes mistakes or backtracks. Especially if you notice the same sort of error occurring over and over again.
Appearance of workarounds: if your users are working around a feature, or taking a bunch of steps that you think are dumb, it may be a sign that your interface doesn't give them the tools to figure out how to solve their problems.
Don't underestimate simply asking users how well they thought things went. Subjective usability is finicky but can be revealing.