This is going to be the noobiest of all noob questions, but what exactly is scope creep, and what does it entail?
Scope creep "refers to uncontrolled changes in a project's scope. This phenomenon can occur when the scope of a project is not properly defined, documented, or controlled. It is generally considered a negative occurrence that is to be avoided."
Source.
You start with a mouse and end up with an elephant.
In other words, requirements change on a daily basis, and what you are supposed to deliver doesn't look anything like what was supposed to be delivered at the start of the project.
Wikipedia expresses it more completely than I could.
Scope creep (also called requirement creep, or kitchen sink syndrome) in project management refers to changes, continuous or uncontrolled growth in a project’s scope, at any point after the project begins. This can occur when the scope of a project is not properly defined, documented, or controlled. It is generally considered harmful. It is related to but distinct from feature creep.
Scope creep can be a result of:
poor change control
lack of proper initial identification of what is required to bring about the project objectives
weak project manager or executive sponsor
poor communication between parties
lack of initial product versatility
There are two definitions for scope creep.
Negative. Scope changed in uncontrolled ways adding cost and risk.
Positive. Scope changed in uncontrolled ways because our fantasy project plan was wrong, and we started learning things we didn't expect to learn.
Most folks seem to like the first one.
The second one, however, is more accurate. The few managers who recognize that scope creep == learning are able to manage it wisely and get stuff done.
The rest of the managers see uncontrolled learning as a threat. The project won't create the hoped-for deliverables in the fantasy schedule or budget. This means that either scope needs to be reduced or budget needs to be expanded.
Note that most Agile methods don't seem to care much about scope or scope creep. With a narrow focus on the next release, the big picture becomes less threatening and scary.
Learning -- in an Agile context -- also leads to scope creep, but it's a good thing. Stuff that won't ever be useful is deferred. Stuff that didn't seem important when we started gets accelerated.
It's when a project incessantly gets larger and more complex than what was originally planned for, and therefore is perpetually behind schedule and over budget.
When you start your project, you set certain parameters. These parameters are your scope. You should think about what your project is supposed to do, how much it will cost, and how much time it is going to take to complete. When more functionality, time, money, etc. creeps into the project, this can be referred to as "scope creep".
Scope creep is when requirements keep being modified or added to the current product release. For example, you show a demo to the customer/user and they request new reports/buttons/fields. It can also occur when a developer wants to add new "cool" features that aren't actually needed. It is best prevented by managing the project release cycles. People suggesting features is a good thing, but everybody involved needs to understand the impact on the project and budget. Consider adding new feature requests to a list and meeting regularly to decide which new features get added to the "next release".
In the case of feature creep, in my view there needs to be an element of contributing to the downfall of a product. For example, growth of a city is good - urban sprawl is bad.
Or in the case of UI design: all related features get stuck on one control screen, and new features are continually added to the point where their mere existence confuses more people than would ever find the features useful.
To qualify as feature creep, they should pass the test of contributing to unnecessary complexity or throwing unnecessary wrenches into the lifecycle of the system. Originally unplanned features that do not meet this criterion should not be considered feature creep.
Scope creep is considered in the same light as feature creep, only at the level of a product's scope. For example:
Your team is building an "Air Sucker"(tm) which sucks air from one location and transports it to a factory two blocks away. The scope of the air sucker is that it moves air.
Then management realizes the sucking device also needs to suck and transport water and sand. The Air Sucker was never designed to transport liquid or granular materials, so this increases the scope of the Air Sucker to the point where it must be significantly re-engineered to meet the new transport requirements.
If you have appropriate change controls, it's possibly a cash cow.
There's a famous picture of a big power boat with a little runabout in tow. The runabout is called "The Original Contract", and the power boat is called "The Change Requests".
features, budget, schedule: pick any 2 but not all 3
Scope creep is any change to the originally agreed-on specification/project scope/definition. The change could be due to discovery of prior unknowns, internal/external market changes, technology changes, or plain ol' sponsors wanting something more. It negatively impacts the project when there is no change control process in place to review and accept or reject the change.
And how does scope creep affect product schedule?
Is there a way to keep scope creep out of your development process?
How does your PM handle scope creep?
What was the best feature to ever come out of a scope creep?
etc. etc.
Scope creep is like pornography: you know it when you see it.
I maintain an Open Source Android app. Every once in a while, some anonymous hero localizes it into their mother tongue, sending files or using our online tool.
At first I thought the magic of collaboration would be enough to provide timely localizations, but actually, the UI strings change, and each release ships with roughly:
5 languages localized at 100%
8 languages localized at 70% (because recent strings have not been localized)
I am extremely thankful, but is there something I can do to bring the localizations from 70% to 100% when the release comes?
I send messages on the mailing list at code freeze, a month before each release, and then again a week before, but most of the people who contributed localizations don't read the mailing list; in fact, most of them were probably just good Samaritans passing by.
Should I stalk the translators and ask them personally?
I have been thinking about having a person responsible for each language. This person (what should I call the role?) would be "responsible" for bringing the translation to 100% before each release. Their names would be listed in the "About" dialog. Is this a good or a bad strategy? Any tips?
It's the 90-9-1 principle - http://www.90-9-1.com/ - and designating people in charge of things isn't going to do it. You can offer cash - rewards/pay - or you can groom them. If you bring money into the mix, remember that it quickly becomes tradeoff analysis. People will compare what you offer versus what they can earn on their own. Since I assume you don't have that much money, you don't want that sort of comparison.
Realistically, grooming them is a better option. You've done the first step - include them - show that their fixes, updates, etc. are included and help the product. The next step is publicly thanking them and appreciating them. That will get the first 80%, as you've seen. The step after that is getting them personally committed. Start interacting with them directly. Send them your thanks, not just in email. If you have a product t-shirt, send them one with a hand-written note. In your release notes, link to their sites. If you ever see them in person, buy their coffee, whatever. The point is that you go out of your way - however small - to acknowledge that they're going out of their own way.
I'm the Project Lead of the Open Source Project Management System web2project and have been doing this longer than I care to consider. ;)
Use both a carrot and a stick approach. Giving people credit in your "About" screen, and calling them the lead translator for a particular language (if they're willing to accept that responsibility) is a good thing. Don't include a translation until you've gotten someone to commit to being the lead translator for that language; just like with code, you don't want to accept contributions unless (a) you are able and willing to maintain those contributions or (b) someone who you consider reliable has volunteered to be responsible for maintaining it.
The "stick" is that if someone stops maintaining a language, and fails to appropriately pass the responsibility off to someone else, you will remove that language from the next version of your application. Most people who translate your app probably do so because they would prefer to use your app in their native language. The threat of removing the language from the next version, or the wake-up call when they find that the next version doesn't contain their language, might inspire them to come back and finish up the last 30% translation work.
You are getting it wrong: community translation is an ongoing process, or let's say a never-ending approach.
This doesn't mean that it is wrong, but you have to live with the fact that localization is always a compromise, and that usually it is better to have 20 languages with 50% translation than to have 10 languages 100% translated.
If one language is important, it will have more users, so it will have more contributors and the translation rate will be greater.
You don't know when and if the translation for a specific language will be 100%, probably never.
The good part is that you shouldn't care about obtaining 100%, you should spend your effort in motivating people to contribute.
Probably you already know: community translation doesn't play well with packaged products.
The solution to this problem is to change the way you work and provide partial translations in your release and a simple update system that updates them (preferably a silent one, like the Chromium update).
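For illustration, a minimal sketch of such a silent update (written in C# only for consistency with the code elsewhere on this page; the endpoint URL and file layout are invented, and a real updater would also want version checks and integrity verification):

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

static class TranslationUpdater
{
    static readonly HttpClient Http = new HttpClient();

    // Hypothetical endpoint serving the latest strings per language.
    public static async Task UpdateAsync(string language, string localDir)
    {
        var url = $"https://translations.example.org/latest/{language}.xml";
        try
        {
            var latest = await Http.GetStringAsync(url);
            File.WriteAllText(Path.Combine(localDir, language + ".xml"), latest);
        }
        catch (HttpRequestException)
        {
            // Offline or server down: silently keep the bundled translation.
        }
    }
}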
PS. If you need to assure a 100% translation rate for a limited number of languages before your release, consider paying a translation vendor to do it.
Well, you get what you pay for. The strategy that you are planning to implement will not give you anything. That's because heroes have their own lives. The only way to get this done is to engage more people to translate, because completion percentage is a function of community size.
If your application is useful enough, you may try to add a "help translate into your language" link, and this could work for a while.
Think of creating a discussion group or board where you could post a "Call for translation" when you are ready to ship. Translators usually don't have time to track changes.
One thing to note is that localization people need time to do the localizations, and the less they're getting paid for it, the longer they need. This means that you should at least try to avoid changing the localization keys (especially including defining new ones) shortly before a release. I know it's nice to not be constrained like that, but the reality is that if you're not paying then you're not going to get a fast turnaround in the majority of cases, so you have to plan for this and mitigate in your schedule. In effect, you have to stop thinking about the text shown to a user as something that can be finalized late in the project and instead start treating it as an important interface matter that you'll plan to lock down much earlier.
How do you successfully implement a democratic (non-BDFL-controlled) type of management for an open source project?
More specifically: for a project using distributed source repositories.
What style of communication is best to adopt in such environment?
How to encourage merging branches into the master?
I am mostly interested in establishing the situation where people can directly merge into the master branch under the "social contract" agreement that they follow the project roadmap (which they themselves help to define) and that code they commit is tested.
I specifically want to encourage the workflow:
define the problem -> define requirements and specific metrics of success -> architect -> build and test
The reason is that I often see emails like "here is the problem, and here is how I think it should be solved".
Immediately somebody else jumps in and starts a fight.
Not productive at all.
Often disagreement of that kind stems from not being on the same page on the problem definition, requirements or architecture. Or sometimes just because nobody even thought about such things.
How to encourage people to analyze the problem properly, share great ideas and select the best solutions?
How to organize communication so to avoid silly fights, make good decisions without being overly bureaucratic and move along at a good pace ?
Would you have any suggestions? Are there examples of projects managed this way?
How do you think adoption of distributed revision control instead of centralized affects the style of project management?
edit: found some interesting links in related questions
http://gettingreal.37signals.com/toc.php
http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/
Sorry for this somewhat off-topic answer (i.e. one not directly addressing the question). Please edit to your heart's content!
Projects _do_ need executive governance!
Such may come from a single Benevolent (or not) Dictator, or from a [IMHO, small] group, possibly open, of diverse but like-minded individuals. A standard joke on this is: "The group should be made of an odd number of members, and 3 is already too many"; in truth, small collegial committees can be very effective.
This requirement for a "non fully democratic" decision making entity is however somewhat aligned with the processes suggested in the question. To effectively harness the good will of contributors to the project, the executive team needs to
be perceived as legitimate
communicate effectively
empower "the masses" to contribute with roadmap definition, problem identification, solution scoping and all other design-level tasks. (All the while it remaining clear that in the end, and after all has been said, final decision will be done in committee.
DELIVER a good and vibrant product (BTW by adopting agile development processes, the time between deliveries is lessened)
compromise when needed
advocate the interest of pooling resources in a coordinated fashion, rather than branching out.
Share the glory!
To support all of this, formal documentation and processes are very useful. For example, the define the problem -> define requirements and specific metrics of success -> architect -> build procedure indicated in the question can be implemented in the form of a single collaboratively edited document (wiki-based or other), i.e. one per issue/idea. This document is templated with a defined format: required attributes such as date, initial posting info... and sections which are opened (and closed) for editing following a given schedule. This allows keeping a clean(er) record of the way the collective thought about a given issue, what was suggested etc., what was [authoritatively] decided and why.
With such a process, the community feels engaged, and hopefully individuals are not too disappointed when the eventual decisions do not go "their" way. They can read and comment on the what and why of the decisions.
Another useful approach is to reward effective participation by informally (or formally) providing more weight to the contributors who effectively help with the project. The more active members can either gain their way into the "inner circle" or be handed leadership roles on subsets of the project.
Finally, the leaders of a project (whether in the context of a "democratic" or of a "partially dictatorial" leadership) need to be ever vigilant about "peace keeping". Contributors to Open Source projects are typically energetic, smart and opinionated individuals; conflicts of opinion, conflicts of personalities, impatience etc. are to be expected. These clashes can however be eased by systematically addressing issues with clear facts, moderating/editing aggressively against name-calling and other non-productive forms etc.
Originally posted on MetaOptimize.
Constitution for Governance of Open-Source Projects (v20100227)
Let it be affirmed that the primary goal in instituting governance of an open-source project be to ensure the long-term health of the project.
Accordingly, the default bias should be towards openness and inclusiveness.
However, policy should be changed as issues present themselves, in order to maintain the long-term health of the project.
For the model of decision making, we favor a "do-ocracy".
The people who contribute the most generally command the respect of the community.
Alienating them is the best way to derail the project.
The repository should be open to committers, given that commits can easily be reverted and commit access easily revoked. This is preferable to alienating potential committers.
To ensure transparency for developers new and old, and to allow them to decide their involvement in a project based upon the history of the project, there should be transparency and openness in the inner workings of the project. For example, the email archive should be public.
Lastly, let us remember that too much red-tape gets in the way of progress. So red-tape and other barriers to contribution should be avoided, and only added as issues present themselves.
This Constitution can and should be amended as issues present themselves.
Therefore be it resolved.
Access to the repository:
there are several options, ranging from one controlling committer to anyone-can-commit upon a social agreement that committers respect the project guidelines and the roadmap.
in my opinion, a distributed repository allows you to be more relaxed about who you allow to commit, because there are multiple backups and the repo becomes virtually unbreakable.
on a separate note, the anyone-can-commit approach - which I would like to test out - sounds more like a "wiki". I can directly compare this with my experience managing a wiki (nmrwiki.org), where despite complete openness - not even user registration is required to edit the resource - people are often wary about "breaking stuff", and this worry becomes a mental barrier to making a contribution. So a permissive approach to repository management may just work.
Communication style:
email and mailing lists:
discuss everything concerning the project in public (?)
when asking questions, do not bundle questions with your own opinions
encourage people to propose more than one solution
ask people to weigh pros and cons
be brief and to the point
avoid frivolous language that can be perceived negatively by people who do not know you well
We're a medium-sized engineering shop (10-20). We are great at prioritizing and structuring work on our user-facing stories and making customers happy. But the cobbler's children have no shoes: if it isn't about customers, we have zero process.
I'm looking for systems to ensure we correctly prioritize and accomplish the non user facing work to keep a dev shop running: QA environments (pretty heavy, in our case), continuous integration systems, the packaging, and so forth.
Now, resources are always limited. We don't want to give the cobbler's children 10 pairs of the fanciest shoes, and specialized bike shoes to boot. We want to do the right, necessary work, with the same scrummy discipline that is applied to the rest of our development.
Tell me what system works for you: how do you prioritize and organize non-user-facing work? I want systems that are simple and integrate smoothly with scrum.
(I'm aware of a red box at the top of this text, indicating that Stack Overflow's automated question parser thinks this is a subjective question that can't be answered - I think there are likely 2 or 3 excellent answers that can be or have been proven viable - and process is integral to programming. So here is some pseudocode representing our process. Fix this algorithm.)
IBacklog GetBacklogForWork(IWork requestedWork)
{
    // User-facing work goes to the real, prioritized backlog.
    if (requestedWork.IsUserFacing) return new PrioritizedBacklogRepository();

    // Everything else. Priority largely based on spare time and who thinks it's a neat idea.
    return new RandomizedPriorityRepository();
}

void HandleIncomingSuggestionsForWork(IEnumerable<IWork> ideas)
{
    foreach (var work in ideas) GetBacklogForWork(work).Insert(work);
}
Someone involved is using and depending on the results of the project. This is necessarily true; if it weren't true, why would you be doing it?
When you identify the person who most depends upon, or cares most about, the results of the project, you have the "user" that your project is facing. Make that person the customer.
IMO something like "QA environments" is, in a sense, user-facing work.
Quality is admittedly a "non-functional" requirement (so there isn't necessarily an associated "story"). But, you may have a non-functional requirement like "the software must be tested before it's shipped". You can assign a relative priority to such a requirement ("how important is it that software be tested?"), and then execute as usual (decide how to implement that requirement, estimate how long it will take to implement, schedule the implementation, assign the implementation, etc.).
What we do where I work is to have a percentage, right now around 15% give or take a few percent, that is spent on internal, non-user-facing tasks. This way the technical debt is handled, and if the task backlog becomes rather large, a sprint may be spent on it instead of new functionality. The way that last one gets pitched to the user or customer is that there will be a time when just maintenance and preventive work is done, so there won't be any new functions coming after the next sprint.
That's one idea that can be tweaked a bit, though it isn't necessarily fully fleshed out yet.
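To make it slightly more concrete anyway, here is a minimal sketch of the percentage policy applied to the question's pseudocode. The Priority and Points members on IWork are assumptions added for illustration; they are not part of the original interface.

using System.Collections.Generic;
using System.Linq;

static class SprintPlanner
{
    // Roughly 15% of sprint capacity reserved for non-user-facing work.
    const double InternalCapacity = 0.15;

    // Assumed members: IWork.Priority (higher = more important) and
    // IWork.Points (estimated size).
    public static List<IWork> PlanSprint(IEnumerable<IWork> backlog, int sprintPoints)
    {
        int internalBudget = (int)(sprintPoints * InternalCapacity);
        int userBudget = sprintPoints - internalBudget;
        var plan = new List<IWork>();
        int userUsed = 0, internalUsed = 0;

        // Walk one prioritized list; each kind of work fills its own slice.
        foreach (var work in backlog.OrderByDescending(w => w.Priority))
        {
            if (work.IsUserFacing && userUsed + work.Points <= userBudget)
            {
                plan.Add(work);
                userUsed += work.Points;
            }
            else if (!work.IsUserFacing && internalUsed + work.Points <= internalBudget)
            {
                plan.Add(work);
                internalUsed += work.Points;
            }
        }
        return plan;
    }
}

Whether internal items that don't fit roll over or trigger a dedicated maintenance sprint is exactly the kind of call the regular backlog meeting should own.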
The way I've seen it work more or less OK is to try to do as much as possible of the non-functional/non-user-facing work as PART of a related user-facing activity, or of the first user-facing activity that requires it.
This is the easiest to cope with, as it just reflects the needs of the development organization in order to maintain sustainable velocity moving forward.
Additional work which cannot be related will be done using a percentage as described by JB King.
The alternative of pitching it to the PO as an investment with such-and-such ROI is a theoretically sexy concept, but with real-life POs I've seen it rarely works.
It's very hard to get POs to understand the investment, not to mention actually being strong enough to delay functionality for it.
It's sometimes the difficult role of the development teams to be the guys that "slow things down" in order to keep a sustainable situation.
Dev managers sometimes feel really bad about this whole situation, regardless of the chosen approach. My recommendation, both as someone who's been in that spot and as an Agile coach, is that as long as you feel you are doing the right thing for the business, focusing on non-functional work that is required NOW and has a relatively quick ROI, you should feel OK about this.
Cautionary note: this is an area where self-organization is really put to the challenge. The organization needs to trust the team to do the right thing, and the team needs to earn and not abuse that trust. It's a sign of maturity for an individual or a team to know the right balance.
I wonder what would be a good definition of term "over-engineering" as applied to software development. The expression seems to be used a lot during software design discussions often in conjunction with "excessive future-proofing" and it would be nice to nail down a more precise definition.
Contrary to most answers, I do not believe that "presently unneeded functionality" is over-engineering; or at most it is the least problematic form of it.
Like you said, the worst kind of over-engineering is usually committed in the name of future-proofing and extensibility - and achieves the exact opposite:
Empty layers of abstraction that are at best unnecessary and at worst restrict you to a narrow, inefficient use of the underlying API.
Code littered with designated "extension points" such as protected methods or components acquired via abstract factories - which all turn out to be not quite what you actually need when you do have to extend the functionality.
Making everything configurable to "avoid hard-coding", with the effect that there's more (complex, failure-prone) application logic in configuration files than in source code.
Over-genericizing: instead of implementing the (technically uninteresting) functional spec, the developer builds a (technically interesting) "business rule engine" that "executes" the specs themselves as supplied by business users. The net result is an interpreter for a proprietary (scripting or domain-specific) language that is usually horribly designed, has no tool support and is so hard to use that no business user could ever work with it.
The truth is that the design that is most easily adapted to new and changing requirements (and is thus the most future-proof and extensible) is the design that is as simple as possible.
Contrary to popular belief, over-engineering is really a phenomenon that appears when engineers develop "hubris" and think they understand the user.
In the cases where we've considered things over-engineered, it's always been describing software that has been designed to be so generic that it loses sight of the main task it was initially designed to perform, and has therefore become not only hard to use but fundamentally unintelligent.
To me, over-engineering is including anything that you don't need and that you don't know you're going to need. If you catch yourself saying that a feature might be nice if the requirements change in a certain way, then you might be over-engineering. Basically, over-engineering is violating YAGNI.
The agile answer to this question is: every piece of code that does not contribute to the requested functionality.
There is this discussion at Joel on Software that starts with,
creating extensive class hierarchies for an imagined future problem that does not yet exist, is a kind of over-engineering, and is therefore, bad.
And it gets into a discussion with examples.
If you spend so much time thinking about the possible ramifications of a problem that you end up interfering with the solving of the problem itself, you may be over-engineering.
There's a fine balance between "best engineering practices" and "real world applicability". At some point you have to decide that even though a particular solution may not be as "pure" from an engineering standpoint as it could be, it will do the job.
For example:
If you are designing a user management system for one-time use at a high school reunion, you probably don't need to add support for incredibly long names, or funky character sets. Setting a reasonable maximum length and doing some basic sanitizing should be sufficient. On the other hand, if you're creating a system that will be deployed for hundreds of similar events, you might want to spend some more time on the problem.
It's all about the appropriate level of effort for the task at hand.
I'm afraid that a precise definition is probably not possible as it's highly dependent on the context. For example, it's much easier to over-engineer a web site that displays glittering ponies than it is a nuclear power plant control system. Redundancies, excessive error checking, highly instrumented logging facilities are all over-engineering for a glittering ponies app, but not for a nuclear power plant control system. I think the best you can do is have a feeling about when you are applying too much overhead to your features for the purpose of the application.
Note that I would distinguish between gold-plating and over-engineering. In my mind, gold-plating is creating features that weren't asked for and will never be used. Over-engineering is more about how much "safety" you build into the application either by coding checks around the code or using excessive design for a simple task.
This relates to my controversial programming opinion of "The simplest approach is always the best approach".
Quoting from here: "...Implement things when you actually need them, never when you just foresee that you need them."
To me it is anything that would add more fat to the code. Meat would be any code that does the job according to the spec, and fat would be any code that bloats the code in a way that just adds more complexity. The programmer might have been expecting a future expansion of the functionality; but still, it is fat!
My rough definition would be "providing functionality that isn't needed to meet the requirements spec".
I think they are the same as Gold plating and being hit by the Golden hammer :)
Basically, you can sit down and spend too much time trying to make a perfect design, without ever writing some code to check how it works. Any agile method will tell you not to do all your design up-front, and to just create chunks of design, implement them, iterate over them, re-design, go again, etc.
Over-engineering means architecting and designing the application with more components than it really should have according to the requirements list.
There is a big difference between over-engineering and creating an extensible application that can be upgraded as requirements change. If I can think of an example I'll edit the post.
Over-engineering is simply creating a product with greater functionality, quality, generality, extensibility, documentation, or any other aspect than is required.
Of course, you may have requirements outside a specific project - for example, if you foresee doing future similar applications, then you might have additional requirements for extensibility, dependent on cost, that you add on to the project-specific requirements.
When your design actually makes things more complex instead of simplifying things, you’re overengineering.
More on this at:
http://www.codesimplicity.com/post/what-is-overengineering/
Disclaimer #1: I am a big-picture BA. I know no code. I read this site all the time. This is my first post.
Funny, I was just told by my boss that I over-engineered a new software product we're planning for mentoring (target market: HR people). So I came here to look up the term.
They want to get something in place to sell now, re-purposing existing tools. I can't help but sit back and think: fewer signups, lower retention, if it doesn't allow some of the flexibility we talked about - and mainly, if it doesn't have a highly visual UI that a monkey could use.
He said we could plan future phases to improve the product, especially the UI. We have current customers waiting on "future improvements" that we still aren't doing. They need it though, truly need it.
I am in the process of resigning so I didn't push back.
But my definition would be: making sure it does only as little as possible, for as cheap as possible, and is still passable as the thing you say it is. Anything beyond that is over-engineering.
Disclaimer #2: This site helped me land my next job implementing a more configurable software.
I think the best answers to your question can be found in this other question.
The beauty of Agile programming is that it's hard to over engineer if you do it right.
Are there objective metrics for measuring code refactoring?
Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking whether the code was actually improved rather than just changed?
I'm looking for metrics that can be determined and tested for, to help improve the code review process.
The number of failed unit tests must be less than or equal to zero :)
Depending on your specific goals, metrics like cyclomatic complexity can provide an indicator for success. In the end every metric can be subverted, since they cannot capture intelligence and/or common sense.
A healthy code review process might do wonders though.
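To illustrate the cyclomatic complexity point with an example of my own (not from the thread): the metric is roughly the number of decision points plus one, so a refactoring that turns a branch ladder into data shows up as a measurable drop.

using System.Collections.Generic;

static class ShippingExample
{
    // Before: four decision points in an if-ladder -> cyclomatic complexity 5.
    public static decimal RateBefore(string region)
    {
        if (region == "EU") return 5m;
        if (region == "US") return 7m;
        if (region == "APAC") return 9m;
        if (region == "LATAM") return 8m;
        return 10m;
    }

    // After: the decisions become data; one branch remains -> complexity 2.
    static readonly Dictionary<string, decimal> Rates = new Dictionary<string, decimal>
    {
        ["EU"] = 5m, ["US"] = 7m, ["APAC"] = 9m, ["LATAM"] = 8m
    };

    public static decimal RateAfter(string region) =>
        Rates.TryGetValue(region, out var rate) ? rate : 10m;
}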
Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking if the code was actually improved rather than just changed?
Actually, as I have detailed in the question "What is the fascination with code metrics?", the trend of any metrics (findbugs, CRAP, whatever) is the true added value of metrics.
It (the evolution of metrics) allows you to prioritize the main fixing actions you really need to apply to your code (as opposed to blindly trying to respect every metric out there).
A tool like Sonar can be very useful in this domain (monitoring of metrics).
Sal adds in the comments:
The real issue is on checking what code changes add value rather than just adding change
For that, test coverage is very important, because only tests (unit tests, but also larger "functional tests") will give you a valid answer.
But refactoring should not be done without a clear objective anyway. Doing it only because it would be "more elegant" or even "easier to maintain" may not in itself be a good enough reason to change the code.
There should be other measures like some bugs which will be fixed in the process, or some new functions which will be implemented much faster as a result of the "refactored" code.
In short, the added value of a refactoring is not solely measured with metrics, but should also be evaluated against objectives and/or milestones.
Code size. Anything that reduces it without breaking functionality is an improvement in my book (removing comments and shortening identifiers would not count, of course)
No matter what you do just make sure this metric thing is not used for evaluating programmer performance, deciding promotion or anything like that.
I would stay away from metrics for measuring refactoring success (aside from #unit test failures == 0). Instead, I'd go with code reviews.
It doesn't take much work to find obvious targets for refactoring: "Haven't I seen that exact same code before?" For the rest, you should create certain guidelines around what not to do, and make sure your developers know about them. Then they'll be able to find places where the other developer didn't follow the standards.
For higher-level refactorings, the more senior developers and architects will need to look at code in terms of where they see the code base moving. For instance, it may be perfectly reasonable for the code to have a static structure today; but if they know or suspect that a more dynamic structure will be required, they may suggest using a factory method instead of using new, or extracting an interface from a class because they know there will be another implementation in the next release.
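For instance, an illustrative sketch of that suggestion (the type names are invented):

using System;

// Extracting an interface and hiding "new" behind a factory method,
// in anticipation of a second implementation. All names are hypothetical.
interface IConfigParser
{
    string Parse(string path);
}

class XmlConfigParser : IConfigParser
{
    public string Parse(string path) => "<parsed xml>"; // placeholder body
}

static class ConfigParserFactory
{
    // Callers say ConfigParserFactory.Create("xml") instead of
    // new XmlConfigParser(), so a JsonConfigParser can be added later
    // without touching any call sites.
    public static IConfigParser Create(string format) =>
        format switch
        {
            "xml" => new XmlConfigParser(),
            _ => throw new NotSupportedException(format)
        };
}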
None of these things would benefit from metrics.
Yes, several measures of code quality can tell you if a refactoring improves the quality of your code.
Duplication. In general, less duplication is better. However, duplication finders that I've used sometimes identify duplicated blocks that are merely structurally similar but have nothing to do with one another semantically and so should not be deduplicated. Be prepared to suppress or ignore those false positives.
Code coverage. This is by far my favorite metric in general, but it's only indirectly related to refactoring. You can and should raise low coverage by writing more tests, but that's not refactoring. However, you should monitor code coverage while refactoring (as with any other change to the code) to be sure it doesn't go down. Refactoring can improve code coverage by removing untested copies of duplicated code.
Size metrics such as lines of code, total and per class, method, function, etc. A Jeff Atwood post lists a few more. If a refactoring reduces lines of code while maintaining clarity, quality has increased. Unusually long classes, methods, etc. are likely to be good targets for refactoring. Be prepared to use judgement in deciding when a class, method, etc. really does need to be longer than usual to get its job done.
Complexity metrics such as cyclomatic complexity. Refactoring should try to decrease complexity and not increase it without a well thought out reason. Methods/functions with high complexity are good refactoring targets.
Robert C. Martin's package-design metrics: Abstractness, Instability and Distance from the abstractness-instability main sequence. He described them in his article on Stability in C++ Report and his book Agile Software Development, Principles, Patterns, and Practices. JDepend is one tool that measures them. Refactoring that improves package design should minimize D.
I have used and continue to use all of these to monitor the quality of my software projects.
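For the package-design metrics in particular, the formulas from Martin's article are simple enough to compute directly: instability I = Ce / (Ca + Ce), abstractness A = abstract types / total types, and distance D = |A + I - 1|. A minimal sketch (guards for zero couplings or empty packages omitted):

using System;

static class PackageMetrics
{
    // Ca = afferent couplings (packages that depend on this one),
    // Ce = efferent couplings (packages this one depends on).
    public static double Instability(int ca, int ce) => (double)ce / (ca + ce);

    public static double Abstractness(int abstractTypes, int totalTypes) =>
        (double)abstractTypes / totalTypes;

    // Distance from the main sequence: 0 is ideal, 1 is as far off as possible.
    public static double Distance(int ca, int ce, int abstractTypes, int totalTypes) =>
        Math.Abs(Abstractness(abstractTypes, totalTypes) + Instability(ca, ce) - 1.0);
}

A refactoring that moves a package toward D = 0 (for example, making a heavily depended-upon package more abstract) is measurably better by this yardstick.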
There are two outcomes you want from refactoring. You want the team to maintain a sustainable pace, and you want zero defects in production.
Refactoring takes place on the code and the unit tests built during Test-Driven Development (TDD). Refactoring can be small and completed on a piece of code necessary to finish a story card. Or refactoring can be large, requiring a technical story card to address technical debt. The story card can be placed on the product backlog and prioritized with the business partner.
Furthermore, as you write unit tests while doing TDD, you will continue to refactor the tests as the code is developed.
Remember, in agile, the management practices as defined in Scrum will provide you collaboration and ensure you understand the needs of the business partner and that the code you have developed meets the business need. However, without proper engineering practices (as defined by Extreme Programming) your project will lose sustainable pace. Many agile projects that did not employ engineering practices ended up in need of rescue. On the other hand, teams that were disciplined and employed both management and engineering agile practices were able to sustain delivery indefinitely.
So, if your code is released with many defects, or your team loses velocity, then refactoring and other engineering practices (TDD, pairing, automated testing, simple evolutionary design, etc.) are not being properly employed.
I see the question from the smell point of view. Smells can be treated as indicators of quality problems, and hence the volume of identified smell instances can reveal the quality of the software code.
Smells can be classified based on their granularity and their potential impact. For instance, there could be implementation smells, design smells, and architectural smells. You need to identify smells at all granularity levels before and after to show the gain from a refactoring exercise. In fact, refactoring could be guided by identified smells.
Examples:
Implementation smells: Long method, Complex conditional, Missing default case, Complex method, Long statement, and Magic numbers.
Design smells: Multifaceted abstraction, Missing abstraction, Deficient encapsulation, Unexploited encapsulation, Hub-like modularization, Cyclically-dependent modularization, Wide hierarchy, and Broken hierarchy. More information about design smells can be found in this book.
Architecture smells: Missing layer, Cyclical dependency in packages, Violated layer, Ambiguous Interfaces, and Scattered Parasitic Functionality. Find more information about architecture smells here.