When to upgrade an existing program to new language features? - language-agnostic

Imagine you have a program written in, say, Java 1.4 or C# 1.0. Of course this program doesn't use generics or other language features that were introduced in later versions.
Now some non-trivial changes have to be made. And of course, you already have a newer IDE/compiler, so you could use Java 1.6 or C# 3.5 instead. Would you use this opportunity to upgrade to the latest language features, i.e. use generic containers and get rid of many casts? Or would you leave the code as it is and use the new features only for new parts? Or would you even stick to the features of the version originally used, to maintain a level of consistency?
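To make the generics point concrete, here is a minimal before/after sketch in Java (the Order class and method names are made up for the example): the Java 1.4 version needs a cast at every read, while the Java 5+ version moves that check to compile time.

    import java.util.ArrayList;
    import java.util.List;

    public class GenericsUpgradeSketch {
        // Java 1.4 style: raw types, so every read needs a cast that is only checked at runtime.
        static double totalPricesOld(List orders) {
            double total = 0;
            for (int i = 0; i < orders.size(); i++) {
                Order order = (Order) orders.get(i);
                total += order.getPrice();
            }
            return total;
        }

        // Java 5+ style: the element type is checked by the compiler and the casts disappear.
        static double totalPricesNew(List<Order> orders) {
            double total = 0;
            for (Order order : orders) {   // the enhanced for loop is also a Java 5 feature
                total += order.getPrice();
            }
            return total;
        }

        // Hypothetical domain class, present only to make the sketch compile.
        static class Order {
            private final double price;
            Order(double price) { this.price = price; }
            double getPrice()   { return price; }
        }

        public static void main(String[] args) {
            List<Order> orders = new ArrayList<Order>();
            orders.add(new Order(9.99));
            orders.add(new Order(20.01));
            System.out.println(totalPricesOld(orders));   // 30.0
            System.out.println(totalPricesNew(orders));   // 30.0
        }
    }
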

My basic rule for code maintenance: If it ain't broke, don't fix it.

It's just one specific form of refactoring, so all the advice on when and how to do that applies.

One consideration here is your present and future ability to attract and retain developers to perform maintenance and support on an application built with outdated technology. Many seasoned developers worked with those older versions of Java and C# when they were current, so it isn't that we have no experience with it. But if we spend, say, a year working in .Net 1.0, we will be forgetting what we now know about subsequent versions. And while we are spending that year working in old technology, all of our friends and competitors are honing their skills on the latest technology. We will never be able to make that year up.
In addition to falling behind, we will find it intensely frustrating not to be able to use the functionality in the later versions that we are now comfortable using.
So, younger developers will not have had experience in older technology (.Net 1.0 was released in January 2002). Older developers have been there and don't want to go back.
On the other hand, if you stay back in the older version, you will be comfortable with your skillset. You won't have to spend a lot of time learning about newer technologies. And perhaps best of all, you will have job security.

As far as possible, leave working code alone. It may seem like the code should all be using the latest programming secret sauce, but there is a lot to be said for tried and tested code.
I would only consider rewriting code that must be heavily modified anyway to add a new feature, and even then I would only change the programming metaphor if doing so speeds up the writing of the new feature.
Otherwise you will constantly be debugging code which was previously working, and using up energy that could have gone into improving the product.

It depends. Does it need to be compiled by old compilers? Do you have enough time/money to change everything?

In theory, it's a simple balance between costs and benefits, so you should only do the rewrite if the benefits outweigh the costs.
The problem is that it's almost impossible to measure the real costs (not only of doing the work, but of not doing other things that might contribute more). Generally speaking, the benefits can't really be measured at all -- you can only guess at how much more difficult things would have been if you'd stayed with the old code.
That leaves little chance of deciding anything based on rational measurement, which leaves a simple rule of thumb: leave old code alone until your only choices are to rewrite it or abandon it completely.

Related

Developing using pre-release dev tools

We're developing a web site. One of the development tools we're using has an alpha release of its next version available, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines to do pretty much exactly the same thing anyway).
I've done some initial evaluations of it and I like what I see. The question is, should we start actually using it for real? I.e. beyond just evaluating it, actually using it for our development and relying on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production. So if it isn't ready before we are, it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had experience contributing to an open source project once, like you say you hope to. They ignored the patch for a year (they have customers to attend to, of course, although they don't sell the software but the support). After a year, they rejected the patch with no alternative solution to the problem and no sound rationale for the decision. It was just out of their scope at that time, I guess.
In your situation I would try to fix one or two of their not-so-high-priority, already reported bugs and see how responsive they are, and then decide. Your success in meeting deadlines will be tied to theirs. And if you have to maintain your own copy of their artifacts, that's guaranteed pain.
In short: not only evaluate the product, evaluate the producers.
Regards.
My personal take on this: don't. If they don't come through for you in your time scale, you're stuck and will still have to put in the thousands of lines yourself and probably under a heavy time restriction.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out, that is to insulate your own code from the library's, for example using adapter or facade patterns, then go ahead and use the alpha for development. But determine beforehand what the latest date is according to your release schedule that you should start developing your own thousands of lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then: grin and bear it and develop your own.
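One way to sketch that insulation in Java, assuming a purely hypothetical alpha library class called AlphaUploader: the rest of the codebase depends only on your own interface, so the alpha dependency or a home-grown replacement can be swapped in behind it. The cost is one layer of indirection; the benefit is that the cut-off date in your release schedule only ever affects the classes behind the interface.

    // The narrow interface the rest of your code depends on.
    interface FileTransfer {
        void upload(String localPath, String remoteUrl);
    }

    // Stand-in for the alpha library's API (the class and method are purely hypothetical).
    class AlphaUploader {
        void send(String localPath, String remoteUrl) {
            System.out.println("alpha lib: " + localPath + " -> " + remoteUrl);
        }
    }

    // Adapter over the alpha library; only this class changes if its API changes,
    // or if the library has to be ripped out entirely.
    class AlphaLibraryTransfer implements FileTransfer {
        private final AlphaUploader uploader = new AlphaUploader();
        public void upload(String localPath, String remoteUrl) {
            uploader.send(localPath, remoteUrl);
        }
    }

    // The fallback you start writing if the alpha misses your deadline.
    class HomeGrownTransfer implements FileTransfer {
        public void upload(String localPath, String remoteUrl) {
            System.out.println("home-grown: " + localPath + " -> " + remoteUrl);
        }
    }

    public class TransferDemo {
        public static void main(String[] args) {
            FileTransfer transfer = new AlphaLibraryTransfer();   // one line to swap implementations
            transfer.upload("report.pdf", "https://example.com/inbox");
        }
    }
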
It depends.
For opensource environments it depends more on the quality of the release than the label (alpha/beta/stable) it has. I've worked with alpha code that is rock solid compared to alleged production code from another producer.
If you've got the source then you can fix any bugs, whereas with closed source (usually commercially supported) you could never release production code built with a beta product, because it's unsupported by the vendor who has the code, and so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course all of the above doesn't apply to anything even remotely safety critical.
It is just a question of managing risks. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?

Legacy code - when to move on

My team and I support a large number of legacy applications, all of which are currently functional but problematic to support and maintain. They all depend on code that the compiler manufacturer no longer officially supports.
So the question is should we leave the code as is, and risk a new compiler breaking our code, or should we bite the bullet and update all the code?
The answer depends entirely on the resources your employer (or you yourself) can afford to put into the refactoring (or even a complete rewrite of big parts).
So you should first estimate how much time and how many developers you can afford to spend refactoring the application, then decide whether you think that will be enough.
If you can afford the time and the people, then do it, don't hesitate! You're investing in the future: the application will be quicker to debug and less expensive to maintain once the refactoring is done.
It depends on the nature of the applications, just how big and important they are, as well as the programming culture at your workplace, and the resources available to you.
If the applications are valuable enough to you that they are worth the trouble, and you have the necessary resources, then do the update. Don't let the problem persist.
If they are not valuable enough to be worth a full-scale update effort, or appropriate resources are not at hand, perhaps work on updating one at a time if possible.
Just some suggestions, but again this greatly depends on you and your organization.
It sounds like you have a large technical debt. This debt is only going to increase unless you do something. Both things you mentioned are options, and risky, but long term it's a risk you need to take.
Using an updated compiler just means you need to update the code to work with the new compiler. Something is bound to break, but then you refactor the parts that break. This lets you migrate incrementally.
The other option is to update your entire code base. This takes time, during which you need to maintain 2 copies of the code, or freeze the old version. Freezing the old version is probably not an option.
I would recommend using an updated compiler and fixing what breaks. This allows you to add features, while refactoring and fixing the current codebase.
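As a small concrete example of the kind of breakage a compiler upgrade can cause (staying with the Java theme of this page): enum became a reserved word in Java 5, so perfectly legal Java 1.4 code stops compiling and has to be touched before you can even start refactoring.

    import java.util.Iterator;
    import java.util.List;

    public class ReservedWordBreak {
        // This compiled fine under Java 1.4. Under Java 5+ the commented-out line is rejected,
        // because 'enum' is now a keyword; the fix is a trivial rename, but every such site
        // has to be found, changed and rebuilt.
        static void printAll(List items) {
            // Iterator enum = items.iterator();   // no longer legal from Java 5 onwards
            Iterator it = items.iterator();
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }
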
Rewriting the code can be a useful step for your company for many reasons:
you can use a new compiler and a more recent platform
you can refactor the code, removing its weaknesses
you can motivate your people, because developing new code is better than correcting bugs in old code.
Why don't you start that activity with a small number of people, beginning with the most commonly used parts of the code? You can group those parts into a DLL and reuse it in future projects.

Future of languages with no standard and no corporate backing

Over the years we have seen (well, I have :) a number of languages come and go. Some were more widely accepted, some a little less. So I was wondering: what do you think are the factors that most affect whether a language survives? And whether it will have a future for a number of years (by that I mean several decades or so)?
For example, Fortran and C have survived the test of time. They were popular, but they also had very good corporate backing, financing, and standard specifications (ANSI and ISO).
Some of the modern languages I see today, although popular, have none of that (the current implementation is often considered the standard). That is all fine for the time being, but what about 10 or 20 years from now, when their authors may no longer be around? I very rarely see open source languages make the transition to corporate financing.
If you could put it in a few words, what do you think are the most important factors for the survival of a language, and why?
Ruby is popular, although it has no corporate backing. And it has been here for 14 years already.
Perl already survived 22 years, and probably will survive a few more.
Python has no corporate backing (OK, I don't know whether you'd count Google's engagement), yet it has made it into Fortune 500 companies.
On the other hand:
Pascal got corporate backing and died.
Ada has corporate backing and is practically reduced to a DSL for avionics.
I think the answer depends a lot on the time-frame in which you define survival. This is important because I think there are three factors that have changed over time, and are still changing:
Hardware performance (i.e. speed or memory)
Hardware complexity (i.e. single-core vs. multi-core)
Software complexity
I think the reason C has survived is because, until just the past few years, there was still a very real need for maximum performance in a lot of applications. Perhaps there will always be that kind of need, but I think it has been growing much less relevant in the past few years. I think it's always going to be around, but I'd be surprised if it was widely used 20 years from now; it's already started getting passed up in favor of C#/Java/etc in the past five years.
The recent (by which I mean past five years or so) rise of languages like Python are also a response to the fact that software has grown more complex, while performance has become less of an issue. Because consumers value the 'now', there's a huge incentive to develop quickly, and worry about speed later, if at all. That has a pretty big impact on which language you use for development.
I see clarity, maintainability and ease of use as the most important factor for survival, if you take the future out to 20+ years.
Every future language needs to make an existing problem easy
For example, concurrent programming is not easy in most languages today. This will be solved by a new language, because we cannot easily coax our existing paradigms into the parallel world. Just take a look at Java: it was built from the ground up with threads in mind, yet it has so many caveats when you dare to do concurrent programming.
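To make those caveats concrete, a small Java sketch: each call on a synchronized map is thread-safe on its own, yet the common check-then-act idiom is still a race, and nothing in the language stops you from writing it; you have to know to reach for an atomic compound operation instead.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CheckThenActRace {
        // Each individual call on this map is thread-safe, but the combination below is not:
        // two threads can both see the key as missing and both insert it.
        static final Map<String, Integer> counts =
                Collections.synchronizedMap(new HashMap<String, Integer>());

        static void recordBroken(String key) {
            if (!counts.containsKey(key)) {   // check ...
                counts.put(key, 0);           // ... then act: another thread may interleave here
            }
        }

        // The fix requires knowing the caveat and reaching for an atomic compound operation.
        static final ConcurrentHashMap<String, Integer> safeCounts =
                new ConcurrentHashMap<String, Integer>();

        static void recordSafe(String key) {
            safeCounts.putIfAbsent(key, 0);   // a single atomic check-and-insert
        }
    }
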
We'll need a system that makes it so easy to do concurrent programming that we won't even need to think about it. We'll need a memory model that protects us from having to think about these problems. For those who can't imagine such a world, you are just stuck in our current paradigm. We will need to change the way we develop software for this to work. Serious problems require change.
Another way for a language to survive is to attach it to an entire system. Just look at Objective C, it is Apple's language for all Apple products. I think this is the way to go. Design a system that is worthy of its own language.
There are many other examples, I've been thinking about this problem for a long time.
As far as I can recall, Fortran had no corporate backing until it was well established. C was backed by AT&T, but they really didn't care whether anyone else adopted it. And both were well established before they had ANSI standards (also note that ANSI and ISO provide standard specifications, not implementations).
On the other hand, IBM heavily backed and promoted PL/I, and that never really caught on. And the US government tried to get all of us writing Ada, and that didn't work either.
So, what does work? Good question. Getting schools to teach it helps (Pascal pretty much disappeared when colleges switched to C++ and Java). Lately, having "buzz on the 'net" helps (cite: Java, Ruby).
In order for a language to survive it needs several things:
It needs to solve a problem better than other comparable options. This is the subjective aspect: developers feel it is better, and so they adopt it.
It needs to have good tooling. Without good tooling a language will never catch on to the masses.
It needs a strong community built around it: a community which provides assistance, help, components, etc.
I don't think corporate backing has a direct impact on these items. I think it can make things such as developing tooling more likely, but there are too many examples where it has helped or not helped adoption of a language.
Open source community has become more like a huge corporate, hasn't it?
Languages survive while they are used, and while people are prepared to maintain them. People are often prepared to maintain the language while it is used. If a language is not used, it dies.
There can be all sorts of things that contribute to, or determine whether, a language dies. Corporate-sponsored languages die if the corporate sponsor ceases to see a benefit (profit) in the language, or they want people to use an alternative, and the corporate sponsor is unwilling to release the code to open source, and there is no open source alternative.
I don't see evidence that corporate backing or standardization are sufficient to determine whether a language survives or not. There are many corporate-backed languages that have failed to gain a strong foothold (Ada comes to mind). There are many standardized languages (Common Lisp) that also failed. On the other hand, there are plenty of non-standard, non-corporate languages that have gained popularity (Perl, PHP, Ruby). There doesn't seem to be causality there.
The viability of a language is really determined by the community around it. There is a positive feedback loop. More users means more support and more libraries which in turn means more users. Popular languages can languish, but they don't totally die out. Not for a long time.
If I were looking for a language to use for something that had to last, the two biggest criteria in my mind would be:
Does it work well for my problem domain?
Is the community strong enough to be self-perpetuating?
If the answers to those two questions are true, use the language. If either answer is false, don't.
While other languages have been almost killed by their corporate backing -- Delphi, for example.

What does it take to make a language successful? [closed]

I have an interesting idea for a new programming language. It's based on a new programming paradigm that I've been working out in my head for some time. I finally got around to start working on a basic parser and interpreter for it a few weeks ago.
I want my new language to be successful and I want to eventually create a community around it when it's ready to release. The idea behind it is fairly innovative, so I don't expect it to gain a lot of ground in the business world, but it would thrill me more than anything else to see a handful of start-ups or open source projects use it.
So taking those aims into account, what can I do to help make my language successful? What do language projects do to become successful? What should I avoid at all costs? I'd love to hear opinions or stories about other languages -- successful or not -- so I can think about them as I continue to develop.
So far, the biggest concerns on my mind are finding a market, getting access to existing libraries, and having amazing tool support. What else might I add to this list?
The true answer is by having a beard.
http://blogs.microsoft.co.il/blogs/tamir/archive/2008/04/28/computer-languages-and-facial-hair-take-two.aspx
Although not specific to new programming languages, the book Producing Open Source Software by Karl Fogel (available to read online) may contain some hints on building a community around your new programming language.
In terms of adoption of programming languages in general, it seems like the trend lately has been to have a rich library to make development times shorter.
As there isn't much detail on what your language is like, it's hard to determine whether adoption of the language is going to depend on the availability of a rich library. Perhaps your language will be able to fill a niche that has been overlooked by other languages and be able to gain users. Or perhaps it has a slick name that will draw people in -- there are many factors which can affect the adoption of a language.
Here are some factors that come to mind when thinking about recent successful languages:
Ability to leverage existing libraries in the new language.
Having an adapter to external libraries written in other languages.
Python allows access to code written in C through the Python/C API.
Targeting a platform which already has plenty of libraries available for use.
Groovy and Scala target the Java platform, therefore allowing the use of and interoperation between existing Java code.
Language design and syntax to allow increased productivity.
Many dynamically-typed languages have gained popularity, such as Ruby and Python to name a couple.
More concise and clear code can be written in languages such as Groovy, as opposed to verbose languages such as Java.
Offering features such as functions as first-class objects and closures which aren't offered in more "traditional" languages such as C and Java.
A community of dedicated users who are also willing to teach newcomers about the benefits of the language
The human factor is going to be big in wide-spread support for a language -- if people never start using your language, it won't gain more users.
Also, another suggestion that I could add is to make the development of your language open -- keep your users posted on developments in your language, and allow people to give you feedback. Better yet, let your users take part in the decision-making process, if you feel that is appropriate.
I believe that the more ways people have to participate in bringing up a language, the more they will feel they have a stake in its success, and the more likely it is to gain support.
Good luck!
Most languages that end up taking off rapidly do so by means of a killer app. For C it was Unix. Ruby had Rails. JavaScript is the only available programming system common to most browsers without third-party add-ons.
Another means of success is by fiat. This only works if you have significant clout. For example C#, as nice a language as it might be, wouldn't be anywhere near as popular as it is now if Microsoft had not pushed it as hard as it does. Objective-C is the language of Mac OS X simply because Apple says so.
The vast majority of languages, though, which lack a single killer app or a major corporate backer have gained success through long term investment of their respective creators. Perl and Python are prime examples. C++ has no single entity behind it, but it has evolved as the needs of developers have changed.
Don't worry about trying to make the language be successful; worry about using it to solve real problems and make real money.
You'll either make lots of money from using this language, or not. Once you have lots of money, others may care how you did it. Or not, either way you have lots of money.
If you don't make lots of money, nobody will want to know how you did it.
Edit based on comment: I define successful as people using it, and people use languages to solve problems, most for profit, thus successful == profitable.
In addition to making the language easy to use (which has several meanings), you should develop a comprehensive library that covers, and provides a good level of abstraction over, the following most important areas:
* Data structures and manipulation
* File I/O support
* XML processing
* Networking (plus web based technologies like HTTP/HTTPS)
* Database support
* Synchronous and asynchronous I/O
* Processes and threads
* Math
A well thought out framework that makes rapid development faster (and easier to maintain) would be a great addition. For this, you should know the currently popular frameworks well.
Keep in mind that it takes a lot of time. I think it took Python about 10 years (someone please correct me if I'm wrong).
So even if your community still seems small after say, 5 years, that's not the end of the story.
"It's based on a new programming paradigm that I've been working out in my head for some time."
While laudable, odds are really good that someone has already done something with your "new" paradigm.
To make a language usable, it must build on prior art. Totally new is not a good path to success. My favorite example is Algol 68.
Algol 60 was wildly popular (back in the day, which is a while ago, admittedly).
The experts wanted to build on this success. They proposed some new paradigms, and the effort split into factions. The purists put the new paradigms into Algol 68; it disappeared into obscurity. Some folks created a different version of Algol, called PL/I. It did not have any really new paradigms. It actually went somewhere and was used heavily. Another group created Pascal; it didn't have much that was new -- it discarded things from Algol 60. It also went somewhere and was used heavily.
Your new paradigm must have a clear and concise summary so people can fit it into a context of where the language is usable, how it can be used, what the costs and benefits of using it are.
A "new programming paradigm" causes some people to say "why learn a completely new paradigm when the ones I have work so nicely?" You have to be very clear on how it helps to have a new paradigm.
The language and libraries must work, and work very, very well. A language that isn't rock-solid is worthless. In order to be rock-solid it must be very simple.
It has to have a tutorial that will help anyone get started with your language.
Good Framework for Common Tasks
Easy Installation/Deployment
Good Documentation
Debugger/IDE and other Tools
A popular flagship product that uses your language!
Good documentation, including a detailed reference manual as well as simple examples to get people started quickly.
Good library support so that people can actually write useful programs.
Most popular languages seem to be very strong in either or both of those.
Use Trojan Horse approach
C++ - The Forgotten Trojan Horse
An interesting article on why C++ managed to grab the hearts of programmers so successfully.

What makes code legacy?

I have heard many developers refer to code as "legacy". Most of the time it is code that has been written by someone who no longer works on the project. What is it that makes code, legacy code?
Update in response to:
"Something handed down from an ancestor or a predecessor or from the past" http://www.thefreedictionary.com/legacy. Clearly you wanted to know something else. Could you clarify or expand your question? S.Lott
I am looking for the symptoms of legacy code that make it unusable or a nightmare to work with. When is it better to throw it away? It is my opinion that code should be thrown away more often, and that reinventing the wheel is a valuable part of development. The academic ideal of not reinventing the wheel is a nice one, but it is not very practical.
On the other hand there is obviously legacy code worth keeping.
By using hardware, software, APIs, languages, technologies or features that are either no longer supported or have been superseded, typically combined with little to no possibility of ever replacing that code; instead it is used until it or the system dies.
What is it that makes code, legacy code?
As with a plain legacy, when the author is dead or missing, you as an heir get all or some of his code.
You shed some tears and try to figure out what to do with all this rubbish.
Michael Feathers has an interesting definition in his book Working Effectively with Legacy Code. According to him legacy code is code without automated tests.
It is a very general (and oft-abused) term, but any of the following would be legitimate reasons to call an app legacy:
The code base is based on a language/platform which is entirely unsupported by the manufacturer of the original product (often said manufacturer has gone out of business).
(really 1a) The code base or platform on which it is built is so old that getting qualified or experienced developers for the system is both hard and expensive.
The application supports some aspect of the business which is no longer actively grown and for which alterations are extremely rare, normally to fix it if something entirely unexpected changes around it (the canonical example being the Y2K issue) or if some regulation or external pressure forces a change. Since both reasons are pressing and normally unavoidable, but no significant development has occurred on the project, it is likely that the people assigned to deal with it will be unfamiliar with the system (and its accumulated behaviours and intricacies). In these cases this would often be a reason to increase the perceived and planned-for risk associated with the project.
The system has/or is being replaced with another. As such the system may be used for much less than originally intended, or perhaps only as a means of viewing historical data.
Legacy generally refers to code that is no longer being developed - meaning that if you use it, you have to use it on its original terms - you cannot just edit it to support the way the world looks today. For example, legacy code has to run on hardware that may not exist today - or is no longer supported.
According to Michael Feathers, the author of the excellent Working Effectively with Legacy Code, legacy code is code that has no tests -- code where there is no way to know what breaks when it changes.
The main thing that distinguishes legacy code from non-legacy code is tests, or rather a lack of tests. We can get a sense of this with a little thought experiment: how easy would it be to modify your code base if it could bite back, if it could tell you when you made a mistake? It would be pretty easy, wouldn't it? Most of the fear involved in making changes to large code bases is fear of introducing subtle bugs; fear of changing things inadvertently. With tests, you can make things better with impunity. To me, the difference is so critical, it overwhelms any other distinction. With tests, you can make things better. Without them, you just don't know whether things are getting better or worse.
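A minimal sketch of the kind of test Feathers means, assuming JUnit 4 and a made-up PriceCalculator class: it pins down what the code does today, so a later change that alters that behaviour bites back in the build instead of in production.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {

        // Hypothetical legacy class under test, stubbed here so the sketch is self-contained.
        static class PriceCalculator {
            double totalFor(int quantity, double unitPrice) {
                double total = quantity * unitPrice;
                return quantity >= 10 ? total * 0.9 : total;   // the undocumented bulk discount
            }
        }

        // Characterization test: it records the behaviour the code has today (a 10% bulk
        // discount) so that any change to that behaviour makes the build "bite back".
        @Test
        public void bulkOrdersGetTenPercentDiscount() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.totalFor(10, 10.0), 0.0001);
        }
    }
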
Nobody is gonna read this, but I feel the other answers don't get it quite right:
It has value; if it weren't useful it would've been thrown away long ago.
It's hard to reason about, because of one or more of:
lack of documentation,
the original author cannot be found or has forgotten (yes, two months later your code can be legacy code too!!),
lack of tests or a type system,
it doesn't follow modern practices (i.e. there is no context to hold on to).
There is a requirement to change or extend it.
If there isn't a requirement to change it, it isn't legacy code, since nobody cares about it. It does its thing and there is nobody around to call it legacy code.
A colleague once told me that legacy code was any code that you hadn't written yourself.
Arguably, it's just a pejorative term for code that we don't like any more for whatever reason (typically because it's not cool or fashionable but it works).
The TDD brigade might suggest that any code without tests is legacy code.
Legacy code is source code that relates to a no-longer supported or manufactured operating system or other computer technology.
http://en.wikipedia.org/wiki/Legacy_code
"Legacy code is source code that relates to a no-longer supported or manufactured "
Any code with support (or documentation) missing. Be it:
inline comments
technical documentation
spoken documentation (the person who wrote it)
unit tests documenting the workings of the code
For me legacy code is code that was written prior to some paradigm shift.
It may still be very much in use but it is in the process of being refactored to bring it into line.
e.g. Old procedural code hanging around in an otherwise OO system.
Code (or anything else, really) becomes "legacy" when it has been replaced by something newer/better, and yet despite this it's still used and kept alive "in the wild".
Preserving legacy code is not so much an academic ideal as it is keeping code that works, no matter how poorly. In many conservative enterprise situations, that would be considered more practical than throwing it away and starting again from scratch. Better the devil you know...
Legacy code is code that is painful/expensive to keep current with changing requirements.
There are two ways that this can happen:
The code is unsuitable for change
The semantics of the code have been swapped out to silicon
1) is the easier of the two to recognize. It is software that has fundamental limits making it unable to keep up with the ecosystem around it. For example, a system built around an O(n^2) algorithm won't scale beyond a certain point and must be rewritten if requirements move in that direction. Another example is code using libraries that are not supported on the latest OS versions.
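A small illustration of that first kind of limit (in Java, purely for the sake of example): duplicate detection written with nested loops is fine at the data sizes the system was built for, but once the requirement becomes millions of records the only real fix is to rewrite it around a different data structure.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class DuplicateCheck {
        // O(n^2): fine for the few hundred records the system was written for,
        // hopeless once the requirement becomes millions.
        static boolean hasDuplicateQuadratic(List<String> ids) {
            for (int i = 0; i < ids.size(); i++) {
                for (int j = i + 1; j < ids.size(); j++) {
                    if (ids.get(i).equals(ids.get(j))) return true;
                }
            }
            return false;
        }

        // O(n): the rewrite that a change in requirements eventually forces.
        static boolean hasDuplicateLinear(List<String> ids) {
            Set<String> seen = new HashSet<String>();
            for (String id : ids) {
                if (!seen.add(id)) return true;   // add() returns false if the id was already present
            }
            return false;
        }
    }
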
2) is harder to recognize, but all code of this kind shares the characteristic that people are afraid to change it. This could be because it was badly written or documented to begin with, because it is untested, or because it is non-trivial and the original authors who understood it left the team.
The ASCII/Unicode characters that comprise living code have semantic meaning, the "why's", "what's" and to some degree the "how's", in the minds of the people associated with it. Legacy code is either un-owned, or the owners no longer associate meaning with large portions of it. Once this happens (and it could happen the next day with really poorly-written code), changing the code means someone must learn it and understand it, a process that takes a significant fraction of the time it took to write it in the first place.
The day you're afraid to refactor your code is the day when your code has become legacy.
I consider code "legacy" if any or all of the following conditions apply:
It was written using a language or methodology that is a generation behind current standards
The code is a complete mess with no planning or design behind it
It is written in outdated languages and in an outdated, non object-oriented style
It is difficult to find developers who know the language because it is so old
Unlike some of the other opinions here, I've seen plenty of modern applications that work decently without unit tests. Unit testing still has not caught on with everyone. Perhaps ten years from now the next generation of programmers will look at our current applications and consider them "legacy" for not containing unit tests, just as I consider non object-oriented applications to be legacy.
If few changes need to be made to a legacy codebase, it's better to simply leave it as is and go with the flow. If the application needs drastic functionality changes, a GUI overhaul, and/or you can't find anyone who knows the programming language, it's time to throw it away and start over. A word of warning, however: rewriting from scratch can be very time-consuming, and it's difficult to know whether you've replicated all the functionality. You'll probably want test cases and unit tests written for both the legacy application and the new one.
Quite honestly, legacy code is any code, framework, API, or other software construct that's not "cool" anymore. For example, COBOL is unanimously regarded as legacy while APL is not. Now, one can also make the case that COBOL is considered legacy and APL is not because it has about a million times the install base of APL. However, if you say that you need to work on APL code, the reply would not be "oh no, that legacy stuff" but rather "oh my god, guess you won't be doing anything for the next century". See the difference?
This is a general term thrown around quite often (and quite generically) in the software ecosystem.
Well, I like to think of legacy code as inherited code. This is simply code that was written in the past. In most cases, legacy code does not follow new/current practices and is often considered archaic.
Legacy code is anything written more than a month ago :-)
It's often any code that isn't written in the trendy scripting language du jour, and I'm only half joking.