How to hide silly warnings in Poedit

I'm translating English to Japanese in Poedit, and I'm getting warnings like this:
That is the Japanese character for a period/full stop! I would like to either teach Poedit what counts as perfectly valid punctuation (preferred, so that other warnings still appear) or disable these warnings altogether (plan B). Does anyone know how?
The only setting in Preferences that I thought might be applicable was "Check Spelling", but disabling that did nothing.
I recently upgraded to Poedit 2, and I don't remember seeing those warnings before, so perhaps it is a new "feature".

Yes, it's a new feature, and yes, it's not entirely perfect yet. As you may or may not be aware, bugs happen, and the traditional process of reporting them to the developer usually works quite well — considerably better, I dare say, than spewing snark at random Internet places.
Thanks for bringing this to my attention, will be fixed in 2.0.3. If you encounter more issues than this, please report them directly.


Reverse engineering a QuickBASIC 3.0 program

I have a program (I own the rights) written in QuickBASIC 3.0, but I no longer have the source code.
Does anyone know of a decompiler I can use to see what the program does?
Basically it takes some numbers as input, performs some calculations, and shows the results. Nothing too complicated.
Thanks
I haven't seen any publicly available tools but there's a page from a guy who claims to have made one. You could try contacting him.
I wouldn't recommend trying it on your own if you don't have any experience in reversing DOS programs. It seems QuickBASIC 3.0 was compiled into some kind of p-code. I've never seen any research on the DOS-era p-code, but it might bear some relation to the one eventually used in Visual Basic 6.0, and that one has been investigated quite a lot.
If you vaguely remember the idea but don't remember the details (e.g. actual values of coefficients in the formula), one thing you could try is to enter some numbers, read the results, and save them in an Excel sheet. Repeat that a couple of times and try to plot the data. Not much, but might help.
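If you go that black-box route, a simple least-squares fit over the recorded input/output pairs can sometimes recover the coefficients directly. Here is a minimal C# sketch, assuming (purely for illustration) that the hidden calculation is roughly linear in a single input; the sample data is made up:
using System;

class FitFormula
{
    static void Main()
    {
        // Hypothetical (input, output) pairs recorded from the old program.
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 3.1, 5.0, 6.9, 9.1, 11.0 };

        double n = x.Length, sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < x.Length; i++)
        {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }

        // Ordinary least squares for y = a*x + b.
        double a = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double b = (sumY - a * sumX) / n;

        Console.WriteLine($"Fitted: y = {a:F3} * x + {b:F3}");
    }
}
A real program will likely need more samples and a richer model (polynomial, several inputs), but the approach is the same: treat the old executable as an oracle and fit against it.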
Use the debugger of Borland C++ 3.1, but you are going to need knowledge of assembler...

Is it ok to put comments about bug fixes in the source code?

And if so, where do you draw the line? My coworkers and I disagree on this subject. I have seen such things as
// fixes bug # 22
to
// fixed bug: shouldnt be decrementing
i++;
Is it ok if the change is fairly significant, and radically changes what the method was written to do? Or do you simply change the summary text of the method to reflect what it is now meant to do?
My opinion is that this information should be put into source control. Some state that this is bad because then it will be lost outside of the context of source control (say you switch systems and want to keep historical data).
Comments should explain how the methods work.
Source control explains why changes were made.
Adding a comment about bug fixing is a good idea, if you write the right thing.
For example,
/* I know this looks wrong, but originally foo was being decremented here, and
it caused the baz to sproing. Remember, the logic is negated by blort! */
Stuff like Fixes bug #22 is better kept in source control. Comments in your code should be signposts to help future sojourners on their way, not satisfy process and tracking.
No. You should keep information on bugs and the change set that fixes the bug external to the source code. Any comments in the code itself should only relate to what the code is doing. Anything else is just clutter.
I personally feel that comments should be about the code itself, not about a bug fix.
My reason for this is maintainability - 2 (or 10) years later, this comment will no longer be meaningful. In your example above, I would prefer something like:
// Increment i to counteract extra decrement
++i;
The difference is that it's not tied to a bug, but rather what the code is doing. Comments should be commenting on the code, not meta info, IMO.
This opinion is partially because I maintain a very old codebase - and we have lots of comments that are no longer meaningful related to bug fixes or feature enhancement requests, etc....
We had a few comments like this, but then our Bugzilla server died and we restarted at bug #1 so they're all meaningless. A short explanation of the bug is my preferred method now.
Something like // fixes bug # 22
is quite meaningless on its own, and requires supplementary steps to even get an idea about what it means and what role it fulfills. A short description is in my opinion more appropriate, regardless of the bug tracking or source control software you might be using.
If the algorithm needs to be coded in a certain way - to work around a bug in a 3rd party API for example - then that should be commented in the code so that the next person that comes along doesn't try to "optimise" the code (or whatever) and reintroduce a problem.
If this involves adding a comment when you fix the original bug then do it.
It will also serve as a marker so you can find the code you need to check if ever you upgrade to the next version of the API.
Assuming comments aren't superfluous (the classic i++; //increment i example comes to mind), there is almost never a reason to argue against adding a comment, regardless of what it's related to. Information is useful. However, it's best to be descriptive and concise - don't say "fixes bug #YY", but instead add something like "this used to fail for x=0, the extra logic here prevents that". That way someone who looks at the code later can understand why a particular section is critical to proper function.
I rely on FogBugz and check-in comments in svn. Works great, though as jeffamaphone said, case numbers don't make a lot of sense if you lose your bug database.
A problem with putting comments in the code is that, over time, your code will become littered with comments about problems that haven't existed for a while. By placing such comments in the source control check-in comments you're effectively tying information about the fix to the specific version where it was corrected, which can be helpful later on.
My view is that comments should be relevant to the developer's intention, or highlights of 'why' surrounding the algorithm/method.
Comments shouldn't surround a fix-in-time.
I agree that such data should be placed in source control or another part of configuration management. Having worked in codebases that place information about bug fixes in comments, I can say it leads to very cluttered comments and code later. Six months after the fix is in place, do you really need to know that a given line fixed some long-past bug? What do you do with comments when you need to refactor the code?
We use Team Foundation Server for source control here at my company and it lets you tie a check-in to a bug report, so I wouldn't put a comment directly in code to serve the same purpose.
However, in situations where I'm implementing code as a workaround for a bug in the .NET framework or a third-party library I like to put the URL of the Microsoft TechNet log or website that describes the bug and its status.
So obviously
// fix bug #22
i++;
is not effective communication.
Good communication is mostly common sense. Say what you mean.
// Compensate for removeFrob() decrementing i.
i++;
Include the bug number if it seems likely to help future readers.
// Skipping the next flange is necessary to maintain the loop
// invariant if the lookup fails (bug #22).
i++;
Sometimes important conversations are recorded in your bug tracking system. Sometimes a bug leads to a key insight that changes the shape of the code.
// Treat this as a bleet. Misnomed grotzjammers and particle
// bleets are actually both special cases of the same
// situation; see Anna's analysis in bug #22.
i++;
In the Perl 5 source repository it is common to refer to bugs, with their associated Trac numbers, in test files.
This makes more sense to me, because adding a test for a bug will prevent that bug from ever going unnoticed again.
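In the same spirit, tying a short test to the bug keeps the reference useful even after the tracker changes or dies. A rough xUnit-style sketch in C#; the Parser class, its behaviour, and the bug number are all hypothetical:
using System.Collections.Generic;
using Xunit;

// Hypothetical class under test.
public class Parser
{
    public List<string> Parse(string input)
    {
        var items = new List<string>();
        if (string.IsNullOrEmpty(input))
            return items;               // fix for (hypothetical) bug #22: used to throw here
        items.AddRange(input.Split(','));
        return items;
    }
}

public class ParserRegressionTests
{
    // Regression test for bug #22: empty input used to throw instead of
    // returning an empty list. As long as this test runs, the bug cannot
    // quietly come back.
    [Fact]
    public void Parse_EmptyInput_ReturnsNoItems()
    {
        Assert.Empty(new Parser().Parse(""));
    }
}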

What makes code legacy?

I have heard many developers refer to code as "legacy". Most of the time it is code that has been written by someone who no longer works on the project. What is it that makes code legacy code?
Update in response to:
"Something handed down from an ancestor or a predecessor or from the past" http://www.thefreedictionary.com/legacy. Clearly you wanted to know something else. Could you clarify or expand your question? S.Lott
I am looking for the symptoms of legacy code that make it unusable or a nightmare to work with. When is it better to throw it away? It is my opinion that code should be thrown away more often and that reinventing the wheel is a valuable part of development. The academic ideal of not reinventing the wheel is a nice one but it is not very practical.
On the other hand there is obviously legacy code worth keeping.
By using hardware, software, APIs, languages, technologies or features that are either no longer supported or have been superseded, typically combined with little to no possibility of ever replacing that code, instead using it until it or the system dies.
What is it that makes code legacy code?
As with plain legacy, when the author is dead or missing, you as an heir get all or some of his code.
You shed some tears and try to figure out what to do with all this rubbish.
Michael Feathers has an interesting definition in his book Working Effectively with Legacy Code. According to him legacy code is code without automated tests.
It is a very general (and oft-abused) term, but any of the following would be legitimate reasons to call an app legacy:
The code base is based on a language/platform which is entirely unsupported by the manufacturer of the original product (often said manufacturer has gone out of business).
(really 1a) The code base or platform on which it is built is so old that getting qualified or experienced developers for the system is both hard and expensive.
The application supports some aspect of the business which is no longer actively grown and for which alterations are extremely rare, normally to fix it if something entirely unexpected changes around it (the canonical example being the Y2K issue) or if some regulation/external pressure forces it. Since both reasons are pressing and normally unavoidable, but no significant development has occurred on the project, it is likely that those people assigned to deal with this will be unfamiliar with the system (and its accumulated behaviours and intricacies). In these cases this would often be reason to increase the perceived and planned-for risk associated with the project.
The system has been, or is being, replaced with another. As such the system may be used for much less than originally intended, or perhaps only as a means of viewing historical data.
Legacy generally refers to code that is no longer being developed - meaning that if you use it, you have to use it on its original terms - you cannot just edit it to support the way the world looks today. For example, legacy code has to run on hardware that may not exist today - or is no longer supported.
According to Michael Feathers, the author of the excellent Working Effectively with Legacy Code, legacy code is code that has no tests: there is no way to know what breaks when the code changes.
The main thing that distinguishes legacy code from non-legacy code is tests, or rather a lack of tests. We can get a sense of this with a little thought experiment: how easy would it be to modify your code base if it could bite back, if it could tell you when you made a mistake? It would be pretty easy, wouldn't it? Most of the fear involved in making changes to large code bases is fear of introducing subtle bugs; fear of changing things inadvertently. With tests, you can make things better with impunity. To me, the difference is so critical, it overwhelms any other distinction. With tests, you can make things better. Without them, you just don't know whether things are getting better or worse.
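A practical consequence of that definition is Feathers' idea of characterization tests: before touching untested code, pin down what it does today so any behavioural change during refactoring gets flagged. A small sketch in C#; the LegacyPricing routine and its rules are invented for illustration:
using Xunit;

// Hypothetical legacy routine whose exact rules nobody remembers.
public static class LegacyPricing
{
    public static decimal Total(decimal unitPrice, int quantity)
    {
        var total = unitPrice * quantity;
        if (quantity > 10) total *= 0.9m;   // undocumented bulk discount
        return total;
    }
}

public class LegacyPricingCharacterizationTests
{
    // These assertions record current behaviour, not intended behaviour.
    // If a refactoring changes the answers, the tests "bite back".
    [Fact]
    public void Total_SmallOrder_MatchesCurrentBehaviour()
    {
        Assert.Equal(50m, LegacyPricing.Total(5m, 10));
    }

    [Fact]
    public void Total_BulkOrder_MatchesCurrentBehaviour()
    {
        Assert.Equal(49.5m, LegacyPricing.Total(5m, 11));
    }
}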
Nobody is gonna read this, but I feel the other answers don't get it quite right:
It has value; if it wasn't useful it would've been thrown away long ago.
It's hard to reason about, because of one or more of:
lack of documentation,
the original author cannot be found or has forgotten (yes, two months later your code can be legacy code too!),
lack of tests or a type system,
it doesn't follow modern practices (i.e. there is no context to hold on to).
There is a requirement to change or extend it.
If there isn't a requirement to change it, it isn't legacy code, since nobody cares about it. It does its thing and there is nobody around to call it legacy code.
A colleague once told me that legacy code was any code that you hadn't written yourself.
Arguably, it's just a pejorative term for code that we don't like any more for whatever reason (typically because it's not cool or fashionable but it works).
The TDD brigade might suggest that any code without tests is legacy code.
Legacy code is source code that relates to a no-longer supported or manufactured operating system or other computer technology.
http://en.wikipedia.org/wiki/Legacy_code
"Legacy code is source code that relates to a no-longer supported or manufactured "
Any code with support (or documentation) missing. Be it:
inline comments
technical documentation
spoken documentation (the person who wrote it)
unit tests documenting the workings of the code
For me legacy code is code that was written prior to some paradigm shift.
It may still be very much in use but it is in the process of being refactored to bring it into line.
e.g. Old procedural code hanging around in an otherwise OO system.
Code (or anything else, really) becomes "legacy" when it has been replaced by something newer/better, and yet despite this it's still used and kept alive "in the wild".
Preserving legacy code is not so much an academic ideal as it is keeping code that works, no matter how poorly. In many conservative enterprise situations, that would be considered more practical than throwing it away and starting again from scratch. Better the devil you know...
Legacy code is code that is painful/expensive to keep current with changing requirements.
There are two ways that this can happen:
The code is unsuitable for change
The semantics of the code have been swapped out to silicon
1) is the easier of the two to recognize. It is software that has fundamental limits making it unable to keep up with the ecosystem around it. For example, a system built around an O(n^2) algorithm won't scale beyond a certain point and must be re-written if requirements move in that direction. Another example is code using libraries that are not supported on the latest OS versions.
2) is harder to recognize, but all code of this kind shares the characteristic that people are afraid to change it. This could be because it was badly written/documented to begin with, because it is untested, or because it is non-trivial and the original authors who understood it have left the team.
The ASCII/Unicode chars that comprise living code have semantic meaning, the "why's", "what's" and to some degree the "how's", in the minds of people associated with it. Legacy code is either un-owned or the owners do not have meaning associated with large portions of it. Once this happens (and it could happen the next day with really poorly-written code), to change this code, someone must learn it and understand it. This process is a significant fraction of the time it takes to write it in the first place.
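As a concrete illustration of the first kind of limit mentioned above (a self-contained example, not from the original answer), compare a quadratic duplicate check with the linear rewrite that growing data eventually forces:
using System;
using System.Collections.Generic;
using System.Diagnostics;

class ScalingDemo
{
    // The kind of routine that "won't scale beyond a certain point":
    // O(n^2) pairwise comparison to find duplicates.
    static bool HasDuplicatesQuadratic(int[] data)
    {
        for (int i = 0; i < data.Length; i++)
            for (int j = i + 1; j < data.Length; j++)
                if (data[i] == data[j]) return true;
        return false;
    }

    // What the rewrite ends up looking like: O(n) with a hash set.
    static bool HasDuplicatesLinear(int[] data)
    {
        var seen = new HashSet<int>();
        foreach (var value in data)
            if (!seen.Add(value)) return true;
        return false;
    }

    static void Main()
    {
        // Worst case: no duplicates, so both versions scan everything.
        var data = new int[30000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        var sw = Stopwatch.StartNew();
        HasDuplicatesQuadratic(data);
        Console.WriteLine($"O(n^2): {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        HasDuplicatesLinear(data);
        Console.WriteLine($"O(n):   {sw.ElapsedMilliseconds} ms");
    }
}
Double the input and the quadratic version takes roughly four times as long; that is the point at which "must be re-written" stops being optional.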
The day you're afraid to refactor your code is the day when your code has become legacy.
I consider code "legacy" if any or all of the following conditions apply:
It was written using a language or methodology that is a generation behind current standards
The code is a complete mess with no planning or design behind it
It is written in outdated languages and in an outdated, non object-oriented style
It is difficult to find developers who know the language because it is so old
Unlike some of the other opinions here, I've seen plenty of modern applications that work decently without unit tests. Unit testing still has not caught on with everyone. Perhaps ten years from now the next generation of programmers will look at our current applications and consider them "legacy" for not containing unit tests, just as I consider non object-oriented applications to be legacy.
If few changes need to be made to a legacy codebase, it's better to simply leave it as-is and go with the flow. If the application needs drastic functionality changes, a GUI overhaul, and/or you can't find anyone who knows the programming language, it's time to throw it away and start over. A word of warning, however: rewriting from scratch can be very time-consuming, and it's difficult to know if you've replicated all functionality. You'll probably want to have test cases and unit tests written for the legacy application and the new application.
Quite honestly, legacy code is any code, framework, API, or other software construct that's not "cool" anymore. For example, COBOL is unanimously regarded as legacy while APL is not. Now one can also make the case that COBOL is considered legacy and APL not because it has about a million times the install base of APL. However, if you say that you need to work on APL code the reply would not be "oh no, that legacy stuff" but rather "oh my god, guess you won't be doing anything for the next century". See the difference?
This is a general term thrown around quite often (and quite generically) in the software ecosystem.
Well, I like to think of legacy code as inherited code. This is simply code that was written in the past. In most cases, legacy code does not follow new/current practices and is often considered archaic.
Legacy code is anything written more than a month ago :-)
It's often any code that isn't written in the trendy scripting language du jour, and I'm only half joking.

When do you say that the code is Legacy code?

Any useful metrics will be fine
One of the things that I look for in code is unit tests. They give you the freedom to refactor it. So if the code does not have tests, I consider it legacy code.
If the code:
has been replaced by newer code that implements the same functionality or better
is not being used by current systems
is soon to be replaced by something else altogether
has been archived for historic reasons
is no longer supported by its vendor
We use the term "legacy" to refer to any code, still in use, developed using technology we have ceased active development in.
It is code that we would rather rewrite using more recent tools than modify in its current state.
Michael Feathers, author of the excellent "Working Effectively with Legacy Code", defines it as any code that does not have tests.
A better question would probably be what marks a piece of code as non legacy.
To me legacy means unchangeable. So as soon as you're no longer 'able' to change it, it's legacy.
Whether that ability is removed by fixed requirements, fear of breakage, knowledge loss, or some other impact is largely irrelevant.
A related note is that I don't think I'd ever use the exact word legacy as it stirs up too many emotions to be useful.
I don't believe there is a definitive answer, but I do believe that the likelihood that code is legacy code increases with the number of people who don't want to touch it and the likelihood that changing it will cause it to break.
the term "legacy code" is subjective and is probably a loaded term. but in general I subscribe to the view that legacy code is one that is not unit-testable and as such is hard to refactor.
When the code is old enough you never met the developer who originally wrote the code.
When 3rd party libraries aren't supported anymore.
In my opinion all code that is written is legacy code. It might take some time before the original intent and all the decisions made about the code are forgotten, but sooner or later you cannot imagine what they were thinking while writing it. You never write legacy code yourself, right?
Using unit tests, or some measure like seconds since the developer left the building, does not really tell you whether or not the code is legacy code. Legacy code may have a good set of unit tests and comments, and it may have undergone a strict code review and other analysis. This doesn't mean that the code is still relevant for the program at hand. It just suggests that the code might be comparably well written. And if it is no longer relevant, the code will actually make it harder to solve the problem the program is developed for.
Legacy code has been defined in many places as "code without tests". I don't think they are specific in the types of tests, but in general, if you can't make a change to your code without the fear of something unknown happening, well, it quickly devolves.
See "Working Effectively with Legacy Code"
I may be wrong, but I don't think there is an established metric for this.
Usually a piece of code is deemed to be legacy when it has seen at least 5-6 release cycles (maybe more). More often than not, the original implementer is no longer around and the code is simply being maintained.
Almost seconds after the devs leave the premises. :)
If...
there's no money in the bank for new features
you can't find anyone that admits working on the project that needs fixing
the source code to the project you own has gone MIA
...then you're working on legacy code.
Usually people refer to something as legacy code when no one is still around that is familiar with or feels comfortable maintaining the code.
Unit tests make it easier for people unfamiliar with code to dig into it, so the theory is it helps prevent code from becoming "legacy".
Often when code is legacy it is changed in a different manner. People are afraid to change it, but changes also tend to be quick and dirty because nobody understands the full consequences. Code duplication issues may arise, because people don't want to take the risk associated with deeper changes.
So, in such circumstances, the situation may get worse, at an increasing rate.
I don't know of any real metrics that can be used to determine if something is "legacy code" or not, but anything older than just written could be considered legacy. Legacy code means different things to different people/organizations, so it really is somewhat subjective.

Are you fluent in Unicode yet?

Almost 5 years ago Joel Spolsky wrote this article, "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)".
Like many, I read it carefully, realizing it was high time I got to grips with this "replacement for ASCII". Unfortunately, 5 years later I feel I have slipped back into a few bad habits in this area. Have you?
I don't write many specifically international applications, however I have helped build many ASP.NET internet facing websites, so I guess that's not an excuse.
So for my benefit (and I believe many others) can I get some input from people on the following:
How to "get over" ASCII once and for all
Fundamental guidance when working with Unicode.
Recommended (recent) books and websites on Unicode (for developers).
Current state of Unicode (5 years after Joel's article)
Future directions.
I must admit I have a .NET background and so would also be happy for information on Unicode in the .NET framework. Of course this shouldn't stop anyone with a differing background from commenting though.
Update: See this related question also asked on StackOverflow previously.
Since I read the Joel article and some other i18n articles, I have always kept a close eye on my character encoding, and it actually works if you do it consistently. If you work in a company where it is standard to use UTF-8 and everybody knows this / does this, it will work.
Here are some interesting articles (besides Joel's article) on the subject:
http://www.tbray.org/ongoing/When/200x/2003/04/06/Unicode
http://www.tbray.org/ongoing/When/200x/2003/04/26/UTF
A quote from the first article; Tips for using Unicode:
Embrace Unicode, don't fight it; it's probably the right thing to do, and if it weren't you'd probably have to anyhow.
Inside your software, store text as UTF-8 or UTF-16; that is to say, pick one of the two and stick with it.
Interchange data with the outside world using XML whenever possible; this makes a whole bunch of potential problems go away.
Try to make your application browser-based rather than write your own client; the browsers are getting really quite good at dealing with the texts of the world.
If you're using someone else's library code (and of course you are), assume its Unicode handling is broken until proved to be correct.
If you're doing search, try to hand the linguistic and character-handling problems off to someone who understands them.
Go off to Amazon or somewhere and buy the latest revision of the printed Unicode standard; it contains pretty well everything you need to know.
Spend some time poking around the Unicode web site and learning how the code charts work.
If you're going to have to do any serious work with Asian languages, go buy the O'Reilly book on the subject by Ken Lunde.
If you have a Macintosh, run out and grab Lord Pixel's Unicode Font Inspection tool. Totally cool.
If you're really going to have to get down and dirty with the data, go attend one of the twice-a-year Unicode conferences. All the experts go and if you don't know what you need to know, you'll be able to find someone there who knows.
I spent a while working with search engine software - you wouldn't believe how many web sites serve up content with HTTP headers or meta tags which lie about the encoding of the pages. Often, you'll even get a document which contains both ISO-8859 characters and UTF-8 characters.
Once you've battled through a few of those sorts of issues, you start taking the proper character encoding of data you produce really seriously.
The .NET Framework uses Windows default encoding for storing strings, which turns out to be UTF-16. If you don't specify an encoding when you use most text I/O classes, you will write UTF-8 with no BOM and read by first checking for a BOM then assuming UTF-8 (I know for sure StreamReader and StreamWriter behave this way.) This is pretty safe for "dumb" text editors that won't understand a BOM but kind of cruddy for smarter ones that could display UTF-8 or the situation where you're actually writing characters outside the standard ASCII range.
Normally this is invisible, but it can rear its head in interesting ways. Yesterday I was working with someone who was using XML serialization to serialize an object to a string using a StringWriter, and he couldn't figure out why the encoding was always UTF-16. Since a string in memory is going to be UTF-16 and that is enforced by .NET, that's the only thing the XML serialization framework could do.
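The usual way out, if you need a UTF-8 declaration while still serializing to a string, is to hand the serializer a StringWriter whose Encoding property reports UTF-8. A minimal sketch (the Person type is just an example):
using System;
using System.IO;
using System.Text;
using System.Xml.Serialization;

// StringWriter reports UTF-16 because .NET strings are UTF-16 in memory;
// overriding Encoding changes what the XML declaration claims.
public class Utf8StringWriter : StringWriter
{
    public override Encoding Encoding
    {
        get { return Encoding.UTF8; }
    }
}

public class Person
{
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Person));
        using (var writer = new Utf8StringWriter())
        {
            serializer.Serialize(writer, new Person { Name = "Ayumi" });
            // The declaration now reads encoding="utf-8".
            Console.WriteLine(writer.ToString());
        }
    }
}
Of course the string itself is still UTF-16 in memory; the override only matters once those bytes are actually written out as UTF-8.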
So, when I'm writing something that isn't just a throwaway tool, I specify a UTF-8 encoding with a BOM. Technically in .NET you will always be accidentally Unicode aware, but only if your user knows to detect your encoding as UTF-8.
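Concretely, "specify a UTF-8 encoding with a BOM" looks something like the following sketch; the file name is arbitrary:
using System;
using System.IO;
using System.Text;

class Program
{
    static void Main()
    {
        // true => emit a byte-order mark so readers can detect UTF-8 reliably.
        var utf8WithBom = new UTF8Encoding(true);

        using (var writer = new StreamWriter("output.txt", false, utf8WithBom))
        {
            writer.WriteLine("日本語もそのまま書けます。");   // non-ASCII text round-trips safely
        }

        // StreamReader sees the BOM and picks the right decoder.
        using (var reader = new StreamReader("output.txt", true))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}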
It makes me cry a little every time I see someone ask, "How do I get the bytes of a string?" and the suggested solution uses Encoding.ASCII.GetBytes() :(
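The failure mode is easy to demonstrate: ASCII encoding silently turns everything above U+007F into '?', while UTF-8 round-trips the original string.
using System;
using System.Text;

class Program
{
    static void Main()
    {
        string original = "naïve café";

        // Lossy: characters outside 7-bit ASCII are replaced with '?'.
        byte[] asciiBytes = Encoding.ASCII.GetBytes(original);
        Console.WriteLine(Encoding.ASCII.GetString(asciiBytes)); // "na?ve caf?"

        // Lossless: UTF-8 (or UTF-16) round-trips the full string.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(original);
        Console.WriteLine(Encoding.UTF8.GetString(utf8Bytes));   // "naïve café"
    }
}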
Rule of thumb: if you never munge or look inside a string and instead treat it strictly as a blob of data, you'll be much better off.
Even doing something as simple as splitting words or lowercasing strings becomes tough if you want to do it "the Unicode way".
And if you want to do it "the Unicode way", you'll need an awfully good library. This stuff is incredibly complex.
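Two classic examples of why this is hard, in .NET terms: lowercasing is culture-sensitive (the Turkish dotless i), and visually identical strings can compare unequal until you normalize them.
using System;
using System.Globalization;
using System.Text;

class Program
{
    static void Main()
    {
        // Casing depends on culture: in Turkish, 'I' lowercases to dotless 'ı'.
        Console.WriteLine("FILE".ToLower(new CultureInfo("tr-TR")));     // "fıle"
        Console.WriteLine("FILE".ToLower(CultureInfo.InvariantCulture)); // "file"

        // "é" as one code point vs. "e" plus a combining acute accent:
        string composed = "caf\u00E9";
        string decomposed = "cafe\u0301";
        Console.WriteLine(composed == decomposed);                        // False
        Console.WriteLine(composed.Normalize(NormalizationForm.FormC) ==
                          decomposed.Normalize(NormalizationForm.FormC)); // True
    }
}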