System re-design, SyRS - language-agnostic

If you are re-designing a system and writing a SyRS for the re-designed version, how do you, following IEEE 1233, make back-references to "the old design" and mention what was wrong with it?
I can think of 2 ways to do it:
1. Summarize the old system outside the new SyRS, and have the new SyRS simply specify the new system without making back-references to "how it was done" in the old system.
2. No old-system summary up front; instead, the SyRS constantly refers to the old system and what was wrong with it inline, as the new system is being specified.

I'd say #1.
I think a summary of the old system and its major flaws, as introductory matter (not requirements), is a win. From a communication/efficiency perspective, a new developer or tester should not have to learn all about the old system in order to work with the new one, but some overall learning-from-mistakes can still happen at a higher level.
Define the new system in a positive sense. In other words, state what the new system should do: the things it previously did as the old system, the new functionality, and the requirements that address flaws in the old system, all worded as functions/behaviors of the new system.
If you refer to the old system and try to correct its flaws through requirements, you will likely end up with a lot of statements that come out as "not like that". That is generally faulty requirements writing, since it is both hard to test and hard to implement correctly. (For instance, "shall not freeze during searches like the old system did" is untestable as written; "shall return search results within 2 seconds" is not.)

Related

Fix or Start Over (MS Access Database)?

I have sort of agreed to help out a small local community non-profit org with a database that was created for them many years ago. The original developer, along with his original uncompiled files, is long gone. The current file has many problems, but since it is a .mde file I can't pull it apart to see how it all works. I do have access to the Tables, and I can see what the Reports look like (just not how and where they get their data).
My feeling is that it might just be easier (and quicker!) to create a new version using what I can see in the original database as the basis for the new one. This way whatever problems they are experiencing will be resolved and all of the legacy stuff they no longer need/use will be removed. Plus, they will have a version that can be modified in the future even if I'm not around.
If you've had to deal with a similar situation how did you go about deciding which direction to head in? In other words, what are the typical considerations/traps I need to know about before taking on this adventure (other than it taking way more time and effort than I think it will)?
Thanks!
I'd first determine whether big or minor changes are needed now or expected in the near future. If not, I'd try to find a way to keep the system alive.
On the other hand, you already provided some good arguments for setting up a new system, and I agree with them. I would add that it is much more satisfying to do this: if you try to alter an old legacy system, you almost always end up wishing you had started from a greenfield.
A point to consider is that you might need to train the existing users to work with the new system.
Good luck!

Developing using pre-release dev tools

We're developing a web site. One of the development tools we're using has an alpha release available of its next version, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines to do pretty much exactly the same thing anyway).
I've done some initial evaluations on it and I like what I see. The question is, should we start actually using it for real? That is, beyond just evaluating it, actually using it for our development and relying on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production so if it isn't ready before we are then it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had experience contributing to an open source project once, as you say you hope to do. They ignored the patch for a year (they have customers to attend to, of course, although they don't sell the software but the support). After a year, they rejected the patch with no alternative solution to the problem and without a sound rationale. It was just out of their scope at that time, I guess.
In your situation I would try to fix one or two of their not-so-high-priority, already-reported bugs and see how responsive they are, and then decide, because your success in meeting deadlines will be tied to theirs. If you end up having to maintain your own fork of their code, that's guaranteed pain.
In short: not only evaluate the product, evaluate the producers.
Regards.
My personal take on this: don't. If they don't come through for you in your time scale, you're stuck and will still have to put in the thousands of lines yourself and probably under a heavy time restriction.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out (that is, to insulate your own code from the library's, for example using the adapter or facade pattern), then go ahead and use the alpha for development. But determine beforehand, from your release schedule, the latest date by which you must start developing your own thousands-of-lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then, grin and bear it and develop your own.
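For illustration, here is a minimal sketch of that insulation in Java; ReportRenderer and com.example.alpha.TemplateEngine are hypothetical stand-ins for your own seam and the alpha tool:

```java
import java.util.Map;

// Facade the rest of the application codes against (hypothetical name).
interface ReportRenderer {
    String render(String templateName, Map<String, Object> model);
}

// Adapter over the alpha library; com.example.alpha.* stands in for the
// real tool. If the alpha slips, this is the only class we rewrite.
class AlphaRenderer implements ReportRenderer {
    private final com.example.alpha.TemplateEngine engine =
            new com.example.alpha.TemplateEngine();

    @Override
    public String render(String templateName, Map<String, Object> model) {
        // Any breaking change in the alpha API is absorbed here.
        return engine.process(templateName, model);
    }
}
```

The point of the sketch: every call into the alpha library goes through one class you own, so the fallback plan is "reimplement AlphaRenderer", not "rewrite the application".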
It depends.
For open-source environments it depends more on the quality of the release than on the label (alpha/beta/stable) it carries. I've worked with alpha code that is rock solid compared to alleged production code from another producer.
If you've got the source then you can fix any bugs, whereas with closed source (usually commercially supported) you could never release production code built with a beta product, because it's unsupported by the vendor, who has the code, so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course all of the above doesn't apply to anything even remotely safety critical.
It is just a question of managing risk. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?
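One cheap way to cover several of those points at once (assuming a Java shop with JUnit; the library names below are hypothetical) is a characterization test suite that CI runs on every build:

```java
import static org.junit.Assert.assertEquals;

import java.util.Collections;
import org.junit.Test;

// Characterization tests pin down the exact alpha-library behaviour our
// code relies on, so CI fails the moment an upgrade changes it.
// com.example.alpha.* is a stand-in for the real tool.
public class AlphaEngineContractTest {

    @Test
    public void escapesHtmlByDefault() {
        com.example.alpha.TemplateEngine engine =
                new com.example.alpha.TemplateEngine();
        assertEquals("&lt;b&gt;", engine.process("{{value}}",
                Collections.<String, Object>singletonMap("value", "<b>")));
    }
}
```

If an alpha upgrade changes behaviour you rely on, the build tells you the same day, not at launch.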

When to upgrade an existing program to new language features?

Imagine you have a program written in, say, Java 1.4 or C# 1.0. Of course this program doesn't use generics or other language features that were introduced with later versions.
Now some non-trivial changes have to be made, and of course you already have a newer IDE/compiler, so you could use Java 1.6 or C# 3.5 instead. Would you use this opportunity to upgrade to the latest language features, i.e. use generic containers and get rid of many casts, etc.? Or would you leave it as it is, and use the new features only for new parts? Or even stay with the features of the version originally used, to maintain a level of consistency?
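For concreteness, this is the kind of change in question, shown in Java:

```java
import java.util.List;

public class CastExample {
    // Java 1.4 style: raw type, a cast at every read, checked only at runtime.
    static String firstOld(List names) {
        return (String) names.get(0);
    }

    // Java 5+ style: the element type is checked by the compiler.
    static String firstNew(List<String> names) {
        return names.get(0);
    }
}
```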
My basic rule for code maintenance: If it ain't broke, don't fix it.
It's just one specific form of refactoring, so all the advice on when and how to do that applies.
One consideration here is your present and future ability to attract and retain developers to perform maintenance and support on an application built with outdated technology. Many seasoned developers worked with those older versions of Java and C# when they were current, so it isn't that we have no experience with it. But if we spend, say, a year working in .Net 1.0, we will be forgetting what we now know about subsequent versions. And while we are spending that year working in old technology, all of our friends and competitors are honing their skills on the latest technology. We will never be able to make that year up.
In addition to falling behind, we will find it intensely frustrating not to be able to use the functionality in the later versions that we are now comfortable using.
So, younger developers will not have had experience in older technology (.Net 1.0 was released in January 2002). Older developers have been there and don't want to go back.
On the other hand, if you stay back in the older version, you will be comfortable with your skillset. You won't have to spend a lot of time learning about newer technologies. And perhaps best of all, you will have job security.
As far as possible, leave working code alone. It may seem like the code should all be using the latest programming secret sauce, but there is a lot to be said for tried and tested code.
I would only consider rewriting code that must be heavily modified in any case to add a new feature, and even then I would only change the programming metaphor if it will speed up the writing of the new feature.
Otherwise you will constantly be debugging code which was previously working, and using up energy that could have gone into improving the product.
It depends. Does it need to be compiled by old compilers? Do you have enough time/money to change everything?
In theory, it's a simple balance between costs and benefits, so you should only do the rewrite if the benefits outweigh the costs.
The problem with that is that it's almost impossible to measure the real costs (not only of doing the work, but of not doing other things that might contribute more). Generally speaking, the benefits can't really be measured at all; you can only guess at how much more difficult something would be if you stayed with the old code.
That leaves little chance of doing anything based on rational measurement, which leaves a simple rule of thumb: leave old code alone until your only choices are to rewrite it or abandon it completely.

Have you employed any strategies or techniques to overcome user resistance when implementing a new system?

We have been trying for a while now to assist the management (of a customer) with the implementation of a new system that we custom-developed to their requirements. Their old system is text-based (DOS) and their employees have been using it for years. The new system is a Windows GUI and has many advanced features which will make their lives easier and their organisation more efficient. The problem is that the staff do not want to adapt to the new GUI environment; they are now resorting to being as unfriendly and unhelpful as possible, often placing serious obstacles in our way. The management is adamant that implementation must proceed. The system runs trouble-free. We have been consistently friendly and helpful with all parties.
Any advice would be greatly appreciated! Have you encountered something like this before, and did you manage to turn it around?
Note: this question is intended to help programmers etc. with implementation difficulties by sharing experiences and factual solutions that worked. It is not intended to be subjective, and indeed programming techniques may help to solve the problem.
Use the tool
Somebody needs to really understand how the existing tool works: not just well enough to walk through it, but well enough to do the job for real. Why not take two weeks and go and do their job with them? That will both improve your understanding of the tool and may foster a better working relationship with them. And while you're there, perhaps buy the drinks once or twice. It sounds corny, but anything that lowers the hostility and lets you communicate helps.
User experience
Getting a good developer (or better: designer) who understands user experience is probably key. You can't just completely change their tooling and expect their productivity to remain the same.
Keyboard use:
Think of tools like Visual Studio, AutoCAD, etc.: in most cases you don't need the mouse, and the "die hard" types wouldn't notice if you took their mouse away. Try to respect this; provide shortcuts/chords (ideally the same as in the existing system).
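In a Java Swing GUI, for example, honouring the old system's keystrokes is a few lines per action; a sketch, with "F2 = save" as a made-up example:

```java
import javax.swing.Action;
import javax.swing.JComponent;
import javax.swing.KeyStroke;

// Rebind the keystrokes the old DOS screens used (here, F2 = save) so
// the users' muscle memory keeps working in the new GUI.
public class LegacyShortcuts {
    public static void install(JComponent root, Action saveAction) {
        root.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
            .put(KeyStroke.getKeyStroke("F2"), "save");
        root.getActionMap().put("save", saveAction);
    }
}
```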
Terminology:
Keep it the same. Don't invent new terms for things.
Talk to them?
This may or may not be possible, but getting a few key users "on board" early can be pivotal; especially if you genuinely empower them to help with the user experience.
Find the faults
In the existing system. Take away their existing pain points and they may forgive you a lot.
Unfortunately it sounds like a case of needing to close the barn door after the horses have bolted. You really need to get grass-roots buy-in on the need for an improved system before beginning the project, and to maintain that relationship during development.
Having champions of the system at the "coal face" level in the business would a) make sure you meet not only the management's requirements but also the users' goals, which is all-important in a successful system, and b) give the users a system to which they have been a development party, not just one thrust upon them.
Getting people to moan about the shortcomings of an existing system is easy. Describing the possible new system before it is created, in a way which allows the users to comment, lets them feel some control and gives you vital feedback. Be absolutely sure to identify those killer gripes about the old system and make sure they are addressed in the new system.
Of course, this is all a bit late for you. The way forward is to create a review forum with the most vocal opponents and put them in a room with you and management. Get them to defend their reasons for not wanting the new system. If you can't show how your new system is better, then perhaps it isn't. If you can see how the new system might be slightly improved (the movement may only need to be small), then do that; it may go a long way towards getting back the feeling of involvement you missed out on before.
I would sit down with the staff, or a couple of the more loud-mouthed opponents, go through what they find lacking in the system, and suggest that some of these changes be incorporated in future releases. That way they will pay more attention to the system and also feel more a part of the process, instead of just being handed something like some peon. In addition, it would help avoid any misunderstandings about the system.
Get one or more of the users to be your champions by involving them in the development process. Make sure to choose the right ones, hopefully ones you can reason with. When launching, do a launch event and make it a big deal. This isn't necessarily specific to applications, but I've seen it work in my previous work environments. If it's too late for that (you went ahead without any involvement from the actual users), well... there is always a thing called staff turnover, lol. Out with the old and in with the new. Make the new users your buddies :).
You have to show some kind of benefit for making the change. A demo/mockup can be useful for this. Choose a manager to demo it to, and wait. Let it become his idea; then it might move forward. Being too pushy can cause a negative knee-jerk reaction which might block further consideration of the idea.
It is sad that software often gets replaced by a management decision without any user involvement and then people wonder why the system is rejected.
I've witnessed this first-hand. A guy I once worked with was told to develop a new version of an application "in secret". At the end of six months' development it was shown to the users. It didn't meet their requirements, and they were angry they hadn't been involved. Needless to say, the software didn't make it into production, and the developer left shortly after (I felt sorry for him, as he had wasted six months of effort and actually did a really good job considering the circumstances).
The chances are that the software is inferior to the previous application; perhaps data entry is actually slower. (You will be biased, as you wrote it; everyone likes to think their software is better.)
Re-engage with the users, do some analysis, and work out what is bad about the old system. If the new system can address the gripes the users have with the old system, you might be able to turn this around.
Edit: who engaged with your developers? Presumably the managers at the client, who probably never use the system? This is another big mistake people tend to make: managers driving requirements.
If the old system is perfect, then it never needed to be replaced in the first place!

Benefits of cross-platform development?

Are there benefits to developing an application on two or more different platforms? Does using a different compiler on even the same platform have benefits?
Yes, especially if you plan to distribute your code for multiple platforms.
But even if you don't, cross-platform development is a form of future-proofing: if it runs on multiple (diverse) platforms today, it's more likely to run on future platforms than something that was tuned, tweaked, and specialized to work on a version 7.8.3 clean install of vendor X's Q-series boxes (patch level 1452) and nothing else.
There is also a benefit in finding, and simply preventing, bugs by building with a different compiler and on a different OS. Different CPUs can pin down endianness issues early. There is pain at the GUI level if you want to stay native at that level.
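Java mostly hides CPU endianness from you, but the issue still shows up whenever raw bytes cross a platform boundary (files, sockets, native code); a small illustration of the point:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // The same four bytes decode to different ints depending on byte order.
        byte[] raw = {0x01, 0x02, 0x03, 0x04};
        int big    = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).getInt();
        int little = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getInt();
        // Prints big=0x01020304 little=0x04030201: fix the byte order of your
        // file/wire format explicitly instead of inheriting the CPU's.
        System.out.printf("big=0x%08X little=0x%08X%n", big, little);
    }
}
```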
Short answer: Yes.
Short of cloning a disk, it is almost impossible to make two systems exactly alike, so you are going to end up running on "different platforms" whether you meant to or not. By specifically confronting and solving the "what if system A doesn't do things like B?" problem head-on, you are much more likely to find the key assumptions your code makes.
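A sketch of one such assumption, in Java terms (my example, not anything from the original code): hard-coded path separators work on system A and quietly break on system B:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathAssumptions {
    public static void main(String[] args) {
        // Hidden assumption: breaks the moment system B isn't Windows.
        String fragile = "logs\\app.log";

        // Portable form: let the platform supply the separator.
        Path robust = Paths.get("logs", "app.log");

        System.out.println(fragile);
        System.out.println(robust);
        // Line endings are the other classic: prefer %n over a hard-coded \n
        // when the output is consumed by platform-native tools.
        System.out.printf("done%n");
    }
}
```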
That said, I would say you should get a good chunk of your base code working on system A, and then take a day (or a week or ...) and get it running on system B. It can be very educational.
My education came back in the 80's when I ported a source level C debugger to over 100 flavors of U*NX. Gack!
Are there benefits to developing an application on two or more different platforms?
If this is production software, the obvious reason is the lure of a larger client base. Your product's appeal is magnified the moment a client hears that you support multiple platforms. Remember, most enterprises do not use a single OS, or even a single version of an OS. It is fairly typical to find one section using Windows, another Macs, and a smaller one some flavor of Linux.
It also turns out that customizing a product for a single platform is often more tedious than making it run on multiple platforms; the law of diminishing returns kicks in before you know it.
Of course, all of this makes little sense if you are customizing an existing product for a client's proprietary hardware. But even then, keep an eye on the entire range of hardware your client has in his repertoire; you never know when he might ask for it.
Does using a different compiler on even the same platform have benefits?
Yes, again. Different compilers implement different extensions. See to it that you are not dependent on a particular version of a particular compiler.
Further, there may be a bug or two in the compiler itself. Using multiple compilers helps sort these out.
I have also seen bits of a (cross-platform) product built with two different compilers: one was used in those modules where floating-point manipulation required a very high level of accuracy. (It's been a while since I've heard of anyone else doing that, but ...)
I've ported a large C++ program, originally Win32, to Linux. It wasn't very difficult: mostly dealing with compiler incompatibilities, because the MS C++ compiler at the time was non-compliant in various ways (I expect that problem has mostly gone now, until C++0x features start gradually appearing), plus writing a simple platform abstraction library to centralize the platform-specific code in one place. It depends to what extent you are dependent on services from the OS that would be hard to mimic on a new platform.
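The platform abstraction library mentioned there can be as small as one interface plus an implementation per OS. A Java-flavoured sketch of the shape (names are hypothetical; in C++ the same idea is usually one header with per-platform source files):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// The rest of the code base sees only this interface.
interface PlatformServices {
    Path configDir();
}

class WindowsServices implements PlatformServices {
    public Path configDir() {
        return Paths.get(System.getenv("APPDATA"), "MyApp");
    }
}

class UnixServices implements PlatformServices {
    public Path configDir() {
        return Paths.get(System.getProperty("user.home"), ".myapp");
    }
}

class Platform {
    // Exactly one place in the program knows which OS it is running on.
    static PlatformServices detect() {
        String os = System.getProperty("os.name").toLowerCase();
        return os.contains("win") ? new WindowsServices() : new UnixServices();
    }
}
```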
You don't have to build portability in from the ground up. That's why "porting" is often described as an activity you can perform in one shot after an initial release on your most important platform. You don't have to do it continuously from the very start. Purely for economic reasons, if you can avoid doing work that may never pay off, obviously you should. The cost of porting later on, when really necessary, turns out to be not that bad.
Mostly there is an existing platform that the application is written for (bespoke software), but you reach a larger audience (on both platforms) if you decide to build it in a platform-independent language.
Standard software products for SMEs also sell better if they run on different platforms! You gain access to both markets, Windows and Linux (and Mac OS X, and so on...).
Big companies mostly buy only hardware which is supported/certified by the product vendor, just to deploy the specified product.
If you develop on multiple platforms at the same time, you get the advantage of being able to use different tools. For example, I once had a memory overwrite (I still swear I didn't need the +1 for the null byte!) that caused free to crash. I brought the code up on Windows and found the overwrite in about a minute with Rational Purify... it had taken me a week of chasing it under Linux (valgrind might have found it, but I didn't know about it at the time).
Different compilers on the same or different platforms are, to me, a must, as each compiler will report different things, and sometimes one compiler's report of an error will be gibberish while the other compiler makes it very clear.
Using multiple databases while developing means you are much less likely to tie yourself to a particular database, which means you can swap the database out if there is a reason to do so. If you want to integrate something that uses Oracle into an existing infrastructure that uses SQL Server, for example, it can really suck; much better if the Oracle or SQL Server pieces can be moved to the other system (I know of some places that have three different databases for their financial systems... ick).
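A sketch of how that usually looks in Java, with hypothetical names: hide each vendor behind a small interface so only the implementations know whose SQL dialect they speak:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Callers code against this interface, never against a vendor.
interface InvoiceStore {
    double totalFor(String customerId) throws SQLException;
}

class SqlServerInvoiceStore implements InvoiceStore {
    private final Connection conn;

    SqlServerInvoiceStore(Connection conn) {
        this.conn = conn;
    }

    public double totalFor(String customerId) throws SQLException {
        // Vendor-specific SQL (TOP, ISNULL, ...) stays inside this class;
        // this particular query happens to be plain ANSI SQL.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT SUM(amount) FROM invoices WHERE customer_id = ?")) {
            ps.setString(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : 0.0;
            }
        }
    }
}
// An OracleInvoiceStore implements the same interface, so swapping vendors
// touches the implementations, never the callers.
```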
In general, always developing for two or three of anything means that the odds of finding mistakes are better, and the odds of the system being flexible are better.
On the other hand, all of that can take time and effort that, in the moment, is seen as an unneeded expense.
Some platforms have really dreadful development tools. I once worked in an IB where, rather than use Sun's ghastly toolset, people developed code in VC++ and then ported it to Solaris.