Handling missing resources - exception

I've just found myself in a situation where I needed to handle an exception I'll probably never get, so out of curiosity, let's do a small poll.
Do you validate the presence of resources in your programs? I mean the resources that are installed with your program, like icons, images, and similar. Generally, if those are missing, either your install didn't do its job or the user randomly deleted files in your app.
If you do validate the presence, what do you do when the files are not there?
Of course, for web apps you'll have a nice 404 page or a broken link, but what about the rest? Fail early, yes, but leave handling failures to your compiler, or what?

In Python, many folks rely on simple exception handling for this kind of thing. We might wrap an application in a Big-Old-Try-Block that reports "serious problems" for unhandleable exceptions like this and tries to clean up and exit gracefully.
It's hardly worth checking deeply in advance.
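A minimal sketch of that approach in Python (the resource path and the exit message are hypothetical): one top-level try block around the whole program that reports the problem and exits, instead of validating every bundled file up front.

```python
import sys
import traceback

def main():
    # Normal application startup; loading a bundled resource can fail
    # if the install is broken or the user deleted files.
    with open("resources/icon.png", "rb") as f:
        icon = f.read()
    print(f"Loaded icon ({len(icon)} bytes)")

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # The Big-Old-Try-Block: report the "serious problem",
        # clean up if needed, and exit with a non-zero status.
        traceback.print_exc()
        sys.exit("A required file is missing or damaged; please reinstall.")
```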
If it's even possible for the user to get to some super delicate and precious part of the application and have the app die there, undoing hours (or years) of work, then you should rethink that use case to create a more robust scenario where a crash is not so destructive.

Graceful Degradation:
... but no, in a lot of cases it doesn't really matter if resources are missing, even though that technically violates the structural integrity of the program. In the case in my picture above, it's a video game where resources are loaded over the net, and sometimes (I think?) the server doesn't supply the textures needed, so they are displayed as pink and black checkers. In that case it makes a bit of sense.
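A sketch of that kind of graceful degradation (the function names and texture format are made up for illustration): if a texture can't be loaded, substitute an obvious placeholder instead of failing.

```python
def make_checkerboard(size=64, cell=8):
    # Build a pink/black checkerboard as raw RGB bytes, the classic
    # "texture missing" placeholder.
    pink, black = (255, 0, 255), (0, 0, 0)
    pixels = []
    for y in range(size):
        for x in range(size):
            pixels.extend(pink if (x // cell + y // cell) % 2 == 0 else black)
    return bytes(pixels)

def load_texture(path):
    """Return the texture bytes, or a placeholder if the file is missing."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        # Degrade gracefully: the game keeps running, and the pink/black
        # checkers make the missing resource obvious to testers.
        return make_checkerboard()

print(len(load_texture("textures/wall_01.png")))  # placeholder if absent
```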

Related

Why use a buffer overflow exploit?

I understand the concept of buffer overflow, and acknowledge it can give me the opportunity to execute my own code within a foreign executable.
My question is, can't this simply be done in easier ways?
Say, inject a DLL and write your malicious code in DllMain?
Or play with the disassembly and inject assembly code into the executable?
And even if you got your malicious code working, what damage/profit can you get from the act that you could not get by editing the disassembly yourself?
As far as I understand, the moment you have an executable in your hands you are its master and can add/change/remove code by playing with the disassembly, so why make all the effort of searching for exploits?
Thanks, Michael.
Thing is, you don't generally get the victim to run your executable. So the fact that you can make it malicious is of little value.
Instead you can get the potential victim to use your input: that's why it's so interesting.
Most of the time this is because users nowadays have skeptical minds towards executables, while they do not think that a PDF document could contain a virus. In other situations, the only way to deliver the code is through an exploit, such as a buffer/heap/stack overflow.
For example, on Apple iOS devices, the only way to download executable code is through the App Store. All executables that come this way must be explicitly approved by Apple. On the other hand, if the user simply visits a link to a maliciously crafted PDF document in MobileSafari, it could allow an attacker to execute arbitrary code on the device.
This is the case with Comex's JailbreakMe.com site (both v2.0 (Star) and v3.0 (Saffron)). The site has the device load an incredibly intricate PDF file that ultimately leads to jailbreaking the device. There is no chance in the world that Apple would approve an app that did the same thing.

Judging whether an exception is exceptional

It's a pretty popular and well known phrase that you should "only catch/throw exceptions which are exceptional". However, how is an "exceptional" exception determined?
For example, a bad password is very routine when logging into a service, so this is not exceptional. Statistics for a web app would probably show something like one bad login attempt for every 5 attempts (from no specific user). Likewise, attempting to go to checkout with an empty basket in an online store could be very common (especially for new users). However, a file not found could go either way. I usually work along the lines that if a method is missing something it needs to do its work, throw an exception, but it gets a little confusing here. In some cases, a file not found could be common (e.g. a file share used by many users with no tight controls), compared to a very locked-down production environment missing a file, which would be exceptional.
Is this the right way to decide whether an exception is exceptional or not? I can easily class things like no network connection as exceptional, but some cases are hard to judge. Is it subjective?
Thanks
I think it's pretty subjective, honestly, so I prefer to avoid that method of figuring out when I should use exceptions.
Instead, I prefer to consider three things:
Is it likely that I might want to let the call stack unwind more than one level?
Is there another way? (Return null or an error code, etc.) If so, do I have even the slightest performance concern?
If neither of those leads to a clear decision, which option is easier to read for someone who has to maintain the code?
If #1 is true, and I don't have a MAJOR performance concern, I will probably opt to use exceptions because it will speed up my development time not to have to code return codes (and manually code the logic to have them propagate up the call stack if needed). When you use exceptions, call stack unwinding is free of charge for development time.
If #2 is true, and either I'm not going more than one frame (maybe two?) up the call stack or I have a serious performance concern (in a tight loop, for example), then I'll try really hard to find another way that doesn't involve exceptions.
Exceptions are only a tool for programmers in a language which supports them. I don't believe they have any intrinsic connection to what is "exceptional" or not. Instead, I say use them when they are the best tool for the job.
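A small Python sketch of the trade-off, with a hypothetical lookup function: a sentinel return forces every caller to check, while an exception lets the error unwind as many frames as needed.

```python
# Option 1: return a sentinel; each caller must check it explicitly.
def find_user(user_id, db):
    return db.get(user_id)          # None if missing

# Option 2: raise; the call stack unwinds for free until someone cares.
class UserNotFound(Exception):
    pass

def get_user(user_id, db):
    try:
        return db[user_id]
    except KeyError:
        raise UserNotFound(user_id) from None

db = {1: "alice"}
print(find_user(2, db))             # prints None; handled one frame up

try:
    get_user(2, db)                 # may be caught many frames higher
except UserNotFound as missing:
    print("no such user:", missing)
```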

Unwillingness to see boilerplate code as a major issue? Why? Countermeasures?

Almost everywhere I worked I met lots of people who didn't care that they produced massive amounts of boilerplate code.
For me this is one of the worst things ever: it leads to errors, it is boring, and it increases the noise.
The worst example may even be Microsoft's unwillingness to give us a better syntax for the annoying "INotifyPropertyChanged" stuff. You can't use automatically generated properties; you have to create a big redundancy (replicating the property name in the call to "OnPropertyChanged" or whatever your raiser method is called).
Some people go as far as to accept that most programs in many programming languages consist mostly of the same repeated code (noise), not interesting stuff (signal). See MSDN examples, for instance: there is so much unneeded, repeated code all over the place (the horrible "INotifyPropertyChanged" pattern that ruins all the flow being only the tip of the iceberg).
However, when I raise this issue and propose solutions like AOP (PostSharp.NET) or using delegates (for the non-C# folks: anonymous functions, often realized using a lambda operator), all I get is "we don't care".
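To make the complaint concrete for the non-C# folks, here is a rough Python analogue (all names are made up, and this is only an illustration of the idea, not the .NET API): the first class repeats each property name by hand, the second factors the notification into a reusable descriptor so the repetition disappears.

```python
# Hand-written boilerplate: every property repeats its own name in the
# change notification, much like the INotifyPropertyChanged pattern.
class BoilerplateViewModel:
    def __init__(self):
        self._name = ""
        self.listeners = []

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value
        self._notify("name")        # property name duplicated by hand

    def _notify(self, prop):
        for callback in self.listeners:
            callback(prop)

# The same behaviour with the repetition factored into a descriptor:
# declaring a new observable property is one line again.
class observable:
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value
        for callback in getattr(obj, "listeners", []):
            callback(self.name)

class ViewModel:
    name = observable()
    age = observable()

    def __init__(self):
        self.listeners = []

vm = ViewModel()
vm.listeners.append(lambda prop: print("changed:", prop))
vm.name = "Alice"                   # prints "changed: name"
```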
Anyone else here troubled by the insane amount of noise introduced by boilerplate code and who wants to think about ways to push solutions to the boilerplate issue?
For what it's worth, I'm completely on your side.
The boilerplate folks argue that the repetitive, redundant code is "automatic" or "consistent" and therefore doesn't contribute to code complexity. Often when a language forces developers to create boilerplate, the industry creates IDEs and other crutches to automate the process. Then, when the apparent cost of producing that boilerplate code approaches zero, people think it doesn't cost anything.
They're wrong: Boilerplate code contributes to code bulk, and anyone maintaining code has to dig through the irrelevant code to get at the important parts. Also, since auto-generated code can and often does get edited, it can hide bugs introduced by typos, incomplete renaming or other accidents. The cost of boilerplate code is not in its creation but in its maintenance - which many projects try to ignore completely.
In the '80s, I saw trade mags plastered with ads for memory-leak debuggers for C++, and it was an obvious sign to me that memory management in C++ was seriously broken. Now, around Java and C#, I see a proliferation of code-generating assists, and that indicates to me that those languages have issues that would be better solved elsewhere.
Scala has issues of its own, but I love what they've done with properties and auto-initializing constructors.
Just kill 'em. No, really: if someone writes boilerplate code and doesn't care to improve it, I doubt we can call him a professional. It often happens that management wants to see tasks done fast, and the only thing left is to push out some write-only boilerplate code just to make them happy. If your management encourages such an approach, change your job.

How do you stress test your own software?

I've been working on an app, by myself, and I am at a stage where everything works great--as long as the user does everything he or she is supposed to do. :-) The software needs more testing to see how robust it is, how well it works when people do things like click the same button repeatedly, try to open the wrong kind of files, put data in the wrong places, etc.
I'm having a little trouble with this because it's a bit difficult for me to think in terms of using the application incorrectly. These are all edge cases to me. Still, I'd like to have the application as stable and well tested as possible before I start giving it to beta testers. Assuming that I am not talking about hiring professional testers at this point, I'm curious whether y'all have any tips or systematic ways of thinking about this task.
Thanks, as always.
Well, it sounds like you are talking about two different things:
"testing your application's functionality" and "stress testing" (which is the title of your question).
Stress testing is when you have a website and want to check that it can serve 100,000 people at the same time, i.e. seeing how your application performs under stress. You can do this in a number of ways, e.g. by recording some actions and then getting a number of agent machines to hit your application concurrently.
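A minimal load-test sketch in Python (the URL, request count, and concurrency level are placeholders): fire a batch of concurrent requests and report how many succeed and the average latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/"   # hypothetical endpoint under test
REQUESTS = 100
CONCURRENCY = 20

def hit(_):
    # Issue one request and record whether it succeeded and how long it took.
    start = time.time()
    try:
        with urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

successes = sum(ok for ok, _ in results)
average = sum(t for _, t in results) / len(results)
print(f"{successes}/{REQUESTS} succeeded, average latency {average:.3f}s")
```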
This question sounds more like a Quality Assurance question. That is what testers / beta testers are for. But there are things that you can do yourself to validate that your application works as well as it can.
Unit testing your code would be a good start; it helps you find those edge cases. If your method takes in things like ints, try passing in int.MaxValue and int.MinValue and see what blows up. Pass nulls into everything. If you are using .NET you might want to look at Pex; it will go through all the branches/code paths that your application has. That might help you further refine your unit tests to test your application as thoroughly as you can.
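A Python equivalent of that kind of edge-case unit testing (the function under test is invented for the example): probe the extremes and the garbage inputs, not just the happy path.

```python
import unittest

def parse_quantity(value):
    # Hypothetical function under test: accepts an int-like quantity.
    if value is None:
        raise ValueError("quantity is required")
    qty = int(value)
    if qty < 0:
        raise ValueError("quantity cannot be negative")
    return qty

class ParseQuantityEdgeCases(unittest.TestCase):
    def test_boundaries(self):
        # Probe the extremes rather than only typical values.
        self.assertEqual(parse_quantity(0), 0)
        self.assertEqual(parse_quantity(2**31 - 1), 2**31 - 1)

    def test_bad_input_fails_cleanly(self):
        # Nulls, negatives, and junk should all raise, never corrupt state.
        for bad in (None, -1, "not a number"):
            with self.assertRaises(ValueError):
                parse_quantity(bad)

if __name__ == "__main__":
    unittest.main()
```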
Integration tests: see what happens end to end for some of your usual scenarios. This will help you find bugs as you keep developing later.
Those are some quick tips on things you can do yourself to try and find edge cases that you may have missed. But yes, eventually you will need to pass your app off to someone else to test. Just make sure that you have covered off as much as you can before it hits them :-)
Make sure you have adequate code coverage in your unit tests and integration tests.
Use appropriate UI validation, and test combinations that can break it.
I have found that a well-architected application that reduces the number of possible permutations in the UI (ways the user can break it) helps a lot. Design patterns like MVC can be especially useful in this regard, since they make your UI veneer as thin as possible.
Automation.
(Re)factor your code so that another program can throw user events at it. Create simple scripts of user events and play them back to your program. Capture events from beta users and save those as test scripts (useful for reproducing problems and checking for regressions). Write a fuzz tester that applies small random changes to the scripts and tries them against your program as well.
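A toy Python version of that script-and-fuzz idea (the event script and the replay stub are hypothetical): record a sequence of user events, apply small random mutations, and replay the result against the program.

```python
import random

# Hypothetical recorded script of user events (e.g. captured from a beta user).
script = [
    ("click", "open_button"),
    ("type", "filename.txt"),
    ("click", "save_button"),
]

def mutate(events, rng):
    """Apply one small random change: drop, duplicate, or swap an event."""
    events = list(events)
    op = rng.choice(("drop", "duplicate", "swap"))
    i = rng.randrange(len(events))
    if op == "drop":
        del events[i]
    elif op == "duplicate":
        events.insert(i, events[i])
    elif op == "swap" and len(events) > 1:
        j = rng.randrange(len(events))
        events[i], events[j] = events[j], events[i]
    return events

def replay(events):
    # Stand-in for driving the real application; here we just print.
    for kind, target in events:
        print(f"{kind} -> {target}")

rng = random.Random(0)              # fixed seed so failures are reproducible
for run in range(5):
    replay(mutate(script, rng))
    print("---")
```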
With this kind of automation you can stress an application and find glaring problems like crashes and memory leaks. It won't test the actual functionality. For functionality, unit tests can be helpful. There are a ton of unit testing frameworks out there to try. Pick something useful, learn to write good tests, and integrate them into your build process.

Misunderstood Good Ideas

Sometimes someone has a great idea that solves a problem. But as time passes, people forget why it was a great idea and try to use it in ways that end up causing problems as bad as (or worse than) what the idea was originally supposed to solve.
Example:
I'm sure that distributed source control is sufficiently counterintuitive that people try to establish conventions that defeat the point of distributed source control.
Example 2:
It's very natural to think that when you're writing some code, you should handle all errors that could possibly arise. But a function doesn't always have enough information to handle the error properly, so all it can do is somehow tell whoever called it that the error occurred. Passing errors up the call stack by hand is tedious, so exceptions were invented. With no extra typing on the part of the programmer, exceptions will bubble up the call stack until somebody can do something with them. It seems like checked exceptions, at least in practice, tarnish the awesomeness of exceptions. At best, the programmer has to tediously work her way up any possible call stack, specifying that every method throws a given exception up to the point where it can be handled (if it can be handled). Worse, she might swallow the exception to avoid the chore!
What are some other examples where an approach that seems like the common-sense thing to do is actually recreating a problem that had been solved in some way?
Point of this question: internalizing what is wrong with the common-sense "obvious" solution is a very good way of developing an intuition for how and why to use the initially counterintuitive elegant solution.
Hmmm... let me think... what about the foundation on which the web works: stateless HTTP, on top of which many stateful frameworks have been built (ASP.NET, JSF, etc.) that completely discard the stateless nature of the protocol? Well, they don't discard it in their implementation, but they discard it for their users: developers who, not knowing anything about basic web elements, try to pack megabytes of serialized objects into pages, which leads to performance loss and tremendous consumption of traffic and server resources.
Would that fall into your definition of the concept?
You're right that there are many examples of this with DVCS. The most common one is using a DVCS like Subversion: always pushing on every commit, or going days without even bothering to commit.