Use of the term "page fault" not relating to virtual memory? - terminology

In a number of situations, I've found myself using the term "page fault" to describe something other than virtual memory. For example:
When manually refactoring a block of code into a function, I "page fault" the variables I need into the argument list, that is, I start with an empty parameter list and add whatever the compiler tells me is missing until things compile (see the sketch below).
When working with a new system I generally take guesses at how it works and only "page fault" knowledge from the docs when my first one or two guesses don't work.
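For what it's worth, the first workflow looks roughly like this in Java (names invented for the sketch); the compiler's "cannot find symbol" errors are what get "page faulted" into the parameter list:
public class ExtractExample {
    // Step 1 (won't compile yet): extract the block with an empty parameter list,
    // e.g. "static double applyRate() { return total * rate; }", and let the
    // compiler report which symbols are missing.
    //
    // Step 2: add exactly the parameters the compiler complained about.
    static double applyRate(double total, double rate) {
        return total * rate;
    }

    public static void main(String[] args) {
        System.out.println(applyRate(100.0, 0.2));   // prints 20.0
    }
}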
Is this (mis)use of the term to describe error driven advancement common?

I've never heard it used either of those ways, although I don't work in an environment where the official meaning is used that much either.

Understanding pragmas

I have a few related questions about pragmas. What got me started on this line of questions was trying to determine whether it's possible to disable some warnings without going all the way to no worries (I'd still like to worry, at least a little bit!). And I'm still interested in the answer to that specific question.
But thinking about that issue made me realize that I don't really understand how pragmas work. It's clear that at least some pragmas take arguments (e.g., use isms<Perl5>). But they don't seem to be functions. Where do they fit into the overall MOP? Are they sort of like Traits? Or packages? Is there any way to introspect over them? See what pragmas are currently in effect?
Are pragmas built into the language, or are they something that users can add? When writing a library, I'd love to have some errors/warnings that users can optionally disable with a pragma – is that possible, or are pragmas restricted to use in the compiler? If I can create my own pragmas, is there a practical difference between setting something with a pragma versus with a dynamic variable, aside from the cleaner look of a pragma? For that matter, how do we decide what language features should be set with a pragma versus a variable (e.g., why is $*TOLERANCE not a pragma)?
Basically, I'd be interested in any info about pragmas that you could offer or point me towards – though my specific question is still whether I can selectively turn off certain warnings.
Currently, pragmas are hard-coded in the handling of the use statement. They usually either set some flag in a hash that is associated with the lexical scope of the moment, or change the setting of a dynamic variable in the grammar.
Since use is a compile time construct, you can only use compile time constructs to get at them (currently) (so you'd need BEGIN if it is not part of a use).
I have been in favour of decoupling use from pragmas in the past, as I see them as mostly a holdover from the Perl roots of Raku.
All of this will be changed in the RakuAST branch. I'm not sure what Jonathan Worthington has in mind regarding pragmas in the RakuAST context. For one thing, I think we should be able to "export" a pragma to the scope of a use statement.

Best place to handle exceptions

Where is the proper place to handle exceptions thrown from lower layers: inside the class, or at the highest possible level? Or does it depend on the use case?
You can take a look at this post:
In particular it is now possible (and considered good practice) to set up a top-level exception handler that will handle any unexpected exception on the main thread in a Windows application. This means that it is no longer necessary to have exception handlers in every routine.
You may also look at How to implement top level exception handling?
And one link for exception handling in Java http://onjava.com/pub/a/onjava/2003/11/19/exceptions.html
So, as a general answer to your question: I would say that yes, it depends on the use case (is it just a simple short script or a full-fledged application), but you should try to do the exception handling at the highest possible level, and while doing that, keep in mind the "technicality" of the message you present to your users (trust me, a message like "error 31231241 in main thread" doesn't improve the user friendliness of your application).
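As a rough illustration of that last-resort idea (the quoted advice above is about Windows applications; this is just a minimal Java analogue with an invented entry point): one top-level handler logs the technical details and shows the user something friendlier.
public class App {
    public static void main(String[] args) {
        // Last-resort handler for anything that escapes the lower layers.
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            // Keep the technical details for the developers...
            throwable.printStackTrace();
            // ...and give the user something friendlier than "error 31231241 in main thread".
            System.err.println("Sorry, something went wrong. Please try again or contact support.");
        });

        runApplication();   // invented entry point for the real work
    }

    static void runApplication() {
        throw new IllegalStateException("simulated unexpected failure");
    }
}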
edit:
As Steve McConnell also states in his famous Code Complete 2 book, one should throw exceptions at the right level of abstraction - for example, if you have a getUser() method and it throws an IOException, that would be very bad. But yes, I think that's common sense. He also says that one should write a function in such a manner that if some other function sends it "garbage", it should not cause a crash of the whole program.
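As a small sketch of that abstraction point (Java, with invented class names): the low-level IOException is wrapped in an exception that means something to callers of getUser().
import java.io.IOException;

// Invented domain-level exception; callers of getUser() reason about users,
// not about files or sockets.
class UserLookupException extends Exception {
    UserLookupException(String message, Throwable cause) {
        super(message, cause);
    }
}

class UserRepository {
    public User getUser(String id) throws UserLookupException {
        try {
            return readUserFromDisk(id);
        } catch (IOException e) {
            // Wrap the low-level failure instead of leaking IOException upward.
            throw new UserLookupException("Could not load user " + id, e);
        }
    }

    private User readUserFromDisk(String id) throws IOException {
        // ... real storage access would go here ...
        throw new IOException("disk unavailable");
    }
}

class User { }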
Also, he is in favor of using assertions, and he says: Use error handling code for the conditions you expect to occur; use assertions for conditions that should never occur.
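In Java terms that distinction might look like this (a rough sketch with invented names): real error handling for a condition you expect to occur, an assertion for a condition that should never occur (assertions are only checked when the JVM runs with -ea).
class Account {
    private long balanceCents;

    void withdraw(long amountCents) {
        // Expected-to-occur condition: handle it with real error handling.
        if (amountCents > balanceCents) {
            throw new IllegalArgumentException("Insufficient funds");
        }
        balanceCents -= amountCents;

        // Should-never-occur condition: an assertion documents the invariant.
        assert balanceCents >= 0 : "balance must never go negative";
    }
}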
Finally, he states that while addressing errors you should keep in mind two approaches: robustness and correctness. The story he tells in the book for this is very vivid and stayed in my head long after I'd read it. Consider a text editing application and the correctness of the data it presents. Imagine a few pixels "go wild" (you miscalculate them, or something like that) - you surely wouldn't force-close the application over something like that, and that is called robustness (continue operating). But now imagine you're making an X-ray manipulation application - in that case any "weird data" should (as McConnell suggests) cause a critical error message, and then you're striving for correctness in your application.
P.S. pardon for the CC2 part, but I just love that book and think every developer should read it (at least once).

Powershell 2: How to determine what exceptions a cmdlet can throw?

Back in my (limited) java programming days, I remember this nice feature where if I tried to make a call that could throw an exception, java would require me to handle that exception or pass it off to something that could.
Anyways, I am writing a piece of powershell code that messes around with objects in Active Directory, so I want to be very, very careful. I've gotten occasional remote timeout errors, and that is leading me toward the more general question:
"How can I know ahead of time which of these cmdlets can throw exceptions indicating dangerous conditions, and what is the list of those possible exceptions?"
I am wondering if the list of exceptions, per cmdlet, is way too long to address all possibilities. I also don't want to just write a generic exception handler, as powershell seems to do OK in the general sense of error handling.
What's the best way to determine, per cmdlet, the list of all exceptions that can occur? Is this even possible / feasible?
Thanks!
Heh, I think you started out on the wrong foot there. The jury is very much out on whether Java's checked exceptions are a nice idea.
That said, what you ask is very difficult to answer. In Java, it's clear to the compiler through static analysis what methods throw (or at least what they declare they will throw) what exceptions; this is a closed system existing solely in the process space of the compiler. In the real world of distributed heterogeneous systems, there is no universal checked exception framework. PowerShell cmdlets exist in the domain of a .NET appdomain in a win32 process, but they talk to backing systems on foreign servers using obtuse protocols like Active Directory which are a world apart both in implementation and general conception. Exceptional conditions may "flow" from one domain to the next, but they get warped, wrapped and mushed in all directions before they bubble up to you, the poor user at the console. In short, the answer is no. The general purpose Cmdlets (get-item, get-childitem) do not know about the underlying provider system's propensity to cause errors, and nor can they reliably know this.
However, if you have a dedicated module for Active Directory (like ActiveDirectory module from Microsoft, or Quest's QAD module) then it's possible they have listed the exceptions that their cmdlets will surface in the case of exceptional conditions in the backing system. This help would be found - most likely - in the module (or snapin) help files, or on a per-cmdlet basis. Try running the following command:
ps> get-help do-something -full | more
This will show the full invocation syntax along with any notes the developers have felt good enough to bless you with. Pay particular attention to the footer; it's here you'll usually find a more general help topic like "about_thesecmdlets" that you may view with: get-help about_thesecmdlets
Hope this helps.

Should code prevent a logically invalid call even when no harm would be done?

This one has been puzzling me for some time now.
Let's imagine a class which represents a resource, and in order to be able to use this resource one needs to first call the 'Open' method on it, or an InvalidOperationException will be thrown.
Should my code also check whether someone tries to open an already open resource, or close an already closed one?
Should code prevent a logically invalid invocation even when no harm would be done?
I think that programming this way would help produce better code on the calling side, but I feel that I might be taking on too much responsibility and hurting reusability.
What do you guys think?
Edit:
I don't think this could be called defensive programming, because it won't let a possible bad use slip by either - another InvalidOperationException will be thrown.
This is called defensive programming. That's a good programming practice because you ensure that your application doesn't crash on misbehaviour.
Requiring that some method be called before another method can be called is not a good programming practice. It adds a lot of complexity, which is better handled by the class itself.
This is called sequential coupling. This Wikipedia article says that whether it's a bad practice depends on the context, but it shouldn't crash when handled improperly. Sometimes it's necessary to throw an exception to make things clear.
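As a concrete sketch of the resource from the question (Java-flavoured, all names invented): the class tracks its own state and decides which logically invalid calls are errors and which are harmless.
class Resource {
    private boolean isOpen;

    void open() {
        // Logically invalid: opening twice usually hides a caller bug, so fail loudly.
        if (isOpen) {
            throw new IllegalStateException("Resource is already open");
        }
        isOpen = true;
    }

    void close() {
        // Logically invalid but harmless: closing twice just re-requests a known
        // state, so it is treated as a no-op rather than an error.
        isOpen = false;
    }

    void read() {
        if (!isOpen) {
            throw new IllegalStateException("Resource must be opened before reading");
        }
        // ... the actual work would go here ...
    }
}
Whether close() should be forgiving or strict like open() is exactly the judgement call being discussed here.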
This really depends on what the class actually does. In some cases failing silently is a good idea (eg, you want your DVD player to continue working, not show an error message if it opens the DVD tray that is already open) and in other cases you want as much information as possible (eg, if an airplane tries to close a door that is reportedly already closed, then something is wrong and the pilot should be alerted).
In most cases throwing an error when a logically invalid action is performed is useful for developers, so implementing those exceptions depends on who will use the code. If it is used internally for one application, then it's not vital. But if it is used by many different projects or developers, then I would look into it.
If your example is really the case, then the Open functionality should probably be invoked by the class's constructor.
If you consider the C++ iostream library (which is very widely used and considered quite a good example) you can call any operation on a stream class, whether it is open or not. The called function will simply return a failure indicator of some sort if the operation could not be performed. The functions must of course test the stream state in order to do this.
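A rough Java analogue of that iostream-style approach (an invented class, not a real library API): every operation stays callable, and the object records a failure flag instead of throwing.
class LenientStream {
    private boolean isOpen;
    private boolean hasFailed;

    void open()         { isOpen = true; }
    boolean hasFailed() { return hasFailed; }

    String readLine() {
        if (!isOpen) {
            hasFailed = true;   // record the failure, like iostream's failbit
            return null;        // caller checks hasFailed() instead of catching an exception
        }
        // ... a real read would go here ...
        return "data";
    }
}
Callers then check hasFailed() after each call, much as C++ code checks the stream state.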
What you must not do is allow your programs to silently accept any old input as parameters. For example, this would be a broken implementation of strlen()
int strlen( const char * s )
{
    if ( s == 0 )
    {
        return 0; // bad - silently accepts a null pointer
    }
    else
    {
        int len = 0;
        while ( s[len] != '\0' )
        {
            ++len;
        }
        return len;
    }
}
as it fields bad inputs without causing a fuss - it should instead throw an exception or use an assert(), depending on your exact development philosophy.
There's no substitute for taste, talent and experience in figuring out exactly how many safety checks should be in your code for best cost/benefit ratio for your organization.
Good quality APIs are expected to be fool-proof, and to guide the user with a proper amount of warnings.
Sometimes, safety precautions may impair performance. Performance is one of the most counter-intuitive things in programming. Optimize with care, only when performance really matters.
If this is part of a public SDK that you're releasing to the wild, then the exposed API calls should have strong validation. It will help your 'users' (who are developers) and ensure you aren't stuck supporting usage you never intended to support.
Otherwise, I would not add such checks. I think they make the code harder to read, and these checks are rarely tested. In the past I would add a lot of code like this to make sure my code doesn't do the wrong thing. Now I write unit tests to verify my code does the right thing. The difference? I think tests are more maintainable, more readable, and they don't clutter your production code.
In the case of opening a file that is already open, it depends on knowing the effect of the request - will it reset the current read location, for example?
In the case of closing a file that is already closed, think of it as a request for the file to be put into a known state. The code doesn't have to do anything, but the desired state is achieved, so the code can return a success condition. This is not true if there is some sort of file buffering that needs to be taken care of, or an interlinked resource to coordinate, like a modem/serial port or a printer/spooler.
Step back and think of the problem in terms of the desired outcome including any side-effects.
We once put a 'logout' link on an app menu that was displayed regardless of your login status. Why? Because it only took a simple (and very short) method to handle returning you to the login screen from the login screen and saved a large number of checks to handled tracking the login status just so the 'logout' menu-item was displayed only when you were logged in.
Logically invalid invocations should always be reported to the user in debug mode.
When compiled in release mode, your code should not throw any unneeded exceptions or do anything else which could endanger the whole application.
Personally, I prefer having some kind of logfile, and logging such logically invalid invocations surely will do no harm (at least when performance is not important).

DoSomethingToThing(Thing n) vs Thing.DoSomething()

What factors determine which approach is more appropriate?
I think both have their places.
You shouldn't simply use DoSomethingToThing(Thing n) just because you think "Functional programming is good". Likewise you shouldn't simply use Thing.DoSomething() because "Object Oriented programming is good".
I think it comes down to what you are trying to convey. Stop thinking about your code as a series of instructions, and start thinking about it like a paragraph or sentence of a story. Think about which parts are the most important from the point of view of the task at hand.
For example, if the part of the 'sentence' you would like to stress is the object, you should use the OO style.
Example:
fileHandle.close();
Most of the time when you're passing around file handles, the main thing you are thinking about is keeping track of the file it represents.
CounterExample:
string x = "Hello World";
submitHttpRequest( x );
In this case, submitting the HTTP request is far more important than the string which is the body, so submitHttpRequest(x) is preferable to x.submitViaHttp().
Needless to say, these are not mutually exclusive. You'll probably actually have
networkConnection.submitHttpRequest(x)
in which you mix them both. The important thing is that you think about what parts are emphasized, and what you will be conveying to the future reader of the code.
To be object-oriented, tell, don't ask : http://www.pragmaticprogrammer.com/articles/tell-dont-ask.
So, Thing.DoSomething() rather than DoSomethingToThing(Thing n).
If you're dealing with the internal state of a thing, Thing.DoSomething() makes more sense, because even if you change the internal representation of Thing, or how it works, the code talking to it doesn't have to change. If you're dealing with a collection of Things, or writing some utility methods, procedural-style DoSomethingToThing() might make more sense or be more straightforward; but even then it can usually be represented as a method on the object representing that collection, for instance:
GetTotalPriceofThings();
vs
Cart.getTotal();
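A small Java sketch of that cart idea (types invented for the example): the total lives on the object that owns the items, so callers never reach into the internal list.
import java.util.ArrayList;
import java.util.List;

class Item {
    final long priceCents;
    Item(long priceCents) { this.priceCents = priceCents; }
}

class Cart {
    private final List<Item> items = new ArrayList<>();

    void add(Item item) { items.add(item); }

    // Callers ask the cart for its total instead of iterating its internals.
    long getTotal() {
        long total = 0;
        for (Item item : items) {
            total += item.priceCents;
        }
        return total;
    }
}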
It really depends on how object oriented your code is.
Thing.DoSomething is appropriate if Thing is the subject of your sentence.
DoSomethingToThing(Thing n) is appropriate if Thing is the object of your sentence.
ThingA.DoSomethingToThingB(ThingB m) is an unavoidable combination, since in all the languages I can think of, functions belong to one class and are not mutually owned. But this makes sense because you can have a subject and an object.
Active voice is more straightforward than passive voice, so make sure your sentence has a subject that isn't just "the computer". This means, use form 1 and form 3 frequently, and use form 2 rarely.
For clarity:
// Form 1: "File handle, close."
fileHandle.close();
// Form 2: "(Computer,) close the file handle."
close(fileHandle);
// Form 3: "File handle, write the contents of another file handle."
fileHandle.writeContentsOf(anotherFileHandle);
I agree with Orion, but I'm going to rephrase the decision process.
You have a noun and a verb / an object and an action.
If many objects of this type will use this action, try to make the action part of the object.
Otherwise, try to group the action separately, but with related actions.
I like the File / string examples. There are many string operations, such as "SendAsHTTPReply", which won't happen for your average string, but do happen often in a certain setting. However, you basically will always close a File (hopefully), so it makes perfect sense to put the Close action in the class interface.
Another way to think of this is as buying part of an entertainment system. It makes sense to bundle a TV remote with a TV, because you always use them together. But it would be strange to bundle a power cable for a specific VCR with a TV, since many customers will never use this. The key idea is how often will this action be used on this object?
Not nearly enough information here. It depends if your language even supports the construct "Thing.something" or equivalent (ie. it's an OO language). If so, it's far more appropriate because that's the OO paradigm (members should be associated with the object they act on). In a procedural style, of course, DoSomethingtoThing() is your only choice... or ThingDoSomething()
DoSomethingToThing(Thing n) would be more of a functional approach whereas Thing.DoSomething() would be more of an object oriented approach.
That is the Object Oriented versus Procedural Programming choice :)
I think the well documented OO advantages apply to Thing.DoSomething().
This has been asked before: Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?
Here are a couple of factors to consider:
Can you modify or extend the Thing class? If not, use the former.
Can Thing be instantiated? If not, use the latter as a static method.
If Thing actually gets modified (i.e. has properties that change), prefer the latter. If Thing is not modified, the latter is just as acceptable.
Otherwise, as objects are meant to map onto real world objects, choose the method that seems more grounded in reality.
Even if you aren't working in an OO language, where you would have Thing.DoSomething(), for the overall readability of your code, having a set of functions like:
ThingDoSomething()
ThingDoAnotherTask()
ThingWeDoSomethingElse()
then
AnotherThingDoSomething()
and so on is far better.
All the code that works on "Thing" is in the one location. Of course, the "DoSomething" and other tasks should be named consistently - so you have a ThingOneRead(), a ThingTwoRead()... by now you should get the point. When you go back to work on the code in twelve months' time, you will appreciate taking the time to make things logical.
In general, if "something" is an action that "thing" naturally knows how to do, then you should use thing.doSomething(). That's good OO encapsulation, because otherwise DoSomethingToThing(thing) would have to access potential internal information of "thing".
For example invoice.getTotal()
If "something" is not naturally part of "thing's" domain model, then one option is to use a helper method.
For example: Logger.log(invoice)
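In Java that split might look something like this (names invented for the sketch): the total is naturally the invoice's own method, while logging sits in a helper that only reads it.
class Invoice {
    private final long totalCents;
    Invoice(long totalCents) { this.totalCents = totalCents; }

    // Naturally part of the invoice's domain: it knows its own total.
    long getTotal() { return totalCents; }
}

class InvoiceLogger {
    // Not part of the invoice's domain model, so it lives in a helper.
    static void log(Invoice invoice) {
        System.out.println("Invoice total: " + invoice.getTotal());
    }
}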
If DoingSomething to an object is likely to produce a different result in another scenario, then I'd suggest oneThing.DoSomethingToThing(anotherThing).
For example, you may have two ways of saving things in your program, so DatabaseObject.Save(thing) and SessionObject.Save(thing) would be more advantageous than thing.Save() or thing.SaveToDatabase() or thing.SaveToSession().
I rarely pass no parameters to a class, unless I'm retrieving public properties.
To add to Aeon's answer, it depends on the thing and what you want to do to it. So if you are writing Thing, and DoSomething alters the internal state of Thing, then the best approach is Thing.DoSomething. However, if the action does more than change the internal state, then DoSomething(Thing) makes more sense. For example:
Collection.Add(Thing)
is better than
Thing.AddSelfToCollection(Collection)
And if you didn't write Thing, and cannot create a derived class, then you have no choice but to do DoSomething(Thing).
Even in object oriented programming it might be useful to use a function call instead of a method (or for that matter calling a method of an object other than the one we call it on). Imagine a simple database persistence framework where you'd like to just call save() on an object. Instead of including an SQL statement in every class you'd like to have saved, thus complicating code, spreading SQL all across the code and making changing the storage engine a PITA, you could create an Interface defining save(Class1), save(Class2) etc. and its implementation. Then you'd actually be calling databaseSaver.save(class1) and have everything in one place.
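A compact Java sketch of that idea (all the types are invented): one interface collects the save overloads, so the SQL lives in a single implementation instead of in every class.
// All persistence knowledge sits behind this interface.
interface DatabaseSaver {
    void save(Customer customer);
    void save(Order order);
}

class SqlDatabaseSaver implements DatabaseSaver {
    @Override public void save(Customer customer) {
        // ... build and run the INSERT/UPDATE for customers here ...
    }
    @Override public void save(Order order) {
        // ... build and run the INSERT/UPDATE for orders here ...
    }
}

class Customer { }
class Order { }
Calling databaseSaver.save(customer) then keeps the domain classes free of SQL, and swapping the storage engine means writing one new implementation of the interface.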
I have to agree with Kevin Conner
Also keep in mind the caller of either of the 2 forms. The caller is probably a method of some other object that definitely does something to your Thing :)