Verb for what you do when you have A and do A AND B - language-agnostic

Ok, this may seem like a silly question, but it is seriously bugging me. Hoping some fellow programmer has a good word for it!
Thing is, I am making an ExpressionBuilder class to help me build up expressions to use with LinqToSQL. And my problem is how to word myself when describing what two of its methods do. It is kind of a problem for me in general too when talking about it. Here is the issue:
You have an Expression<Func<T, bool>>, A. Later you get another one, B. You are now going to combine that B with A using && / AndAlso or || / OrElse. So for example like this:
A = A && B;
Alright. So, what did you just do there? What is the verb for what you did with B to A? If you think of this stuff as a series, like A = A && B && C && D && E && ..., you could sort of say that you then "add" F to that series. But that wouldn't really be correct either, I feel...
What I feel would be most "correct" is that you take B and you "and" it to/with A. You take B and you "or" it to/with A. But can "and" and "or" be used as verbs? Is that considered ok? It feels like incredibly bad English... but maybe it is ok in a programming environment? Or?

If I was speaking to a mathematician, I would probably use terms like "perform a logical conjunction" (or disjunction).
If I was speaking to a fellow programmer, I would use "and" and "or" as verbs directly.
If I was speaking with my mom, I would probably just find pen and paper and start drawing Venn diagrams.

In logic, AND is the conjunction operator, so you are conjoining A and B. With OR, you are disjoining them.

I think it is perfectly ok to use "and" as a verb in this case. You and'd A and B. It just seems bad due to the words AND and OR themselves. If you talk about it with XOR though, it doesn't sound so bad to say you XOR'd something yet you're effectively saying the same thing.

Compound?
Naming is always one of the hardest things.

If you are adding (e.g. numbers, or items to a set/list) then I'd say "Add"
If you are concatenating (e.g. strings) then I'd say "Append"
Alternatively... if you are just "adding" another item to a list... "Push" works too

Does the output of A feed into the input of B?
If so, I'd use 'chain', or 'compose' in the sense of functional composition.
Otherwise, if they're independent functions which are being combined, then maybe 'cat' as shorthand for concatenate.

In general, this is composition of functions. Since these functions are all predicates, you're putting them together with the various logical operations, so the specific composition would be conjunction, disjunction, etc. All the basic set theory terms I forgot since college!
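To make that concrete, here is a tiny sketch of predicate composition in Haskell (the combinator names are invented, but any language with first-class functions can do the same):

import Data.Char (isDigit)

-- "and" and "or" as verbs, literally: combine two predicates pointwise.
(.&&.), (.||.) :: (a -> Bool) -> (a -> Bool) -> (a -> Bool)
p .&&. q = \x -> p x && q x
p .||. q = \x -> p x || q x

-- usage: accept non-empty strings consisting only of digits
isCode :: String -> Bool
isCode = (not . null) .&&. all isDigit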

How about logically connect?
-- http://en.wikipedia.org/wiki/Logical_connective

I'd go with Set Notation (Venn Diagrams) when explaining it.
AND: A intersected with B
OR: A unioned with B
http://www.purplemath.com/modules/venndiag2.htm
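That reading can even be checked mechanically; a small Haskell sketch (the example set and predicates are arbitrary):

import qualified Data.Set as Set

-- Filtering on (p AND q) gives the intersection of the filtered sets;
-- filtering on (p OR q) gives their union.
demo :: (Bool, Bool)
demo =
    ( Set.filter (\x -> p x && q x) s == Set.intersection (Set.filter p s) (Set.filter q s)
    , Set.filter (\x -> p x || q x) s == Set.union (Set.filter p s) (Set.filter q s) )
  where
    s = Set.fromList [1 .. 20 :: Int]
    p = even
    q = (> 10)
-- demo evaluates to (True, True)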

Monads - where are they necessary?

The other day I was talking about functional programming - especially Haskell - with some Java/Scala guys, and they asked me what Monads are and where they are necessary.
Well the definition and examples were not that hard - Maybe Monad, IO Monad, State Monad etc., so everyone was, at least partially, ok with me saying Monads are a good thing.
But where are Monads necessary? Maybe can be avoided via magic values like -1 in the setting of Integer, or "" in the setting of String. I have written a game without the State Monad, which is not nice at all, but beginners do that.
So my question: where are Monads necessary and cannot be avoided at all?
(And to avoid confusion: I like Monads and use them, I just want to know.)
EDIT
I think I have to clarify that I do not think using "Magic Values" is a good solution, but a lot of programmers use them, especially in low-level languages such as C or in shell scripts, where an error is often implied by returning -1.
It was already clear to me that not using monads isn't a good idea. Abstraction is often very helpful, but also complicated to get, hence many people struggle with the concept of monads.
The very core of my question was whether it is possible to do, for example, IO without a monad while still being pure and functional. I knew it would be tedious and painful to put a known-good solution aside, like lighting a fire with flint and tinder instead of using a lighter.
The article @Antal S-Z refers to, "You Could Have Invented Monads!", is great; I skimmed over it, and will definitely read it when I have more time. The more revealing answer is hidden in a comment, in the blog post "I remember the time before monads" referred to by @Antal S-Z, which was the stuff I was looking for when I asked the question.
I don't think you ever need monads. They're just a pattern that shows up naturally when you're working with certain kinds of function. The best explanation of this point of view that I've ever seen is Dan Piponi (sigfpe)'s excellent blog post "You Could Have Invented Monads! (And Maybe You Already Have.)", which this answer is inspired by.
You say you wrote a game without using the state monad. What did it look like? There's a good chance you ended up working with functions with types that looked something like openChest :: Player -> Location -> (Item,Player) (which opens a chest, maybe damages the player with a trap, and returns the found item). Once you need to combine those, you can either do so manually (let (item,player') = openChest player loc ; (x,player'') = func2 player' y in ...) or reimplement the state monad's >>= operator.
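For instance, here is a minimal sketch of that hand-rolled threading, with a hypothetical PlayerState standing in for whatever the game really tracks:

-- Hypothetical stand-in for the game's actual state.
type PlayerState = Int

-- A stateful step maps a state to (result, new state) -- exactly the
-- shape the State monad wraps.
type Game a = PlayerState -> (a, PlayerState)

-- Combining two steps by hand; this is the state monad's >>= reinvented.
andThen :: Game a -> (a -> Game b) -> Game b
andThen step f = \s ->
    let (x, s') = step s
    in f x s'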
Or suppose that we're working in a language with hash maps/associative arrays, and we're not working with monads. We need to look up a few items and work with them; maybe we're trying to send a message between two users.
send username1 username2 = {
    user1 = users[username1]
    user2 = users[username2]
    sendMessage user1 user2 messageBody
}
But wait, this won't work; username1 and username2 might be missing, in which case they'll be nil or -1 or something instead of the desired value. Or maybe looking up a key in an associative array returns a value of type Maybe a, so this will even be a type error. Instead, we've got to write something like
send username1 username2 = {
    user1 = users[username1]
    if (user1 == nil) return
    user2 = users[username2]
    if (user2 == nil) return
    sendMessage user1 user2 messageBody
}
Or, using Maybe,
send username1 username2 =
    case users[username1] of
        Just user1 -> case users[username2] of
            Just user2 -> Just $ sendMessage user1 user2 messageBody
            Nothing    -> Nothing
        Nothing -> Nothing
Ick! This is messy and overly nested. So we define some sort of function which combines possibly-failing actions. Maybe something like
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Just x  >>= f = f x
Nothing >>= _ = Nothing
So you can write
send username1 username2 =
    users[username1] >>= \user1 ->
    users[username2] >>= \user2 ->
    Just (sendMessage user1 user2 messageBody)
If you really didn't want to use Maybe, then you could implement
x >>= f = if x == nil then nil else f x
The same principle applies.
Really, though, I recommend reading "You Could Have Invented Monads!" It's where I got this intuition for monads, and explains it better and in more detail. Monads arise naturally when working with certain types. Sometimes you make that structure explicit and sometimes you don't, but just because you're refraining from it doesn't mean it's not there. You never need to use monads in the sense that you don't need to work with that specific structure, but often it's a natural thing to do. And recognizing the common pattern, here as in many other things, can allow you to write some nicely general code.
(Also, as the second example I used shows, note that you've thrown the baby out with the bathwater by replacing Maybe with magic values. Just because Maybe is a monad doesn't mean you have to use it like one; lists are also monads, as are functions (of the form r ->), but you don't propose getting rid of them! :-))
You could take the phrase "where is/are X necessary and unavoidable?" and let X be anything at all in computing; what would be the point?
Instead, I think it's more valuable to ask, "what value does X provide?"
And the most basic answer is that most X's in computing provide a useful abstraction that makes it easier, less tedious, and less error-prone to put code together.
Okay, but you don't need the abstraction, right? I mean, I could just type out a little code by hand that does the same thing, right? Yeah, of course, it's all just a bunch of 0's and 1's, so let's see who can write an XML parser faster, me using Java/Haskell/C or you with a Turing machine.
Re monads: since monads typically deal with effectful computations, this abstraction is most useful when composing effectful functions.
I take issue with your "magic values" replacement for the Maybe monad. That approach offers a very different abstraction to the programmer, and is less safe, more tedious, and more error-prone to deal with than an actual Maybe monad. Also, reading such code, the programmer's intent would be less clear. In other words, it misses the whole point of real monads, which is to provide an abstraction.
I'd also like to note that monads are not fundamental to Haskell:
do-notation is simply syntactic sugar, and can be entirely replaced by >>= and >> without any loss of expressiveness (see the desugaring sketch after this list)
they (and their combinators, such as join, >>=, mapM, etc.) can be written in Haskell
they can be written in any language that supports higher-order functions, or even in Java using objects. So if you had to work with a Lisp that didn't have monads, you could implement them in that Lisp yourself without too much trouble
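On the first point, here is what the desugaring looks like on a tiny example; both definitions compile to the same code:

greet :: IO ()
greet = do
    name <- getLine
    putStrLn ("Hello, " ++ name)

-- the same action with the sugar removed
greet' :: IO ()
greet' = getLine >>= \name -> putStrLn ("Hello, " ++ name)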
Because monadic operations return an answer of the same monad type, implementations of that type can enforce and preserve semantics. Then, in your code, you can chain operations with that type and let it enforce its rules, regardless of the type(s) it contains.
For example, the Optional class in Java 8 enforces the rule that the contained value is either present and non-null, or else not present. As long as you are using the Optional class, with or without using flatMap, you are wrapping that rule around the contained data type. No one can cheat or forget and add a value=null with present=true.
So declaring outside the code that -1 will be a sentinel value and mean such-and-such is fine, but you are still reliant on yourself and the other people working in the code to honor that semantic. If a new guy comes on board and starts using -1000000 to mean the same thing, then the semantics need to be enforced outside the code (perhaps with a lead pipe?) rather than through code mechanisms.
So rather than having to apply some rule consistently in your program, you can trust the monad to preserve that rule (or other semantics) -- over arbitrary types.
In this way, you can extend functionality of types by wrapping semantics around them, instead of, say, adding an "isPresent" to every type in your code base.
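Here is a minimal sketch of that wrapping idea in Haskell (the type and names are made up): a smart constructor enforces the rule once, and everything downstream can rely on it, much like Optional's present-and-non-null rule.

newtype NonNegative = NonNegative Double

-- The smart constructor is the only sanctioned way to build one, so the
-- "non-negative" rule is enforced at the boundary.
mkNonNegative :: Double -> Maybe NonNegative
mkNonNegative x
    | x >= 0    = Just (NonNegative x)
    | otherwise = Nothing

-- Downstream code chains through Maybe and never re-checks the rule.
safeSqrt :: Double -> Maybe Double
safeSqrt x = do
    NonNegative v <- mkNonNegative x
    return (sqrt v)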
The presence of the numerous monadic utility types points to the fact that this mechanism of wrapping types with semantics is a pretty useful trick. If you have your own semantics that you'd like to add, you can do that by writing your own class using the monad pattern, and then inject strings or floats or ints or whatever into it.
But the short answer is that monads are a nice way to wrap common types in a fluent or chain-able container to add rules and usage without having to fuss with the implementation of the underlying types.

Is a simple ternary case okay for use in program flow as long as it does not hurt readability?

After reading To ternary or not to ternary? and Is this a reasonable use of the ternary operator?, I gathered that simple uses of the ternary operator are generally accepted, because they do not hurt readability. I also gathered that having one side of the ternary block return null when you don't want it to do something is a complete waste. However, I ran across this case while refactoring my site, and it made me wrinkle my nose:
if ($success) {
    $database->commit();
} else {
    $database->rollback();
}
I refactored this down to
$success ? $database->commit() : $database->rollback();
And I was pretty satisfied with it... but something inside me made me come here for input. Exception catching aside, would you consider this an okay use case? Am I wondering if this is okay only because I have never done this before, or because it really is bad practice? This doesn't seem difficult to me, but would it seem difficult to understand for anyone else? Does it depend on the language - as in, would this be more or less wrong in C, C++, or Java?
No, it is not OK. You are turning something that should look like a statement into something that looks like an expression. In fact, if commit() and rollback() return void, this will not compile in Java at least (not sure about the others mentioned).
If you want a one-liner, you should rather create another method on the $database object such as $database->endTransaction($success) that does the if statement internally.
I would be more inclined to use it when the two actions are mutually exclusive and/or opposite (yet related to each other), for example:
$success ? go_up() : go_down();
For two unrelated actions I would be less inclined to use it, the reason being that there is a higher probability for one of the branches to need expanding in the future. If that's the case, you will again need to rewrite it as an if-else statement. Imagine that you have:
$success ? do_abc() : do_xyz();
If at some point you decide that the first branch needs to do_def() as well, you'll need to rewrite the whole thing to an if-else statement again.
The more frequent usage of the ternary operator, however, is:
$var = $success ? UP : DOWN;
This way you are evaluating it as an expression, not as a statement.
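For comparison, in an expression-oriented language this distinction disappears, because if/then/else is only ever an expression; a Haskell sketch (the Direction type is invented):

data Direction = Up | Down

-- the moral equivalent of $var = $success ? UP : DOWN;
direction :: Bool -> Direction
direction success = if success then Up else Down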
The real question is, "Is the ternary form more or less readable than the if form?". I'd say it isn't. But this is a question of style, not of function.

Can you programmatically detect pluralizations of English words, and derive the singular form?

Given some (English) word that we shall assume is a plural, is it possible to derive the singular form? I'd like to avoid lookup/dictionary tables if possible.
Some examples:
Examples -> Example: a simple 's' suffix
Glitches -> Glitch: 'es' suffix, as opposed to above
Countries -> Country: 'ies' suffix
Sheep -> Sheep: no change; a possible fallback for indeterminate cases
Or, this seems to be a fairly exhaustive list.
Suggestions of libraries in language x are fine, as long as they are open-source (ie, so that someone can examine them to determine how to do it in language y)
It really depends on what you mean by 'programmatically'. Part of English works on easy-to-understand rules, and part doesn't. It has to do mainly with frequency. For a brief overview, you can read Pinker's "Words and Rules", but do yourself a favor and don't take the whole generative theory of linguistics entirely to heart. There's a lot more empiricism there than that school of thought really lends to the pursuit.
A lot of English can be statistically lemmatized. By the way, stemming or lemmatization is the term you're looking for. One of the most effective lemmatizers which work off of statistical rules bootstrapped with frequency-based exceptions is the Morpha Lemmatizer. You can give this a shot if you have a project that requires this type of simplification of strings which represent specific terms in English.
There are even more naive approaches that accomplish much with respect to normalizing related terms. Take a look at the Porter Stemmer, which is effective enough to cluster together most terms in English.
Going from singular to plural, English plural form is actually pretty regular compared to some other European languages I have a passing familiarity with. In German, for example, working out the plural form is really complicated (e.g. Land -> Länder). I think there are roughly 20-30 exceptions and the rest follow a fairly simple ruleset (sketched in code below):
-y -> -ies (family -> families)
-us -> -i (cactus -> cacti)
-s -> -ses (loss -> losses)
otherwise add -s
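A minimal Haskell sketch of that ruleset (the irregulars and the 20-30 exceptions are deliberately ignored):

import Data.List (isSuffixOf)

pluralize :: String -> String
pluralize w
    | "y"  `isSuffixOf` w = init w ++ "ies"        -- family -> families
    | "us" `isSuffixOf` w = init (init w) ++ "i"   -- cactus -> cacti
    | "s"  `isSuffixOf` w = w ++ "es"              -- loss -> losses
    | otherwise           = w ++ "s"               -- dog -> dogs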
That being said, plural to singular form becomes that much harder because the reverse cases have ambiguities. For example:
pies: is the singular py or pie?
ski: is it a singular word, or the plural of 'skus'?
molasses: is it singular, or the plural of 'molasse' or 'molass'?
So it can be done but you're going to have a much larger list of exceptions and you're going to have to store a lot of false positives (ie things that appear plural but aren't).
Is "axes" the plural of "ax" or of "axis"? Even a human cannot tell without context.
You can take a look at Inflector.net - my port of Rails' inflection class.
No - English isn't a language which sticks to many rules.
I think your best bet is either:
use a dictionary of common words and their plurals (or group them by their plural rule, eg: group words where you just add an S, words where you add ES, words where you drop a Y and add IES...)
rethink your application
It is not possible, as nickf has already said. It would be simple for the classes of words you have described, but what about all the words that end with s naturally? My name, Marius, for example, is not a plural of Mariu. Same with Bus, I guess. Pluralization of words in English is a one-way function (like a hash function), and you usually need the rest of the sentence or paragraph for context.

Where is the best place to put the constant in a condition?

Where is the best place to put the constant in a condition? Left side or Right side?
I personally put it on the right side:
if($value > 23)
{
}
A lot of people will say the LHS because it prevents you doing subtle and damaging things like if (foo = KBAR) (note the lack of '==') but I always find that jarring for readability.
The right side. The left side is a tradition in C/C++ because people sometimes forget and use "=" instead of "==", and putting the constant on the left side causes a compilation error in this case.
It depends:
if (23 <= i and i <= 40)
but I would prefer the right side; it reads more naturally
Put the condition on the right side, since that's the "natural" place, and rely on your compiler to generate a warning if you accidentally use = instead of ==.
Does it really matter? It may help you if you keep a convention, but whether it is to keep the constants on one side, or to always use the <, <= operators and avoid the >, >=; that really depends on you.
It surely doesn't matter to the compiler/interpreter, and modern compilers should give a clear warning when you accidentally write "set to" (=) instead of "does it equal" (==), as pointed out in ocdecio's post.
The answer, as always, is "it depends". In most cases, it reads more naturally to put it on the right, as in the OP's example. In other cases, particularly compound statements that check to see if something is in a range (see Peter Miehle's example), it can go either way.

I think you should use whichever makes the statement clearer to any future programmers who happen across your code. If there is no clear difference in readability, I recommend defaulting to putting it on the right, since that is what most people expect (principle of least surprise).

As many have mentioned already, any decent compiler nowadays will warn you if you attempt to perform an assignment inside an if statement (you can usually silence this warning by putting an extra set of parentheses around the assignment). Also, it has been mentioned that some JIT or interpreted languages might make it hard to find this problem without the constant-on-the-left trick, but IIRC, many of them will also emit a warning in this case, so if you run them with warnings treated as errors, it will help you catch that problem.
I prefer left side, as it prevents accidental assignments like so:
// when using = instead of == this can result in accidental assignment
if ($value == null) {}
// $value cannot be accidentally assigned this way
if (null === $value) {}
NOTE: from reading the other answers I understand that when using compiled languages, this could get you in trouble. I still prefer using this style, as my main language is PHP. For compiled languages, please refer to the answers others have already given.
Always use < (and <=), never use > (or >=), and avoid languages which can't distinguish between assignment and equality.
First part of the rule means that numbers in your conditions occur in their usual order, smallest on the left, largest on the right. This is a great help when your conditions contain multiple terms (eg 3<x && x<=14)
Second part of the rule means letting a compiler sweat about things a compiler is good at (such as spotting how many ===== signs you've typed).
And I make these assertions forcefully and positively sure and certain in the knowledge that this is only my opinion and that there isn't a right or wrong answer.
Regards

Is Switch (Case) always wrong?

Are there instances where switch/case is a good design choice (other than for simplicity) over strategy or similar patterns?
Use switches when you're testing on values of primitives (i.e. integers or characters).
Use polymorphism when you are choosing between different types.
Examples :
Testing whether a character the user has entered is one of 'a', 'b' or 'c' is a job for a switch.
Testing whether the object you're dealing with is a Dog or Cat is a job for polymorphic dispatch.
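For illustration, here is a minimal Haskell sketch of both cases (the types and values are invented):

-- Switching on a primitive value: a plain case expression is fine.
describeKey :: Char -> String
describeKey c = case c of
    'a' -> "first option"
    'b' -> "second option"
    'c' -> "third option"
    _   -> "something else"

-- Choosing behaviour by type: let dispatch do the work instead.
class Animal a where
    speak :: a -> String

data Dog = Dog
data Cat = Cat

instance Animal Dog where speak _ = "Woof"
instance Animal Cat where speak _ = "Meow"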
In many languages, if you have more complicated values you may not be able to use Switch anyway.
First of all, simplicity often is a good design choice.
I never understood this bias against switch/case. Yes, it can be abused, but then, so can just about every other programming construct.
Switching on a type is usually wrong and probably should be replaced by polymorphism. Switching on other things is usually OK.
For one, readability.
Yes, definitely. Many times your switch is only relevant to a very small part of your overall logic and it would be a mistake to create whole new classes just for this minor effect.
For example, let's say you have a database of words, the user inputs another word, and you want to find that word in the database but also include possible plurals. You might write something like this (C++):
vector<string> possible_forms;
possible_forms.push_back(word);          // the word as entered

char last_letter = word[word.size() - 1];
switch (last_letter) {
    case 's':
    case 'i':
    case 'z':
        possible_forms.push_back(word + "es");
        break;
    case 'y':
        // drop the 'y' and add "ies": country -> countries
        possible_forms.push_back(word.substr(0, word.size() - 1) + "ies");
        break;
    default:
        possible_forms.push_back(word + "s");
}
Doing this with strategies would be overkill.
It's usually OK, as long as you only have one switch in one place. When you have more than one (or many), then it's time to consider alternatives.
The "strategies" could be created with a switch.
That could be the starting point, and from there let polymorphism do the job.
Another case that comes to mind is a need for extra speed at the cost of flexibility. There are such cases.
No, the switch statement is probably only a good design choice in simple situations.
Once you are past a simple situation, switch statements become very painful to keep updating and maintaining. This is part of the reason design patterns came about.
My view is that switch is always wrong:
The body of a case is code and is behaviour,
therefore, the thing in the case (the 'value') has a behavioural type,
therefore, polymorphism would be a better choice.
This implies that values are in fact types, e.g. the number 1 is a type of everything that equals 1 in some way. All that remains is for us to map 1-ness to the behaviour for our specific case, and we have polymorphism with all those other types (a Good Thing).
This is easier done in some languages than others, unfortunately, most languages in common use are pretty awful, so the path of least resistance is the wrong one, and people end up writing switches or if statements (same thing).