Checking a bitmask: x & b != 0 vs. x & b == b

Suppose x is a bitmask value, and b is one flag, e.g.
x = 0b10101101
b = 0b00000100
There seem to be two ways to check whether the bit indicated by b is turned on in x:
if (x & b != 0) // (1)
if (x & b == b) // (2)
In most circumstances these two checks seem to yield the same result, given that b always has exactly one bit turned on.
However, I wonder: is there any case in which one method is better than the other?

In general, if we interpret both values as bit sets, the first condition checks whether the intersection of x and b is non-empty (or, to put it differently: whether b and x have elements in common), while the second one checks whether b is a subset of x.
Clearly, if b is a singleton, b is a subset of x if and only if the intersection is non-empty.
So, whenever you cannot guarantee 100% that b is a singleton, choose your condition wisely. Ask yourself whether you want to express that all elements of b must also be elements of x, or that some elements of b are also elements of x. It's a huge difference except in the single-bit case.
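To make the difference concrete, here is a minimal Python sketch (the mask values are illustrative) of the two checks with a multi-bit b. As an aside, in C-family languages == and != bind tighter than &, so the conditions in the question need parentheses to mean what is intended: (x & b) != 0 and (x & b) == b.
x = 0b10101101
b = 0b00000110          # two bits: bit 2 is set in x, bit 1 is not

print((x & b) != 0)     # True  -- x and b share at least one bit (non-empty intersection)
print((x & b) == b)     # False -- not every bit of b is set in x (b is not a subset)

b = 0b00000100          # single-bit mask: both checks agree
print((x & b) != 0)     # True
print((x & b) == b)     # True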

Related

Proof that L = {a^n b^m | n >= m} is not a regular language

I am stuck on finding the string S for the pumping lemma. Any idea how to prove that
L = {a^n b^m | n >= m} is not a regular language?
The pumping lemma states this:
If L is a regular language, then there exists a natural number p such that any string w in L of length at least p can be written as w = uvx, where |uv| <= p, |v| > 0, and for all natural numbers n, u(v^n)x is also in the language.
To prove a language is not regular using the pumping lemma, we need to find a string w for which the rest of the statement fails: that is, there is no valid decomposition into u, v and x.
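Written out formally (my transcription of the statement above, not part of the original answer), the lemma reads:

$L \text{ regular} \implies \exists p \;\; \forall w \in L, |w| \ge p \;\; \exists u, v, x : \; w = uvx \,\wedge\, |uv| \le p \,\wedge\, |v| > 0 \,\wedge\, \forall n \ge 0 : u v^n x \in L$

So to refute regularity we exhibit a single w in L with |w| >= p such that every decomposition w = uvx satisfying |uv| <= p and |v| > 0 fails for some n, i.e. u(v^n)x is not in L.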
Our language L requires the number of a's to be the same as the number of b's. The shortest string that satisfies the hypothesis that the string w has length at least p is a^(p/2) b^(p/2). We could guess this as our string. If we do, we have a few cases:
v is entirely made of a's. But then, pumping is going to result in a different number of a's and b's, so the resulting string is not in the language; a contradiction.
v spans a's and b's. But then, pumping is going to cause a's and b's to be mixed up in the middle, whereas our language requires all the a's to come first. This is also a contradiction.
v is entirely made of b's. But then, we have the same contradiction as in case #1.
In all cases, this choice of w led to a contradiction. That means the guess worked.
There was a simpler choice for w here: choose w = a^p b^p, then there is only one case. But our choice worked out fine. If our choice had not worked out, we could have learned from that choice what went wrong and chosen a different candidate.
Regarding the previous answer: its case (1) doesn't make sense as stated, since we can have more a's than b's (n >= m). I probably bombed a midterm yesterday due to this question, but found that the answer is actually in the pumping part.
The solution is that we can pump down as well as up. The pumping lemma for regular languages says that if w = xyz, then x(y^i)z is in the language for all i >= 0, including i = 0.
CASE 1: y = only a's
So, using w = a^p b^p, if y consists only of a's then we can write:
x = a^(p-l)
y = a^l
z = b^p
Now if we use y^0, there will be fewer a's than b's.
The next two cases are actually ruled out by the condition |xy| <= p, but I'll add them regardless.
CASE 2: y = only b's
x = a^p
y = b^l
z = b^(p-l)
Pumping up to x(y^2)z leaves more b's than a's, so the result is not a word in L.
CASE 3: y = a's and b's
x = a^(p-l)
y = (a^l)(b^k)
z = b^(p-k)
Pumping x(y^2)z gives a^(p-l) [(a^l)(b^k)(a^l)(b^k)] b^(p-k), which has a's appearing after b's and is therefore not in L.
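As a sanity check, here is a small Python sketch that runs Case 1 on a concrete string (the value p = 4 and the helper in_L are my own illustrative choices, not part of the original answer):
def in_L(w):
    # membership in L = {a^n b^m | n >= m}: all a's first, then b's, with n >= m
    n = len(w) - len(w.lstrip('a'))   # number of leading a's
    m = len(w) - n                    # length of the remainder
    return set(w[n:]) <= {'b'} and n >= m

p = 4
x, y, z = 'a' * (p - 1), 'a', 'b' * p     # Case 1 with l = 1

print(in_L(x + y + z))       # True  -- w = a^4 b^4 is in L
print(in_L(x + z))           # False -- pumping down (y^0) gives a^3 b^4: fewer a's than b's
print(in_L(x + y * 2 + z))   # True  -- pumping up only adds a's, so i = 0 is the case that fails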

Keep getting the error message "Arguments are not sufficiently instantiated", can't understand why

I keep getting the error Arguments are not sufficiently instantiated for the multiplication-by-addition rule I wrote, shown below.
mult(_, 0, 0). %base case for multiplying by 0
mult(X, 1, X). % another base case
mult(X, Y, Z) :-
    Y > 1,
    Y1 is Y - 1,
    mult(X, Y1, Z1),
    Z is X + Z1.
I am new to Prolog and really struggling with even such simple problems.
Any recommendations for books or online tutorials would be great.
I am running it on SWI-Prolog on Ubuntu Linux.
In your definition of mult/3 the first two arguments have to be known. If one of them is still a variable, an instantiation error will occur. E.g. mult(2, X, 6) will yield an instantiation error, although X = 3 is a correct answer; in fact, the only answer.
You have several options:
successor arithmetic, constraints, or meta-logical predicates.
Here is a starting point with successor arithmetics:
add(0,Y,Y).
add(s(X),Y,s(Z)) :- add(X,Y,Z).
Another approach would be to use constraints over the integers. YAP and SWI have a library(clpfd) that can be used in a very flexible manner: Both for regular integer computations and the more general constraints. Of course, multiplication is already predefined:
?- A * B #= C.
A*B#=C.
?- A * B #= C, C = 6.
C = 6, A in -6.. -1\/1..6, A*B#=6, B in -6.. -1\/1..6.
?- A * B #= C, C = 6, A = 2.
A = 2, B = 3, C = 6.
Meta-logical predicates: I cannot recommend this option in which you would use var/1, nonvar/1, ground/1 to distinguish various cases and handle them differently. This is so error prone that I have rarely seen a correct program using them. In fact, even very well known textbooks contain serious errors!
I think you got the last two calls reversed. Don't you mean:
mult(X,Y,Z):- Y>1,Y1 is Y-1, Z1 is X+Z, mult(X,Y1,Z1).
Edit: never mind that; I looked at the code again and it doesn't make sense. I believe your original code is correct.
As for why that error is occurring, I need to know how you're calling the predicate. Can you give an example input?
The correct way of calling your predicate is mult(+X, +Y, ?Z):
?- mult(5,0,X).
X = 0
?- mult(5,1,X).
X = 5
?- mult(5,5,X).
X = 25
?- mult(4,4,16).
yes
?- mult(3,3,10).
no
etc. Calling it with a free variable in either of the first two arguments will produce that error, because that variable will end up on the right side of an is or on either side of the <, and those predicates need fully instantiated arithmetic arguments to succeed.

What is the difference between Set ( = ) and SetDelayed ( := )?

This discussion came up in a previous question and I'm interested in knowing the difference between the two. Illustration with an example would be nice.
Basic Example
Here is an example from Leonid Shifrin's book "Mathematica Programming: An Advanced Introduction", which is an excellent resource for this kind of question.
ClearAll[a, b]
a = RandomInteger[{1, 10}];
b := RandomInteger[{1, 10}]
Table[a, {5}]
{4, 4, 4, 4, 4}
Table[b, {5}]
{10, 5, 2, 1, 3}
Complicated Example
The example above may give the impression that once a definition for a symbol is created using Set, its value is fixed, and does not change. This is not so.
f = ... assigns to f the expression on the right-hand side as it evaluates at the time of assignment. If symbols remain in that evaluated expression and their values later change, so does the apparent value of f.
ClearAll[f, x]
f = 2 x;
f
2 x
x = 7;
f
14
x = 3;
f
6
It is useful to keep in mind how the rules are stored internally. For symbols assigned a value as symbol = expression, the rules are stored in OwnValues. Usually (but not always), OwnValues contains just one rule. In this particular case,
In[84]:= OwnValues[f]
Out[84]= {HoldPattern[f] :> 2 x}
The important part for us now is the r.h.s., which contains x as a symbol. What really matters for evaluation is this form - the way the rules are stored internally. As long as x did not have a value at the moment of assignment, both Set and SetDelayed produce (create) the same rule above in the global rule base, and that is all that matters. They are, therefore, equivalent in this context.
The end result is a symbol f that has function-like behavior, since its computed value depends on the current value of x. This is not a true function, however, since it does not have any parameters and merely reacts to changes of the symbol x. Generally, the use of such constructs should be discouraged, since implicit dependencies on global symbols (variables) are just as bad in Mathematica as they are in other languages - they make the code harder to understand and bugs subtler and easier to overlook. Somewhat related discussion can be found here.
Set used for functions
Set can be used for functions, and sometimes it needs to be. Let me give you an example. Here Mathematica symbolically solves the Sum and then assigns the result to aF[x], which is then used for the plot.
ClearAll[aF, x]
aF[x_] = Sum[x^n Fibonacci[n], {n, 1, \[Infinity]}];
DiscretePlot[aF[x], {x, 1, 50}]
If on the other hand you try to use SetDelayed then you pass each value to be plotted to the Sum function. Not only will this be much slower, but at least on Mathematica 7, it fails entirely.
ClearAll[aF, x]
aF[x_] := Sum[x^n Fibonacci[n], {n, 1, \[Infinity]}];
DiscretePlot[aF[x], {x, 1, 50}]
If one wants to make sure that possible global values for formal parameters (x here) do not interfere and are ignored during the process of defining a new function, an alternative to Clear is to wrap Block around the definition:
ClearAll[aF, x];
x = 1;
Block[{x}, aF[x_] = Sum[x^n Fibonacci[n], {n, 1, \[Infinity]}]];
A look at the function's definition confirms that we get what we wanted:
?aF
Global`aF
aF[x_]=-(x/(-1+x+x^2))
In[1]:= Attributes[Set]
Out[1]= {HoldFirst, Protected, SequenceHold}
In[2]:= Attributes[SetDelayed]
Out[2]= {HoldAll, Protected, SequenceHold}
As you can see from their attributes, both functions hold their first argument (the symbol to which you are assigning), but they differ in that SetDelayed also holds its second argument, while Set does not. This means that Set evaluates the expression to the right of = at the time the assignment is made. SetDelayed does not evaluate the expression to the right of := until the symbol is actually used, and it re-evaluates it on every use.
What's happening is clearer if the right-hand side of the assignment has a side effect (e.g. Print[]):
In[3]:= x = (Print["right hand side of Set"]; 3)
x
x
x
During evaluation of In[3]:= right hand side of Set
Out[3]= 3
Out[4]= 3
Out[5]= 3
Out[6]= 3
In[7]:= x := (Print["right hand side of SetDelayed"]; 3)
x
x
x
During evaluation of In[7]:= right hand side of SetDelayed
Out[8]= 3
During evaluation of In[7]:= right hand side of SetDelayed
Out[9]= 3
During evaluation of In[7]:= right hand side of SetDelayed
Out[10]= 3
:= is for defining functions and = is for setting a value, basically.
I.e. := will evaluate when it is read; = will be evaluated when it is set.
think about:
x = 2
y = x
z := x
x = 4
Now, z is 4 if evaluated while y is still 2

Normal vector from least squares-derived plane

I have a set of points and I can derive a least squares solution in the form:
z = Ax + By + C
The coefficients I compute are correct, but how would I get the vector normal to the plane from an equation of this form? Simply using the A, B and C coefficients from this equation does not seem to give a correct normal vector for my test dataset.
Following on from dmckee's answer:
a x b = (a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1)
In your case a1 = 1, a2 = 0, a3 = A and b1 = 0, b2 = 1, b3 = B,
so a x b = (-A, -B, 1).
Form the two vectors
v1 = <1 0 A>
v2 = <0 1 B>
both of which lie in the plane and take the cross-product:
N = v1 x v2 = <-A, -B, +1> (or v2 x v1 = <A, B, -1> )
It works because the cross-product of two vectors is always perpendicular to both of the inputs, so using two (non-collinear) vectors in the plane gives you a normal.
NB: You probably want a normalized normal, of course, but I'll leave that as an exercise.
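A quick numerical check of the above (a hypothetical sketch using NumPy; the coefficient values are made up):
import numpy as np

A, B, C = 2.0, -3.0, 5.0      # illustrative coefficients of z = A*x + B*y + C

v1 = np.array([1.0, 0.0, A])  # in-plane direction of increasing x
v2 = np.array([0.0, 1.0, B])  # in-plane direction of increasing y

n = np.cross(v1, v2)          # -> [-A, -B, 1]
n_hat = n / np.linalg.norm(n) # normalized normal

print(n)                             # [-2.  3.  1.]
print(np.dot(n, v1), np.dot(n, v2))  # 0.0 0.0 -- perpendicular to both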
A little extra color on the dmckee answer. I'd comment directly, but I do not have enough SO rep yet. ;-(
The plane z = Ax + By + C only contains the points (1, 0, A) and (0, 1, B) when C = 0. So we would really be talking about the plane z = Ax + By. Which is fine, of course: this second plane is parallel to the original one, being the unique vertical translation of it that contains the origin. The orthogonal vector we wish to compute is invariant under translations like this, so no harm done.
Granted, dmckee's phrasing is that his specified "vectors" lie in the plane, not the points, so he's arguably covered. But it strikes me as helpful to explicitly acknowledge the implied translation.
Boy, it's been a while for me on this stuff, too.
Pedantically yours... ;-)

How do you calculate the total number of all possible unique subsets from a set with repeats?

Given a set** S containing duplicate elements, how can one determine the total number of all possible subsets of S, where each subset is unique?
For example, say S = {A, B, B} and let K be the set of all subsets, then K = {{}, {A}, {B}, {A, B}, {B, B}, {A, B, B}} and therefore |K| = 6.
Another example: if S = {A, A, B, B}, then K = {{}, {A}, {B}, {A, B}, {A, A}, {B, B}, {A, B, B}, {A, A, B}, {A, A, B, B}} and therefore |K| = 9.
It is easy to see that if S is a real set, having only unique elements, then |K| = 2^|S|.
What is a formula to calculate this value |K| given a "set" S (with duplicates), without generating all the subsets?
** Not technically a set.
Take the product of all the (frequencies + 1).
For example, in {A,B,B}, the answer is (1+1) [the number of As] * (2+1) [the number of Bs] = 6.
In the second example, count(A) = 2 and count(B) = 2. Thus the answer is (2+1) * (2+1) = 9.
The reason this works is that you can define any subset as a vector of counts. For {A,B,B}, the subsets can be described as {A=0,B=0}, {A=0,B=1}, {A=0,B=2}, {A=1,B=0}, {A=1,B=1}, {A=1,B=2}.
For each entry in the count vector there are (frequency of that element + 1) possible values (0..frequency).
Therefore, the total number of possibilities is the product of all the (frequency + 1) terms.
The "all unique" case can also be explained this way: there is one occurrence of each object, so the answer is (1+1)^|S| = 2^|S|.
I'll argue that this problem is simple to solve, when viewed in the proper way. You don't care about the order of the elements, only whether they appear in a subset or not.
Count the number of times each element appears in the set. For the one element set {A}, how many subsets are there? Clearly there are only two sets. Now suppose we added another element, B, that is distinct from A, to form the set {A,B}. We can form the list of all sets very easily. Take all the sets that we formed using only A, and add in zero or one copy of B. In effect, we double the number of sets. Clearly we can use induction to show that for N distinct elements, the total number of sets is just 2^N.
What if some elements appear multiple times? Consider the set with three copies of A, i.e. {A,A,A}. How many subsets can you form? Again, this is simple. We can have 0, 1, 2, or 3 copies of A, so the total number of subsets is 4, since order does not matter.
In general, for N copies of the element A, we will end up with N+1 possible subsets. Now, expand this by adding in some number, M, of copies of B. So we have N copies of A and M copies of B. How many total subsets are there? Yes, this seems clear too. To every possible subset with only A in it (there were N+1 of them) we can add between 0 and M copies of B.
So the total number of subsets when we have N copies of A and M copies of B is simple. It must be (N+1)*(M+1). Again, we can use an inductive argument to show that the total number of subsets is the product of such terms. Merely count up the total number of replicates for each distinct element, add 1, and take the product.
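In symbols (my notation, not the answer's): if the distinct elements of S occur with multiplicities $n_1, \dots, n_k$, then

$|K| = \prod_{i=1}^{k} (n_i + 1)$

and the all-distinct case $n_1 = \dots = n_k = 1$ recovers $|K| = 2^k$.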
See what happens with the set {A,B,B}. We get 2*3 = 6.
For the set {A,A,B,B}, we get 3*3 = 9.