"If you can press a button to get $1M and a random person dies somewhere in the world would you press the button?"
A = press button
B = get $1M
C = random person dies
Here is what I think it should be:
If A, then B AND C
According to the original statement, is it:
(If A, then B) AND C
or
If A, then (B AND C)
You've correctly identified the three propositional variables:
P1(x): "x presses a button."
P2(x): "x receives one million dollars."
P3(x): "x causes the death of a random person."
You want to express the sentence Q: "if someone presses the button, then they receive a million dollars and a person dies." At first glance, it seems like P1(x) ⇒ P2(x) ∧ P3(x) correctly expresses this. How can we be sure? Let's draw a truth table:
P1   P2   P3   P2 ∧ P3   P1 ⇒ (P2 ∧ P3)
---- ---- ---- --------- ----------------
T    T    T    T         T
T    T    F    F         F
T    F    T    F         F
T    F    F    F         F
F    T    T    T         T
F    T    F    F         T
F    F    T    F         T
F    F    F    F         T
Notice that "you receive a million dollars and cause a death" is true only when both of the constituent parts are true. This makes sense; if both parts don't come true, the whole is not also true.
Notice also the truth values for the entire statement Q: it's false whenever the second part is false and the first part is true. This makes sense: if you press the button but either (1) the million dollars doesn't appear or (2) nobody dies, the prediction implied by Q is not true. So our assertion is correct.
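If you want to check this mechanically, here is a minimal Haskell sketch (my own; --> is a helper I define here, not a standard operator) that regenerates the table above:

-- material implication on Bool
(-->) :: Bool -> Bool -> Bool
p --> q = not p || q

-- all eight rows of (P1, P2, P3, P1 --> (P2 && P3))
table :: [(Bool, Bool, Bool, Bool)]
table = [ (p1, p2, p3, p1 --> (p2 && p3))
        | p1 <- [True, False], p2 <- [True, False], p3 <- [True, False] ]

main :: IO ()
main = mapM_ print table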
Think about it. Draw up a truth table for each option.
HINT: If you don't push the button, would the random person die?
When no grouping is indicated, operator precedence decides: in standard logical notation, AND binds more tightly than IF-THEN, so the statement reads as "If A, then (B AND C)". Therefore, if you press the button, you will receive $1M and a random person will die.
I am stuck finding the string for the pumping lemma. Is there any way to prove that
L = {a^n b^m | n >= m} is not a regular language?
The pumping lemma states this:
If L is a regular language, then there exists a natural number p such that any string w in L of length at least p can be written as w = uvx, where |uv| <= p, |v| > 0, and for all natural numbers n, u(v^n)x is also in the language.
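In symbols (my restatement of the same statement):

$$L \text{ regular} \;\Rightarrow\; \exists p\ \forall w \in L:\ |w| \ge p \;\Rightarrow\; \exists u, v, x\ \big[\, w = uvx \,\wedge\, |uv| \le p \,\wedge\, |v| > 0 \,\wedge\, \forall n \ge 0:\ u v^n x \in L \,\big]$$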
To prove a language is not regular using the pumping lemma, we need to design a string w such that the rest of the statement fails: that is, there are no valid assignments of u, v and x.
Our language L requires the number of a's to be at least the number of b's (n >= m). A short string in L of length at least p with equally many a's and b's is a^(p/2) b^(p/2) (take p even for simplicity). We could guess this as our string. If we do, we have a few cases:
v is entirely made of a's. Pumping up only adds a's, which keeps the string in the language (extra a's are allowed since n >= m), so we pump down instead: u(v^0)x removes at least one a, leaving fewer a's than b's, so the resulting string is not in the language; a contradiction.
v spans a's and b's. Then pumping up to u(v^2)x mixes a's and b's in the middle, whereas our language requires all the a's to come first. This is also a contradiction.
v is entirely made of b's. Then pumping up adds b's, leaving more b's than a's, so again the resulting string is not in the language; another contradiction.
In all cases, this choice of w led to a contradiction. That means the guess worked.
There was a simpler choice for w here: choose w = a^p b^p; then the condition |uv| <= p forces v to consist only of a's, so there is only one case. But our choice worked out fine. If our choice had not worked out, we could have learned from that choice what went wrong and chosen a different candidate.
To elaborate on case (1): since n >= m allows more a's than b's, pumping up a's never takes the string out of the language, so that case only works if we pump down. I probably bombed a midterm yesterday due to this question, but found that the answer is actually in the pumping part.
The solution is that we can pump down as well as up. The pumping lemma for regular languages says that for all i >= 0, x(y^i)z is in L.
CASE 1: y = only a's
So with L = {a^n b^m | n >= m} and w = a^p b^p, if y is some number of a's, we can write:
x = a^(p-l)
y = a^l
z = b^p
Now if we pump down to x(y^0)z, there are fewer a's than b's, so the resulting string is not in L.
The next two cases are actually ruled out by the condition |xy| <= p (y would have to begin beyond the first p symbols), but they are easy to prove, so I'll add them regardless.
CASE 2: y = only b's
x = a^p
y = b^l
z = b^(p-l)
Pumping up to x(y^2)z leaves more b's than a's, so the result is not a word in L.
CASE 3: y = a's and b's
x = a^(p-l)
y = (a^l)(b^k)
z = b^(p-k)
Pumping to x(y^2)z gives a^(p-l) (a^l)(b^k)(a^l)(b^k) b^(p-k), which has a's after b's and so is not in L.
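As a sanity check (not a proof), here is a small Haskell sketch (entirely my own) of the membership test and of pumping y = a^l inside w = a^p b^p:

-- membership in L = { a^n b^m | n >= m }
inL :: String -> Bool
inL s = null rest' && length as >= length bs
  where
    (as, rest)  = span (== 'a') s
    (bs, rest') = span (== 'b') rest

-- w = a^p b^p split as x = a^(p-l), y = a^l, z = b^p, with y pumped i times
pumped :: Int -> Int -> Int -> String
pumped p l i = replicate (p - l) 'a' ++ replicate (l * i) 'a' ++ replicate p 'b'

-- ghci> inL (pumped 4 2 1)   -- True  (the original a^4 b^4)
-- ghci> inL (pumped 4 2 2)   -- True  (pumping up keeps n >= m!)
-- ghci> inL (pumped 4 2 0)   -- False (pumping down breaks it)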
I have checked out many related posts, as suggested when creating this question, and I have also worked through sample problems from online sources, including a similar one. However, I am stuck on the problem below specifically.
Given the following relation R and the set of functional dependencies S that hold on R, find all candidate keys for R. Show your work.
R(A, B, C, D, E, F)
S:
AB → C
AC → B
AD → E
BC → A
E → F
Initially, I broke the attributes into groups: attributes found only on the left, only on the right, and on both sides (they are D, F, and ABCE respectively). I also know that I should try to compute the closure of D. This is where I get stuck; at first glance it seems like the problem can't be solved, which can't be right. I also tried computing the closures of (AD), (BD), (CD), and (ED), because the closure of D is just D. Any thoughts?
The keys here are ABD, ACD and BCD.
You were on the right track. After dividing the attributes into three groups, the attributes in the "only on the left" group are always part of every key (here, that attribute is D), while the attributes in the "only on the right" group (here, F) are never part of any key.
"I also tried computing the closures of (AD), (BD), (CD), and (ED)"
As you couldn't find a key taking attributes in groups of two, the next step is to try groups of three (each containing D) and check their closures.
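A quick way to check the work is to compute closures mechanically. Here is a small Haskell sketch (my own) for this particular R and S:

import Data.List (nub, sort)

-- functional dependencies as (lhs, rhs), attributes as characters
fds :: [(String, String)]
fds = [("AB","C"), ("AC","B"), ("AD","E"), ("BC","A"), ("E","F")]

-- attribute closure: keep adding right-hand sides whose left-hand
-- sides are already contained in the set, until a fixed point
closure :: [(String, String)] -> String -> String
closure s attrs
  | attrs' == attrs = attrs
  | otherwise       = closure s attrs'
  where
    attrs' = sort . nub $ attrs ++ concat [ r | (l, r) <- s, all (`elem` attrs) l ]

-- ghci> closure fds "D"     -- "D"      (D alone gets you nowhere)
-- ghci> closure fds "AD"    -- "ADEF"   (still missing B and C)
-- ghci> closure fds "ABD"   -- "ABCDEF" (all of R, so ABD is a key)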
Suppose x is a bitmask value, and b is one flag, e.g.
x = 0b10101101
b = 0b00000100
There seem to be two ways to check whether the bit indicated by b is turned on in x:
if ((x & b) != 0) // (1)
if ((x & b) == b) // (2)
In most circumstances it seems these two checks always yield the same result, given that b always has exactly one bit set.
However, I wonder: is there any case that makes one method better than the other?
In general, if we interpret both values as bit sets, the first condition checks if the intersection of x and b is not empty (or, to put it differently: if b and x have elements in common), while the second one checks if b is a subset of x.
Clearly, if b is a singleton, b is a subset of x if and only if the intersection is not empty.
So, whenever you cannot guarantee to 100% that b is a singleton, choose your condition wisely. Ask yourself if you want to express that all elements of b must also be elements of x, or that there are elements of b that are also elements of x. It's a huge difference except for the single bit case.
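To make the difference concrete, here is a small Haskell sketch (my own, using Data.Bits) with a deliberately multi-bit mask:

{-# LANGUAGE BinaryLiterals #-}
import Data.Bits ((.&.))
import Data.Word (Word8)

x, b :: Word8
x = 0b10101101
b = 0b00000110   -- two bits set; x contains only one of them (bit 2)

check1, check2 :: Bool
check1 = x .&. b /= 0   -- (1) "any common bit": True
check2 = x .&. b == b   -- (2) "b is a subset of x": False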
Suppose I have a tree X
     a
    / \
   b   c
  / \ / \
 d  e f  g
and I want to add a long subtree Y to X
a
|
b
|
e
|
u
so X+Y would look like this.
     a
    / \
   b   c
  / \ / \
 d  e f  g
    |
    u
How would one go about implementing such a tree concatenation?
What you're describing sounds to me like inserting a word into a trie. If that's what you're trying to do, start at the root of the trie and the beginning of the word, then process each character c: if there is no edge labeled c from the current node, create a new node and add such an edge; in either case, follow the edge labeled c and move on to the next character.
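If it helps, here is a minimal Haskell sketch of that procedure (the names and the Data.Map representation are my own, not anything from your description):

import qualified Data.Map as Map

-- a node is just its outgoing edges, keyed by label
newtype Trie = Trie (Map.Map Char Trie) deriving Show

empty :: Trie
empty = Trie Map.empty

insert :: String -> Trie -> Trie
insert []     t        = t
insert (c:cs) (Trie m) = Trie (Map.insert c (insert cs child) m)
  where
    -- follow the edge labeled c if it exists, otherwise create a fresh node
    child = Map.findWithDefault empty c m

Inserting "abeu" into a trie that already contains "abd", "abe", "acf" and "acg" reuses the existing a, b and e nodes and creates only the new u node, which is exactly the X+Y picture above.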
What are the uses of the id function in Haskell?
It's useful as an argument to higher-order functions (functions which take functions as arguments) where you want some particular value left unchanged.
Example 1: Leave a value alone if it is in a Just, otherwise, return a default of 7.
Prelude Data.Maybe> :t maybe
maybe :: b -> (a -> b) -> Maybe a -> b
Prelude Data.Maybe> maybe 7 id (Just 2)
2
Example 2: building up a function via a fold:
Prelude Data.Maybe> :t foldr (.) id [(+2), (*7)]
:: (Num a) => a -> a
Prelude Data.Maybe> let f = foldr (.) id [(+2), (*7)]
Prelude Data.Maybe> f 7
51
We built a new function f by folding a list of functions together with (.), using id as the base case: f = (+2) . (*7), so f 7 = 7 * 7 + 2 = 51.
Example 3: the base case for functions as monoids (simplified).
instance Monoid (a -> a) where
    mempty = id
    f `mappend` g = f . g
Similar to our example with fold, functions can be treated as concatenable values, with id serving for the empty case, and (.) as append.
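For the record, the actual instance in Data.Monoid lives on the Endo newtype rather than on a -> a directly; a quick sketch of using it:

import Data.Monoid (Endo(..))

steps :: Endo Int
steps = Endo (* 2) <> Endo (+ 1)   -- (<>) on Endo is function composition

-- ghci> appEndo steps 5
-- 12   -- i.e. (5 + 1) * 2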
Example 4: a trivial hash function.
Data.HashTable> h <- new (==) id :: IO (HashTable Data.Int.Int32 Int)
Data.HashTable> insert h 7 2
Data.HashTable> Data.HashTable.lookup h 7
Just 2
Hashtables require a hashing function. But what if your key is already hashed? Then pass id as your hashing function; it fills in as the hashing method with zero performance overhead.
If you manipulate numbers, particularly with addition and multiplication, you'll have noticed the usefulness of 0 and 1. Likewise, if you manipulate lists, the empty list turns out to be quite handy. Similarly, if you manipulate functions (very common in functional programming), you'll come to notice the same sort of usefulness of id.
In functional languages, functions are first-class values that you can pass as parameters. So one of the most common uses of id comes up when you pass a function as a parameter to another function to tell it what to do. One of the choices of what to do is likely to be "just leave it alone": in that case, you pass id as the parameter.
Suppose you're searching for some kind of solution to a puzzle where you make a move at each turn. You start with a candidate position pos. At each stage there is a list of possible transformations you could make to pos (e.g. sliding a piece in the puzzle). In a functional language it's natural to represent transformations as functions, so now you can make a list of moves using a list of functions. If "doing nothing" is a legal move in this puzzle, then you would represent that with id. If you didn't do that, you'd need to handle "doing nothing" as a special case that works differently from "doing something". By using id you can handle all cases uniformly in a single list.
This is probably the reason almost all uses of id exist: to handle "doing nothing" uniformly with "doing something".
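A toy sketch of that idea (everything here is made up for illustration): positions are plain Ints, moves are functions, and id is the "pass" move:

type Pos = Int

-- the legal moves, with "do nothing" included as id rather than a special case
moves :: [Pos -> Pos]
moves = [(+ 1), subtract 1, id]

successors :: Pos -> [Pos]
successors pos = [ m pos | m <- moves ]

-- ghci> successors 5
-- [6,4,5]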
For a different sort of answer:
I'll often do this when chaining multiple functions via composition:
foo = id
    . bar
    . baz
    . etc
over
foo = bar
    . baz
    . etc
It keeps things easier to edit. One can do similar things with other 'zero' elements, such as
foo x = return x
    >>= bar
    >>= baz

foos = []
    ++ bars
    ++ bazs
Since we are finding nice applications of id: here, have a palindrome :)
import Control.Applicative
pal :: [a] -> [a]
pal = (++) <$> id <*> reverse
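-- for instance: pal "abc" == "abccba"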
Imagine you are a computer, i.e. you can execute a sequence of steps. If I want you to stay in your current state, but I always have to give you an instruction (I cannot simply stay silent and let time pass), what instruction do I give you? id is the function created for exactly that: it returns its argument unchanged (in the case of the computer, the argument would be its state) and gives that behaviour a name. The need for it appears only once you have higher-order functions, when you operate on functions without considering what's inside them; that forces you to represent even the "do nothing" implementation symbolically. Analogously, 0, seen as a quantity of something, is a symbol for the absence of quantity. In algebra, both 0 and id are the neutral elements of the operations + and ∘ (function composition) respectively, or more formally:
for all x of type number:
    0 + x = x
    x + 0 = x

for all f of type function:
    id ∘ f = f
    f ∘ id = f
I can also help improve your golf score. Instead of using ($), you can save a single character by using id. For example:
zipWith id [(+1), succ] [2,3,4]
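-- evaluates to [3,4]; here id plays the role of ($)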
An interesting, if not hugely useful, result.
Whenever you need to have a function somewhere but want it to do more than just hold its place (as 'undefined' would).
It's also useful, as (soon-to-be) Dr. Stewart mentioned above, for when you need to pass a function as an argument to another function:
join = (>>= id)
or as the result of a function:
let f = id in f 10
(presumably, you will edit the above function later to do something more "interesting"... ;)
As others have mentioned, id is a wonderful place-holder for when you need a function somewhere.