Suppose I have a tree X:

      a
    /   \
   b     c
  / \   / \
 d   e f   g

and I want to add a long subtree Y to X:

a
|
b
|
e
|
u

so X+Y would look like this:

      a
    /   \
   b     c
  / \   / \
 d   e f   g
     |
     u

How would one go about implementing such a tree concatenation?
What you're describing sounds to me like you're trying to insert a word into a trie. If that's what you're trying to do, you can start at the root of the trie and the beginning of the word and then process each character x - if there is no edge labeled x from the current node, create a new node and add an edge between them; then, in either case, follow the edge labeled x and move to the next character.
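The character-by-character insertion described above can be sketched like this (a minimal sketch; the `Node` class and `insert` helper are my own names, not from the question):

```python
class Node:
    def __init__(self):
        self.children = {}  # edge label -> child node


def insert(root, word):
    """Insert a word into the trie, reusing existing edges."""
    node = root
    for ch in word:
        if ch not in node.children:      # no edge labeled ch: create one
            node.children[ch] = Node()
        node = node.children[ch]         # follow the edge labeled ch
    return root


# Tree X from the question (the root 'a' is the trie root itself):
root = Node()
for w in ["bd", "be", "cf", "cg"]:
    insert(root, w)

# Adding subtree Y (the path b-e-u) reuses the existing b-e prefix,
# so only the 'u' node is created.
insert(root, "beu")
```

Because shared prefixes are reused, concatenating Y onto X costs only one new node here.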
I am stuck finding a string s for the pumping lemma. Is there any way to prove that
L = {a^n b^m | n >= m} is not a regular language?
The pumping lemma states this:
If L is a regular language, then there exists a natural number p such that any string w in L of length at least p can be written as w = uvx where |uv| <= p, |v| > 0, and for all natural numbers n, u(v^n)x is also in the language.
To prove a language is not regular using the pumping lemma, we need to design a string w such that the rest of the statement fails: that is, there are no valid assignments of u, v and x.
Our language L requires the number of a's to be the same as the number of b's. The shortest string that satisfies the hypothesis that the string w has length at least p is a^(p/2) b^(p/2). We could guess this as our string. If we do, we have a few cases:
v is entirely made of a's. But then, pumping is going to result in a different number of a's and b's, so the resulting string is not in the language; a contradiction.
v spans a's and b's. But then, pumping is going to cause a's and b's to be mixed up in the middle, whereas our language requires all the a's to come first. This is also a contradiction.
v is entirely made of b's. But then, we have the same contradiction as in case #1.
In all cases, this choice of w led to a contradiction. That means the guess worked.
There was a simpler choice for w here: choose w = a^p b^p, then there is only one case. But our choice worked out fine. If our choice had not worked out, we could have learned from that choice what went wrong and chosen a different candidate.
Regarding the previous answer, case (1) doesn't make sense, since we can have more a's than b's (n >= m). I probably bombed a midterm yesterday because of this question, but found that the answer is actually in the pumping part.
The solution is that we can pump down as well as up. The pumping lemma for regular languages says that for all i >= 0, x(y^i)z is in L.
CASE 1: y = only a's
So using w = a^p b^p (which is in L, since n >= m holds), if y is some number of a's then we have:
x = a^(p-l)
y = a^l
z = b^p
Now if we pump down to x(y^0)z, there are fewer a's than b's, so the resulting string is not in L.
The next two cases should be easy to prove but I'll add them regardless.
CASE 2: y = only b's
x = a^p
y = b^l
z = b^(p-l)
Pumping up to x(y^2)z leaves more b's than a's, so the result is not a word in L.
CASE 3: y = a's and b's
x = a^(p-l)
y = (a^l)(b^k)
z = b^(p-k)
Pumping up to x(y^2)z gives a^(p-l) (a^l)(b^k)(a^l)(b^k) b^(p-k), which has a's after b's and so is not in L.
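The case analysis above can be sanity-checked mechanically. This sketch (my own; the membership test `in_L` is a hypothetical helper) confirms that in case 1, pumping up stays inside L while pumping down leaves it:

```python
def in_L(s):
    # L = { a^n b^m | n >= m }: a run of a's, then b's, with at least
    # as many a's as b's.
    i = 0
    while i < len(s) and s[i] == 'a':
        i += 1
    return set(s[i:]) <= {'b'} and i >= len(s) - i


p, l = 5, 2
x, y, z = 'a' * (p - l), 'a' * l, 'b' * p   # case 1: y is only a's

assert in_L(x + y + z)        # the original string a^p b^p is in L
assert in_L(x + y * 2 + z)    # pumping up only adds a's: still in L
assert not in_L(x + z)        # pumping down (i = 0): fewer a's than b's
```

This is exactly why pumping down is needed for this language: pumping up never produces a contradiction in case 1.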
I am finding the intersections of an array of lines via determinants. However, to find all intersections, I am checking each line against every other line, which is O(n^2) checks.
Is there a more efficient way to find all of the intersections? I'm worried about the run time when sorting out the intersections between hundreds or thousands of lines.
Please specify - do you mean infinite lines?
For line segments there is the efficient Bentley-Ottmann algorithm.
For N infinite lines there are about N^2 intersections (if most of them are not parallel), so your approach is optimal in the complexity sense (though micro-optimization may be possible).
Edit: with the clarified task description, Bentley-Ottmann looks like overkill.
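For reference, the pairwise determinant check from the question can be sketched like this (representing each line as a coefficient triple for a*x + b*y = c is my own encoding, not from the question):

```python
def intersect(l1, l2, eps=1e-12):
    """Intersection of two infinite lines a*x + b*y = c via Cramer's rule."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1          # zero iff the lines are parallel
    if abs(det) < eps:
        return None                  # parallel (or identical) lines
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)


# All-pairs check: O(n^2) for n infinite lines.
lines = [(1, -1, 0), (1, 1, 2), (0, 1, 5)]
points = [p for i, a in enumerate(lines)
            for b in lines[i + 1:]
            if (p := intersect(a, b)) is not None]
```

For general-position infinite lines this quadratic work is unavoidable, since the output itself has about n^2 points.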
Find intersections of lines with clipping window
(for example, using Liang-Barsky algorithm)
Consider only lines that intersect window
Scan the top window border from the left corner to the right one
Insert every line end into a binary search tree
(and, in a map, link the paired end to this tree node)
Scan the right window border from the top corner to the bottom one
Check whether the current line already has its other end in the tree/map
If yes
all lines with ends between the positions of the first and
the second end intersect it
Calculate those intersections and remove both line ends from the tree and map
else
Insert the line end into the tree (and map)
Continue for the bottom and left edges.
Complexity is O(N) for the preliminary clipping and O(K + M log M) for M lines intersecting the window rectangle and K intersections (note that K may be about N^2).
Example: tree state for walking around the perimeter
E //and map F to E node
EG //and map H to G
EGI
EGIK
EGIK + H
H has pair point G, so GH intersects IJ and KL
remove G
EIK
EIK + F
F has pair point E, so EF intersects IJ and KL
remove E
IK
IK + J
J has pair point I, so IJ intersects KL
remove I
K
K+L
remove K
end (5 intersections detected)
I am looking at the number of stalls in the following MIPS code with and without forwarding. I am trying to get a better understanding of when the data is needed in the datapath.
lw $10, 0($4)
sw $10, 24($5)
With forwarding, I get the following with the understanding that the value going into register 10 from the load word instruction is available after the memory stage, and that value is needed by the store word instruction before its memory stage. Hence, there are zero stalls.
F D E M W
  F D E M W
If there is no forwarding, register 10 will not have the correct value from the load word instruction until it is written in the first half of the clock cycle in the write back stage.
Is it correct to say that the store word instruction needs the correct value of register 10 in the second half of the clock cycle in the decode stage, producing the following two stalls:
F D E M W
  F F F D E M W
Or is it that the store word instruction needs it in the execute stage producing this sequence of two stalls:
F D E M W
  F D D D E M W
I guess I'd like a way of phrasing this in my head to better my understanding.
Without forwarding, the load word instruction will have register 10 updated after the 1st half of the clock cycle in the write back stage. The store word instruction will need to read that value in register 10 in the second half of the clock cycle in the decode stage, producing the following 2 stalls in the decode stage:
F D E M W
  F D D D E M W
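The timing argument can be phrased as simple cycle arithmetic. This is a toy model of my own, assuming the write-back happens in the first half of a cycle and the decode-stage register read in the second half, so both can occur in the same cycle:

```python
lw_issue = 1                       # lw fetched in cycle 1
lw_wb = lw_issue + 4               # F D E M W: write-back in cycle 5

sw_issue = lw_issue + 1            # sw fetched in cycle 2
sw_decode_natural = sw_issue + 1   # decode would naturally be cycle 3

# WB writes in the first half of cycle 5 and D reads in the second
# half, so sw can complete decode in cycle 5 at the earliest.
stalls = lw_wb - sw_decode_natural
print(stalls)  # 2 stall cycles, matching F D D D E M W
```

The decode stage repeats in cycles 3 and 4 and succeeds in cycle 5, which is exactly the two-stall diagram above.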
Suppose x is a bitmask value, and b is one flag, e.g.
x = 0b10101101
b = 0b00000100
There seems to be two ways to check whether the bit indicated by b is turned on in x:
if ((x & b) != 0) // (1)
if ((x & b) == b) // (2)
In most circumstances it seems these two checks always yield the same result, given that b always has exactly one bit set.
However, I wonder: is there any case that makes one method better than the other?
In general, if we interpret both values as bit sets, the first condition checks if the intersection of x and b is not empty (or, to put it differently: if b and x have elements in common), while the second one checks if b is a subset of x.
Clearly, if b is a singleton, b is a subset of x if and only if the intersection is not empty.
So, whenever you cannot guarantee to 100% that b is a singleton, choose your condition wisely. Ask yourself if you want to express that all elements of b must also be elements of x, or that there are elements of b that are also elements of x. It's a huge difference except for the single bit case.
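The distinction matters as soon as b has more than one bit set. A quick sketch (the values are my own choosing, extending the question's example):

```python
x  = 0b10101101
b1 = 0b00000100   # single bit: bit 2, which is set in x
b2 = 0b01000100   # two bits: bit 2 is set in x, bit 6 is not

def nonempty(x, b):
    return (x & b) != 0   # intersection of x and b is non-empty

def subset(x, b):
    return (x & b) == b   # every bit of b is also set in x

# Singleton b: the two checks agree.
assert nonempty(x, b1) and subset(x, b1)

# Multi-bit b: the checks diverge.
assert nonempty(x, b2)        # they share bit 2...
assert not subset(x, b2)      # ...but bit 6 of b2 is missing from x
```

So check (1) asks "any flag of b set?" while check (2) asks "all flags of b set?"; they coincide only for single-flag masks.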
"If you can press a button to get $1M and a random person dies somewhere in the world would you press the button?"
A = press button
B = get $1M
C = random person dies
Here is what I think it should be:
If A, then B AND C
According to the original statement, is it:
(If A, then B) AND C
or
If A, then (B AND C)
You've correctly identified the three propositional variables:
P1(x): "x presses a button."
P2(x): "x receives one million dollars."
P3(x): "x causes the death of a random person."
You want to express the sentence Q: "if someone presses the button, then they receive a million dollars and a person dies." At first glance, it seems like P1(x) ⇒ P2(x) ∧ P3(x) correctly expresses this. How can we be sure? Let's draw a truth table:
P1 | P2 | P3 | P2 ^ P3 | P1 --> (P2 ^ P3)
---+----+----+---------+------------------
T  | T  | T  |    T    |        T
T  | T  | F  |    F    |        F
T  | F  | T  |    F    |        F
T  | F  | F  |    F    |        F
F  | T  | T  |    T    |        T
F  | T  | F  |    F    |        T
F  | F  | T  |    F    |        T
F  | F  | F  |    F    |        T
Notice that "you receive a million dollars and cause a death" is true only when both constituent parts are true. This makes sense; if either part fails to come true, the conjunction is false.
Notice also the truth values for the entire statement Q: it is false exactly when the first part is true and the second part is false. This makes sense: if you press the button but either (1) the million dollars doesn't appear or (2) nobody dies, the prediction made by Q has failed. So our formalization P1(x) ⇒ P2(x) ∧ P3(x) is correct.
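The table can also be generated mechanically. A small sketch (my own, using the standard definition of material implication, not code from the answer):

```python
from itertools import product

# Enumerate the truth table for P1 --> (P2 ^ P3).
for p1, p2, p3 in product([True, False], repeat=3):
    conj = p2 and p3
    impl = (not p1) or conj   # material implication: false only if p1 and not conj
    print(p1, p2, p3, conj, impl)
```

Running it reproduces the table above: the implication is false in exactly the three rows where P1 is true but P2 ∧ P3 is false.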
Think about it. Draw up a truth table for each option.
HINT: If you don't push the button, would the random person die?
By standard convention, when no explicit grouping is indicated, AND binds more tightly than IF-THEN. Therefore the statement reads: if you press the button, then (you will receive $1M and a random person will die).