Deterministic Finite Automaton vs Deterministic Pushdown Automaton - language-agnostic

I was wondering if somebody could give me a simple explanation of the relationship between these two terms, as I am very confused by the terminology.

A Deterministic Pushdown Automaton (DPDA) is a Deterministic Finite Automaton (DFA) that also has access to a Stack, which is a Last In, First Out (LIFO) data structure.
Having access to a form of memory allows a DPDA to recognize a greater variety of strings than a DFA. For example, given a language with symbols A and B, a DFA could be constructed to recognize AB, AABB, AAABBB, but no DFA can be constructed to recognize A^nB^n for all n, whereas that is easily done with a DPDA that works as follows:
1. Enter the start state.
2. Push $ onto the stack.
3. Read a letter from the string.
   - If B, go to a terminal non-accept state.
   - If A, push A onto the stack and go to state 4.
4. Read a letter from the string.
   - If A, push A onto the stack and stay in this state.
   - If B, pop the top value from the stack.
     - If the popped value is A, go to state 5.
     - If the popped value is $, go to a terminal non-accept state.
5. Read a letter from the string.
   - If B, pop the top value from the stack.
     - If the popped value is A, stay in this state.
     - If the popped value is $, go to a terminal non-accept state.
   - If we read the end of the string, pop the top value from the stack.
     - If the popped value is $, go to the accept state.
     - If the popped value is A, go to a terminal non-accept state.
   - If we read anything else from the string, go to a terminal non-accept state.
PDAs recognize context-free languages, with DPDAs recognizing only the deterministic subset of context-free languages. They are more powerful than DFAs in terms of the class of languages they can recognize, but less powerful than Turing machines.
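For reference, here is a rough Python sketch of the machine described above (the function and variable names are mine, purely illustrative):

def accepts_anbn(s):
    # Hypothetical simulation of the DPDA above, recognizing A^n B^n (n >= 1).
    stack = ['$']
    i = 0
    # States 3-4: push an A for every A read.
    while i < len(s) and s[i] == 'A':
        stack.append('A')
        i += 1
    if i == 0:
        return False  # first letter was not A (or the string was empty)
    # State 5: pop an A for every B read.
    while i < len(s) and s[i] == 'B':
        if stack.pop() != 'A':
            return False  # popped $, i.e. more Bs than As
        i += 1
    # Accept only if the input is exhausted and only $ remains on the stack.
    return i == len(s) and stack == ['$']

# accepts_anbn("AAABBB") -> True, accepts_anbn("AABBB") -> False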

Related

How do exactly computers convert ASCII to Binary?

I have read that when you press a key on the keyboard, the OS will translate it to the corresponding ASCII code, then the computer will convert the ASCII to binary. But what part of the computer converts ASCII to binary? The question may be stupid because I have only just started to learn CS.
Bear with me, it has been a while since I dealt with this sort of thing...
When you press a key on the keyboard, a (very low) voltage signal is raised and is detected by one of the I/O subsystems on the motherboard - in this case the one responsible for the signals at the port that the keyboard is connected to (e.g. USB, DIN, Bluetooth, etc).
The I/O handler then signals this to the interrupt handler, which in turn sends it as a keyboard interrupt to the operating system's keyboard driver. The keyboard driver maps this high-priority interrupt signal to a binary value according to the hardware's specific rules. And this binary representation of the pressed key is the value that the operating system uses and/or hands over to another program (like a word processor, console/terminal, email, etc).
For example, and assuming a very simple, single-byte ASCII-based system (it gets a lot more complicated these days - UTF-8, UTF-16, EBCDIC, etc., etc.):
When you press the letters g and H, the two voltages get translated into binary values 01100111 and 01001000 respectively. But since the computer does not understand the concept of "letters", these binary values represent numbers instead (no problem for the computer). In decimal this would be 103 and 72 respectively.
So where do the actual letters come in? The ASCII code is a mapping between the binary representation, its numeric value (in decimal, hex, octal, etc.) and a corresponding symbol. The symbols in this case are g and H, which the computer then "paints" on the screen. Letters, numbers, punctuation marks - they are all graphical representations of a number -- little images if you like.
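You can see the mapping for yourself; for example, in Python (this only illustrates the table lookup, not the hardware path described above):

# Show the ASCII code and its binary representation for the two example keys.
for ch in "gH":
    code = ord(ch)                        # numeric value of the character
    print(ch, code, format(code, '08b'))  # g 103 01100111 / H 72 01001000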

Is HTML Turing Complete?

After reading this question Is CSS Turing complete? -- which received a few thoughtful, succinct answers -- it made me wonder: Is HTML Turing Complete?
Although the short answer is a definitive Yes or No, please also provide a short description or counter-example to prove whether HTML is or is not Turing Complete (obviously it cannot be both). Information on other versions of HTML may be interesting, but the correct answer should answer this for HTML5.
By itself (without CSS or JS), HTML (5 or otherwise) cannot possibly be Turing-complete because it is not a machine. Asking whether it is or not is essentially equivalent to asking whether an apple or an orange is Turing complete, or to take a more relevant example, a book.
HTML is not something that "runs". It is a representation. It is a format. It is an information encoding. Not being a machine, it cannot compute anything on its own, at the level of Turing completeness or any other level.
It seems clear to me that states and transitions can be represented in HTML with pages and hyperlinks, respectively. With this, one can implement deterministic finite automata where clicking links transitions between states. For example, I implemented a few simple DFA which are accessible here.
DFA are much simpler than a Turing machine, though. To implement something closer to a TM, an additional mechanism involving reading and writing to memory would be necessary, besides the basic states/transitions functionality. However, HTML does not seem to have this kind of feature. So I would say HTML is not Turing-complete, but is able to simulate DFA.
Edit1: I was reminded of the video On The Turing Completeness of PowerPoint when writing this answer.
Edit2: complementing this answer with the DFA definition and clarification.
Edit3: it might be worth mentioning that any machine in the real world is a finite-state machine due to reality's constraint of finite memory. So in a way, DFA can actually do anything that any real machine can do, as far as I know. See: https://en.wikipedia.org/wiki/Turing_machine#Comparison_with_real_machines
Definition
From https://en.wikipedia.org/wiki/Deterministic_finite_automaton#Formal_definition
In the theory of computation, a branch of theoretical computer
science, a deterministic finite automaton (DFA)—also known as
deterministic finite acceptor (DFA), deterministic finite-state
machine (DFSM), or deterministic finite-state automaton (DFSA)—is a
finite-state machine that accepts or rejects a given string of
symbols, by running through a state sequence uniquely determined by
the string.
A deterministic finite automaton M is a 5-tuple, (Q, Σ, δ, q0, F),
consisting of
a finite set of states Q
a finite set of input symbols called the alphabet Σ
a transition function δ : Q × Σ → Q
an initial or start state q0
a set of accept states F
The following example is of a DFA M, with a binary alphabet, which
requires that the input contains an even number of 0s.
M = (Q, Σ, δ, q0, F) where
Q = {S1, S2}
Σ = {0, 1}
q0 = S1
F = {S1} and
δ is defined by the following state transition table:
       0     1
S1     S2    S1
S2     S1    S2
State diagram for M:
The state S1 represents that there has been an even number of 0s in
the input so far, while S2 signifies an odd number. A 1 in the input
does not change the state of the automaton. When the input ends, the
state will show whether the input contained an even number of 0s or
not. If the input did contain an even number of 0s, M will finish in
state S1, an accepting state, so the input string will be accepted.
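For reference, M can also be written down directly as a lookup table; here is a small Python sketch (the state names follow the quoted example, and the code only illustrates the definition):

# delta as a lookup table; accept iff we finish in S1 (an even number of 0s).
DELTA = {
    ('S1', '0'): 'S2', ('S1', '1'): 'S1',
    ('S2', '0'): 'S1', ('S2', '1'): 'S2',
}

def m_accepts(word):
    state = 'S1'                        # q0
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state == 'S1'                # F = {S1}

# m_accepts("1001") -> True (two 0s), m_accepts("10") -> False (one 0)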
HTML implementation
The DFA M exemplified above, plus a few other basic DFA, were implemented in Markdown and converted/hosted as HTML pages by GitHub, accessible here.
Following the definition of M, its HTML implementation is detailed as follows.
The set of states Q contains the pages s1.html and s2.html, and also the acceptance page acc.html and the rejection page rej.html. These two additional states are a "user-friendly" way to communicate the acceptance of a word and don't affect the semantics of the DFA.
The set of symbols Σ is defined as the symbols 0 and 1. The empty string symbol ε was also included to denote the end of the input, leading to either acc.html or rej.html state.
The initial state q0 is s1.html.
The set of accept states is {acc.html}.
The set of transitions is defined by hyperlinks such that page s1.html contains a link with text "0" leading to s2.html, a link with text "1" leading to s1.html, and a link with text "ε" leading to acc.html. Each page follows the transition table below. Note: acc.html and rej.html don't contain links.
           0          1          ε
s1.html    s2.html    s1.html    acc.html
s2.html    s1.html    s2.html    rej.html
Questions
In what ways are those HTML pages "machines"? Don't these machines include the browser and the person who clicks the links? In what way does a link perform computation?
DFA is an abstract machine, i.e. a mathematical object. By the definition shown above, it is a tuple that defines transition rules between states according to a set of symbols. A real-world implementation of these rules (i.e. who keeps track of the current state, looks up the transition table and updates the current state accordingly) is then outside the scope of the definition. And for that matter, a Turing machine is a similar tuple with a few more elements to it.
As described above, the HTML implementation represents the DFA M in full: every state and every transition is represented by a page and a link respectively. Browsers, clicks and CPUs are then irrelevant in the context of the DFA.
In other words, as written by @Not_Here in the comments:
Rules don't innately implement themselves, they're just rules an
implementation should follow. Consider it this way: Turing machines
aren't actual machines, Turing didn't build machines. They're purely
mathematical objects, they're tuples of sets (state, symbols) and a
transition function between states. Turing machines are purely
mathematical objects, they're sets of instructions for how to
implement a computation, and so is this example in HTML.
The Wikipedia article on abstract machines:
An abstract machine, also called an abstract computer, is a
theoretical computer used for defining a model of computation.
Abstraction of computing processes is used in both the computer
science and computer engineering disciplines and usually assumes a
discrete time paradigm.
In the theory of computation, abstract machines are often used in
thought experiments regarding computability or to analyze the
complexity of algorithms (see computational complexity theory). A
typical abstract machine consists of a definition in terms of input,
output, and the set of allowable operations used to turn the former
into the latter. The best-known example is the Turing machine.
Some have claimed to implement Rule 110, a cellular automaton, using pure HTML and CSS (no JavaScript). You can see a video here, or browse the source of one implementation.
Why is this relevant? It has been proven that Rule 110 is itself Turing complete, meaning that it can simulate any Turing machine. If we can implement Rule 110 using only HTML and CSS, it follows that HTML (with CSS) can simulate any Turing machine via its simulation of that particular cellular automaton.
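Rule 110 itself is tiny; a one-step update looks like this in Python (this is independent of the HTML/CSS construction, and is only here to show what the rule computes):

RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    # Each cell's new value depends on its left neighbour, itself and its
    # right neighbour; cells off the edge are treated as 0.
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i-1], padded[i], padded[i+1])]
            for i in range(1, len(padded) - 1)]

# step([0, 0, 0, 1]) -> [0, 0, 1, 1]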
The critiques of this HTML "proof" focus on the fact that human input is required to drive the operation of the HTML machine. As seen in the video above, the human's input is constrained to a repeating pattern of Tab + Space (because the HTML machine consists of a series of checkboxes). Much as a Turing machine would require a clock signal and motive force to move its read/write head if it were to be implemented as a physical machine, the HTML machine needs energy input from the human -- but no information input, and crucially, no decision making.
In summary: HTML is probably Turing-complete, as proven by construction.

How do interpreters load their values?

I mean, interpreters work on a list of instructions, which seem to be composed more or less of sequences of bytes, usually stored as integers. Opcodes are retrieved from these integers by doing bit-wise operations, for use in a big switch statement where all operations are located.
My specific question is: How do the object values get stored/retrieved?
For example, let's (non-realistically) assume:
Our instructions are unsigned 32 bit integers.
We've reserved the first 4 bits of the integer for opcodes.
If I wanted to store data in the same integer as my opcode, I'm limited to a 28-bit value. If I wanted to store it in the next instruction, I'm limited to a 32-bit value.
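For example (just a sketch of the kind of bit-twiddling I mean, assuming the top 4 bits of a 32-bit word hold the opcode):

instruction = 0x2000002A                  # hypothetical encoded instruction
opcode  = (instruction >> 28) & 0xF       # top 4 bits -> 2
operand = instruction & 0x0FFFFFFF        # remaining 28 bits -> 42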
Values like Strings require lots more storage than this. How do most interpreters get away with this in an efficient manner?
I'm going to start by assuming that you're interested primarily (if not exclusively) in a byte-code interpreter or something similar (since your question seems to assume that). An interpreter that works directly from source code (in raw or tokenized form) is a fair amount different.
For a typical byte-code interpreter, you basically design some idealized machine. Stack-based (or at least stack-oriented) designs are pretty common for this purpose, so let's assume that.
So, first let's consider the choice of 4 bits for op-codes. A lot here will depend on how many data formats we want to support, and whether we're including that in the 4 bits for the op code. Just for the sake of argument, let's assume that the basic data types supported by the virtual machine proper are 8-bit and 64-bit integers (which can also be used for addressing), and 32-bit and 64-bit floating point.
For integers we pretty much need to support at least: add, subtract, multiply, divide, and, or, xor, not, negate, compare, test, left/right shift/rotate (right shifts in both logical and arithmetic varieties), load, and store. Floating point will support the same arithmetic operations, but remove the logical/bitwise operations. We'll also need some branch/jump operations (unconditional jump, jump if zero, jump if not zero, etc.). For a stack machine, we probably also want at least a few stack-oriented instructions (push, pop, dupe, possibly rotate, etc.).
That gives us a two-bit field for the data type, and at least 5 (quite possibly 6) bits for the op-code field. Instead of conditional jumps being special instructions, we might want to have just one jump instruction, and a few bits to specify conditional execution that can be applied to any instruction. We also pretty much need to specify at least a few addressing modes:
Optional: small immediate (N bits of data in the instruction itself)
large immediate (data in the 64-bit word following the instruction)
implied (operand(s) on top of stack)
Absolute (address specified in 64 bits following instruction)
relative (offset specified in or following instruction)
I've done my best to keep everything about as minimal as is at all reasonable here -- you might well want more to improve efficiency.
Anyway, in a model like this, an object's value is just some locations in memory. Likewise, a string is just some sequence of 8-bit integers in memory. Nearly all manipulation of objects/strings is done via the stack. For example, let's assume you had some classes A and B defined like:
class A {
    int x;
    int y;
};

class B {
    int a;
    int b;
};
...and some code like:
A a {1, 2};
B b {3, 4};
a.x += b.a;
The initialization would mean values in the executable file being loaded into the memory locations assigned to a and b. The addition could then produce code something like this:
push immediate a.x // put &a.x on top of stack
dupe // copy address to next lower stack position
load // load value from a.x
push immediate b.a // put &b.a on top of stack
load // load value from b.a
add // add two values
store // store back to a.x using address placed on stack with `dupe`
Assuming one byte for each instruction proper, we end up around 23 bytes for the sequence as a whole, 16 bytes of which are addresses. If we use 32-bit addressing instead of 64-bit, we can reduce that by 8 bytes (i.e., a total of 15 bytes).
The most obvious thing to keep in mind is that the virtual machine implemented by a typical byte-code interpreter (or similar) isn't all that different from a "real" machine implemented in hardware. You might add some instructions that are important to the model you're trying to implement (e.g., the JVM includes instructions to directly support its security model), or you might leave out a few if you only want to support languages that don't include them (e.g., I suppose you could leave out a few like xor if you really wanted to). You also need to decide what sort of virtual machine you're going to support. What I've portrayed above is stack-oriented, but you can certainly do a register-oriented machine if you prefer.
Either way, most of object access, string storage, etc., comes down to them being locations in memory. The machine will retrieve data from those locations into the stack/registers, manipulate as appropriate, and store back to the locations of the destination object(s).
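If it helps to see that concretely, here is a rough Python sketch of the little stack machine executing the instruction sequence above (the addresses and memory layout are made up for illustration; this is not any particular real interpreter):

memory = {0: 1, 1: 2, 2: 3, 3: 4}   # a.x, a.y, b.a, b.b after initialization
A_X, B_A = 0, 2                     # hypothetical addresses of a.x and b.a
stack = []

program = [
    ('push', A_X),    # put &a.x on top of stack
    ('dupe', None),   # copy the address
    ('load', None),   # replace the address with the value of a.x
    ('push', B_A),    # put &b.a on top of stack
    ('load', None),   # replace the address with the value of b.a
    ('add', None),    # add the two values
    ('store', None),  # store the sum back through the address left by dupe
]

for op, arg in program:
    if op == 'push':
        stack.append(arg)
    elif op == 'dupe':
        stack.append(stack[-1])
    elif op == 'load':
        stack.append(memory[stack.pop()])
    elif op == 'add':
        stack.append(stack.pop() + stack.pop())
    elif op == 'store':
        value = stack.pop()
        memory[stack.pop()] = value

# memory[A_X] is now 4, i.e. a.x += b.a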
Bytecode interpreters that I'm familiar with do this using constant tables. When the compiler is generating bytecode for a chunk of source, it is also generating a little constant table that rides along with that bytecode. (For example, if the bytecode gets stuffed into some kind of "function" object, the constant table will go in there too.)
Any time the compiler encounters a literal like a string or a number, it creates an actual runtime object for the value that the interpreter can work with. It adds that to the constant table and gets the index where the value was added. Then it emits something like a LOAD_CONSTANT instruction that has an argument whose value is the index in the constant table.
Here's an example:
static void string(Compiler* compiler, int allowAssignment)
{
  // Define a constant for the literal.
  int constant = addConstant(compiler, wrenNewString(compiler->parser->vm,
      compiler->parser->currentString, compiler->parser->currentStringLength));

  // Compile the code to load the constant.
  emit(compiler, CODE_CONSTANT);
  emit(compiler, constant);
}
At runtime, to implement a LOAD_CONSTANT instruction, you just decode the argument, and pull the object out of the constant table.
Here's an example:
CASE_CODE(CONSTANT):
  PUSH(frame->fn->constants[READ_ARG()]);
  DISPATCH();
For things like small numbers and frequently used values like true and null, you may devote dedicated instructions to them, but that's just an optimization.
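In Python-ish pseudocode, the whole round trip looks roughly like this (the names such as LOAD_CONSTANT and the opcode values are made up for illustration, not the actual Wren identifiers):

constants = ["some string literal", 3.14]   # built by the compiler per function

LOAD_CONSTANT, PRINT = 0, 1
bytecode = [LOAD_CONSTANT, 0, PRINT]        # push constants[0], then print it

stack, ip = [], 0
while ip < len(bytecode):
    op = bytecode[ip]; ip += 1
    if op == LOAD_CONSTANT:
        index = bytecode[ip]; ip += 1       # decode the argument
        stack.append(constants[index])      # pull the object out of the table
    elif op == PRINT:
        print(stack.pop())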

How to tell whether Haskell will cache a result or recompute it?

I noticed that sometimes Haskell pure functions are somehow cached: if I call the function twice with the same parameters, the second time the result is computed in no time.
Why does this happen? Is it a GHCI feature or what?
Can I rely on this (ie: can I deterministically know if a function value will be cached)?
Can I force or disable this feature for some function calls?
As required by comments, here is an example I found on the web:
isPrime a = isPrimeHelper a primes

isPrimeHelper a (p:ps)
  | p*p > a        = True
  | a `mod` p == 0 = False
  | otherwise      = isPrimeHelper a ps

primes = 2 : filter isPrime [3,5..]
Before running it, I was expecting it to be quite slow, since it keeps accessing elements of primes without explicitly caching them (thus, unless these values are cached somewhere, they would need to be recomputed plenty of times). But I was wrong.
If I set +s in GHCI (to print timing/memory stats after each evaluation) and evaluate the expression primes!!10000 twice, this is what I get:
*Main> :set +s
*Main> primes!!10000
104743
(2.10 secs, 169800904 bytes)
*Main> primes!!10000
104743
(0.00 secs, 0 bytes)
This means that at least primes !! 10000 (or better: the whole primes list, since primes!!9999 will also take no time) must be cached.
primes, in your code, is not a function, but a constant, in haskellspeak known as a CAF (constant applicative form). If it took a parameter (say, ()), you would get two different versions of the same list back if calling it twice, but as it is a CAF, you get the exact same list back both times.
As a ghci top-level definition, primes never becomes unreachable, thus the head of the list it points to (and thus its tail/the rest of the computation) is never garbage collected. Adding a parameter prevents retaining that reference; the list would then be garbage collected as (!!) iterates down it to find the right element, and your second call to (!!) would force repetition of the whole computation instead of just traversing the already-computed list.
Note that in compiled programs there is no top-level scope like in ghci, and things get garbage collected when the last reference to them is gone, quite likely before the whole program exits, CAF or not. That means your first call would take long and the second one would not; after that, with "the future of your program" no longer referencing the CAF, the memory the CAF takes up is recycled.
The primes package provides a function that takes an argument for (primarily, I'd claim) this very reason, as carrying around half a terabyte of prime numbers might not be what one wants to do.
If you want to really get down to the bottom of this, I recommend reading the STG paper. It doesn't include newer developments in GHC, but does a great job of explaining how Haskell maps onto assembly, and thus how thunks get eaten by strictness, in general.

My simple turing machine

I'm trying to understand and implement the simplest Turing machine and would like feedback on whether I'm making sense.
We have an infinite tape (let's say an array called T with a pointer at 0 at the start) and an instruction table:
( S , R , W , D , N )
S->STEP (Start at step 1)
R->READ (0 or 1)
W->WRITE (0 or 1)
D->DIRECTION (0=LEFT 1=RIGHT)
N->NEXTSTEP (Non existing step is HALT)
My understanding is that a 3-state, 2-symbol machine is the simplest Turing machine. The 3-state part I don't understand. 2-symbol because we use 0 and 1 for READ/WRITE.
For example:
(1,0,1,1,2)
(1,1,0,1,2)
Starting at step 1, if Read is 0 then {Write 1, Move Right} else {Write 0, Move Right}, and then go to step 2 - which does not exist, which halts the program.
What does 3-state mean? Does this machine pass as a Turing machine? Can we simplify it more?
I think the confusion might come from your use of "steps" instead of "states". You can think of a machine's state as the value it has in its memory (although as a previous poster noted, some people also take a machine's state to include the contents of the tape -- however, I don't think that definition is relevant to your question). It's possible that this change in terminology might be at the heart of your confusion. Let me explain what I think it is you're thinking. :)
You gave lists of five numbers -- for example, (1,0,1,1,2). As you correctly state, this should be interpreted (reading from left to right) as "If the machine is in state 1 AND the current square contains a 0, print a 1, move right, and change to state 2." However, your use of the word "step" seems to suggest that that "step 2" must be followed by "step 3", etc., when in reality a Turing machine can go back and forth between states (and of course, there can only be finitely many possible states).
So to answer your questions:
Turing machines keep track of "states" not "steps";
What you've described is a legitimate Turing machine;
A simpler (albeit otherwise uninteresting) Turing machine would be one that starts in the HALT state.
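To make the state/step distinction concrete, here is a small Python sketch that runs your two example instructions using the (STEP, READ, WRITE, DIRECTION, NEXTSTEP) format (the dictionary layout is my own; a missing entry plays the role of a non-existing step, i.e. HALT):

from collections import defaultdict

rules = {            # (state, read) -> (write, direction, next state)
    (1, 0): (1, 1, 2),
    (1, 1): (0, 1, 2),
}

tape = defaultdict(int)   # unbounded tape; blank squares read as 0
head, state = 0, 1        # start in state 1 at position 0
while (state, tape[head]) in rules:
    write, direction, state = rules[(state, tape[head])]
    tape[head] = write
    head += 1 if direction == 1 else -1   # 1 = RIGHT, 0 = LEFT
# The loop exits when no rule matches (state 2 here), i.e. the machine halts.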
Edits: Grammar, Formatting, and removed a needless description of Turing machines.
Response to comment:
Correct me if I'm misinterpreting your comment, but I did not mean to suggest a Turing machine could be in more than one state at a time, only that the number of possible states can be any finite number. For example, for a 3-state machine, you might label the possible states A, B, and C. (In the example you provided, you labeled the two possible states as '1' and '2') At any given time, exactly one of those values (states) would be in the machine's memory. We would say, "the machine is in state A" or "the machine is in state B", etc. (Your machine starts in state '1' and terminates after it enters state '2').
Also, it's no longer clear to me what you mean by a "simpler/est" machine. The smallest known Universal Turing machine (i.e., a Turing machine that can simulate another Turing machine, given an appropriate tape) requires 2 states and 5 symbols (see the relevant Wikipedia article).
On the other hand, if you're looking for something simpler than a Turing machine with the same computation power, Post-Turing machines might be of interest.
I believe that the concept of state is basically the same as in Finite State Machines. If I recall, you need a separate termination state, to which the Turing machine can transition after it has finished running the program. As for why 3 states, I'd guess that the other two states are for initialisation and execution respectively.
Unfortunately none of that is guaranteed to be correct, but I thought I'd post my thoughts anyway since the question was unanswered for 5 hours. I suspect if you were to re-ask this question on cstheory.stackexchange.com you might get a better/more definitive answer.
"State" in the context of Turing machines should be clarified as to which is being described: (i) the current instruction, or (ii) the list of symbols on the tape together with the current instruction, or (iii) the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol. Reference