Failed to refine any pending goal - proof

I am trying to prove a theorem in Isabelle and I am stuck in this step:
theorem exists_prime_factor: "(n > Suc 0) ⟶ (∃xs::nat list. prod_list xs = n ∧ all_prime xs)"
proof (induct n rule: less_induct)
  case (less k)
  assume HI: "⋀y::nat. (y < k ⟹ Suc 0 < y ⟶ (∃xs. prod_list xs = y ∧ all_prime xs))"
  then show ?case
  proof -
    show "(Suc 0 < k) ⟶ (∃xs. prod_list xs = k ∧ all_prime xs)"
    proof -
      assume "Suc 0 < k" then show "(∃xs. prod_list xs = k ∧ all_prime xs)" sorry
In the last goal I need to prove an implication. As usual, I assume the premises and try to show the conclusion. However, when I write the last line I get "Failed to refine any pending goal". Is it because of the induction principle I applied before? Because without that induction I am able to use the implication introduction rule as usual (assume premises, then show conclusion).
Does anyone have an idea of what might be going on?
Thank you very much.

The "problem" indeed has to do with the proof -. The statement opens a new subproof without applying any proof methods to the goal. If you write proof without -, the proof method rule will be applied implicitly, which does the trick in this situation.
proof rule picks the most straightforward rule to apply to your goal. In this case, this is equivalent to proof (rule impI), because the object-level statement you want to prove is of the form "a --> b". impI is the introduction rule for implication: it allows you to lift an object-level implication "a --> b" to the meta-logical "a" ==> "b".
You need your goals to be of the form "a" ==> "b" to continue with subproofs of the form assume "a" [...] show "b".
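For reference, a minimal sketch of how the reworked proof opening could look (same theorem statement as in the question; the actual argument is still left as sorry):
theorem exists_prime_factor: "n > Suc 0 ⟶ (∃xs::nat list. prod_list xs = n ∧ all_prime xs)"
proof (induct n rule: less_induct)
  case (less k)
  show ?case
  proof (rule impI)  (* plain "proof" would implicitly pick impI here as well *)
    assume "Suc 0 < k"
    then show "∃xs. prod_list xs = k ∧ all_prime xs"
      sorry  (* the induction hypothesis is available as less.hyps *)
  qed
qed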

Please could you explain to me how currying works when it comes to higher-order functions, particularly the example below

Can someone please help me with the below,
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
I do not understand how the above works. If we had something like (+3) 10, surely it would produce 13? How is that f (f x)? Basically I do not understand currying when it comes to looking at higher-order functions.
So what I'm not understanding is this: if, say, we had a function of the form a -> a -> a, it would take an input a and produce a function which expects another input a to produce an output. So if we had add 5 3, then add 5 would produce a function which would expect the input 3 to produce a final output of 8. My question is how that works here. We take a function in as an input, so does partial function application work here like it did in add x y, or am I completely overcomplicating everything?
That's not currying, that's partial application.
> :t (+)
(+) :: Num a => a -> a -> a
> :t (+) 3
(+) 3 :: Num a => a -> a
The partial application (+) 3 indeed produces a function (+3)(*) which awaits another numerical input to produce its result. And it does so, whether once or twice.
Your example is expanded as
applyTwice (+3) 10 = (+3) ((+3) 10)
= (+3) (10+3)
= (10+3)+3
That's all there is to it.
(*)(actually, it's (3 +), but that's the same as (+ 3) anyway).
As chepner clarifies in the comments (quoted with minimal copy editing),
partial application is an illusion created by the fact that functions only take one argument, and the combination of the right associativity of (->) and the left associativity of function application. (+) 3 isn't really a partial application. It's just [a regular] application of (+) to an argument 3.
So seen from the point of view of other, more traditional languages, we refer to this as a distinction between currying and partial application.
But seen from the Haskell perspective it is all indeed about currying, i.e. applying a function to its arguments one at a time, until fully saturated as indicated by its type (i.e. a->a->a value applied to an a value becomes an a->a value, and that then becomes an a value when applied to an a value in its turn).
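To make the one-argument-at-a-time behaviour concrete, here is the example as a small runnable snippet (the extra test values are invented for illustration):
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

main :: IO ()
main = do
  print (applyTwice (+3) 10)        -- (10+3)+3 = 16, not 13: f is applied twice
  print (applyTwice (++ "!") "ha")  -- "ha!!": works for any function of type a -> a
  let add5 = (+) 5                  -- applying (+) to one argument ...
  print (add5 3)                    -- ... yields a function awaiting the second: 8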

How to revert beta-reductions to named functions in a lambda calculus-based system?

Well, suppose that I have a set of functional definitions (with a syntax tree) in Church encoding:
true : λx -> λy -> x
false : λx -> λy -> y
Given the definition λx -> λy -> y, it is very clear how to recover the named definition: matching up to alpha-equivalence will be enough.
α true λx -> λy -> y = false
α false λx -> λy -> y = true
But consider the example below:
0 : λf λx -> x
succ : λn λf λx -> f (n f x)
3 : succ (succ (succ 0))
So, when 3 is beta-reduced, it will unfold to a definition like:
3_unfolded : (λf -> (λx -> (f (f (f x))))) : (A -> A) -> A -> A
You can see the term can get big easily; of course, this is not a good way to represent plain data, because of the size of the term. So, I want to know if there is an algorithm able to efficiently rename every definition again after evaluation. Then 3_unfolded would become succ (succ (succ 0)) again, given the set of definitions of the Church encoding of naturals (0 and succ only).
I know there are some side effects, like ambiguous representations, but let's ignore those (for example, if you expand the same definition of succ twice, you could rename one occurrence to succ_2).
This is essentially the problem of beta-equivalence, and it’s undecidable in general; it also won’t necessarily produce usable output even when it could produce something, e.g. with some restrictions including strong normalisation. Therefore I think your best strategies here will be heuristic, because by default, reductions destroy information. The solutions are to retain information that you care about, or avoid needing information that’s gone. For instance:
Decouple the memory representation of terms from their LC representations, in particular cases where you care about efficiency and usability. For example, you can store and print a Church numeral as a Natural, while still allowing it to be converted to a function as needed. I think this is the most practical technical angle.
Retain information about the provenance of each term, and use that as a hint to reconstruct named terms. For example, if you know that a term arose by a given shape of beta-reduction, you can beta-expand/alpha-match to potentially rediscover an application of a function like succ. This may help in simple cases but I expect it will fall down in nontrivial programs.
Instead of considering this an algorithmic problem, consider it a usability design problem, and focus on methods of identifying useful information and presenting it clearly. For example, search for the largest matching function body that is also the most specific, e.g. a term might match both λx. x (identity) and λf. λx. f x (function application), but the latter is more specific, and even more specifically it can be a numeral (λs. λz. s z = 1); if there are multiple possibilities, present the most likely few.
Whenever you encounter a problem that’s undecidable for arbitrary programs, it’s worth remembering that humans write extremely non-arbitrary programs. So heuristic solutions can work remarkably well in practice.
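As an illustration of the first strategy, here is a minimal sketch in Haskell (the term language and names are invented for the example): Church numerals are stored as plain numbers and only expanded into lambda terms on demand, so nothing ever has to be reconstructed from a reduced term.
import Numeric.Natural (Natural)

-- A tiny term language with a dedicated numeral constructor.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Nat Natural          -- Church numerals stored directly, not as lambdas
  deriving Show

-- Expand the compact representation into its Church encoding only when needed.
toChurch :: Natural -> Term
toChurch n = Lam "f" (Lam "x" (iterate (App (Var "f")) (Var "x") !! fromIntegral n))

main :: IO ()
main = do
  print (Nat 3)       -- prints compactly, however large the numeral
  print (toChurch 3)  -- the full λf. λx. f (f (f x)) tree, on demand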

How are the recursion and control structures in this LISP code different from normal functions and control structures?

So this piece of code was on a worksheet that my teacher gave to prepare us for a recursion exam. The teacher states that in LISP, you write the operator before its operands, which makes enough sense. What doesn't make sense is that my prof says that the if block at line 2 is still read normally, "if y less than 1", but he says the code at line 3 executes when "y is not less than 1". The question on the worksheet asks us to find the result of f(3 2). Any answers to either question would be appreciated!
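The worksheet code itself didn't come through with the question; reconstructed from the description in the answer below, it was presumably something like:
(defun f (x y)
  (if (< y 1)
      x (f (* x x) (- y 1))))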
The if statement has three children: the test condition, the result when the test is true and the result when the test is false. In your code it might be confusing because the last two elements have been combined in line 3. The true result is simply 'x', and the false result is (f (* x x) (- y 1)).
So the statement will be read as 'if y is less than 1 then return x, otherwise return (f (* x x) (- y 1))'.
I hope that's enough to get you started.
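For reference, tracing the reconstructed definition above on the worksheet's question: (f 3 2) → (f (* 3 3) (- 2 1)) = (f 9 1) → (f 81 0) → 81, since at that point y is less than 1 and x is returned.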

Writing an expression using only NAND, OR, XNOR

I have a 2-1 mux and I'm trying to write z = s'd0 + sd1
using only NAND, XNOR, and OR gates (not necessarily all of them).
I tried simplifying it and what I ended up with is z = NAND(NAND(s', d0), NAND(s, d1)), but I can't use NOT ('), so is there a way to write NAND(s', d0) without the NOT?
You can build NOT from NAND:
NAND(X,X) == NOT(X)
The NAND gate is a universal gate; you can use it to build any other gate.
s' = nand(s,s)
Simple solution
The full version of the solution proposed by others is (A NAND S) NAND (B NAND (S NAND S)).
By the way, NOT X could also be expressed as X NAND 1, not only as X NAND X.
Advanced solution
(S OR (A XNOR B)) XNOR A
The latter solution is definitely more interesting:
It uses fewer gates (though of two different types).
It uses a set of gates that is not functionally complete (and is thereby less trivial).
How to find the latter solution?
Construct the Zhegalkin polynomial of the 2:1 mux and simplify it slightly: (S AND (A XOR B)) XOR B.
Note that the Boolean function dual to the 2:1 mux is also a 2:1 mux, but with swapped input signals.
Now "dualize" the polynomial (replace AND and XOR with OR and XNOR respectively) and swap A with B.

What type systems can prevent goal suspension in logical languages?

From section 3.13.3 of the Curry tutorial:
Operations that residuate are called rigid, whereas operations that narrow are called flexible. All defined operations are flexible, whereas most primitive operations, like arithmetic operations, are rigid, since guessing is not a reasonable option for them. For example, the prelude defines a list concatenation operation as follows:
infixr 5 ++
...
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : xs ++ ys
Since the operation “++” is flexible, we can use it to search for a list satisfying a particular property:
Prelude> x ++ [3,4] =:= [1,2,3,4] where x free
Free variables in goal: x
Result: success
Bindings:
x=[1,2] ?
On the other hand, predefined arithmetic operations like the addition "+" are rigid. Thus, a call to "+" with a logic variable as an argument flounders:
Prelude> x + 2 =:= 4 where x free
Free variables in goal: x
*** Goal suspended!
Curry does not appear to guard against writing goals that will be suspended. What type systems can detect ahead of time whether a goal is going to be suspended?
What you've described sounds like mode checking, which generally determines which outputs will be available for a given set of bound inputs. You may want to look at the language Mercury, which takes mode checking quite seriously.
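As a toy sketch of the idea (all declarations here are invented for illustration): each operation declares which argument instantiations it supports, and a checker rejects calls, like the suspended goal above, that match no declared mode.
data Mode = In | Out deriving (Eq, Show)

-- Curry's rigid (+), viewed as a relation plus(x, y, z):
-- both summands must be bound; only the result may be computed.
plusModes :: [[Mode]]
plusModes = [[In, In, Out], [In, In, In]]

-- The flexible (++), viewed as append(xs, ys, zs): narrowing means
-- any combination of bound/free arguments is acceptable.
appendModes :: [[Mode]]
appendModes = sequence (replicate 3 [In, Out])

checkCall :: [[Mode]] -> [Mode] -> Bool
checkCall declared call = call `elem` declared

main :: IO ()
main = do
  print (checkCall appendModes [Out, In, In])  -- True:  x ++ [3,4] =:= [1,2,3,4]
  print (checkCall plusModes   [Out, In, In])  -- False: x + 2 =:= 4 would suspend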