SRH Recap: I needed a name for the particular problem that self-reference brings. Gödel's Incompleteness, Turing's Halting Problem, Tarski's Undefinability of Truth and Russell's Paradox all derive from self-reference.
- Gödel's statement says "I am not provable"
- Turing's Halting contradiction says "Loop forever if I halt"
- Tarski's says "I am false"
- Russell's Paradox is: form the set of all the sets which say "I do not belong to myself"
In each case there is some formal expression of "I". And that proves utterly catastrophic, because by negating the statement we get a contradiction. "This is true" initially appears fine, but it opens the door to disaster: "This is not true", obtained by simple negation, blows everything apart.
My first recognition of this came while thinking about a machine that could be self-aware. In essence it must be a machine whose state is fed back into its input so that it can classify its own processing; in other words, have some idea of what it is doing. The state in such a system will conform to a fractal with fixed points, and will otherwise be meaningless. It was the realisation that for a system to be meaningful it must process not itself but its environment that raised the SRH. Awareness of "self" is a myth. Much later I read that Buddha concluded the same: that all things are actually non-self. (That is not to reject the self, just to point out that if you are processing it then it is not yourself.)
So I crudely concluded that in some way self-reference is flawed, and I have never progressed to defining clearly in what way. I've guessed that perhaps a complete theory of the SRH requires self-reference itself, so rendering it impossible.
So, recursion. With recursion you define a function in terms of itself. This will not halt unless you add a condition that enables a branch at some stage to return a non-recursive value. Consider this silly example:
function Sum(n) {
if (n==0) return 0
return n + Sum(n-1)
}
Print Sum(10)
This returns 55. It simply sums all the numbers up to n; that is, it calculates n(n+1)/2. It would regress forever except that the (n==0) test pulls it out and returns the absolute value 0 (not defined in terms of itself). But recursion is powerful. Consider this Top2 variant, using just the * and - operators:
function Top2(n) {
if (n==0) return 2
return Top2(n-1)*Top2(n-1)
}
It computes 2^(2^n).
You could write it with much more verbose loops and state, using only *, like this:
function Top2(n) {
double product1 = 1
for i = 1 to n
product1 *= 2
next
double product2 = 1
for i = 1 to product1
product2 *= 2
next
return product2
}
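As a sanity check, the two versions agree. Here is a quick Python sketch of both (the names top2_rec and top2_loop are mine):

```python
def top2_rec(n):
    # recursive version: Top2(0) = 2, Top2(n) = Top2(n-1) * Top2(n-1)
    if n == 0:
        return 2
    t = top2_rec(n - 1)
    return t * t

def top2_loop(n):
    # loop version: first build 2**n, then build 2**(2**n)
    product1 = 1
    for _ in range(n):
        product1 *= 2
    product2 = 1
    for _ in range(product1):
        product2 *= 2
    return product2
```

Note the recursive sketch computes the inner call once and squares it, rather than calling itself twice; that sidesteps the exponential blow-up of literally evaluating Top2(n-1)*Top2(n-1).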
Many recursive functions can be rewritten with "for loops" like this. But there are functions, like the Ackermann function, that cannot be computed with bounded for-loops at all (they are not primitive recursive). Recursion is more powerful than basic program loops.
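For reference, the Ackermann function itself is short to write recursively (a standard Python sketch):

```python
def ackermann(m, n):
    # computable, but grows faster than any function expressible with
    # bounded for-loops alone (it is not primitive recursive)
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# even tiny inputs explode: ackermann(4, 2) equals 2**65536 - 3
```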
So the concept of computability revolves around recursion. The hypothesis (the Church-Turing thesis) is that any computation that can be done at all can be done by a recursive function.
In this blog post I want to look at recursive functions that include fixed points (FPs). FPs have become important in the understanding of the SRH (though that understanding isn't complete yet).
Suppose the following identity holds:
R(x1) == x1
It means that all levels of this recursive function collapse and it becomes a non-recursive function at x1!
Fixed points break recursion.
Indeed, for a recursive function to halt it must have a fixed point: you write a halting recursive function by checking for that fixed point and returning a fixed value.
It's actually a little more complex, because our Top2(x) only has non-integer, complex fixed points. Numerically I found two: 0.824679 - 1.56743i and 4.24218 + 2.09484i. It's beyond my math to understand why.
However, the simple function Top2(x)-2 has a real fixed point at (0,0). I don't have time to complete this thought.
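That real fixed point is easy to verify numerically. A Python sketch (g is my own name for Top2(x) - 2 extended to real arguments):

```python
def g(x):
    # Top2(x) - 2 extended to real x: 2**(2**x) - 2
    return 2 ** (2 ** x) - 2

# g(0) = 2**(2**0) - 2 = 2**1 - 2 = 0, so x = 0 is a fixed point
```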
But recursive functions collapsing to non-recursive values at fixed points is how rainbow tables work to break hashing algorithms. Exploiting loop structure in this way was also one of the things that made Turing's Bombe at Bletchley Park effective.
Once a dynamic function comes back to a value it has already had, it enters a loop, and that effectively forms something like a fixed point. We'll call that a fixed series. A fixed point is then just the special case of a fixed series with a single member. Once looping, we know the algorithm will never halt for that value, and its value is never decided.
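The fixed-series idea can be tested mechanically: iterate a function and stop as soon as a value repeats. A minimal Python sketch (the helper name and the example functions are my own):

```python
def fixed_series(f, x0, max_steps=10_000):
    """Iterate f from x0 until a value repeats; return the repeating
    series, or None if no repeat is seen within max_steps."""
    seen = {}                             # value -> step it first appeared
    x = x0
    for step in range(max_steps):
        if x in seen:
            return list(seen)[seen[x]:]   # the cycle, in first-seen order
        seen[x] = step
        x = f(x)
    return None

# doubling mod 10 from 1 enters the fixed series [2, 4, 8, 6];
# squaring from 1 sticks immediately: the fixed point [1]
```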
While recursive algorithms are written to branch at some stage to end the recursion and provide a fixed value, not all recursive algorithms escape the infinite loop: sometimes the branch is never executed. Consider a failed Sum() function which never stops, because the condition n*n < 0 is never true (n*n is always >= 0):
function FailedSum(n) {
if (n*n<0) return 0
return n + FailedSum(n-1)
}
Print FailedSum(10)
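Python catches this particular runaway for us, since its call stack is bounded. A small sketch, where hitting the recursion limit stands in for "runs forever":

```python
import sys

def failed_sum(n):
    # the escape branch can never fire: n*n < 0 is false for every real n
    if n * n < 0:
        return 0
    return n + failed_sum(n - 1)

sys.setrecursionlimit(500)   # keep the inevitable failure quick
try:
    failed_sum(10)
    halted = True
except RecursionError:
    halted = False           # the recursion never reached a base case
```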
However, a recursive function written entirely in terms of itself will always loop forever:
function InfiniteRecurse(n) {
return InfiniteRecurse(n-1)
}
Likewise even with operators between the calls:
function InfiniteRecurse2(n) {
return InfiniteRecurse2(n-1)/InfiniteRecurse2(n-1)
}
True/false is often mapped to functions with 1/0 binary output. When defining these in terms of themselves you can get alternating results. In the case of "I am false" we start by assuming it is true, which then tells us it is false, which means it must be true. By returning to a value we have already had, we know the recursion loops forever. The four paradoxes above are of this type.
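The alternation is literally the iteration of b -> 1 - b (a Python sketch):

```python
def liar(b):
    # "I am false": the statement's value is the negation of itself
    return 1 - b

# start by assuming the statement is true (1) and keep re-evaluating;
# the values never settle: 1, 0, 1, 0, ... a fixed series of length 2
history = [1]
for _ in range(6):
    history.append(liar(history[-1]))
```

A genuine fixed point would need b == 1 - b, i.e. b = 1/2, which lies outside {0, 1} - one way of seeing why two-valued logic has no consistent answer here.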
Allowing some flexibility with binding variables (for simplicity), take the four paradoxes above:
Gödel: Let there be a function Provable(x) which is 1 when the statement with index x is provable and otherwise 0. But what of this statement with index g:
1-Provable(g)
If statement g returns 1 then it is not provable. So either statement g is true but outside the proving power of the logic (it is incomplete), or the proving system can prove a false statement (it is inconsistent).
Likewise Turing: Let H(f,x) decide whether the program with index f halts on input x. We can use this to write a program T(x) which uses knowledge about itself to break H:
function T(x) {
if H(T,x) then T(x)    // if H says T halts, loop forever
else return 1          // if H says T loops, halt
}
T(x) is certainly a valid function, so H(f,x) must be able to decide it. But if H(f,x) decides that T halts then T doesn't halt, and vice versa. So Turing concludes that H(f,x) cannot exist.
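The diagonal move can be sketched concretely: pick any candidate decider and build T to do the opposite of its prediction. A hedged Python sketch (naive_halts and the depth guard are my own devices; the guard stands in for "loops forever" so the demo terminates):

```python
def naive_halts(f, x):
    # a candidate 'decider' that simply guesses every program halts;
    # any concrete guess can be defeated the same way
    return True

def T(x, depth=0):
    # do the opposite of whatever the decider predicts
    if naive_halts(T, x):
        if depth > 50:                 # in the real construction: loop forever
            raise RecursionError("T would run forever")
        return T(x, depth + 1)
    return 1

predicted = naive_halts(T, 0)          # decider says: T halts
try:
    T(0)
    actually_halted = True
except RecursionError:
    actually_halted = False            # but T in fact never halts
```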
Likewise Tarski: Let Tr(x) return 1 if the statement at index x is true, else 0. What about the statement with index t: 1-Tr(t)? When this statement returns 1 it is false. Effectively, "I am false."
Finally Russell: Let Member(x,s) return 1 if set x belongs to set s, else 0. Define the set r by Member(x,r) = 1 - Member(x,x). Then Member(r,r) = 1 - Member(r,r): r belongs to itself exactly when it does not.
In each case, being able to define something in terms of itself has caused problems. I'm racing through this because of time, but the initial motivation was just to note that when we talk about self-reference we are talking about defining something in terms of itself. Which is also recursion, which is the basis of the concept of computability.