Thursday, 28 May 2009

SRH (self-reference hypothesis)

Was thinking about a surgeon performing an operation on themselves. A minor operation they could do with a mirror, but an operation that involved fixing something "essential" would require writing a list of instructions for a student or a robot and then going under anaesthetic. The brave surgeon might even perform open-heart surgery upon themselves, giving instructions on what was happening from a VT screen.

But what if it was an operation on the mouth, nervous system or eyes? Operations on features of the information or communication system compromise the surgeon's role as instructor. Then it starts to become an "essential" operation, because they cannot effect it upon themselves.

What if it was a deep neurological operation to fix a fault with central brain functions, say for example judgement itself? Could a surgeon with a faulty brain perform an operation to fix themselves? This is the essence of the SRH.

Hegel argues similarly when he puts paid to Kant, asking what use there is in investigating the validity of Reason. If Reason is valid then we may as well just use it, and if it is faulty then the Reasoning we use to investigate it would be faulty anyway, so it is pointless investigating the validity of Reason. Thus sprung up Phenomenology (well, I reckon anyway). He likened it to a man with a telescope trying to use the telescope to see if the telescope was faulty. If, pointing it at a mirror, the images were faulty, then how would he ever be able to see the telescope clearly enough to diagnose the fault?

So maybe the proof lies in the notion of "faults" or "problems" in a system. Rather than asking how a system can express itself or refer to itself, ask: how can a system refer to a fault, error or inaccuracy in itself?

Now if it can't detect faults then it is reasonable to argue that it can't detect correctness either. This is what I recognised during primitive AI experiments: an AI machine with 100% feedback processes only meaningless signals. This is not a cause of consciousness, as has been speculated both by professionals and by amateurs like my friends and myself. If feedback occurs it is non-miraculous processing of information encoded from sources that are both the machine and the environment, with no inherent distinction between self and other.
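The 100%-feedback claim can be made concrete with a toy loop (my own sketch, not from the original experiments, with made-up dynamics): a unit whose input is entirely its own previous output. After the single seeding step, the trajectory depends only on the unit's internal dynamics, so very different environmental inputs collapse to the same internal state and the signal carries no information about the world.

```python
import math

def step(x):
    # Arbitrary squashing dynamics standing in for the "AI machine".
    return math.tanh(2.0 * x)

def run(seed, steps=50):
    x = seed            # the only moment the environment touches the loop
    for _ in range(steps):
        x = step(x)     # thereafter the unit hears only itself
    return x

# Very different environmental seeds converge to the same internal fixed point,
# so the final signal says nothing about which environment produced it.
print(run(0.9), run(0.1), run(5.0))
```

All three runs settle to the same attractor of the closed loop, which is one way of cashing out "processes only meaningless signals".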

Meaning, as has been discovered and rediscovered the world over, depends upon context and inter-dependence with the environment. How can there be self and other in an inter-dependent relationship?

When we talk of a "problem" we are a priori comparing the current state with an external reference state. This highlights more clearly what happens when we consider a "normal" state, which involves the same process.

A self-reference involves comparing something with something else, but never truly with "itself".

So to identify a problem a system must have a long-term recorded state of what is "correct" and a current recorded state of "what is". A difference between these triggers the new state "problem". (This is crude and Platonic, but we are talking machines here.)

Now how would such a machine identify problems in this mechanism?
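The difficulty the question points at can be shown with another toy (my own construction, hypothetical names throughout): a machine whose only fault detector is the very component that is faulty. If the comparator's tolerance is corrupted, every self-diagnosis still runs through that same corrupted comparator, so the fault never registers from inside.

```python
REFERENCE = {"tolerance": 0.5}   # what the judgement parameter *should* be

class Machine:
    def __init__(self):
        self.tolerance = 0.5     # the machine's "judgement" parameter

    def compare(self, value, target):
        # Every comparison the machine can make goes through this method.
        return abs(value - target) <= self.tolerance

    def self_check(self):
        # The only instrument available is the (possibly broken) comparator.
        ok = self.compare(self.tolerance, REFERENCE["tolerance"])
        return "normal" if ok else "problem"

m = Machine()
m.tolerance = 1e9                # catastrophic fault in judgement itself
print(m.self_check())            # prints 'normal': the fault hides itself
```

A huge tolerance makes every comparison pass, including the comparison that was supposed to catch the huge tolerance. This is the surgeon with the faulty brain in miniature.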

