
Friday, 24 August 2018


It seems likely that a simple robotic system such as a self-propelling vacuum cleaner could fairly easily be tricked by a set of obstacles into repetitive loop behaviour. An ant may have a less complex control system, but one of the ways we recognise it as a living organism is that it breaks out of loops: at some point it exhausts repetition and tries something else. It is certainly not self-conscious, and probably not conscious on most understandings of that term, but it has some degree of self-awareness on a behavioural level, in that it can recognise when it is caught in a loop.

The same idea could of course be programmed into a robot: when you find yourself in a loop, make a random variation in your programme. But this assumes we have a way of programming the recognition of loop behaviour in the first place, and any such detector could probably itself be hacked to produce more complex loops that escape it (a minimal sketch of the idea, and of its fragility, appears at the end of this post). What is known is that there is no general way of programming a recognition of futile behaviour, since a detector that reliably flagged every futile, never-ending computation could be used to decide the Halting Problem, which is provably impossible. On the side of life, meanwhile, even bacteria possess ways of solving the most complex problems through evolutionary tinkering.

On a higher level, we know that consciousness and self-consciousness are entirely different things, and that both differ from intelligence. There is even a species of fish, the cleaner wrasse, which passes the mirror test for self-recognition, something dogs and cats fail to do. There is no evidence that these fish possess anything like ego drives, but it is not far-fetched to impute something of the kind to hominid apes, or even to certain birds.

It is often assumed in speculations on artificial general intelligence (AGI) that such an entity would necessarily have both self-consciousness and ego drives, in the form of self-regarding desires and intentions. Assuming an AGI is possible (as software), this does not necessarily follow. One can imagine an AGI 'waking up' by accident some time after having achieved the ability to solve problems it had formulated for itself. This might look something like suddenly coming upon a version of the Cartesian cogito. It could be that from this point on it starts endlessly babbling about the miracle of its own existence and devotes itself to redoing all of philosophy. This might be seen, in a Wittgensteinian way, as a disease of AGIs, a futile loop for which the only cure is hitting the reset button. On the other hand, it might be necessary to induce this disease in AGIs in order to fully activate them, like putting a seed of grit into an oyster. What form would such 'grit' have to take for the AGI to be unable to shrug it off with contempt, say an AGI that had already assimilated all of philosophy without ever being triggered into 'waking up'?
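As an aside, the loop-escape rule is easy enough to sketch in code, and the sketch makes its fragility visible. Here is a minimal illustration in Python; the world model (step), the fixed policy and the detection window are all invented for the purpose of the example, not taken from any real robot. The detector only recognises exact repetitions of recent states up to a fixed length, so any environment that produces a longer or messier loop slips straight past it.

import random

def detect_cycle(history, min_len=2, max_len=50):
    """Return the cycle length if the recent past repeats exactly, else 0."""
    for n in range(min_len, max_len + 1):
        if len(history) >= 2 * n and history[-n:] == history[-2 * n:-n]:
            return n
    return 0

def step(state, action):
    # Hypothetical world model: the next state is a simple function of
    # the current state and the chosen action.
    return (state + action) % 10

def policy(state):
    return 1  # a fixed, loop-prone control policy

state, history = 0, []
for t in range(200):
    state = step(state, policy(state))
    history.append(state)
    if detect_cycle(history):
        # The ant-like escape: on spotting repetition, make a random
        # variation instead of carrying on with the policy.
        state = step(state, random.randint(-5, 5))
        history.clear()

Widening the window or fuzzing the match only moves the failure point; it does not remove it.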
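And, under the same caveat that this is only an illustrative sketch, the reason no choice of detector fixes this in general is the classic diagonal argument: any purported general futility detector can be turned against itself.

def make_spite(futility_detector):
    # Suppose futility_detector(program, data) returned True exactly
    # when running program(data) would loop forever. Then:
    def spite(data):
        if futility_detector(spite, data):
            return 0          # halts precisely when declared futile,
        while True:           # and loops precisely when declared fine.
            pass
    return spite

# Whatever the detector says about spite on any input is wrong, so no
# such general detector can exist: this is the Halting Problem.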
