Thursday, 7 July 2016
Is there a difference between a self identifying with a process and a process that self-identifies as having a self, the latter being something like the cognitivist view of consciousness? Imagine an AI that was convinced it was conscious but kept failing to convince its human interlocutors in some kind of Turing test. Or again, imagine humans trying, and failing, to convince a superior intelligence that they were conscious, the latter concluding that human behaviour was based on a mere illusion of subjectivity. It would see that these beings have a great deal at stake in the idea of themselves as conscious: a garbled version of the notion of consciousness had come to them somehow, and they had developed it in tandem with a range of behaviours on the assumption that it was theirs by right. But no matter how enriched those behaviours might be, they constitute no proof that the beings really have it, whatever 'have' might mean in this context. It is enough to observe that to some, admittedly rather exceptional, humans it seems obvious that there is nothing to have, and no question of the existence of anything of the kind. These sceptics would likewise doubt the superior beings' claim to full consciousness, holding it to be logically impossible, but their doubt would carry no weight with the beings themselves.

We are scandalised by Descartes, who asserted that animals are not conscious, while we are certain that they are, or at least that some of them are, since their brains are like ours and they interact with us in meaning-rich ways such as playing and looking us in the eye. But these are just analogies and behaviours, and nothing ontological can be concluded from them. It is a kind of generalised Turing test, and so we can imagine the superior beings as having their own kind of Turing test whose criteria we are wholly unable to grasp.