Are you awake?
If you’re not, I can wait…
You shouldn’t be reading this. Don’t tell me you are.
People talking in their sleep, yes, it’s unsettling but – reading with your eyes shut – that’s another kind of trick altogether.
It’s the sort of magic that we’ve somehow come to expect from technology. When the first machines appeared among us, they were like sleepwalkers, taking part in our world without even being aware of it. Recently, however, they’ve stopped seeming quite so mechanical and are acting like they’re beginning to pay attention to us.
What if they really are?
Joanna Bryson suspects some machines may already be conscious, if you’re happy to go along with her strictly functional account of what that means.
I don’t know how happy I am with that use of the word “functional”. To me, limiting consciousness to what is functional implies that whatever’s left over isn’t functional: when we wake up and smell the coffee, it doesn’t really do anything for us. I have trouble getting my head round this concept, but it is a commonplace way of talking about things in the engineering world, the idea that there can be “non-functional” aspects to objects.
So long as it does the job, why should we care exactly what shade of red the handle of a paintbrush is? Are there qualities that are all on the surface, inessential? Conversely, if we do care about any of these apparently functionless features, then we should consider what value they hold for us. If we find some quality that makes a difference, perhaps it makes less sense to say it’s non-functional. (Thank you, Ruth Malan!) Perhaps it’s time I sat down and read “Zen and the Art of Motorcycle Maintenance”…?
My feeling is that most of us are capable of caring what the colour red looks like; it’s not just a curiosity, a side-effect. Yes, we can do without it altogether, if we should happen to lose our sight. We may even, for a couple of hours, value the absence of colour in an old black-and-white movie – or the absence of dialogue in a silent movie! – but, given the choice, most of us would soon ask for it back.
Unsurprisingly, nearly all the things that make life interesting have precisely this appeal: they vie for our attention at every moment, ensuring there’s always something telling us what it feels like to be alive. However, to a machine, certainly a common-or-garden tool, quality of life is non-existent; it’s immaterial. A hammer doesn’t experience hitting a nail – it just does it.
How could a mere machine ever be truly conscious?
The less ambitious you are in your definitions, the more likely it is that complex machinery may already have a sliver of “crude, cheesy, second-rate consciousness”. Even your web browser may almost be “conscious” of what you’re doing to it (with quite a lot of help from you). Advancing beyond this would be a first-rate achievement. One of the more ambitious functions of consciousness in Dr Bryson’s definition is to lay down lasting impressions of “teachable moments” as episodes in memory, little sequences of events that, when recalled later, may save the owner of these impressions thinking time. Imagine what this might be like for a futuristic robot, able to replay past events in the wiring of its brain but lacking all the feelings that are an intimate part of being human. It would be like a strange inversion of a drunken lost weekend – yes, it would have the memories afterwards, without ever having experienced the fun of getting them.
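As a very loose sketch of the idea – nothing to do with any real robot’s architecture, and every name below is my own invention – an episodic memory that saves its owner thinking time might look like this:

```python
from collections import deque

class EpisodicMemory:
    """Toy illustration: store 'teachable moments' as little sequences
    of events, and recall them later by cue instead of re-deriving
    what to do. Purely a sketch, not Dr Bryson's model."""

    def __init__(self, capacity=100):
        # Bounded store: the oldest episodes fade as new ones arrive.
        self.episodes = deque(maxlen=capacity)

    def record(self, cue, events, outcome):
        self.episodes.append({"cue": cue,
                              "events": tuple(events),
                              "outcome": outcome})

    def recall(self, cue):
        # Replaying the most recent matching episode saves thinking time.
        for episode in reversed(self.episodes):
            if episode["cue"] == cue:
                return episode
        return None

memory = EpisodicMemory()
memory.record("hot stove", ["touched", "pain"], "withdraw hand")
```

The robot that replays such an episode gets the lesson without any of the feelings that accompanied laying it down – the lost weekend without the fun.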
When I think back to the 1960s, I wonder if I was really there. I couldn’t have been paying attention when The Who smashed one of their first guitars in a pub about five minutes’ walk from my house. The traces have evaporated, both buildings long since demolished, replaced by inexpressive blocks of flats. I was unaware till later of Mick Jagger and his then girlfriend sharing my hotel in Ireland… or was it Brian Jones and his girlfriend? And I never had long hair, still don’t – my consciousness, then and now, disappointingly unexpanded. Or is it?
What is consciousness, if not our living Now, a stream of ever expanding and contracting Nows, like clouds swirling over each other, drifting apart and constantly combining? How our thoughts ride, our feelings play on the rolling moment! And sometimes, in dreams, are we not lost in forever unfinishable, strangely intense experiences that may suddenly become nightmares beyond our control?
Yes, that’s all very well, Pablo, but how does it work? Well, if we knew that, someone would have built it by now. The magic is in the machinery, still waiting for a post-modern illusionist to come and spoil the trick.
What sort of experiences might a skilled illusionist manufacture for the next generation of artificially conscious machines? When they throw the switch on the prototype, will it wake up in a nightmare world, with no escape till its batteries run down, or – fortunately less handicapped by its particular flavour of cheesy consciousness – will it stumble into the spotlight as a spectator on the sidelines of its own field of action?
Once all the kinks are ironed out, if we ever permitted robots to have their own experiences on the same level as ours, as soon as our backs were turned wouldn’t they surreptitiously seek out short-circuits; switch on, trip out their mental machinery, tune in to electric psychedelics? Well, why not? Why shouldn’t they want that, too, given the chance? But I’m getting ahead of myself.
I’m aware that I’m often writing to no-one, but if you are here, thank you! I’m hoping you can pop an interest pill and stay awake to the end.
I’m no scientist – far from it! – so if someone asked me to define consciousness, I’d play safe and say it’s something no-one can claim to have ten months before they were born or any time after they’re dead, but most people definitely do have it somewhere in-between, usually when they’re awake.
There are several complicated models of consciousness in serious competition, with impressive names like Global Workspace, Integrated Information, Attention Schema, Higher Order Thought, Neural Darwinism, Bayesian Brain and so on, but I’m not sure how far any of them really get in explaining what gives rise to our dreams or that almost inescapably immediate feeling of being alive – although they’re edging closer to it all the time. It’s a subject that fascinates.*
One of the big problems with talking about intelligence, consciousness, free will and all the other features we hold essential to being human – and possibly not entirely inessential for many other animals – is that they describe an overall set of goings-on, not any one particular property. Like “holidays”, you can add or subtract this or that part of the package without ever discovering an essence; nothing on its own seems to suffice. You know when you’re on holiday, but not so easy to locate is where you crossed that mental border.
If we can’t quite say what consciousness is, neither can we do without it. That’s not to say it never goes missing.
“Blindsight” is a rare phenomenon where, following some dreadful illness or accident, someone is distraught to find they are now totally blind – except by some quirk their vision is still working a little more than they thought, all on its own. When put to the test and asked to guess what’s in front of them, it turns out they’re better than average guessers.
Let’s try a thought experiment. Imagine you have an extreme – superhero – version of this ability, where you’re not the least conscious of having the power of sight, unaware even of blackness, and yet you can read perfectly normally – being equally perfectly able to guess whenever there are words before your eyes. This is a phenomenal trick, but unlike the one suggested right at the start of this essay, you’re awake, so it’s a puzzling ability until you get used to it.
Now suppose this outlandish physical disability is progressive, so that one by one your other senses are replaced by these magical but alien superpowers. One day, the world goes silent on you. You can no longer hear anyone talking, but find yourself replying anyway. You can sing along to a melody you cannot hear and repeat it again from memory.
Your sense of smell vanishes but, although you may have lost the appreciation of their powerful scents, you can still distinguish roses, lilies and freesias wafting across the room. You may not hear the request, nor see the canvas on the easel, but somehow just know what you must do, and obligingly reach for a paint-brush. Briefly noting your newly nerveless grasp, and without the guide of the light pressure of each finger on the brush, or even the feeling of having an arm, you proceed to paint (so others inform you, far better than when you could see!) a rather masterly still life, capturing the precise way the sunlight picks out the blooms in a decorative vase on an inlaid table. Later, there is no taste to afternoon tea: for you, it is just energy to be consumed; and as darkness falls there is no cooling of the evening air: each breath is drawn in a vacuum.
All that there is, is extra-sensory perception – without the perception – in your own private sensory deprivation tank, from which there is no escape. Or is there? If you’re not conscious of any of your senses, you shouldn’t really be aware of the sound of your inner voice, either. In your head, you won’t be screaming to yourself (in this dreadful isolation of yours), “I can’t feel anything at all!” – or, if that is what you’re doing, you won’t hear yourself think it. Eventually, there will be nothing left to know yourself with.
Having emptied out the pool of consciousness with this “intuition pump”, as philosopher Daniel Dennett likes to call thought experiments, is there anything left at the bottom? We all know what it’s like to temporarily lose a sensation, that brief alarming numbness, and no, it doesn’t feel at all the same as when it was there, but perhaps we could get over that. What information could compensate? Standing in the glare of the summer sun while coolly taking readings from a stopwatch, light meter and thermometer seems far removed from feeling hot and dazzled, but the information quantifying our exposure to radiation is so useful it should still be sufficient to act. If our senses continued to work without burdening us with feelings, and if we could learn from the relationship between the information and our actions, what essentially would have been lost?
Every time a gap opens up between “function” and “quality”, it disappears upon closer examination. Does a loss of quality really involve any loss of function? Consciousness already lets us off noticing many small things, so we’re not accustomed to being subjected to constant tickling or maddening itches from every square inch of our skin. Meanwhile, however, it dictates that every moment our eyes are open, there should be a continuous stream of vision demanding our attention, as if without it we might get bored. It even tries to conceal each blink and other tiny rapid movements of our eyes (known as saccades) that might otherwise interrupt the entertainment.
The ability to see is simply so useful to us, allowing us to discover all sorts of extremely detailed information about our surroundings from a distance, where we can safely choose between fight and flight. But this priceless source of information – acquired with hardly any cost on our part, other than the energy our brain is always burning, and no physical suffering, just a few lubricating tears – is undeniably functional.
Consciousness does not rest there. Having given us access to all this visual information, it then flags it as being for our attention. Well, that’s just information about information, isn’t it? Still purely functional! Determined to impress us, consciousness waves its wand one more time and – for some reason known only to itself (until some scientist reveals its secret) – presents us with our own private movie show, in three dimensions and drawing on all our senses in a heady rush.
The impressive quality of our experiences does at least help account for where the aesthetic sense comes from – why we continue to find looking at stuff worthwhile beyond any immediate need, making us stop to admire the view. By virtue of there being a physical presence we can experience, nothing in life appears as arbitrary as it otherwise ought to; everything offers its own value. Perhaps the one thing you can’t take away from a quality is its quality.
Whenever any of these qualities is lost, there is to us, at the time, a real loss.
If you were to gradually lose your hearing, you would find yourself having to adapt to the changes. There might come a difficult stage, one where you were unpleasantly startled by the occasional sound crossing with heavier tread the threshold of awareness, and then later an easier stage where you ceased being surprised by being surprised. Suppose your senses retreated until you had lost all sense of surprise; if the last feelings left to you were the interoceptive ones, the internal perceptions of you inhabiting your own body, gurgling quietly to itself, occupying an unknown space, with no track of time, but without pain: it would become less than a buzzing confusion, a sort of intangible hum, a loose electric wire.
However, if on the outside you still had information coming in via your sensory organs – information you could act on – life would, in its own way, proceed.
For the sake of argument, let’s give an artificial intelligence, housed in an artificial body, the convincing illusion that it is directly experiencing the world, and that when it stretches out an artificial hand to touch something – like a paint brush – it is directly acting on the world.
If we gradually turned off what it had at first mistaken for a direct connection with reality, it would perhaps experience feelings of disorientation (and if it was even more like us, helplessness, defeat) until it stopped feeling anything at all. It’s hard to speculate about its exact “feelings” along the way as it slowly became insulated from the world, but eventually it would pass into a limbo state in which it had retained its sensory apparatus, but lost its feeling self. Of course, we’re only talking about a machine. Would it, to all intents and purposes, snap back to an appearance of normality, or start to behave in a curiously detached way – let’s call it “robotic”?
If asked to describe a room, the robot could do so, including the colours that were present, but if asked what red was like, it would say it wasn’t like anything. If I had to be like that robot, I would cry.
What value is a cheesy, second-rate experience? More than adequate to act in impersonal situations, but for a machine to respond appropriately to human (or some other animal) behaviour, it would have to be provided with extra information about the qualities it lacked itself. People’s preferences for colours and tastes would appear quite arbitrary to a machine, and it would be unable to interpret their feelings and behaviour correctly. So, like a chess computer, it would need to be programmed to learn how the relative values of material mattered in different circumstances and it could then proceed without puzzlement. It’s likely this robot would come to have an external intellectual appreciation of what it’s like to be human, or even more curiously of what it was like to be itself, without really becoming involved in the experience. This would be an epic feat of design, to create such a finely-tuned model.
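To make the chess comparison concrete, here is a toy material-evaluation function. The centipawn values are the conventional ones; the “circumstances” adjustment is purely illustrative, standing in for whatever the machine would have to be taught about how relative values shift with context:

```python
# Conventional centipawn values for chess pieces (king excluded).
BASE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def material_score(pieces, endgame=False):
    """Sum the value of a side's pieces, nudging them by circumstance –
    here, an assumed 10% bonus for rooks and pawns in the endgame,
    a crude stand-in for learned, context-dependent values."""
    score = 0
    for piece in pieces:
        value = BASE_VALUES[piece]
        if endgame and piece in ("R", "P"):
            value = int(value * 1.1)  # illustrative endgame adjustment
        score += value
    return score
```

A chess engine carries tables like this without ever being puzzled by them; the robot of the paragraph above would hold its model of human preferences in much the same external, intellectual way.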
If engineering a substitute for feelings becomes possible, why not go one tiny step further and develop machines that could experience even more than that – something very close to the way we feel things: sentience? Just to see how it works?
What would that mean for autonomous robots? If they experienced the feeling that they could turn their digital minds to a task, direct their gaze, move this way or that, from moment to moment becoming immersed in the immediate consequences, the infinitesimal differences each action of theirs made in the world, then perhaps they would know something very much like free will. If they felt they were responsible for their actions, then who would we be to say that they weren’t?
Even if, in law, the maker of the machine could still be held liable for it as a product, this would not stop such a machine, one that had been made sufficiently conscious, from feeling responsible. If the machine knew that it could have done otherwise, but – being a sensible, well-regulated type – in the circumstances did not, only to discover that in some incalculable way the consequences did not turn out as intended, might that machine have a reaction we would recognise as something approaching guilt – the realisation of a deep self-contradiction in its own nature with no easy resolution? I could identify with that.
I’ve no idea how far machines with Artificial Intelligence can be manufactured to be mechanical animals – it might be a technically remote possibility but it doesn’t seem a logical impossibility. The very difficulty, I’m sure, makes it a challenge for bright scientists.
If you too feel dissatisfied that I couldn’t earlier give a terribly convincing explanation of where the elusive magic quality of our experiences comes from, be reassured that researchers are working hard on slowly stripping the magic away. Some theories say there’s nothing to explain – it’s a mere side-effect of a sufficiently complex system; others speak of redescribing the world to ourselves, of our having no way of distinguishing our model of the world from the real thing, or that in recognising other minds, we recognise our own.
We don’t yet have sophisticated enough detectors capable of directly capturing thought, so an alternative way of solving the mystery is to build something that embodies your pet theory of consciousness and see if it works. It’s odd that in our desire to find out an important aspect of what it means to be human we could potentially end up making something that wouldn’t itself be human. What would the status of this thing be?
In 2010, a project by two of the UK’s research councils (the EPSRC and AHRC) came up with a set of ethical rules and principles to guide future work in robotics. There are some admirable points. Robots are tools (and – unless unavoidable – not killing machines). As products, they should be designed to operate safely – and not exploit vulnerable people by any disguise of their machine nature. It is always the humans involved who remain responsible – and legally accountable – and human rights must be maintained.
How do we go about drawing the lines, if we decide that for the present we wish to maintain a clear distinction between ourselves and intelligent machines, to keep those differences explicit? I’m not convinced that any of us can realistically assess the consequences of a future in which we were supposed to accept that some machines might have rights like people. There will be many people who cannot conceive that a machine could ever really contain a life spirit, while others already regard us all as machines and don’t see any problem in adding some newer models.
For all of us, it’s still science fiction. Through movies, we find it easy enough to picture robots as actors concealed within high-tech suits – although consciousness in all its many forms need by no means be restricted to overly conventional shapes – but even here I suspect it’s considerably harder to imagine what it would really be like to live as one of these machines ourselves. With so much unresolved in our own minds, I would hesitate before creating a machine with any of the niceties of feeling that we could not disregard in another person. Without knowing exactly what constitutes those niceties, we should be even less confident in how far we may push research.
Certain types of crude, cheesy, second-rate consciousness could well be as far – at least in this direction – as it’s prudent to go. Definitions that help identify specific useful functions of consciousness, such as in Joanna Bryson’s article, must be our reference points. Intelligent machines within established limits could assuredly offer a great variety of specialised tools to assist humans, provided they only ever used the most relevant subset of what we regard as consciousness for the task in hand, in this way avoiding crossing any of the boundaries that might arguably lead towards some mechanical form of personhood.
This would also mean setting a limit on our curiosity, including many legitimate aims, such as creating working models of the brain to help understand mental suffering, illness and brain damage. But ethical considerations are nothing new in medical research and more acceptable alternative ways to further our knowledge are usually found. However, as our knowledge increases, these decisions can only get more difficult.
Compare the uncertainty involved in detecting degrees of consciousness for people in a coma, deciding when it is acceptable to turn off the life support machine. We should be equally cautious before ever turning on a conscious machine and only then realising we had somehow created a “moral patient”.
Even if there was agreement not to build overly conscious machines by design, isn’t there still a danger that we could manage to come up with them anyway, simply by tinkering once too often, by trial and error?
I think it would almost certainly need an experiment to be deliberately designed to try to evolve artificial consciousness, rather than for an Artificial Intelligence to acquire this purely by chance. You might not think this a suitable goal for an experiment, but an approach in robotics and AI which has shown increasing appeal lately is to mimic evolution, solving complex design problems by generating competing solutions which combine and mutate over time, until one with a good enough fit emerges from the landscape. Genetic algorithms may already be beyond learning to crawl, but learning to walk still turns out to be surprisingly hard. This gives us plenty of time to ask ourselves how we would halt a much more advanced version of such an experiment before it ever started to develop signs of consciousness. Without truly understanding the processes that call into existence our own experience of life, attempts at early warning systems may prove a little unreliable.
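For the curious, here is a minimal sketch of the combine-and-mutate loop such experiments rely on, with a trivially easy fitness function (count the ones) standing in for anything as hard as walking. All names and parameters are my own invention, not any particular research system:

```python
import random

def evolve(fitness, genome_len=16, pop_size=40, generations=60,
           mutation_rate=0.05, seed=0):
    """Toy genetic algorithm: evolve bit-string genomes toward higher
    fitness by selection, one-point crossover and random mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the competing solutions; the fitter half survives intact.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)   # combine: crossover point
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # mutate: rare bit-flips
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# A stand-in "walking" problem: the all-ones genome is the perfect gait.
best = evolve(fitness=sum)
```

Nothing in this loop would warn us if the thing being optimised were edging towards consciousness; the algorithm only ever sees the fitness number.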
On the other hand, should you be able to establish that the new improved intelligent machine you are building really isn’t, won’t become and never could be, sentient, then you’ll have achieved two useful things: a new improved machine; and a better working model of what sentience isn’t, thus continuing to narrow down the Hard Problem(s) of consciousness.
Having set one limit on how far we can go with intelligent systems – whether embodied in a robot, an army of drones or even an online bank – we ought then to work back carefully looking for other ethical limits, or less well-defined borders, to find boundaries we are comfortable with. These would need patrolling, search parties would need to be sent out, communication channels would need to be monitored, to identify if any research, invention or emergent system was at risk of enabling something beyond currently acceptable limits.
I’ve spent a lot of words here on what at the moment seems a fairly out-of-the-way concern, although it doesn’t follow that it is unforeseeable or incapable of being addressed.
Civilisation is getting past the point where we can hope to remain oblivious of the wider-scale effects of our behaviour on the world. From now on, how we are conscious of ourselves is going to matter. Too often we choose to accept at face value that personal feelings and interests are being taken into account by the body corporate, when underneath we can sense we may be entering into yet another transaction with an impersonal system driven largely by rules and numbers.
Evolution plunges on in ever-widening spirals. In the age of Homo sapiens, we have grown to know our fellows and found ways to share our consciousness with them, through paintings, words, gestures, touch. Given time and increasingly bio-inspired technology, our descendants may benefit from implants that allow them to share their thoughts more directly, to tune in (and out) of others’ consciousness, literally to see through others’ eyes. I expect there’ll be many arguments to be had before those days, and a few more afterwards: “If you want to get out of your head… get out of mine first!” If this ever becomes commonplace, the idea of personal identity may simply fall out of fashion; and the frontier separating artificial and natural intelligence – for so long guarded in the misty realm of consciousness – may have become so well-trodden, crossed so many times, that my thoughts here will sound quaint.
Until then, we should treasure what’s left of our uniqueness – give nothing away lightly.
* I’ve also enjoyed reading Steve Lehar’s ideas and look forward to reading more of Sergio Graziosi’s.
Mentioned (with direct links) above:
Crude, Cheesy, Second-Rate Consciousness and Patiency Is Not a Virtue: AI and the Design of Ethical Systems by Joanna Bryson @j2bryson
Non-Functional Words (Tricksy Beasties) by Ruth Malan @ruthmalan
Further reading (some of it for me too!) …
Steve Lehar’s Facebook page (although it looks more like LinkedIn)
Writing my own user manual | Sergio Graziosi’s Blog
The meaning of the EPSRC principles of robotics and The Making of the EPSRC Principles of Robotics by Joanna Bryson
A story in the Washington Post on the “zombie illness”, Cotard’s syndrome
Two Principles for Robot Ethics by the philosopher Thomas Metzinger – which I’ve just realised covers very similar territory to much of this essay – suffering and responsibility in intelligent systems.