Are You Experienced?

15 Mar

Are you awake?

If you’re not, I can wait…

You shouldn’t be reading this. Don’t tell me you are.

People talking in their sleep, yes, it’s unsettling but – reading with your eyes shut – that’s another kind of trick altogether.

It’s the sort of magic that we’ve somehow come to expect from technology. When the first machines appeared among us, they were like sleepwalkers, taking part in our world without even being aware of it. Recently, however, they’ve stopped seeming quite so mechanical and are acting like they’re beginning to pay attention to us.

What if they really are?

Joanna Bryson suspects some machines may already be conscious, if you’re happy to go along with her strictly functional account of what that means.

I don’t know how happy I am with that use of the word “functional”. To me, limiting consciousness to what is functional implies that whatever’s left over isn’t functional: when we wake up and smell the coffee, it doesn’t really do anything for us. I have trouble getting my head round this concept, but it’s a commonplace way of talking in the engineering world: the idea that objects can have “non-functional” aspects.

So long as it does the job, why should we care exactly what shade of red the handle of a paintbrush is? Are there qualities that are all on the surface, inessential? Conversely, if we do care about any of these apparently functionless features, then we should consider what value they hold for us. If we find some quality that makes a difference, perhaps it makes less sense to say it’s non-functional. (Thank you, Ruth Malan!) Perhaps it’s time I sat down and read “Zen and the Art of Motorcycle Maintenance”…?

My feeling is that most of us are capable of caring what the colour red looks like; it’s not just a curiosity, a side-effect. Yes, we can do without it, even altogether, if we should happen to lose our sight. We may even, for a couple of hours, value the absence of colour in an old black-and-white movie – or the absence of dialogue in a silent movie! – but, given the choice, most of us would soon ask for it back.

Unsurprisingly, nearly all the things that make life interesting have precisely this appeal: they vie for our attention at every moment, ensuring there’s always something telling us what it feels like to be alive. However, to a machine, certainly a common or garden tool, quality of life is non-existent; it’s immaterial. A hammer doesn’t experience hitting a nail – it just does it.

How could a mere machine ever be truly conscious?

The less ambitious you are in your definitions, the more likely it is that complex machinery may already have a sliver of “crude, cheesy, second-rate consciousness”. Even your web browser may almost be “conscious” of what you’re doing to it (with quite a lot of help from you). Advancing beyond this would be a first-rate achievement. One of the more ambitious functions of consciousness in Dr Bryson’s definition is to lay down lasting impressions of “teachable moments” as episodes in memory, little sequences of events that, when recalled later, may save the owner of these impressions thinking time. Imagine what this might be like for a futuristic robot, able to replay past events in the wiring of its brain but lacking all the feelings that are an intimate part of being human. It would be like a strange inversion of a drunken lost weekend – yes, it would have the memories afterwards, but would never have experienced the fun of getting them.
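
To make the idea of “teachable moments” concrete, here is a minimal sketch – a toy of my own devising, not anything taken from Dr Bryson’s paper – of an agent that lays down episodes and replays them to save thinking time later:

    from dataclasses import dataclass, field

    @dataclass
    class Episode:
        situation: tuple   # a snapshot of what was observed
        action: str        # what the agent did about it
        outcome: float     # how well it went – a crude stand-in for feeling

    @dataclass
    class EpisodicAgent:
        memory: list = field(default_factory=list)

        def record(self, situation, action, outcome):
            """Lay down a lasting impression of a teachable moment."""
            self.memory.append(Episode(situation, action, outcome))

        def act(self, situation, deliberate):
            """Replay a remembered episode if one fits; otherwise think it through."""
            for ep in reversed(self.memory):    # most recent impressions first
                if ep.situation == situation and ep.outcome > 0:
                    return ep.action            # thinking time saved
            return deliberate(situation)        # the slow, effortful path

Notice that even this toy can’t quite do without the outcome score: strip that last trace of valence out and the agent has no way of telling which episodes are worth replaying.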

Are You Experienced

“1001 Albums You Must Hear Before You Die” The Sixties, Page 122

When I think back to the 1960s, I wonder if I was really there. I couldn’t have been paying attention when The Who smashed one of their first guitars in a pub about five minutes’ walk from my house. The traces have evaporated, both buildings long since demolished, replaced by inexpressive blocks of flats. I was unaware till later of Mick Jagger and his then girlfriend sharing my hotel in Ireland… or was it Brian Jones and his girlfriend? And I never had long hair, still don’t – my consciousness, then and now, disappointingly unexpanded. Or is it?

What is consciousness, if not our living Now, a stream of ever expanding and contracting Nows, like clouds swirling over each other, drifting apart and constantly combining? How our thoughts ride, our feelings play on the rolling moment! And sometimes, in dreams, are we not lost in forever unfinishable, strangely intense experiences that may suddenly become nightmares beyond our control?

Yes, that’s all very well, Pablo, but how does it work? Well, if we knew that, someone would have built it by now. The magic is in the machinery, still waiting for a post-modern illusionist to come and spoil the trick.

What sort of experiences might a skilled illusionist manufacture for the next generation of artificially conscious machines? When they throw the switch on the prototype, will it wake up in a nightmare world, with no escape till its batteries run down, or – fortunately less handicapped by its particular flavour of cheesy consciousness – will it stumble into the spotlight as a spectator on the sidelines of its own field of action?

Once all the kinks are ironed out, if we ever permitted robots to have their own experiences on the same level as ours, as soon as our backs were turned wouldn’t they surreptitiously seek out short-circuits; switch on, trip out their mental machinery, tune in to electric psychedelics? Well, why not? Why shouldn’t they want that, too, given the chance? But I’m getting ahead of myself.

I’m aware that I’m often writing to no-one, but if you are here, thank you! I’m hoping you can pop an interest pill and stay awake to the end.

I’m no scientist – far from it! – so if someone asked me to define consciousness, I’d play safe and say it’s something no-one can claim to have ten months before they were born or any time after they’re dead, but most people definitely do have it somewhere in-between, usually when they’re awake.

There are several complicated models of consciousness in serious competition, with impressive names like Global Workspace, Integrated Information, Attention Schema, Higher Order Thought, Neural Darwinism, Bayesian Brain and so on, but I’m not sure how far any of them really get in explaining what gives rise to our dreams or that almost inescapably immediate feeling of being alive – although they’re edging closer to it all the time. It’s a subject that fascinates.*

One of the big problems with talking about intelligence, consciousness, free will and all the other features we hold essential to being human – and possibly not entirely inessential for many other animals – is that they describe an overall set of goings-on, not any one particular property. Like “holidays”, you can add or subtract this or that part of the package without ever discovering an essence; nothing on its own seems to suffice. You know when you’re on holiday, but not so easy to locate is where you crossed that mental border.

If we can’t quite say what consciousness is, neither can we do without it. That’s not to say it never goes missing.

“Blindsight” is a rare phenomenon where, following some dreadful illness or accident, someone is distraught to find they are now totally blind – except that, by some quirk, their vision is still working a little better than they thought, all on its own. When put to the test and asked to guess what’s in front of them, it turns out they guess far better than chance.

Let’s try a thought experiment. Imagine you have an extreme – superhero – version of this ability, where you’re not the least conscious of having the power of sight, unaware even of blackness, and yet you can read perfectly normally – being equally perfectly able to guess whenever there are words before your eyes. This is a phenomenal trick, but unlike the one suggested right at the start of this essay, you’re awake, so this is a puzzling ability for you until you get used to it.

Now suppose this outlandish physical disability is progressive, so that one by one your other senses are replaced by these magical but alien superpowers. One day, the world goes silent on you. You can no longer hear anyone talking, but find yourself replying anyway. You can sing along to a melody you cannot hear and repeat it again from memory.

Your sense of smell vanishes but, although you may have lost the appreciation of their powerful scents, you can still distinguish roses, lilies and freesias wafting across the room. You may not hear the request, nor see the canvas on the easel, but somehow just know what you must do, and obligingly reach for a paint-brush. Briefly noting your newly nerveless grasp, without the guide of the light pressure of each finger on the brush, or even the feeling of having an arm, you proceed to paint (so others inform you, far better than when you could see!) a rather masterly still life, capturing the precise way the sunlight picks out the blooms in a decorative vase on an inlaid table. Later, there is no taste to afternoon tea: for you, it is just energy to be consumed; and as darkness falls there is no cooling of the evening air: each breath is drawn in a vacuum.

All that there is, is extra-sensory perception – without the perception – in your own private sensory deprivation tank, from which there is no escape. Or is there? If you’re not conscious of any of your senses, you shouldn’t really be aware of the sound of your inner voice, either. In your head, you won’t be screaming to yourself (in this dreadful isolation of yours), “I can’t feel anything at all!” – or, if that is what you’re doing, you won’t hear yourself think it. Eventually, there will be nothing left to know yourself with.

Having emptied out the pool of consciousness with this “intuition pump”, as philosopher Daniel Dennett likes to call thought experiments, is there anything left at the bottom? We all know what it’s like to temporarily lose a sensation, that brief alarming numbness, and no, it doesn’t feel at all the same as when it was there, but perhaps we could get over that. What information could compensate? Standing in the glare of the summer sun while coolly taking readings from a stopwatch, light meter and thermometer seems far removed from feeling hot and dazzled, but the information quantifying our exposure to radiation is so useful it should still be sufficient to act. If our senses continued to work without burdening us with feelings, and if we could learn from the relationship between the information and our actions, what essentially would have been lost?
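
As a minimal illustration of that last question – acting on the quantified exposure alone, with nothing anywhere in the loop that feels hot or dazzled – here is a sketch with entirely invented thresholds:

    def should_seek_shade(lux, celsius, minutes_exposed):
        """Decide, from instrument readings alone, whether to get out of the sun."""
        return lux > 100_000 or celsius > 35 or minutes_exposed > 30

Functionally, nothing is missing: the readings are sufficient to act on. What this version leaves out is only what it was like to stand there.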

Every time a gap opens up between “function” and “quality”, it disappears upon closer examination. Does a loss of quality really involve any loss of function? Consciousness already lets us off noticing many small things, so we’re not accustomed to being subjected to constant tickling or maddening itches from every square inch of our skin. Meanwhile, however, it dictates that every moment our eyes are open, there should be a continuous stream of vision demanding our attention, as if without it we might get bored. It even tries to conceal each blink and other tiny rapid movements of our eyes (known as saccades) that might otherwise interrupt the entertainment.

The ability to see is simply so useful to us, allowing us to discover all sorts of extremely detailed information about our surroundings from a distance, where we can safely choose between fight and flight. But this priceless source of information – acquired with hardly any cost on our part, other than the energy our brain is always burning, and no physical suffering, just a few lubricating tears – is undeniably functional.

Consciousness does not rest there. Having given us access to all this visual information, it then flags it as being for our attention. Well, that’s just information about information, isn’t it? Still purely functional! Determined to impress us, consciousness waves its wand one more time and – for some reason known only to itself (until some scientist reveals its secret) – presents us with our own private movie show, in three dimensions and drawing on all our senses in a heady rush.

The impressive quality of our experiences does at least help account for where the aesthetic sense comes from – why we continue to find looking at stuff worthwhile beyond any immediate need, making us stop to admire the view. By virtue of there being a physical presence we can experience, nothing in life appears as arbitrary as it otherwise ought to; everything offers its own value. Perhaps the one thing you can’t take away from a quality is its quality.

Whenever any of these qualities is lost, there is to us, at the time, a real loss.

If you were to gradually lose your hearing, you would find yourself having to adapt to the changes. There might come a difficult stage, one where you were unpleasantly startled by the occasional sound crossing with heavier tread the threshold of awareness, and then later an easier stage where you ceased being surprised by being surprised. Suppose your senses retreated until you had lost all sense of surprise; if the last feelings left to you were the interoceptive ones, the internal perceptions of you inhabiting your own body, gurgling quietly to itself, occupying an unknown space, keeping no track of time, but without pain: it would become less than a buzzing confusion, a sort of intangible hum, a loose electric wire.

However, if on the outside you still had information coming in via your sensory organs – information you could act on – life would, in its own way, proceed.

For the sake of argument, let’s give an artificial intelligence, housed in an artificial body, the convincing illusion that it is directly experiencing the world, and that when it stretches out an artificial hand to touch something – like a paint brush – it is directly acting on the world.

If we gradually turned off what it had at first mistaken for a direct connection with reality, it would perhaps experience feelings of disorientation (and if it was even more like us, helplessness, defeat) until it stopped feeling anything at all. It’s hard to speculate about its exact “feelings” along the way as it slowly became insulated from the world, but eventually it would pass into a limbo state in which it had retained its sensory apparatus, but lost its feeling self. Of course, we’re only talking about a machine. Would it, to all intents and purposes, snap back to an appearance of normality, or start to behave in a curiously detached way – let’s call it “robotic”?

If asked to describe a room, the robot could do so, including the colours that were present, but if asked what red was like, it would say it wasn’t like anything. If I had to be like that robot, I would cry.

What value is a cheesy, second-rate experience? More than adequate for acting in impersonal situations, but for a machine to respond appropriately to human (or some other animal) behaviour, it would have to be provided with extra information about the qualities it lacked itself. People’s preferences for colours and tastes would appear quite arbitrary to a machine, and it would be unable to interpret their feelings and behaviour correctly. So, like a chess computer, it would need to be programmed to learn how the relative values of material matter in different circumstances, and it could then proceed without puzzlement. It’s likely this robot would come to have an external, intellectual appreciation of what it’s like to be human – or, even more curiously, of what it is like to be itself – without really becoming involved in the experience. This would be an epic feat of design, to create such a finely-tuned model.
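
For instance, something like the sketch below – deliberately crude, with invented numbers, and not how any particular engine actually works – is the flavour of programming meant: fixed relative values for material, adjusted by circumstance:

    # Classic textbook piece values, measured in pawns; the endgame
    # adjustment is invented purely to illustrate "different circumstances".
    PIECE_VALUES = {"pawn": 1.0, "knight": 3.0, "bishop": 3.0, "rook": 5.0, "queen": 9.0}

    def material_score(pieces, phase):
        """Sum material from White's point of view.

        pieces: iterable of (piece_name, colour) pairs, e.g. ("rook", "black")
        phase:  "middlegame" or "endgame"
        """
        score = 0.0
        for name, colour in pieces:
            value = PIECE_VALUES[name]
            if phase == "endgame" and name == "pawn":
                value = 1.5   # pawns gain value as promotion draws nearer
            score += value if colour == "white" else -value
        return score

The point of the analogy is that the numbers do all the work: the machine needs no puzzlement about why a queen is worth nine pawns, any more than our robot would need to feel why people prefer one colour to another.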

If engineering a substitute for feelings becomes possible, why not go one tiny step further and develop machines that could experience even more than that – something very close to the way we feel things: sentience? Just to see how it works?

What would that mean for autonomous robots? If they experienced the feeling that they could turn their digital minds to a task, direct their gaze, move this way or that, from moment to moment becoming immersed in the immediate consequences, the infinitesimal differences each action of theirs made in the world, then perhaps they would know something very much like free will. If they felt they were responsible for their actions, then who would we be to say that they weren’t?

Even if, in law, the maker of the machine could still be held liable for it as a product, this would not stop such a machine, one that had been made sufficiently conscious, from feeling responsible. If the machine knew that it could have done otherwise, but – being a sensible, well-regulated type – in the circumstances did not, only to discover that in some incalculable way the consequences did not turn out as intended, might that machine have a reaction we would recognise as something approaching guilt – the realisation of a deep self-contradiction in its own nature with no easy resolution? I could identify with that.

I’ve no idea how far artificially intelligent machines could be manufactured as mechanical animals – it might be a technically remote possibility, but it doesn’t seem a logical impossibility. The very difficulty, I’m sure, makes it a challenge for bright scientists.

If you too feel dissatisfied that I couldn’t earlier give a terribly convincing explanation of where the elusive magic quality of our experiences comes from, be reassured that researchers are working hard on slowly stripping the magic away. Some theories say there’s nothing to explain – it’s a mere side-effect of a sufficiently complex system; others speak of redescribing the world to ourselves, of our having no way of distinguishing our model of the world from the real thing, or that in recognising other minds, we recognise our own.

We don’t yet have detectors sophisticated enough to capture thought directly, so an alternative way of solving the mystery is to build something that embodies your pet theory of consciousness and see if it works. It’s odd that, in our desire to find out an important aspect of what it means to be human, we could potentially end up making something that wouldn’t itself be human. What would the status of this thing be?

In 2010, a project by two of the UK’s research councils (the EPSRC and AHRC) came up with a set of ethical rules and principles to guide future work in robotics. There are some admirable points. Robots are tools (and – unless unavoidable – not killing machines). As products, they should be designed to operate safely – and not exploit vulnerable people by any disguise of their machine nature. It is always the humans involved who remain responsible – and legally accountable – and human rights must be maintained.

How do we go about drawing the lines, if we decide that for the present we wish to maintain a clear distinction between ourselves and intelligent machines, to keep those differences explicit? I’m not convinced that any of us can realistically assess the consequences of a future in which we were supposed to accept that some machines might have rights like people. There will be many people who cannot conceive that a machine could ever really contain a life spirit, while some would regard us all as machines already and don’t see what the problem is in adding some newer models.

For all of us, it’s still science fiction. Through movies, we find it easy enough to picture robots as actors concealed within high-tech suits – although consciousness in all its many forms need by no means be restricted to such conventional shapes – but even here I suspect it’s considerably harder to imagine what it would really be like to live as one of these machines ourselves. With so much unresolved in our own minds, I would hesitate before creating a machine with any of the niceties of feeling that we could not disregard in another person. Without knowing exactly what constitutes those niceties, we should be even less confident in how far we may push research.

Certain types of crude, cheesy, second-rate consciousness could well be as far – at least in this direction – as it’s prudent to go. Definitions that help identify specific useful functions of consciousness, such as in Joanna Bryson’s article, must be our reference points. Intelligent machines within established limits could assuredly offer a great variety of specialised tools to assist humans, provided they only ever used the most relevant subset of what we regard as consciousness for the task in hand, in this way avoiding crossing any of the boundaries that might arguably lead towards some mechanical form of personhood.

This would also mean setting a limit on our curiosity, including many legitimate aims, such as creating working models of the brain to help understand mental suffering, illness and brain damage. But ethical considerations are nothing new in medical research and more acceptable alternative ways to further our knowledge are usually found. However, as our knowledge increases, these decisions can only get more difficult.

Compare the uncertainty involved in detecting degrees of consciousness in people in a coma, deciding when it is acceptable to turn off the life support machine. We should be equally cautious before ever turning on a conscious machine and only then realising we had somehow created a “moral patient”.

Even if there was agreement not to build overly conscious machines by design, isn’t there still a danger that we could manage to come up with them anyway, simply by tinkering once too often, by trial and error?

I think it would almost certainly need an experiment deliberately designed to try to evolve artificial consciousness, rather than for an Artificial Intelligence to acquire it purely by chance. You might not think this a suitable goal for an experiment, but an approach in robotics and AI which has shown increasing appeal lately is to mimic evolution, solving complex design problems by generating competing solutions which combine and mutate over time, until one with a good enough fit emerges from the landscape. Genetic algorithms may already be past learning to crawl, but it turns out to be surprisingly hard to learn to walk. This gives us plenty of time to ask ourselves how we would halt a much more advanced version of such an experiment before it ever started to develop signs of consciousness. Without truly understanding the processes that call into existence our own experience of life, attempts at early warning systems may prove a little unreliable.
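
In outline, such an experiment is built on a loop like the one below – a bare-bones genetic algorithm with a placeholder fitness function, just to show where “combine”, “mutate” and “good enough fit” sit:

    import random

    def evolve(fitness, genome_len=16, pop_size=50, generations=200, good_enough=0.95):
        """Evolve a population of real-valued genomes until one fits well enough."""
        pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) >= good_enough:
                break                              # a good enough fit has emerged
            parents = pop[: pop_size // 2]         # the fitter half survive
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]          # competing solutions combine
                if random.random() < 0.1:          # ...and occasionally mutate
                    child[random.randrange(genome_len)] = random.random()
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    # A toy fitness: how close the genome's average is to 0.5.
    best = evolve(lambda g: 1 - abs(sum(g) / len(g) - 0.5))

The uncomfortable question is what the equivalent of the “good enough” test would be for consciousness, and whether we would recognise the moment to break out of the loop.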

On the other hand, should you be able to establish that the new improved intelligent machine you are building really isn’t, won’t become and never could be, sentient, then you’ll have achieved two useful things: a new improved machine; and a better working model of what sentience isn’t, thus continuing to narrow down the Hard Problem(s) of consciousness.

Having set one limit on how far we can go with intelligent systems – whether embodied in a robot, an army of drones or even an online bank – we ought then to work back carefully looking for other ethical limits, or less-well defined borders, to find boundaries we are comfortable with. These would need patrolling, search parties would need to be sent out, communication channels would need to be monitored, to identify if any research, invention or emergent system was at risk of enabling something beyond currently acceptable limits.

I’ve spent a lot of words here on what at the moment seems a fairly out-of-the-way concern, although that doesn’t make it unforeseeable, or incapable of being addressed.

Civilisation is getting past the point where we can hope to remain oblivious of the wider-scale effects of our behaviour on the world. From now on, how we are conscious of ourselves is going to matter. Too often we choose to accept at face value that personal feelings and interests are being taken into account by the body corporate, when underneath we can sense we may be entering into yet another transaction with an impersonal system driven largely by rules and numbers.

Evolution plunges on in ever-widening spirals. In the age of Homo sapiens, we have grown to know our fellows and found ways to share our consciousness with them, through paintings, words, gestures, touch. Given time and increasingly bio-inspired technology, our descendants may benefit from implants that allow them to share their thoughts more directly, to tune in (and out) of others’ consciousness, literally to see through others’ eyes. I expect there’ll be many arguments to be had before those days, and a few more afterwards: “If you want to get out of your head… get out of mine first!” If this ever becomes commonplace, the idea of personal identity may simply fall out of fashion; and the frontier separating artificial and natural intelligence – for so long guarded in the misty realm of consciousness – may have become so well-trodden, crossed so many times, that my thoughts here will sound quaint.

Until then, we should treasure what’s left of our uniqueness – give nothing away lightly.

* I’ve also enjoyed reading Steve Lehar’s ideas and look forward to reading more of Sergio Graziosi’s.

Mentioned (with direct links) above:

“Crude, Cheesy, Second-Rate Consciousness” and “Patiency Is Not a Virtue: AI and the Design of Ethical Systems” by Joanna Bryson @j2bryson

Non-Functional Words (Tricksy Beasties) by Ruth Malan @ruthmalan

Further reading (some of it for me too!) …

Steve Lehar’s Facebook page (although it looks more like LinkedIn)
Writing my own user manual | Sergio Graziosi’s Blog
The meaning of the EPSRC principles of robotics and The Making of the EPSRC Principles of Robotics by Joanna Bryson
A story in the Washington Post on the “zombie illness”, Cotard’s syndrome
Two Principles for Robot Ethics by the philosopher Thomas Metzinger – which I’ve just realised covers very similar territory to much of this essay – suffering and responsibility in intelligent systems.

10 Responses to “Are You Experienced?”

  1. John Flood March 15, 2016 at 11:57 am #

    So is it qualia you’re interested in or something more structured? Have we reached the limits of language on this?

    • pabloredux March 19, 2016 at 9:22 pm #

      The reason it took me so long to write was a lot to do with the amount of time I spent staring into space, wondering what the view would look like without qualia, what the missing ingredient was, and how its presence or absence might affect our behaviour.
      I do a lot of staring into space anyway, so I don’t know how much the extra time spent furthered the argument. As you say, words fail and one feature of qualia is ineffability.
      I wanted to suggest that the minimal phenomenology of free will – really the everyday position – was the one that mattered in making sense of why we feel responsible for our actions, although there are lots of other factors affecting the assignment of responsibility.
      In retrospect, I partly missed the point of the pop psychedelia digression, which was that we choose to play with our subjective experiences and the difference can be seen in our behaviour. But then you can say that we choose to make the information we receive less predictable and reduce our motor control, and so on back, without locating the experience.
      It’s better to read Thomas Metzinger than me if you really want the philosophical ins and outs, although, having looked at a few pages – what he says is obviously far better informed and much more analytical – you can never cover everything!
      I think it’s important not to limit suffering just to pain – as soon as you have some sort of internal feeling, you have – what do they call it? – valence, positives and negatives.
      And then I was trying to use this to say that was the one place you would have to stop – should you so choose – in the otherwise seemingly endless progress towards full AI, whatever that means!
      Not the only place to slow down and take a bit more time to consider, but beyond that, everything changes.
      That’s a lot more words now!!!

  2. Alan Winfield March 18, 2016 at 8:48 pm #

    Great essay. When I read “…and a better working model of what sentience isn’t” I was reminded that an old friend and erstwhile researcher in machine consciousness is also convinced that building non-conscious machines is the way to explore consciousness.

  3. Sergio Graziosi March 20, 2016 at 7:10 pm #

    Pablo,
    thanks again for the mention, and for the really enjoyable, thought-provoking essay. I will start with some meta-comments, in part because we don’t know each other beyond a handful of virtual interactions, in part to frame where some of the comments (criticism?) that follow come from.

    First meta-comment is general: why do you write very long essays without signposting the aims and take-home messages? It may boil down to my (science-centric) training, but it was hard for me to go around the current essay, separate the topics and subsequently reconstruct the general structure. That’s hard work (considering), and only after doing it can I have half a hope that I’ve understood what it is that you’re trying to say. Personally, I like to declare my aim upfront, and also atomise my aims as much as I can: it helps me navigate my own thoughts and hopefully helps the reader to follow them as well. Now, the above isn’t criticism, it’s a real question: why do you proceed the way you do? You may have a preference, and/or well developed reasons, so I might learn something from your answer. This comment/question frames the main comments I’ll make below: due to my difficulty in distilling your take-home message, it’s entirely possible that I’ll be spectacularly missing your point!
    Second meta-comment is a suggestion: if you care about having an audience, you may want to consider using the facilities of Research Blogging. They will generate a nicely formatted bibliography and, while saving you the effort, will also generate some traffic, presumably of people interested in the articles you cite. A double-win for no cost, hence my suggestion (I don’t have anything to do with them, I just happen to use the system and I find it very useful).

    I’m done with the meta-stuff, and will now comment on the actual subject.

    First, what I agree with, enthusiastically:

    Civilisation is getting past the point where we can hope to remain oblivious of the wider-scale effects of our behaviour on the world. From now on, how we are conscious of ourselves is going to matter.

    Yes, this is the reason why I keep investing so much of my spare energy on the subject. Considering the Fermi Paradox, the ecological challenges ahead, and our inability to stop fighting each other, it seems to me that the “we’re fucked” (see link) explanation is worryingly concrete. If we don’t understand this consciousness business fast enough, there is a too high to ignore possibility that we’ll bring about our own extinction. Yeah, far fetched, I know.

    Second, more praise:

    One of the big problems with talking about intelligence, consciousness, free will and all the other features we hold essential to being human – and possibly not entirely inessential for many other animals – is that they describe an overall set of goings-on, not any one particular property. Like “holidays”, you can add or subtract this or that part of the package without ever discovering an essence; nothing on its own seems to suffice. You know when you’re on holiday, but not so easy to locate is where you crossed that mental border.

    A round of applause for you. Indeed, the fuzziness of these concepts makes tackling them a minefield. Augmented by people saying “you haven’t produced a precise enough definition, therefore your argument is indiscernible”. Answering: “Hang on, we want to understand what this stuff is. Without such understanding, how are we supposed to produce precise definitions?” doesn’t always work as much as I’d like. I may re-use the quote above, sooner or later.

    Third: what I think you’re saying, followed with precise quote of stuff I wish to discuss further. I think you are saying, in a nutshell:
    – “we need to self-limit our effort of creating self-conscious machines” (also very much agreed)
    – “the risk of serendipitously creating self-conscious machines is very low” (disagreed)
    – “even creating deliberately non-conscious AIs will help, we’ll be finding out more about what consciousness is not” (not sure, but I probably disagree).
    So, what I’ll try to do below is explain why I disagree with some of your claims, and hopefully clarify how much I disagree.

    One of the more ambitious functions of consciousness in Dr Bryson’s definition is to lay down lasting impressions of “teachable moments” as episodes in memory, little sequences of events that, when recalled later, may save the owner of these impressions thinking time. Imagine what this might be like for a futuristic robot, able to replay past events in the wiring of their brain but lacking all the feelings that are an intimate part of being human.

    I haven’t read Bryson’s work (oh dear, apologies), but I agree with the ambitious function. More, I strongly believe that once you have such an ability, you are very close to being conscious, if not fully conscious already. Thus, I disagree with the second part of the quote above: for reasons I’ll explain below, I don’t think you can replay teachable moments from memory without having associated inner feelings. However, since we (humans) can’t agree on the above, if I’m right it’s still possible that we will create conscious machines without realising it. Bryson’s “ambition” testifies that people are likely to try, but at the same time, some may positively maintain that succeeding doesn’t imply that we would have created a conscious AI.

    To proceed:

    Let’s try a thought experiment. Imagine you have an extreme – superhero – version of this ability, where you’re not the least conscious of having the power of sight, unaware even of blackness, and yet you can read perfectly normally

    and

    If our senses continued to work without burdening us with feelings, and if we could learn from the relationship between the information and our actions, what essentially would have been lost?

    Good question! But I don’t think it can be answered. Instead, I’d stop and think about the possibility you are suggesting. Are we sure it’s possible to learn from the relationship between sensory information and our actions without ascribing a subjective feeling to the information we are learning from? My own answer is a very uncompromising “no, you can’t have one without the other”. If you can replay past experiences in the light of new information, so as to see if the two (new and old) snippets, once combined, produce new valuable lessons, what you are doing is giving a value to stuff that happened (both old and new); you are essentially answering the question “how do I feel about this and that?”. Phrasing it like this is a shorthand trick to make the point: you can’t not feel anything about this and that if you do answer that question.

    More, slightly disjointed, comments:

    the information quantifying our exposure to radiation is so useful it should still be sufficient to act

    Aren’t you contradicting yourself? If we have total blindsight, we can’t look at the exposure to radiation measuring gadget and act on the basis of what it reports – not consciously/deliberately. On the other hand, having our blindsight superpowers we would still seek some shade, because we’ve magically retained our ability to act appropriately.

    an alternative way of solving the mystery is to build something that embodies your pet theory of consciousness and see if it works.

    I think we agree we shouldn’t try first and think afterwards. However, if I may be even more of a nitpicker, I may point out that succeeding will not count as success for all the epiphenomenalists out there: they will simply say that you’ve created a credible zombie and that consciousness is still mysterious. Besides being a pedantic bore, this is relevant because it means some people out there really have the potential of creating conscious AIs without recognising them as such.

    So, overall:

    Even if there was agreement not to build overly conscious machines by design, isn’t there still a danger that we could manage to come up with them anyway, simply by tinkering once too often, by trial and error?

    I think it would almost certainly need an experiment to be deliberately designed to try to evolve artificial consciousness, rather than for an Artificial Intelligence to acquire this purely by chance.

    Perhaps now you see why I am not so sure. There is no agreement that we shouldn’t deliberately try to build conscious AIs, and even if there was, because we can’t agree on what consciousness is, we can potentially create it without realising it. Ugh. Not too much optimism to offer, I’m afraid.
    Anyway, please don’t think anything I’ve written above means I didn’t like the essay. The very fact I took the time to write this very long answer should convince you of the opposite. Thanks for sharing!

    • pabloredux March 31, 2016 at 10:25 pm #

      Now Sergio, that reply was nearly as long as an essay! I’m lucky to have a reader willing to work so hard. I hope you – and other stray readers – were able to feel involved in a flow of argument, with eddies of questioning. In order to talk about not feeling anything, surely you have to feel a little something first?
      Wading back upstream…
      Personally, I’d like as many as possible of those who have a say in the development of all kinds of autonomous or semi-autonomous systems to agree not to allow their systems to replace, bypass or mislead humanity, and for their compliance to be well regulated. Without direction, we’re just drifting wherever the money takes us; and I suspect that “without truly understanding the processes, … attempts at early warning may prove … unreliable”.
      (There was a section, before it got too long and distracting, about other forms of intelligence with varying degrees of “consciousness” – swarms, cities, corporations – which I cut and may try to turn into a separate essay.)
      That bit about stopwatches, light meters and thermometers was supposed to get the reader considering the utility of available sources of physical information in the absence of qualia. After the super-blindsight example, I was continuing to try out variations on the same theme, to see how much was missing from information processing that’s only provided by the condition “what it’s like to see red”. What content is there just in the presentation of the information? (One for another day!)
      My understanding of robots and IT generally is it’s perfectly possible to get them to act in all sorts of ways very much like a human, based on strong enough information and smart enough algorithms, in the absence of the mysterious “qualia”. Quite a lot can be achieved by top-down design, without trying to recreate the conditions under which we developed into ourselves over millions of years.
      AlphaGo recently, using a “policy network” and a “value network”, sought out winning Go positions by design, despite lacking any intrinsic game-playing motivation, learning by playing innumerable games against itself. I wouldn’t be surprised if it becomes possible to solve ever more complex problems currently specialised in by animals, using similar information to what animals use, except for our purposes rather than for the dubious purpose of sustaining (machine) life. The energy cost will need to keep coming down, however.
      Like lots of people, my concern is that we persist in misusing or overusing the planet’s resources, choosing not to deal with inequality, warfare, epidemics and global warming until it has all caused sufficient unhappiness to worry us into doing something about it.
      I didn’t mean to give the impression I was referring to Joanna Bryson’s personal ambitions – her paper was describing ways in which some aspects of consciousness, important for animals in action selection, might already exist in a more limited form in machines. She called these processes “cheesy” in contrast to “the full rich human pageantry of verbal narrative with qualia, meta-reasoning and self-concept.” However, the inclusion of episodic memory, if sufficiently developed, could still take you a lot nearer what we normally understand to be consciousness even if feelings were left out of the equation. This might leave a machine in the odd position I was trying to describe.
      What is the difference between our minds and intelligent machines? What would it feel like to be a robot? What would a robot feel, if it was a bit more like us? What if? Perhaps that one’s better left to fiction.

  4. Drew March 21, 2016 at 9:45 pm #

    One of these days I will follow the extent of your argument, but in the meantime here’s something to add to your reading list http://www.goodreads.com/book/show/278280.River_of_Gods
    As well as being a good read (well, I thought so) it touches on the consequences of how we might react to artificial intelligences should they arise.

  5. Drew March 21, 2016 at 10:15 pm #

    I posted a reply, but it has gone. Perhaps my computer deemed it unworthy and edited it away.

    • pabloredux March 21, 2016 at 10:55 pm #

      I’ve been having the same problem with malevolent … no, mischievous – no, browsers with an uncertain User Experience… and leaving comments on other people’s blogs. Have I commented three times or not at all?
