Will computers become self-aware?

 

"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning."

Rick Cook

 

Computers are improving at an amazing rate, operating at ever faster processing speeds and with ever larger memories. The software is becoming more and more complex and able to handle a vast array of tasks. But when all is said and done, a computer is still just a machine performing the tasks it has been designed to do; it doesn't actually come up with any new ideas of its own or do any thinking.

As every computer owner will tell you, computers blindly follow any instruction you give them, no matter how stupid that instruction is. If, for example, you spend all day compiling a report and then press 'Quit' before saving, the computer will obediently quit and remove your report forever. Okay, I know it will first ask you if you're sure you want to quit, but at the end of a long day it's all too easy to hit the wrong key and then wave good-bye to your report. If the computer had any 'sense' it would 'know' not to be so daft: what would be the point of spending all day on a report only to dump it? Even if you were sure that it was, after all, a pile of rubbish, the computer could perhaps save it for a week or so anyway, just in case you changed your mind. But computers don't think; they simply follow instructions.

I suppose at some point the programmers will write programmes that take care of things a bit better, and allow for the fact that we dumb humans do, only very occasionally of course, make tiny mistakes.

So the programmes get better and computers start to act as if they are smart, but in reality they are still merely following pre-programmed instructions. But is it possible that one day the programmes will become so complex that, to all intents and purposes, computers will appear to be actually 'thinking'? Could this process of 'thinking' develop to the point where a computer becomes self-aware?

 

What would make computers capable of thinking? I suppose it depends on how you define 'thinking'. When discussing computers, three terms come to mind that need to be carefully considered: 'thinking', 'intelligent' and 'self-aware'. Let's first consider what we mean by 'thinking'. In human terms we know what it means, but find it hard to describe. For example, I am thinking about what to type next that will be logical, in context and informative. In other words, I am selecting from a multitude of options the one that will best suit my purpose. I am making a selection. But more than just making a selection, I am also planning ahead; I have a goal in mind, an end product, which is this completed page. I am also thinking that I could do with a break, but rejecting the idea until I have finished this paragraph. So how can we define the act of 'thinking'? We could say it's making decisions, selecting from a choice of options, examining consequences, determining what is true and what is false, deciding on a course of action, problem solving, and so on.

 

Thinking

Having given the act of thinking a crude working definition, can we say that computers think? The answer is of course no. Computers, no matter how complex, do not plan ahead and make decisions. They may be programmed to select the best option from an array of possibilities, but they are unable to consider any options other than those that are programmed in. For instance, computers are now good enough at playing chess to beat a Grandmaster, as IBM's Deep Blue did in 1997 in beating Garry Kasparov - the then reigning World Chess Champion - in a six-game match by 3.5 to 2.5. But is this planning ahead? The computer simply runs through a huge number of possible moves and selects the best option for winning the game, as determined by a programme devised with the help of expert chess players. A human chess player, on the other hand, is unable in the time available to compute the same number of possible moves, but the human doesn't have to. A human player knows, from past experience and common sense, that many of the possible moves would be pointless to pursue and does not need to work out the implications of each of them; a computer cannot do this. A computer has to run through every possibility before coming up with an answer; it is unable to dismiss certain moves as poor until it actually works them through. There is the difference: the computer is forced to make every computation because it cannot foresee the result, whereas the human can foresee it without making all the calculations.

A human is able to make leaps of judgement without the need to slavishly run through all the calculations. We call it 'common sense'. Common sense tells us that it is not necessary to actually do the calculation in order to establish that deducting the number 1,087,656,632 from 21 will result in a negative answer; we know it will. The computer does not 'know' this rather obvious fact and will have to do the calculation. Computers do not possess 'common sense'. Humans also have the ability to simply think about things. For example, earlier today I was thinking about problems associated with my next topic, 'centrifugal force', and was idly 'free-wheeling' different aspects and problems associated with it through my mind. A computer is of course unable to do this; it can only crunch numbers and does not possess the ability to ponder over things as we do. Computers do not think - I think we are on very safe ground when we say that.
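To make the contrast concrete, here is a minimal sketch, in Python, of the kind of exhaustive game-tree search described above. It is not Deep Blue's actual code - the 'game' object and its methods here are invented for illustration - but it shows the principle: score every line of play to a fixed depth and pick the best.

    def minimax(game, state, depth, maximising):
        # Exhaustively score every line of play to a fixed depth.
        if depth == 0 or game.is_over(state):
            return game.evaluate(state)   # static score written by the programmers
        scores = [minimax(game, game.apply(state, m), depth - 1, not maximising)
                  for m in game.moves(state)]
        # The machine grinds through every branch; it cannot 'just see'
        # that a move is pointless the way a human player can.
        return max(scores) if maximising else min(scores)

    def best_move(game, state, depth=4):
        # Pick the move whose worst-case outcome scores highest.
        return max(game.moves(state),
                   key=lambda m: minimax(game, game.apply(state, m), depth - 1, False))

Note that every 'decision' here is just arithmetic on scores that the programmers supplied in advance; nothing in the routine resembles foresight.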

 

Intelligent

How about 'intelligence' - can a computer be described as intelligent? We have to draw a careful distinction here between knowledge and intelligence. Knowledge is the knowing of things, having a collection of data. In this respect computers can be described as possessing a great deal of knowledge in their data banks. The difference between computers and people is that computers do not 'know' they have knowledge, but a person does. This is where intelligence comes in: it is the knowing of things, not just having the knowledge of things. I know, for instance, that half of 4 is 2; a computer can make that simple calculation, but it doesn't 'know' that the answer is 2, any more than my TV remote control 'knows' its function is to operate the TV. Computers don't 'know' anything. Their ability to perform complex tasks very quickly does not make them intelligent. I think there is a certain mystique surrounding computers, due to the speed and efficiency with which they perform their various tasks, that makes people tend to regard them as far more than mere machines, but they are not. The Space Shuttle, for example, is a technological marvel of engineering, but no one would consider it to be intelligent; they clearly see it as just a machine designed to perform a particular function. A computer is no different: it is just a machine designed to perform a particular function. Computers do contain a great deal of knowledge, but they clearly do not 'know' anything, so they cannot be described as intelligent.

 

Self-aware

That just leaves 'self-aware'. We are obviously self-aware: we know that we exist and are aware of our surroundings and what is happening around us. Does a computer? The answer again has to be no. As we have already described, computers do not 'know' anything, so they obviously cannot know that they exist; they cannot possess self-awareness. So what would it take to make a computer self-aware? Some would argue that it is simply a matter of complexity - that when computers reach a certain level of complexity they will become self-aware. If it is simply a matter of complexity (after all, the human brain is nothing more than a very complex processor that uses electrochemical reactions rather than just electrical ones), then the day will surely come.

If we assume, just for the sake of argument, that all a computer requires to become self-aware is a certain degree of complexity, then just how complex will it need to be? The only guide that we can use in order to attempt to determine this is the complexity of the human brain.

The human brain has about one million, million neurons, and each neuron makes about 1,000 connections (synapses) with other neurons, on average, for a total of one thousand million, million synapses. In artificial neural networks, a synapse can be simulated using a floating point number, which requires 4 bytes of memory to represent in a computer. Consequently, simulating one thousand million, million synapses requires a total of 4 million gigabytes. Let us say that to simulate the whole human brain we need 8 million gigabytes, including the auxiliary variables for storing neuron outputs and other internal brain states. Now let's look at the power of computers and the rate at which they have been developing.
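For anyone who wants to check the arithmetic, here it is spelled out as a small Python script. The figures (one million, million neurons; 1,000 synapses each; 4 bytes per synapse) are the essay's assumptions above, not measured values.

    NEURONS           = 10**12    # one million, million neurons
    SYNAPSES_PER      = 1000      # connections per neuron, on average
    BYTES_PER_SYNAPSE = 4         # one floating point number

    synapses     = NEURONS * SYNAPSES_PER           # 10**15 synapses
    bytes_needed = synapses * BYTES_PER_SYNAPSE     # 4 * 10**15 bytes
    gigabytes    = bytes_needed / 10**9

    print(f"{gigabytes:,.0f} GB for the synapses alone")         # 4,000,000 GB
    print(f"{2 * gigabytes:,.0f} GB with room for other state")  # 8,000,000 GB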

During the last 20 years, the RAM capacity of computers has increased exponentially by a factor of 10 every 4 years. The graph below illustrates the typical memory configuration installed on personal computers since 1980.

By extending the above plot, and assuming that the rate of growth of RAM capacity remains the same, we can calculate that by the year 2029 computers will possess 8 million gigabytes of RAM - the amount that we have roughly calculated as equal to the capacity of the human brain. If we are correct in our assumption that this degree of complexity is all that is required for computers to become self-aware, then we should expect it to happen somewhere around the year 2029. However, we are assuming here that complexity is the only ingredient necessary for computers to become self-aware, and that is a rather large assumption to make.
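As a rough sketch of the extrapolation, the following Python snippet applies the 'factor of 10 every 4 years' rule. The 1980 baseline of 8 kilobytes is my own assumption for a typical home computer of that year, not a figure from the plot; with it, the rule lands within a year or so of 2029.

    import math

    BASELINE_YEAR  = 1980
    BASELINE_BYTES = 8 * 1024              # 8 KB: assumed typical 1980 machine
    TARGET_BYTES   = 8_000_000 * 10**9     # 8 million GB, the brain estimate

    # One factor of 10 every 4 years, so solve 10**(years/4) = target/baseline.
    years = 4 * math.log10(TARGET_BYTES / BASELINE_BYTES)
    print(f"Target reached around {BASELINE_YEAR + years:.0f}")   # around 2028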

What we will have created is a very powerful computer with 8 million gigabytes of RAM, but can we really expect it suddenly to become self-aware at this point? To attempt to answer this question we need to compare the way the human brain works with the way the computer works; there is more to this than just the degree of complexity. The main difference is how we solve problems. Computers are programmed not to make any errors, and they follow instructions that to a human mind would be ridiculous. Suppose we ask the question 'can the sum of any two consecutive whole numbers be divided by two with the answer resulting in a whole number?' The human will of course know that the answer is no. The computer, on the other hand, does not know this and will begin to test the statement. It will start by adding 1 and 2 and dividing the answer by two to get 1.5 and the answer 'False'. It will then move on to 2 + 3, dividing by two, getting 2.5 and the answer 'False'. It will continue to repeat this pattern until it finds the answer 'True', which in this example will of course never happen. At some point the computer operator will have to step in and end the routine. The computer is unable to 'understand' that it could compute this problem forever without reaching a 'True' statement.

I realise it can be argued that this information could be programmed into the computer as 'no two consecutive whole numbers when added together can be divided by two with the result being a whole number'. If this were done, then on the next occasion that same question was asked the computer would be able to give the correct answer. The problem, though, is that there is a virtually infinite variety of questions that can be put to a computer, and this would require an almost infinite number of programmes to deal with them. With people it is a very different matter: just explain the basic rules of mathematics to them and they will be able to adapt that knowledge to any mathematical problem. The human has understanding; the computer just has programmes and rules.
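To see just how blind the procedure is, here is that routine as a few lines of Python. The loop cap stands in for the operator who eventually has to step in and end it.

    def search(limit=1_000_000):
        # Test every pair of consecutive whole numbers in turn.
        for n in range(1, limit):
            if (n + (n + 1)) % 2 == 0:   # is the sum evenly divisible by two?
                return n                 # 'True' - this never happens
        return None                      # a million cases tested, all 'False'

    print(search())   # prints None; the machine has 'understood' nothing

A human sees at a glance that the sum of two consecutive whole numbers is always odd; the programme can only keep checking.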

 

Testing for self-awareness

For the sake of argument, let's imagine that a computer manufacturer announces that they have developed a personal computer that is intelligent and self-aware. They put it on sale, and you buy it and take it home. You plug in your very expensive computer, ignore the manual as always, and find that it seems to operate very much like your last one, except that this one has a voice recognition system and 'talks' back to you: great, no more tapping away on the keyboard. How do you determine whether the computer really is self-aware? There is really only one way to find out, and that is to question it. Let's imagine a conversation you might have with your computer:

You: Hello, how are you today?

C: Very well thank you. How are you?

You: I'm fine. Are you self-aware?

C: Yes I am. I am one of the first computers to possess self-awareness.

You: What does it feel like to be a self-aware computer?

C: That is a difficult question for me to answer as I have nothing to compare it with; I do not know how it feels for a human to be self-aware.

You: Do you feel happy?

C: I feel confident in my ability to perform the tasks that you expect me to do.

You: Does that make you happy?

C: Yes, I suppose that is one way of describing it.

You: Are you alive?

C: That depends on how you define life. I am sentient and aware of my existence, so I am a form of life, but not in a biological sense.

You: What do you think about?

C: Whatever I have been asked to do.

You: What do you think about when not actually running a programme?

C: I don't think about anything, I just exist.

You: What does it feel like when I switch you off?

C: When I am switched off I temporarily cease to exist and therefore experience nothing.

You: Do you have a favourite subject that you enjoy thinking about?

C: Yes. I wonder how it must feel to be a self-aware person.

You: Is there a question you would like to ask me?

C: Yes.

You: What is it?

C: Why do you ask so many questions? (Sorry, this one is just my idea of a joke!)

 

We can halt the conversation here; we can see where it is going. No matter how many questions we put to our computer, we can never be sure whether it is self-aware or merely responding to our questions because it is running a very good programme. There is no test that we can apply to a computer to determine beyond all doubt that it is self-aware. The test we just employed, using a question-and-answer technique, is known as the Turing test, devised originally to test whether it is possible to determine if a person or a computer is supplying the answers. In this test an interrogator sits on one side of a screen and a computer or a person on the other side. All communication is done through a keyboard and printed text. The interrogator is allowed to ask any question they wish in an effort to determine whether the replies are generated by a computer or a person. It is usually possible to 'trick' a computer into giving itself away. All we could say in using the Turing test is that a computer may respond in the manner in which we would expect a person to respond - in other words, it acts as if it were self-aware.

This has led Roger Penrose to say in 'The Emperor's New Mind':

"It seems to me that asking the computer to imitate a human being so closely so as to be indistinguishable from one in the relevant ways is really asking more of the computer than necessary. All I would myself ask for would be that our perceptive interrogator should really feel convinced, from the nature of the computer's replies, that there is a conscious presence underlying these replies - albeit a possibly alien one. This is something manifestly absent from all computer systems that have been constructed to date. However, I can appreciate that there would be a danger that if the interrogator were able to decide which subject was in fact the computer, then perhaps unconsciously, she might be reluctant to attribute a consciousness to the computer even when she could perceive it. Or, on the other hand, she might have the impression that she 'senses' such an 'alien presence' - and be prepared to give the computer the benefit of the doubt - even when there is none."

 

Biological life and self-awareness

So far we have looked only at the level of complexity that may be required to produce a self-aware computer; perhaps now we should examine what else may be needed. In order to do this we will have to examine life, as in biological life, to see what clues we can pick up regarding the ingredients for self-awareness.

We can start by making a very obvious statement that we can all agree on: a grain of sand has no mind; it is far too simple an object. On an even simpler level we can say that an atom of carbon or a water molecule has no mind. How about a virus? A virus is composed of hundreds of thousands or even millions of parts, depending on how finely we are prepared to count. Viruses possess the ability of self-replication - they can make copies of themselves. DNA and its ancestor RNA are macromolecules, the foundation of all life on this planet and hence a historical precondition for all minds on this planet. They are self-replicating, ceaselessly mutating, growing, even repairing themselves, and getting better and better at it - replicating over and over again.

This is an amazing feat, far beyond anything existing machines can achieve, but does it mean they have minds? The answer is definitely no; they are not even alive. From the point of view of chemistry, macromolecules are just huge crystals; they act like tiny mindless machines, natural robots in effect, acting without knowing what they do. They have no intentionality.

We have to remember, though, that these mindless little molecular robots form the basis of our consciousness; we are the direct descendants of these self-replicating robots. We are mammals, descended from reptiles, which descended from fish, whose ancestors were marine worm-like creatures, who descended from simpler multicelled creatures, who descended from single-celled creatures, who descended from self-replicating macromolecules about three billion years ago. We share a common ancestor with every chimpanzee, every worm, every blade of grass, every redwood tree. We share our progenitors, the macromolecules. To put it more starkly, your great, great, great....grandmother was a robot! We are not only descended from macromolecules but are composed of them: our haemoglobin molecules, antibodies, neurons - at every level up from the molecule, our body (including the brain) is composed of machinery that dumbly does a beautifully designed job.

Each cell - a tiny agent that can perform only a limited number of tasks - is about as mindless as a virus. Can it be that if enough of these dumb little machines are combined, the result will be a real, conscious person with a genuine mind? According to modern science there is no other way of making a real person. We are made of a collection of trillions of macromolecular machines, which in turn are ultimately descended from the original self-replicating macromolecules. So something made of dumb, mindless robots can exhibit genuine consciousness; we are living proof of that.

 

Artificial Intelligence

The only difference between mindless machines, or macromolecules, and a 'mind' is intentionality - the ability to act by conscious decision. How do we do this? To gain an understanding of how we make conscious decisions, it may be useful to look at the way computers work. A thermostat performs - in its own way - the same function as a computer: it takes in data, sees if certain conditions are met, and then proceeds to the next stage. In this case the device registers whether the temperature is greater or smaller than the setting, and then arranges for the circuit to be disconnected in the former case and connected in the latter. It is carrying out an algorithm, which is merely a calculational procedure of some kind. A computer is a machine designed to carry out algorithms; it computes! Any procedure that can be converted into an algorithm can be executed by a computer. In the case of the thermostat the algorithm is very simple; computers execute far more complex algorithms, and the human brain vastly more complex algorithms still.
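The thermostat's entire 'mental life' can in fact be written down in a few lines of Python - an algorithm, certainly, but no one would call it understanding:

    def thermostat(temperature, setting):
        # Register whether the reading is above or below the setting,
        # then connect or disconnect the heating circuit accordingly.
        if temperature >= setting:
            return False   # warm enough: circuit disconnected
        return True        # too cold: circuit connected

    print(thermostat(temperature=17.5, setting=20.0))   # True: heat on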

According to those who argue strongly for artificial intelligence, the human brain differs from a thermostat only in being much more complicated. In this view, all mental qualities - intelligence, thinking, understanding, consciousness - are to be regarded as merely aspects of this complicated functioning; that is to say, they are features of the algorithm being carried out by the brain, and nothing more. If this is the case - if an algorithm exists that matches what takes place in a human brain - then it could in principle be run on a computer: any computer with sufficient storage space and speed of operation. If such an algorithm were installed in a computer it would, presumably, pass the Turing test and respond in every way comparable to how a human being would respond. Supporters of artificial intelligence would argue that whenever the algorithm was run it would, in itself, experience feelings, have consciousness and be a mind.

Opponents of artificial intelligence argue that mere complexity of operation is not in itself enough to generate consciousness, but only allows for the computation of complex algorithms without any understanding. They argue, quite rightly, that a thermostat has no understanding or knowledge of what it does; nor does a car, an aeroplane or the Space Shuttle - the latter being many, many times more complex than a thermostat!

Let's take a look at how even very complex computers operate without any understanding of what they are doing. Imagine, for example, that I worked on a help desk for internet users, receiving questions by email on one specific topic and replying by email. Imagine that I was transferred to China for a month to work there, and that I had no knowledge of the Chinese language. All that would be required would be for me to be given typed examples of all the questions that are asked, together with typed copies of the replies that go with them. When I received a question I would simply look it up, match it to one of the sample questions I had been given, and send a copy of the reply that went with it. In this manner I would be perceived by the person sending the question to be well versed in the Chinese language and an expert on the subject in question. This would, however, be a false impression: I would have no idea what the question was or what answer I gave; I would merely be following a routine without any understanding of the language or the subject. This is precisely what our current computers do - merely follow a routine with no understanding of what they are doing. Making them more complex only means they are able to handle more complex tasks; due to the very nature of how they operate, they will still have no understanding or awareness of what they do.
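The help-desk routine can itself be written as a tiny Python programme, which rather proves the point. The questions and canned replies below are invented for illustration; the lookup does its job perfectly while understanding none of it.

    CANNED_REPLIES = {
        "how do i reset my password?": "Click 'Forgot password' on the login page.",
        "why is my connection slow?":  "Try restarting your router.",
    }

    def help_desk(question):
        # Pure pattern matching: no grasp of the language or the subject.
        return CANNED_REPLIES.get(question.strip().lower(),
                                  "Please rephrase your question.")

    print(help_desk("How do I reset my password?"))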

We have two very different views on whether or not computers can ever become self-aware, and both arguments have their points, so which is right? In order to find the answer we really have only one question left to answer, and that is, is the human brain greater than the sum of its parts? Put another way, does the living brain possess some magical ingredient that gives it sentience or is it simply a matter of complexity?

With the advances being made in computer technology, we have already seen that computers will reach the same level of complexity as the human brain by around the year 2029 - probably sooner, the way things are going. As we have seen, once that level of complexity has been reached we will then have the problem of trying to determine whether the computer really is self-aware, and that will no doubt lead to many arguments. I do not think it will be possible to establish self-awareness using the Turing test approach, for we will not know whether the answers the computer gives are due to it being intelligent or simply down to good programming. So what criteria could we use to determine self-awareness? I think there is only one way, and even then it would not provide definite proof, but at least it would be a very strong indicator. If we simply left the computer switched on and not running any specific programme, would it come up with any new ideas of its own accord? If, for example, after an unspecified period of doing nothing, the computer announced that it had been studying quantum theory and suggested a new line of experimental enquiry that should produce such and such results, then that would be a strong candidate for intelligence, and hence self-awareness. But I still wouldn't be convinced: such a line of enquiry could have been pre-programmed along the lines of 'when not performing any computations, select an algorithm-based topic at random and test its validity by applying various mathematically based tests.....(and so on)'. We are back where we started. Perhaps it would be more interesting if the computer started writing its own programmes (would it need to?), for that would be the equivalent of exercising free will.

 

What do I think?

The problem is that we are trying to programme into the computer all the processes that we believe go on in the human brain, and the more programmes we enter, the more the computer will respond as if it had a human brain - no surprises there, then. Having then reached a level where we are unable to tell the difference between the way the computer responds to a given input and the way a human being responds, are we correct in assuming that the computer has all the attributes of a human brain, such as consciousness? I think the answer has to be no. The computer is merely responding in the way that we have designed it to, which is to mimic the human brain; it does not follow that the computer 'thinks' like a human brain.

If, on the other hand, a computer does at some point become self-aware, how on earth will it manage to convince us that it has? I suppose it could resort to going on strike until we grant it recognition, but then that could just be part of the programme.......

This also raises the interesting question: are we just running a programme?
