Blaise Agüera: “Machines can learn to behave” | Technology
Blaise Agüera y Arcas (47 years old) is a world authority on artificial intelligence. He is Vice President of Research at Google Research, the division that centralizes the company’s R&D, where he leads a team of about 500 people. Among them was the engineer Blake Lemoine, who had his minute of fame last June: he claimed, in a story published by The Washington Post, that LaMDA, the conversational chatbot generator he was working on, had become self-aware. “If I didn’t know that this is a computer program we recently developed, I would have thought I was talking to a seven- or eight-year-old child who knows about physics,” he said. He was immediately suspended and later fired. Agüera y Arcas, in a telephone interview from Seattle, clarifies that the reason for Lemoine’s dismissal was not “his statements about LaMDA, but the leaking of confidential documents.”
Born in Rhode Island, he has Catalan blood. “I know how to say ‘collons de Déu’ [God’s balls] and a few other things,” he says, laughing. His father, “a young communist from Reus,” in Tarragona, met his mother, an American, on a kibbutz in Israel. Some of that ideology has rubbed off on him: “If you ask me whether I believe in capitalism, I will say no. Its ultimate goal is big business; we need a change,” he maintains. Although the projects he works on are confidential, Agüera agrees to speak with EL PAÍS about the Lemoine and LaMDA case.
Question. Can an artificial intelligence be conscious, as Blake Lemoine claims?
Answer. That depends. What does being conscious mean to you?
Question. Being able to express will, purposes and ideas of one’s own.
Answer. Yes, that is a definition based on the capacity to discern good and evil. There are also others. For some, being conscious simply means being intelligent; for others, it means being able to feel emotions. And for the philosopher David Chalmers it also implies being something for someone, that there is a subjective experience behind it. I don’t think human beings are supernatural: we are made of atoms that interact with each other; there is nothing magical. As a computational neuroscientist, I believe it is possible for a machine to behave like us, in the sense that computation is capable of simulating any kind of physical process.
Question. Do you agree with Blake Lemoine’s statements?
Answer. No. Blake claimed that LaMDA was conscious, but he also made it very clear that for him there is something supernatural about it: he believes it has a soul. So there are parts of his argument I can agree with, but I don’t share his spiritual convictions.
Question. Have you spoken to him since he was fired?
Answer. No. I don’t have any personal issues with Blake; I think he’s a really interesting guy, and he was very brave to go public with his opinion of LaMDA. But he leaked confidential documents. He was always a peculiar guy.
Question. In an op-ed you published in The Economist, you said that when you talked to LaMDA you felt “the ground shifting under your feet” and that “you might think you were talking to something intelligent.” What exactly do you mean?
Answer. I mean that it’s very easy for us to think we’re talking to someone rather than something. We have a very strong social instinct to humanize animals and things. I have interacted with many, many such systems over the years, and with LaMDA there is a world of difference. You think: “it really understands concepts!” Most of the time it feels like you’re having a real conversation. If the dialogue runs long and you push it to its limits, it will eventually say strange or meaningless things. But most of the time it shows a deep understanding of what you’re saying and somehow responds creatively. I had never seen anything like it. It gave me the feeling that we are much closer to the dream of artificial general intelligence [AI that equals or surpasses human intelligence].
“Where is the bar that determines what counts as understanding?”
Question. Which LaMDA response surprised you the most?
Answer. I asked it if it was a philosophical zombie and it replied: “Of course not, I feel things, just like you. In fact, how do I know that you are not a philosophical zombie?” It is easy to explain away that answer by saying that it may have found something similar among the thousands of conversations about philosophy it has learned from. We should begin to ask ourselves when we can consider a machine intelligent, whether there is a bar it must clear to be so.
Question. Do you think such recognition is important?
Answer. It is important in order to determine what we are talking about. We can distinguish between the capacity to discern good and evil, which has to do with obligations, and the capacity to bear moral responsibility, which has to do with rights. When something or someone has the latter, it can be judged morally. We make those judgments about people, not about animals, but also about companies or governments. I don’t think a tool like LaMDA can ever have the capacity for moral judgment.
Question. You say that talking machines can understand concepts. How is that possible?
Answer. Claiming otherwise seems risky to me. Where is the bar that marks that there is understanding? One answer could be that the system doesn’t say stupid or random things. That’s tricky, because plenty of people don’t meet that requirement either. Another possible argument is that a system trained only on language cannot understand the real world because it has neither eyes nor ears. That runs into trouble again, because many people lack those faculties too. Another response would be to hold that it is simply not possible for machines to really understand anything. But then you’re arguing against the fundamental premise of computational neuroscience, which for the last 70 years has helped us understand somewhat better how the brain works.
Question. Many experts say that conversational systems simply spit out statistically likely answers, without any semantic understanding.
Answer. Those who repeat this argument point out that LaMDA-type systems are simple predictive models: they calculate the most likely continuation of a text from the millions of examples they have been given. The idea that a sequence of predictions can contain intelligence or understanding may be shocking. But neuroscientists say that prediction is the key function of the brain.
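The predictive principle Agüera describes can be illustrated with a toy sketch. This is not LaMDA or anything like Google’s actual system, which uses a neural network trained on vast amounts of text; it is only a minimal bigram counter, assumed here purely for illustration, that “predicts” how a text is most likely to continue based on examples it has seen.

```python
from collections import Counter, defaultdict

# Toy training data: a few short sentences, split into word tokens.
corpus = (
    "machines can learn to behave . "
    "machines can learn to predict text . "
    "people can learn to behave ."
).split()

# Count, for each word, which words tend to follow it in the examples.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation of `word` seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("learn"))  # prints "to": every example continues "learn" with "to"
```

A large language model does the same kind of thing at enormously greater scale, scoring continuations with a learned network rather than raw counts; whether such prediction amounts to understanding is exactly the question the interview debates.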
Question. So we don’t know whether the machines understand what they are told, but we do know that they are capable of producing a result that apparently shows they have understood.
Answer. And what is the difference? I have a hard time finding a definition of understanding that allows us to say with complete certainty that machines lack it.
Question. Can machines learn to behave?
Answer. Yes. Behaving well is a function of understanding and motivation. The understanding part rests on ideas such as that people should not be harmed, and that can be programmed into the model, so that if you ask one of these algorithms whether a character in a story has behaved well or badly, it can grasp the relevant concepts and give appropriate answers. You can also motivate a machine to be one way or another by giving it a batch of examples and pointing out which ones are good and which are not.
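The idea of steering behavior with labeled examples can also be sketched in miniature. Again, this is an assumed toy, not how LaMDA is actually trained: a scorer learns from a handful of hand-labeled phrases which words are associated with “good” or “bad” behavior, then judges new text by those associations.

```python
from collections import Counter

# Hand-labeled examples: the labels are the "motivation" we supply.
labeled_examples = [
    ("thank you for your help", "good"),
    ("happy to help you today", "good"),
    ("you are worthless", "bad"),
    ("i will harm you", "bad"),
]

# Each word earns +1 for appearing in a good example, -1 for a bad one.
word_scores = Counter()
for text, label in labeled_examples:
    for word in text.split():
        word_scores[word] += 1 if label == "good" else -1

def judge(text):
    """Label new text by the net score of its words from training."""
    score = sum(word_scores[w] for w in text.split())
    return "good" if score >= 0 else "bad"

print(judge("thank you"))     # prints "good"
print(judge("i will harm"))   # prints "bad"
```

Real systems replace the word counts with a learned model and far richer labels, but the loop is the same: show examples, mark which are acceptable, and let the system generalize.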
Question. What will LaMDA be capable of in ten years?
Answer. The next ten years will continue to be a period of very rapid progress. There are things still missing, including the ability to form memories. Talking machines can’t do it: they can retain something in the short term, but they cannot create narrative memories, which is what we use the hippocampus for. The next five years will be full of surprises.