DeepMind: Google’s artificial intelligence is capable of learning like a baby
Yoshua Bengio, one of the fathers of machine learning, often says that artificial intelligence (AI) is actually very stupid. The recent recipient of the Princess of Asturias Award for Scientific Research is referring to the fact that these systems can perform extremely complex calculations in milliseconds yet are incapable of grasping common-sense notions, such as that water is wet or that a dropped object will fall to the ground. Getting machines to learn so-called intuitive physics, the basic notions about how the world works that humans master from a very early age, is one of the outstanding challenges for specialists.
A team of researchers at DeepMind, the Google company focused on AI, has tried to do so in an innovative way. They created a computational model to see whether a deep learning system (an advanced AI technique based on so-called neural networks) could gain an understanding of certain basic physical principles by processing visual animations; that is, by observing, just as people do. Their conclusion, published today in the journal Nature, is striking: it is easier for the machine to achieve this if it is taught to learn the way babies do, by studying its environment.
It is not clear what mechanism allows human beings to learn intuitive physics so quickly. Developmental psychology, one of the disciplines that has studied the phenomenon most closely, maintains that the fundamental principles of physics are internalized through the observation of objects and their interactions. The DeepMind team, led by Luis S. Piloto, started from these theories and developed a machine learning system, which they named Plato, that was shown videos of balls falling to the ground and of balls disappearing from view when passing behind another object and then reappearing. They focused on exploring whether their system could learn certain basic principles, such as solidity (objects do not pass through one another) and continuity (objects do not spontaneously appear and disappear).
Studies show that a three-month-old baby reacts with surprise when seeing a situation that defies this logic: for example, if we enclose a coin in our fist and, when we open it, it is not there because we have hidden it in our sleeve; or if a ball passes behind a box and does not reappear on the other side when it should.
Piloto and his colleagues showed the tool 28 hours of video containing various examples of balls in motion. “We use a system to discriminate that certain pixels belong to a ball and others to a box. The next step is to say that the pixels belonging to the ball form a group,” explains Piloto by videoconference. After training, the machine was able to predict approximately where and when a ball would reappear after passing behind a box.
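The pixel-grouping step Piloto describes can be pictured with a minimal sketch (this is an illustration of the general idea, not DeepMind’s actual code; the function name and toy frame are ours):

```python
import numpy as np

def pixels_to_object_masks(segmentation):
    """Turn a per-pixel segmentation map into one boolean mask per object,
    so that all pixels belonging to the same ball or box form a group.
    segmentation: 2D array where each pixel holds an object id (0 = background)."""
    masks = {}
    for obj_id in np.unique(segmentation):
        if obj_id == 0:  # skip background pixels
            continue
        masks[obj_id] = segmentation == obj_id
    return masks

# Toy 4x4 frame: object 1 stands in for a "ball", object 2 for a "box".
frame = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
])
masks = pixels_to_object_masks(frame)
print(sorted(masks))        # the two objects found: [1, 2]
print(int(masks[1].sum()))  # 4 pixels belong to the "ball"
```

From such object groups, frame after frame, a model can then track where each object is and predict where it should be next.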
The results indicate that the visual animations are enough for the system to acquire some of this basic knowledge, but not all of what babies accumulate. In other words, computational models must initially include some basic knowledge about how objects behave, but consolidating these notions requires observation.
And, like the baby surprised by an easy magic trick, Plato also expresses confusion when an object does not follow elementary physical rules. The model was able to extrapolate the expectations learned about the behavior of some objects to new situations and artifacts that did not appear in the images it was shown. “In our experiment, surprise is the difference between the pixel color-intensity values the system predicts and the values actually observed,” explains Piloto. And that is what the tool did quite successfully: shading the space where it expected the object in question to reappear when its intuition told it that this would happen.
The DeepMind research team says it is not working on specific applications for its system. The researchers see it rather as work that can serve as a reference for other scientists studying the same problem, though they acknowledge it could have potential in robotics and autonomous driving systems.
From AI to developmental psychology
What implications does this work have for the field of AI? “We do not know how machine learning systems represent the world. What we have done is tell Plato that it has to understand the world as a series of objects that relate to each other,” points out Peter Battaglia, a DeepMind scientist and co-author of the article.
That is the approach taken by developmental psychology. “The work of Piloto and his colleagues is expanding the frontiers of what everyday experience can or cannot contribute in terms of intelligence,” say Susan Hespos and Apoorva Shivaram, researchers in the Department of Psychology at Northwestern University (Evanston). “This is an important effort because it assesses what types of perceptual experience are needed to explain the knowledge that is evident in a three-month-old baby,” add the experts, who are part of the team of scientists that reviewed the Piloto-led work before its publication.
According to Hespos, another contribution of the paper led by Piloto is to highlight the great complexity of a process that is tremendously sophisticated yet taken for granted: learning the basic rules by which objects operate. “The article formalizes some of the steps involved in doing something as simple as predicting where a ball going down a ramp will go. That is not to say that the computational model works exactly like the mind. Its value lies in the process of defining the steps, and the tests and more tests needed to achieve something similar to human behavior,” Hespos emphasizes.
The Northwestern University professor believes AI still has much to contribute to psychology. It can serve, for example, as a testing ground for experiments that are impossible today. “There are questions I have always asked myself, such as whether babies’ expectations about the behavior of objects would be different if they were raised in a place with zero gravity,” says Hespos. “Perhaps the model from Piloto and his team can help us see how learning might change in that environment.”
You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter, or sign up here to receive our weekly newsletter.