Nerea Luis: “Massive automation is a danger that you have to know how to handle” | Technology
An expert in artificial intelligence and robotics, Nerea Luis (Madrid, 31 years old) navigates the most complex moment in her discipline with enthusiasm, but also unease. On one hand, there is the accelerated development of fascinating applications, for example in medicine or natural language processing; on the other, the “catastrophic cases” in which systems make mistakes or hide behind opacity, due to poor regulation. For this reason, beyond her work at the company Sngular, she is deeply involved in her role as a technology popularizer, chaining together talks, such as the one that took her to the Tenerife GG, and now also in her new section on the program Órbita Laika on La 2: “We are at a point where we need to understand the applications of artificial intelligence; maybe there is one that has already affected you and you don’t know it.”
Question. How do people perceive artificial intelligence?
Answer. The science fiction angle has left its mark. You can really tell that the cliché of superintelligence, of the singularity, is still there, but people are starting to see more of the practical applications. Older people tell you: I’m worried about how it’s going to affect the future of my job. They experience it as something very personal.
Q. As a threat.
A. A younger person sees it even as something that can help them in their career. But someone who has been working for 40 years and sees this picking up more and more speed wonders whether they are now going to have to learn new skills, reinvent themselves. There is a lot of talk about reinventing ourselves, when we still have to think about other options.
Q. DALL-E, the program that converts words into images, has been very successful, but it has become a parody: most people use it for laughs and memes. Isn’t there a risk that artificial intelligence will be perceived as only good for silly things?
A. It’s true that it can sometimes become a parody, but I see that as a good thing, because everyone understands what an image is and finds it funny, but in the end they say: wow, this is already happening, and they try it with their own hands. Just like what happened a while back with those mobile applications that helped people understand what a deepfake is because they watched the videos. That helps a lot to understand at least one of the aspects.
You have to demand it [of companies]: give more information, especially when it affects people
Q. And how do you interpret the controversy over the LaMDA program, when a Google engineer claimed it was sentient?
A. It has generated a lot of noise, and in the end it has been overshadowed by all the fuss. Nobody discusses what LaMDA does anymore, or the problem it is trying to solve. LaMDA is brutal; if you read the conversation with the engineer, you say: wow, what arguments it is capable of making. That is very difficult in terms of coherence and sustaining the conversation. But everyone talked about the drama: whether it has feelings, whether it has a life of its own. These questions of whether it can become conscious generate a lot of noise because people don’t understand them. And you forget the brutal advance in the field of language processing and reasoning. The people at Google are investing a lot of money in solving a very difficult problem, and for it to come out like this overshadows your work. There comes a point when everything is so new and so futuristic that you can immediately fall into that world of science fiction and novels.
Q. Does it bother you that Terminator ends up flooding the debate about artificial intelligence?
A. I understand it; if I didn’t work in this field, it would surely be one of the things that caught my attention. I’m not going to lie to you, that’s why I entered this field, because I saw robotics, intelligent programs, and I thought: it’s cool and it sounds very futuristic. But now that there are so many possible applications… With the speed at which we are going, I am not saying that they are going to take your job away, because I honestly believe that will not happen, but it will be noticeable. That debate gets a bit overshadowed. Likewise, when election season arrives, we begin to hear things like these [laughs].
Q. Are you worried about the role of big tech companies?
A. It worries me a little, because it is true that their penetration has been very strong. But on the other hand, looking at the whole picture, I am convinced that without these companies it would have been very difficult to professionalize everything that has been done, through sheer investment. They are starting to publish a lot [their advances, in scientific journals] openly, much more than I imagined. And we are starting to see public-private collaboration. That is the way forward; a trend has been set and now it is very difficult to go back. But I think regulation will do a lot, if it is finally enforced. That is to say, until now we have advanced a bit recklessly, but for certain things we do have to be accountable. On the other hand, there are major companies that have withdrawn from controversial areas such as facial recognition. It is good that they have done it themselves, even if it is for reputational reasons, because it sets a precedent around these issues, because they have a lot of power. But you have to stay alert: to whom are we giving all this power? Because it will be a brutal monopoly in a few years. There is little time left, and my feeling is that we should demand a minimum so that the university community can take advantage of it.
Q. In what sense?
A. Information. How to replicate what they do. At least having a way of seeing something, even if it’s not totally transparent. You have to demand it directly: that they give a little more information in certain cases, especially when it affects people.
Q. For example?
A. Algorithms applied in human resources: you are not informed that your CV has been screened by an algorithm. Areas like employment, justice, education, health. In the latter, artificial intelligence is more regulated, but not so much in the other sectors, beyond what the company wants to do when it goes to market.
It’s all so futuristic that you can fall into that sci-fi world
Q. How dark is the black box [not knowing why programs decide what they decide]?
A. It’s hard. For that you have to understand how deep learning works [one of the methods used to train artificial intelligence]. When we talk about a black box, what really lies behind it is a series of calculations and patterns encoded as vectors, hyperplanes, mathematical components… Shedding light on that, unless you do specific work to try to visualize it, is very difficult, and traditionally it has not been done. More progress has been made along the line of seeking solutions, of success metrics. Why not create new algorithms that monitor how it changes? You will still not understand one hundred percent how the learning has happened, but it does serve to check and debug something that is not working correctly.
Q. And is there anything that worries you about the development of artificial intelligence, other than the Terminator taking over the world?
A. Could be [laughs]. What worries me the most is how you connect these developments with people’s daily lives. The technical people go their own way building all of this, it is being put into production, there are people accessing these systems that affect people, and it is starting to be examined under a magnifying glass. But how a regulation is implemented, what it translates into at the training level: there is a gap there between outreach, training and the future of work, which is the part that worries me the most. In the end, you see that everything moves so fast that it makes you a bit dizzy; between not having a grip on the subject and the regulation being halfway there, people in certain sectors get more nervous. I understand it: if someone is told that there are new tools that do 80% of their work, what will their role be now? Or in an emergency triage they put you last, and it’s because the algorithm is wrong… There will be situations like these, which are going to be very new and difficult to solve; we have to think about them today, and we are going to learn. It’s a bit like what happened with driving in the beginning. Cars arrived and people drove as they pleased, and then regulations were put in place to reduce accidents. The airbag appeared, on top of the seatbelt, traffic signs appeared… that is more or less what artificial intelligence lacks today.
Q. Is artificial intelligence missing its traffic signs?
A. Sure, and it needs to be accountable. Who assures me that it is not discriminating against me? It’s going to have to go through a learning process, obviously, and we’re going to see catastrophic cases. I am convinced that we will continue to see reputational crises around artificial intelligence. But knowing that, try to work with transparency, with regulation. With artificial intelligence you have a power that you did not have before: massive automation. It is a danger that you have to know how to handle. You can’t turn everyone into an engineer. If we approach it well, there will be friendlier environments with algorithms. And if not, the word will be demonized and everything will be seen as something negative. It’s very cool, but at the same time, what a mess.
You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive our weekly newsletter.