- Current AI systems like ChatGPT don’t have human-level intelligence and aren’t even as smart as a dog, Meta’s AI chief Yann LeCun said.
- LeCun spoke about the limitations of generative AI, such as ChatGPT, and said they are not very intelligent because they are trained solely on language.
- Meta’s LeCun said that, in the future, there will be machines smarter than humans, which shouldn’t be seen as a threat.
Meta’s chief AI scientist Yann LeCun spoke at the Viva Tech conference in Paris and said that AI currently lacks human-level intelligence, but it may one day.
Current AI systems like ChatGPT lack human-level intelligence and aren’t even as smart as a dog, Meta’s AI chief said, as debate rages on over the dangers of the rapidly growing technology.
ChatGPT, developed by OpenAI, is based on a so-called large language model. This means the AI system has been trained on huge amounts of linguistic data, allowing a user to submit questions and requests, to which the chatbot responds in natural language.
The rapid development of AI has raised concern among leading technologists that, if left unchecked, the technology could pose a danger to society. Tesla CEO Elon Musk said this year that artificial intelligence is “one of the greatest risks to the future of civilization.”
At the Viva Tech conference on Wednesday, Jacques Attali, a French economic and social theorist who writes about technology, said whether AI is good or bad will depend on how it is used.
“If you use AI to develop more fossil fuels, it’s going to be awful. If you use AI [to] develop more terrible weapons, it will be terrible,” Attali said. “In contrast, AI can be amazing for health, amazing for education, amazing for culture.”
On the same panel, Yann LeCun, chief AI scientist at Facebook parent company Meta, was asked about the current limitations of AI. He focused on generative AI trained on large language models, saying they are not very intelligent, because they are trained exclusively on language.
“These systems are still very limited, they don’t have any understanding of the underlying reality of the real world, because they’re purely text-trained, a huge amount of text,” LeCun said.
“Most human knowledge has nothing to do with language, so that part of the human experience isn’t captured by AI.”
LeCun added that an AI system could now pass the bar in the United States, an exam required to become a lawyer. However, he said the AI can’t load a dishwasher, which a 10-year-old could “learn in 10 minutes.”
“What that tells you is that we’re missing something really big in achieving not only human-level intelligence, but dog-level intelligence as well,” concluded LeCun.
Meta’s AI chief said the company is working on training AI on video, rather than just language, which is a more difficult task.
In another example of AI’s current limitations, he said a five-month-old would look at a floating object and not think much of it. A nine-month-old, however, would look at the same object and be surprised, realizing that an object shouldn’t float.
LeCun said that “we have no idea how to reproduce this ability with machines today. Until we can do that, we won’t have human-level intelligence, we won’t have dog-level or cat-level intelligence.”
Taking a pessimistic tone about the future, Attali said, “It is common knowledge that humanity will face many dangers in the next three to four decades.”
He cited climate disasters and war among his main concerns, adding that he also worries that robots “will turn against us.”
During the conversation, Meta’s LeCun said that, in the future, there will be machines smarter than humans, which shouldn’t be seen as a danger.
“We shouldn’t see it as a threat, we should see it as something very beneficial. Each of us will have an AI assistant, it will be like a staff that assists you in your daily life that is smarter than you,” LeCun said.
The scientist added that these AI systems must be created as “controllable and basically subservient to humans”. He also rejected the notion that robots would take over the world.
“A fear that has been popularized by science fiction [is] that if robots are smarter than us, they will want to take over the world; there is no correlation between being smart and wanting to take over,” LeCun said.
Weighing the dangers and opportunities of artificial intelligence, Attali concluded that barriers need to be put in place for the development of the technology. But he wasn’t sure who would set them.
“Who will set the barriers?” he asked.
AI regulation has been a hot topic at Viva Tech. The European Union is moving forward with its own AI legislation, while top ministers in the French government told CNBC this week that the country wants to see global regulation of the technology.