How will we know that we have achieved a form of artificial general intelligence? For some AI experts, it will be when machines have human-level cognition.


Achieving artificial general intelligence (AGI) is the ultimate goal of AI companies. But while AI experts try to predict when this will happen, almost none of them can clearly answer the question: how will AGI manifest itself? The answer may seem obvious, but there is no consensus. According to some technologists and AI experts, people are unlikely to realize they are interacting with an AGI. Another group believes that we will be in the presence of AGI when machines demonstrate human-level cognition. Within the community, the very notion of AGI is controversial.

Simply put, artificial general intelligence is a “hypothetical” form of artificial intelligence in which a machine can learn and think like a human. For this to be possible, an AGI would need some form of self-awareness and consciousness, so that it can solve problems, adapt to its environment, and perform a wider range of tasks. If AGI, also called “strong AI”, sounds like science fiction, that is because, for now, it still is. Existing forms of AI have not yet reached the level of AGI, but AI researchers and companies are working to make it a reality.

So-called weak AI is trained on data to carry out specific tasks, or a series of tasks limited to a single context. Many forms of AI rely on algorithms or pre-programmed rules to guide their actions and learn to operate in a certain environment. AGI, on the other hand, is intended to be capable of reasoning and adapting to new environments and different types of data. Instead of depending on predefined rules, AGI takes a problem-solving and learning approach. According to experts, AGI should demonstrate the same reasoning abilities as humans.

Weak or narrow AI is the type of AI that drives autonomous vehicles, image generators, and chatbots. In other words, it is an AI that performs a limited range of tasks. Two subsets of AI fall into the weak AI category: reactive machines and limited-memory machines. Reactive machines can respond to immediate stimuli, but cannot store or learn from memories of past actions. Limited-memory machines can store past information to improve their performance over time; they represent the majority of AI tools available today.
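To make the distinction concrete, here is a minimal Python sketch (not from the article) contrasting a reactive agent with a limited-memory agent; the class names, the 0.5 threshold, and the driving scenario are purely illustrative assumptions:

# Illustrative sketch only: class names and the decision rule are hypothetical.

class ReactiveAgent:
    """Responds to the immediate stimulus; stores nothing about the past."""

    def act(self, stimulus: float) -> str:
        # Decision depends solely on the current input.
        return "brake" if stimulus > 0.5 else "cruise"


class LimitedMemoryAgent:
    """Keeps a short window of past observations to refine its behavior."""

    def __init__(self, window: int = 5):
        self.window = window
        self.history: list[float] = []  # bounded memory of recent stimuli

    def act(self, stimulus: float) -> str:
        # Remember the latest stimulus, discarding anything older than the window.
        self.history = (self.history + [stimulus])[-self.window:]
        # Decision uses the running average of recent inputs, not just the latest.
        average = sum(self.history) / len(self.history)
        return "brake" if average > 0.5 else "cruise"


if __name__ == "__main__":
    reactive, with_memory = ReactiveAgent(), LimitedMemoryAgent()
    for s in (0.2, 0.9, 0.4):
        print(reactive.act(s), with_memory.act(s))

The reactive agent’s decision depends only on the current input, while the limited-memory agent averages over its recent history, which is why tools in the second category can improve their performance over time.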

AGI, however, blurs the line between human intelligence and machine intelligence. There are three main opinions on the subject. Some AI experts believe we will have reached AGI when machines demonstrate human-level cognition. But this claim is controversial among other experts, who believe that the manifestation of AGI will be implicit and that it will be difficult to prove that a form of AI is AGI. A third group rejects both arguments and says AGI is “unworkable.” It should also be noted that experts are divided on the very notion of artificial general intelligence.

AGI is not a defined line with existence on one side and non-existence on the other. It is a subjective state of AI that, in my opinion, will develop gradually along a spectrum: some people will think it exists, others won’t, and the balance will gradually shift until more people believe it exists than don’t, reads one comment. Another commenter wrote: we will reach AGI when the capabilities of machines are far superior to those of humans in many areas. AGI will never exist, because as soon as it is there, it will already be superior to humans.

This is not a binary thing. Transformers already do low-level things when it comes to AGI. AGI will likely progress gradually, with occasional breakthroughs producing larger than normal advances. There is no evidence that AGI will go from 1 to 1,000 overnight. I don’t even think Transformers are capable of human-level AGI, and I don’t know of any architecture that allows for it. So I won’t bet on human-level AGI in the foreseeable future, another commenter wrote. Even leading AI experts seem divided on the issue.

After the release of GPT-4, a team of Microsoft scientists claimed in a research report that OpenAI’s technology shows “sparks” of AGI: given the breadth and depth of GPT-4’s capabilities, we believe it could reasonably be considered an early (but still incomplete) version of an AGI. Both this claim and the method the team used to reach this conclusion have been the subject of much controversy. So how will AGI manifest itself once it becomes a reality? Experts predict its arrival, but don’t say how we’ll know it’s there:

Sam Altman: co-founder and CEO of OpenAI

In an interview with AI expert and podcast host Lex Fridman last March, Altman said that although rapid progress is being made in the field of AI, the timeline for AGI is uncertain. He stressed the importance of discussing and addressing the possibility that AGI poses an existential threat to humanity. He advocates discovering new techniques to mitigate potential hazards and iterating through complex problems to learn early and limit high-risk scenarios. Altman then asked his interlocutor if he thought GPT-4 was an AGI, to which Fridman replied:

I think if that were the case, as with the UFO videos, we wouldn’t know right away. I think it’s hard to know, when I think about it. I was playing around with GPT-4 and wondering how I would know whether it is an AGI or not. In other words, how much of what I’m seeing is the interface, and how much wisdom does it contain? Part of me thinks that we could have a model capable of “super intelligence” and that we just haven’t quite unlocked it yet. This is what I saw with ChatGPT. Altman then spoke about the potential dangers of AGI and its benefits.

Altman said this week in the Reddit forum r/singularity that his company had developed human-level AI, but he immediately backtracked and claimed that the product developed by OpenAI only “mimics” human intelligence. “Obviously it’s just a meme, don’t be afraid; when AGI is achieved, it won’t be announced with a comment on Reddit,” he said.

Geoffrey Hinton: Turing Award winner and ex-Googler

Geoffrey Hinton is a Canadian researcher specializing in AI, and more particularly in artificial neural networks. A former member of the Google Brain team, he chose to leave his position at Alphabet to warn of the risks of AI. After his departure, he predicted when AI will surpass human intelligence: I now predict five to 20 years, but without much confidence. We live in very uncertain times. It is possible that I am completely wrong about digital intelligence overtaking us. “No one really knows, and that’s why we should be worried right now,” he said in May.

Ray Kurzweil: author, researcher and futurist

Ray Kurzweil, a famous American futurist and researcher, has made many predictions over the years, and some have proven admirably accurate. At SXSW 2017 in Austin, Texas, Kurzweil predicted that by 2029, computers will have human-level intelligence: that means computers will have human intelligence; we will put them in our brains, connect them to the cloud, and expand who we are. Today, this is not just a future scenario. It is already the case, in part, and it will accelerate.

Ben Goertzel: CEO of SingularityNET and Chief Scientist of Hanson Robotics

A controversial figure in technology circles, Ben Goertzel helped popularize the term AGI. He is also prone to making bold statements about the future of technology. At a conference in 2018, he made a few more: I don’t think we need fundamentally new algorithms. I think we need to wire our algorithms together in a different way than we do today. If I’m right, then we already have the basic algorithms we need. “I think we’re less than ten years away from creating human-level AI,” he said.

But Goertzel then added a sentence suggesting he was joking about the prediction: this will happen on December 8, 2026, my 60th birthday. “I will delay the event until then so I can throw a big birthday party,” he added.

John Carmack: computer engineer and developer of Doom

John Carmack believes AGI could be achieved by 2030 and has launched a partnership with a research institute in Alberta to accelerate its development. Carmack shared his views at an event announcing the hiring of Richard Sutton, chief scientific advisor of the Alberta Machine Intelligence Institute, by Keen, his AGI development startup. Sutton believes that it is not impossible to code an AGI with current techniques and sees 2030 as a possible target for an AI prototype to show signs of consciousness.

Yoshua Bengio: professor of computer science at the University of Montreal

Yoshua Bengio is also a Turing Award winner. Like his friend and colleague Yann LeCun, also a Turing Award winner, Bengio prefers the term “human-level intelligence” to “artificial general intelligence.” In any case, he is skeptical of the predictions regarding its advent. “I don’t think it’s plausible that we know when, in how many years or decades, we will reach human-level AI,” Bengio said.

Demis Hassabis: CEO of Google DeepMind

Demis Hassabis built Google DeepMind (formerly DeepMind), headquartered in London, England, into one of the world’s leading AI labs. Its main mission is to develop an AGI. He defines AGI as “human-level cognition” and said earlier this year: the progress made in recent years has been quite incredible. I see no reason for this progress to slow down. I even think it could accelerate. So I think we’re only a few years away, maybe even a decade away. He also shares the view that AGI is an existential threat to humanity.

And you?

What is your opinion on the subject?
What do you think of the notion of AGI and the controversies surrounding it?
What do you think of the predictions about when the first form of AGI will arrive? Are they realistic?
How will we know we have achieved some form of AGI? Does one already exist?
Do current AI systems suggest the imminent arrival of a form of AGI? Why?
Do you share the view that some form of AGI will never be achieved? Why?

See also

Doom developer John Carmack thinks general-purpose AI (AGI) is feasible by 2030, launches partnership with Alberta research institute to accelerate development

Microsoft claims GPT-4 shows sparks of general artificial intelligence: We believe GPT-4’s intelligence signals a true paradigm shift

The threat that AI poses to the world could be “more urgent” than climate change, according to Geoffrey Hinton, a pioneer of artificial intelligence research
