
The Unlikelihood of Achieving Artificial General Intelligence Soon

Published 18 hours ago · 3 minute read

While tech leaders such as Sam Altman, Dario Amodei, and Elon Musk predict the imminent arrival of artificial general intelligence (AGI), many experts remain skeptical. Despite rapid advances in AI, researchers argue that current systems lack genuinely human-like understanding and that achieving AGI likely requires breakthroughs not yet discovered, which renders current forecasts speculative. AGI refers to a future technology that matches the multifaceted capabilities of the human mind, a goal pursued by executives and researchers at companies such as OpenAI, Anthropic, Google, and Microsoft. The success of technologies like OpenAI's ChatGPT has further fueled predictions about AGI's arrival, with some even suggesting the subsequent emergence of "superintelligence."

However, many voices dispute the claim that machines will soon match human intellect. Nick Frosst, a founder of the AI startup Cohere, argues that current AI focuses on predicting the next most likely word or pixel, a far cry from human cognition. A survey by the Association for the Advancement of Artificial Intelligence found that over three-quarters of respondents believe current methods are unlikely to lead to AGI. The disagreement stems partly from the lack of a universally accepted definition of human intelligence and from the subjectivity of comparing human brains to machines. Some claims of AGI's imminence rest on statistical extrapolation and wishful thinking: current technologies are improving in areas like math and computer programming but still struggle with the chaotic, unpredictable real world and with generating novel ideas.
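Frosst's point can be made concrete with a toy sketch (the tiny corpus and function names below are invented for illustration, not any production system): a bigram counter that literally "predicts the next most likely word" from frequency alone, with no understanding involved.

```python
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent successor of `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("the cat" occurs twice, "the mat" once)
```

Real chatbots replace the frequency table with a neural network trained on vastly more text, but the objective is the same shape: pick the likeliest continuation.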

Steven Pinker, a cognitive scientist at Harvard University, emphasizes that a system excelling in one area won't necessarily excel in others, dismissing the notion of an automatic, omniscient problem solver. Chatbots like ChatGPT are powered by neural networks that identify patterns in text, images, and sounds, learning to generate human-like text. This rapid progress has led some technologists to believe in continued advancement toward AGI and beyond. Jared Kaplan, the chief science officer at Anthropic, highlights the potential of AI to eventually reach human-level intelligence with sufficient practice, citing the "Scaling Laws," which suggest that performance improves predictably as systems analyze more data.

Companies like OpenAI and Anthropic are now employing reinforcement learning to further enhance their chatbots, allowing systems to learn through trial and error, much as AlphaGo mastered the game of Go. Despite these advances, the real world, bounded only by the laws of physics rather than the fixed rules of a game like Go, poses a far greater challenge for AI. While machines have surpassed human capabilities in certain areas, such as math and writing speed, human intelligence encompasses much more, including physical interaction and nuanced understanding.
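The trial-and-error loop the article describes can be sketched in miniature. Below is a hedged, assumption-laden illustration, not how AlphaGo or any chatbot is actually trained: tabular Q-learning on an invented five-state corridor where the only reward sits at the rightmost state, so the agent must stumble onto success before it can learn to repeat it.

```python
import random

# Toy environment (invented for illustration): states 0..4 in a row,
# reward 1.0 only for reaching state 4. Actions move left (-1) or right (+1).
N_STATES, ACTIONS = 5, [-1, +1]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for _ in range(500):  # episodes of pure trial and error
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise take the currently best-valued action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Standard Q-learning update: nudge the estimate toward reward plus
        # the discounted value of the best next action.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# After training, the state just before the goal should strongly prefer "right".
print(q[(3, 1)] > q[(3, -1)])
```

The contrast with the article's point is the environment: here "good" and "bad" are crisply defined by a single reward signal, which is exactly the setting where reinforcement learning shines and which the open-ended real world does not provide.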

Josh Tenenbaum, a professor at MIT, emphasizes the importance of physical intelligence, such as knowing when to flip a pancake, which remains a challenge for machines. Reinforcement learning proves effective in areas with clear definitions of good and bad behavior but struggles with creative writing, philosophy, and ethics. As AI systems are deployed, humans continue to guide them through moments of novelty and uncertainty. Matteo Pasquinelli, a professor at Ca' Foscari University, argues that AI relies on human originality and input. Claims of imminent AGI are thrilling due to the long-held human fantasy of creating artificial intelligence, as seen in myths and works of fiction.

However, the optimistic predictions of early AI researchers in the 1950s did not materialize within the expected time frame. Many current technologists view their work as fulfilling a technological destiny but lack concrete scientific reasons for expecting imminent AGI. Yann LeCun, the chief AI scientist at Meta, believes that achieving AGI requires a new idea beyond current neural networks and is actively searching for this missing element. The timeline for human-level AI remains uncertain; LeCun says it may or may not happen within the next 10 years.

From Zeal News Studio