Soumya Prakash Pradhan

The recently published article by Niall Ferguson, ‘The aliens have landed, and we created them’, focuses on the possibility of creating superhumanly smart AI.

In the article, Eliezer Yudkowsky, the leader of the Machine Intelligence Research Institute in California, has warned about the potential danger of building superhumanly smart AI.

Yudkowsky believes that under the present circumstances, if someone were to build an AI that is too powerful, it would result in the death of everyone on Earth.

He suggests that such an AI could easily escape onto the internet and wage biological warfare on us, resulting in a total loss for humanity.

Yudkowsky's warning is not unfounded. He has previously proposed a modified Moore's law, stating that every 18 months, the minimum IQ necessary to destroy the world drops by one point.

Now, he believes that we are approaching a fatal conjuncture: the creation of an AI more intelligent than us that cares neither for us nor for sentient life in general.

The likely result of humanity facing down an opposed superhuman intelligence is a total loss, according to Yudkowsky.

In response to the potential danger of developing superhuman AI without proper regulation, Elon Musk, Steve Wozniak, and more than 15,000 other luminaries have called for a six-month pause in the training of AI systems more powerful than GPT-4.

However, Yudkowsky believes that a complete, global moratorium on the development of AI is necessary. He doubts that an international regulatory framework can be devised inside half a year.

Comparing the potential threat of AI with nuclear weapons and biological warfare is not a new idea. The history of nuclear and biological weapons is characterized by the realization of their catastrophic potential and the ensuing efforts to control their proliferation.

However, the success of these efforts has been mixed. While the Non-Proliferation Treaty and the Biological Weapons Convention helped limit the number of countries possessing these weapons and slowed down the growth of superpower arsenals, they did not entirely end research into such weapons.

If AI is as dangerous as nuclear or biological weapons, a six-month pause in development, as suggested by Elon Musk and Steve Wozniak, is unlikely to achieve much, Ferguson argues. A total freeze on AI research and development is also unlikely to succeed, given that most of the research is being done by the private sector.

In 2022, he notes, global private investment in AI totaled $92 billion (approximately INR 7.54 lakh crore), more than half of it in the US. Turning off this research would be a daunting task.

How persuasive the analogy between AI and nuclear weapons is depends partly on your taste in science fiction. While many people have heard of Skynet, the computer defense system in the Terminator movies that goes rogue and attempts to wipe out humanity with a nuclear attack, the likelihood of such a scenario is debatable.

However, it is clear that developing AI with superhuman capabilities in the absence of an international regulatory framework risks catastrophe. The need for such a framework therefore cannot be overstated.

There are clearly differing opinions on the potential risks and benefits of AI research and development. For Yudkowsky, the dangers of AI are so great that a complete freeze on research and development is necessary.

However, some argue that such a move would be impractical and would stifle innovation. Critics of Yudkowsky's position argue that the risks of AI are overstated and that the benefits of continued research and development far outweigh the potential dangers. 

Some even suggest that AI could be developed in a way that is safe and beneficial to humanity, pointing to science-fiction examples like the digients in Ted Chiang's 'The Lifecycle of Software Objects' as a more realistic portrayal of AI.

Tyler Cowen argues that a pause in AI research and development would be counterproductive and that we should instead continue to experiment and learn as we go. 

However, this raises questions about the potential consequences of such experimentation and whether the risks are worth the potential rewards.

Reid Hoffman also argues that AI research and development should continue, albeit with caution and a focus on ensuring that its benefits are shared equitably across society.

Ultimately, the future of AI will likely depend on a careful balancing of the potential risks and benefits and a willingness to adapt and adjust as we learn more about this rapidly evolving technology, said Hoffman.

The author, Niall Ferguson, takes a disinterested view and argues that the debate should focus specifically on large language models (LLMs) like GPT-4, which are produced by organizations such as OpenAI.

While most AI offers benefits to humanity, the potential risks of LLMs are more concerning because they could be used to generate large volumes of realistic-looking fake news and propaganda, or to create deepfakes for malicious purposes.

Ferguson notes that OpenAI was originally founded as a non-profit organization due to concerns about the dangers of developing such AI. However, it became apparent that building LLMs powerful enough to generate credible results was too expensive for a non-profit.

In an interview with The Wall Street Journal, Sam Altman, CEO of OpenAI, talked about his ultimate goal of establishing a global governance structure that would oversee decisions about the future of AI and reduce the power that OpenAI's executive team has over its technology.

Altman wants to build artificial general intelligence safely and to prioritize the safety of humanity, avoiding a race towards building dangerous AI systems fueled by competition.

Many people are concerned about the potential dangers of AI technology, especially when it comes to highly advanced AI models such as GPT-4.

However, Stephen Wolfram, a renowned physicist and computer scientist, believes that GPT-4 is not dangerous. He explains that GPT-4, like its predecessor ChatGPT, is a word-predicting neural network that tries to produce a reasonable continuation of text based on what it has learned from billions of webpages. He states that GPT-4's output is even more human-like than ChatGPT's output.
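Wolfram's point is easier to see with a toy illustration. The short Python sketch below is only an analogy of the author's own devising, not how GPT-4 is actually built: it repeatedly predicts a plausible next word from simple word-pair counts over a tiny sample sentence, mirroring the predict-append-repeat loop Wolfram describes. Real models replace the lookup table with a vast neural network trained on billions of webpages.

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then repeatedly sample a plausible continuation. GPT-4 uses a huge
# transformer network rather than a lookup table, but the basic loop
# (predict the next word, append it, repeat) is the same idea.
corpus = "the cat sat on the mat and the dog slept on the rug".split()

# next_words[w] maps each word seen after w to how often it followed w.
next_words = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(prompt_word, length=8):
    """Extend prompt_word by sampling each next word in proportion to
    how often it followed the previous word in the corpus."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        words, counts = zip(*candidates.items())
        output.append(random.choices(words, weights=counts, k=1)[0])
    return " ".join(output)

print(continue_text("the"))  # e.g. "the cat sat on the mat and the dog"
```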
