November 30, 2023 – Elon Musk, the CEO of Tesla, has again emphasized the need for some form of regulation in the field of artificial intelligence, saying, “In my view, artificial intelligence is more dangerous than nuclear weapons.”
Musk has closely monitored the development of artificial intelligence and repeatedly warned of its risks. This is not the first time he has compared AI to nuclear weapons: back in 2017, he argued that AI technology was becoming even more dangerous than nuclear warheads because it could replicate itself without limit, beyond human control.
As early as 2013, when Google was planning to acquire DeepMind, Musk expressed his opposition, stating, “The future of artificial intelligence should not be determined solely by Larry,” referring to Larry Page, one of Google’s co-founders.
In fact, the artificial intelligence research lab OpenAI, which developed GPT, was co-founded by Musk and others in response to the potential risks posed by AI. Musk’s vision was to create a firewall against those risks by making AI technology openly accessible through open-source software.
In 2018, Musk attempted to merge OpenAI with Tesla, but his proposal was rejected, leading to a rift between him and the organization. In 2023, Musk founded xAI, with the aim of ensuring the safety of artificial intelligence through competition and innovation.