The Gist
- Mission statement. xAI aims to understand the true nature of the universe.
- Safety approach. xAI will focus on building an AI that's maximally curious and truth-seeking with safety as a priority.
- Competition status. Musk sees xAI as a competitive alternative to Google DeepMind and OpenAI.
Elon Musk, billionaire business magnate recently in the headlines for his Twitter takeover, launched his long-awaited artificial intelligence company, xAI, on July 12.
The company is separate from X Corp. but will work closely with Twitter, Tesla and other companies to make progress toward its intended mission — an arrangement that will be mutually beneficial, Musk said in a Twitter Spaces event Friday.
What Is xAI?
xAI’s goal is to understand the true nature of the universe — something a Twitter Spaces attendee pointed out as “vague, ambitious and not concrete enough.”
Musk countered that understanding the universe is the entire purpose of physics. “So, I think it’s actually really clear. There’s just so much that we don’t understand right now. Or we think we understand, but don’t in reality.”
He pointed to the Fermi paradox, which he summed up as: If the universe is almost 14 billion years old, why is there no overwhelming evidence of aliens? “If anyone would have seen evidence of aliens, it’s probably me. And yet I have not seen even one tiny shred of evidence. And I would jump on it in a second if I saw it.”
We’ve seen no evidence of consciousness anywhere else thus far, said Musk. “It suggests that what we have is extremely rare. I guess you could reformulate the xAI mission statement as: What the hell is really going on?”
How Will xAI Approach Safety?
You can’t call anything artificial general intelligence (AGI) until it can solve at least one fundamental question, said Musk. Humans, after all, have solved many of them.
xAI’s goal is to achieve a true AGI. The safest way to do that, said Musk, is to build an AI that is maximally curious and maximally truth-seeking.
“I think to a super intelligence, humanity is much more interesting than not humanity,” he explained. He pointed to all the planets, moons and asteroids in our solar system. “Probably all of them combined are not as interesting as humanity.”
The right kind of approach to “growing” an AI, he said, is to approach it with that kind of ambition.
xAI Will Avoid the Morality Problem, Says Musk
Another plus for Musk’s “maximally curious, maximally truth-seeking” approach, he said, is that it helps avoid the morality problem.
“If you try to program a set morality, you can basically invert it and get the opposite — what is sometimes called the Waluigi problem. If you make Luigi, you risk creating Waluigi at the same time.”
There’s significant danger, he explained, in training AI to be politically correct or training AI to not say what it actually thinks is true. If you look at where things go wrong in “2001: A Space Odyssey,” he added, it’s when they tell HAL 9000 to lie.
“At xAI, we have to allow the AI to say what it really believes is true and not be deceptive or politically correct,” said Musk, adding that it will likely result in criticism. “But I think it’s the only way to go forward, is rigorous pursuit of the truth or the truth with least amount of error.”
Where Did Musk Get the Idea for xAI?
According to Musk, OpenAI exists because, following Google's acquisition of DeepMind and extensive discussions with his friend Larry Page, the co-founder of Google, he realized that Page wasn't taking AI safety seriously, at least not at the time.
“In fact, at one point, he called me a speciesist for being too much on team humanity, I guess,” commented Musk.
Alphabet had three-quarters of the AI talent in the world, lots of money and lots of computers, said Musk. And he decided: “We need some kind of counterweight here.”
The opposite of Google DeepMind, Musk decided, was an open-source nonprofit — aka OpenAI, which Musk co-founded in 2015. “Because fate loves irony,” said Musk, “OpenAI is now a super closed source and frankly voracious for profit.”
At this point, said Musk, artificial general intelligence is going to happen. So there are two choices: be a spectator or a participant.
“As a spectator, one can’t do much to influence the outcome. As a participant, we can create a competitive alternative that is hopefully better than Google DeepMind or OpenAI-Microsoft.”
Unlike Alphabet and Microsoft, he explained, xAI is not publicly traded and so is not subject to market-based incentives or the non-market-based ESG incentives, which Musk said “push companies in questionable directions.”
“We’re freer to operate and our AI can give answers that people may find controversial even though they are actually true. They won’t be politically correct at times, and probably a lot of people will be offended by some of the answers. But as long as you’re trying to optimize for truth with least amount of error, I think we’re doing the right thing.”
Will xAI Offer Products?
One thing became clear at Friday’s Twitter Spaces event: The company plans to release tools and products for both businesses and consumers, and it plans to do so soon.
“We’re already working on a first release,” one team member said. “Hopefully, in a couple of weeks or so, we can share a bit more information around this.”
A Twitter Spaces attendee asked the group if they saw themselves as competition to OpenAI (which partnered with Microsoft) and Google Bard, to which Musk responded, “Yeah, I think we’re competition. We’re definitely competition.”
xAI is just starting out, he explained, describing the company as “embryonic.” But the goal, he said, would be to make useful AI for consumers and businesses.
There’s value in having multiple entities in the game, he added. “You don’t want to have a unipolar world where just one company kind of dominates in AI — you want to have some competition. Competition makes companies honest.”
Will xAI Use Twitter Data?
One session attendee asked Musk if he planned to use Twitter’s data at xAI — a question to which Musk responded with a chuckle: “Every AI organization large and small has used Twitter data for training, basically in all cases illegally.”
He pointed to Twitter’s recently imposed rate limits, a response to data scraping that he said was bringing the system to its knees. “It was either that, or Twitter didn’t work.”
xAI will use public tweets for training, Musk said, “just like basically everyone else had. It’s certainly a good data set for text training. And arguably for image and video training as well.”
At a certain point, he said, you run out of human-created data. “Really, for things to take off in a big way, AI’s got to basically generate content, self-assess the content, and that’s really the path to AGI is something like that, self-generated content that basically plays against itself.”
AI is not vast amounts of code, he added. “It’s actually shocking how small the lines of code are.” Instead, it’s a lot of data curation. “How the data is used, what data is used, the signal noise of that data, the quality of that data, is immensely important.”
He used humans as an example. If you, as a human, are trying to learn something, which is better: vast amounts of drivel or a small amount of high-quality content?
Will Musk Still Promote AI Regulation?
Musk has been a vocal proponent of AI regulation in the US and abroad, even going so far as to recommend a halt on the development of the technology.
“We need some regulatory oversight,” Musk said on Friday. “It’s not some perfect nirvana, but it’s better than nothing. Enforcement is difficult, but we should still aspire to do something in this regard.”
One of the biggest arguments against AI regulation, said Musk, is that China will pull ahead of the US because we’re regulating and they’re not. He believes China will regulate too, but added that the “proof will be in the pudding.”
In a meeting with China, Musk said he pointed out that if you succeed in creating a digital super intelligence, it could end up being in charge. “The CCP does not want to find themselves subservient to a digital super intelligence. That argument did resonate.”
Musk did not comment on China’s newly released guidelines on generative AI services, which mandate that AI tools offered to the public must “adhere to the core values of socialism” and not attempt to overthrow the socialist system.
Is AGI Closer Than We Think?
When it comes to artificial general intelligence, we’re missing the mark in the way things are currently being done, said Musk. “By many orders of magnitude.”
“It’s basically that AGI is being brute forced, and still actually not succeeding.”
What he’s learned at Tesla, he said, is that we overcomplicated the problem. “We were too dumb to realize how simple the answer was. But, you know, over time we get a bit less dumb. So I think that’s what we’ll probably find out with AGI as well.”
Once artificial general intelligence is solved, he said, we’ll look back on it and say: Why did we think it was so hard?