The blink-if-you-missed-it four-day drama at the tech firm OpenAI requires deep attention. On the surface it looks like power shenanigans; underneath lies a tale of humanity’s future and geopolitics.

The strange saga began a week ago, when the board of the nonprofit decided to fire its AI guru and cultish leader, Sam Altman. But when 700-plus OpenAI staffers signed an open letter saying they would follow the ousted CEO out the door, he was swiftly reinstated.

The hairpin plot twists of this power struggle have been breathtakingly hard to follow. Reports are now surfacing that, only the day before he was fired, Altman had announced he was on the brink of a significant AI breakthrough. A letter was sent to the board warning that this discovery — an algorithm known as Q* (pronounced Q-Star) — could “threaten humanity”. The algorithm was deemed a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI): a system smarter than humans.

Altman’s dream was then to marry AGI with an integrated supply chain of AI chips, AI phones, AI robotics, and the world’s largest collections of data and LLMs (large language models). The project’s working name is Tigris.

To achieve this, Altman would need vast computing resources and funding. Perhaps that is why reports suggest he has been talking to Jony Ive, the designer behind the iPhone; to SoftBank; and to Cerebras — which now makes the fastest AI chips in the world. Cerebras chips are big: the size of a dinner plate and more powerful than any traditional chip. They also come with SwarmX software, which knits them together into clusters, creating a computational fabric that can handle the massive volume of data needed to build better AI.

Cerebras represents a great threat to Nvidia, the manufacturer of the world’s fastest supercomputers and AI chips. The most powerful of these supercomputers, Summit and Sierra, are central to America’s national defence and are kept at heavily protected nuclear research facilities: Summit at Oak Ridge, Tennessee, and Sierra at Lawrence Livermore, California. But almost every big organisation depends on Nvidia chips or computers. A year ago, Nvidia was worth $300 billion; now it is worth $1.35 trillion — the most dramatic increase in the value of a Nasdaq firm since 1971. Yet Cerebras has designed a chip 20 times faster than Nvidia’s. This is why some say the Cerebras IPO will kill Nvidia. Now we begin to see a national-security component to this story.

While the West has been focused on generative AI, which has no cognitive ability, China has taken a different path. It has built the world’s only quantum optical computer, which can solve in 47 seconds a problem that would take a traditional supercomputer 240 years.

Similarly, Altman wants to build a new generation of computers for the AI era. In July, he partnered with Cerebras and the Emirati incubator G42 to unveil the Condor Galaxy, the “World’s Largest Supercomputer for AI Training”. G42 is behind the world’s largest Arabic LLM, which, like ChatGPT, generates new linguistic content; it is also working with Amazon to gather and process DNA information to develop massive new global genomics, proteomics and biobanking services.

To an AI data scientist, the data collected for these innovations — the languages, nationalities and DNA involved — are honeypots brimming with opportunity. The West doesn’t have the mechanisms or the mores to gather such a quantity of meaningful data or the money to finance what Altman wants. But international investors do, including the Emiratis, the Chinese — who are already very interested in Altman — and all the others backing G42.

Moreover, if the world wants AI designed for true diversity, if it wants medical and financial products created to suit the broadest range of humans, then this will only happen where that diverse data can be found. It’s not going to be created in the US, where AI is designed almost entirely by young, white male tech bros, and where the FDA-approved testing of medicines pretty much excludes anyone but white males.

When AI people say they worry about Altman’s lack of “guardrails”, they mean he is willing to take the risk of building something that, like a djinn, cannot be put back in its bottle. He is prepared to build something that might not be controllable. He is like Oppenheimer, who was willing to smash atoms to win the Second World War even though doing so might have ignited the atmosphere and incinerated the Earth.

By this measure, Altman is a mad scientist who will put us all at risk to achieve a historic breakthrough. Hence the letter to the OpenAI board from some staff, along with others keen for the world to take note that the huge staff attrition rate at OpenAI was not down to “bad culture fits” but to “a disturbing pattern of deceit and manipulation”, “self deception” and the “insatiable pursuit of achieving artificial general intelligence (AGI)”. For Altman, this “pursuit” entailed incorporating superintelligence inside a robot’s body, which, presumably, is why OpenAI started investing in the humanoid robots made by 1X in Norway back in March.

You can understand why the OpenAI board might have become uneasy when it realised that Altman was racing around the Middle East and Asia trying to raise billions for this vision. But, as Bloomberg wrote, these are not “side ventures”; they are “core ventures”. This is about redefining the cutting edge of chip design, data collection and storage, computational power, and the interface between AI and physical robotics.

This new supply chain would not only challenge America’s IT infrastructure; it could hasten the diminishment of US power. It implies innovation shifting outside the US, and data being acquired beyond the reach of regulators. No doubt the US authorities looked at all this and saw Altman’s collaboration with G42 as tantamount to fraternising with the enemy, because G42 is seen as backed by the people who own ByteDance, TikTok’s parent company, in which G42 itself holds a significant stake. They saw that G42 owns Pax AI, which some say is Pegasus (the notorious spyware) reconfigured. Was Altman under surveillance as he pursued this grand vision? How could he not be?

The truth is Altman does not believe in borders. He has one goal: to build the best AI possible. He has a vision that probably worried his board and unnerved Washington. Given that the US is trying to slow technological innovation in other parts of the world by restricting the sale of the best chips and computers, it is hugely challenging when he says: “We’ll build our own stuff — in fact we’ll build our own supply chain and ecosystem.” So much for ITAR, the US regime for restricting the export of critical technology.

There was a time when an American would have been arrested for selling such protected high-tech innovations abroad. Today, can you stop a smart American from innovating outside the US? Can you tell entrepreneurs not to take foreign money and not to partner with foreign firms? Can you demand that they stop challenging existing firms like Nvidia? No. Not when others are offering so much money.

What took place over the past week at OpenAI confirms that none of the young leaders of this new AI space see borders. As Andrew Feldman, co-founder and CEO of Cerebras Systems, said: AI “isn’t a Silicon Valley thing, it isn’t even a US thing, it’s now all over the world — it’s a global phenomenon”. Yet the US authorities and their allies will have recognised the significant challenge that Altman’s vision presents to Western notions of security and control.

Two days after Xi and Biden met on November 15 and agreed to play nice, Altman was fired. The Daily Dot suggests that Altman was terminated because Xi hinted to Biden that Altman’s OpenAI was surreptitiously involved in a data-gathering deal with a firm called D2 (Double Dragon), which some thought to be a Chinese cyber-army group. David Covucci reported: “This D2 group has the largest and biggest crawling/indexing/scanning capacity in the world, 10 times more than Alphabet Inc (Google), hence the deal so Open AI could get their hands on vast quantities of data for training after exhausting their other options.”

There is something deeper happening here. China has already forged an AGI path of its own, avoiding generative AI in favour of cognitive AI: it wants systems that think independently of prompts. China is already giving AI control over satellites and weaponised drones. No doubt Altman would love to work with that capability, and the Chinese would love to work with him.

Could that be why Larry Summers, the former Treasury Secretary with connections across US politics and business, also ended up on the new OpenAI board? Perhaps the US government realised it could not stop Altman but could at least claim a seat at his table.

What does all this mean for us? Experts like Altman say they are designing AI to do good. But AI designers disagree about what constitutes “good”. It is clear this arms race could have civilisational consequences.


Source: UnHerd. Read the original article here: https://unherd.com/