After all the red flags have been dismissed, Worldcoin might now be the best hope for AI-skeptics.
In the five-day Silicon Valley power struggle that ended in a victory over moral and ethical concerns about the pace of technological development, Sam Altman went from being sacked from the company he co-founded on a Friday to "King Sam the Unsackable" the following Tuesday.
As the people and mechanisms that stood up to restrain the rapid development of artificial intelligence at the world's leading AI firm were pushed out and replaced, the global identity crypto network now stands as the only Altman-approved counterweight to the negative consequences of AI's incursion into everyday life.
OpenAI and Worldcoin are independent projects at different stages of adoption. Worldcoin, at a $292 million market cap, is much smaller than OpenAI, which, in the year since launching ChatGPT, has gathered over 2 million customers and was, before the drama of the past week, set to sell employee shares at an $80 to $90 billion valuation.
Together, the projects form a balance crucial to the AI-powered future the tech entrepreneur envisions. Officially launched in July this year, Worldcoin is an identity network that protects real humans from being mistaken for bots online by creating unique digital passports from users' biometric data. It is also meant to be an instrumental financial network in what Altman sees as the next stage of economic production, in which AI will produce "most of the world's basic goods and services." To soften the loss of millions of real jobs and the sharp decline in individual incomes worldwide, the new production agent would redistribute its profits through Worldcoin as WLD tokens.
Such a drastic change in how knowledge and products are created will be tremendously disruptive to social and economic systems. Aware of these risks when founding OpenAI in 2015, Altman structured the company as a non-profit, handing power to a board dominated by scientists who, rather than putting profit first, would weigh AI's implications for society and temper the pace of its development.
The OpenAI co-founder is himself rather vocal about the potential dangers of what he helped build. But believing the benefits outweigh even the risk that AI might lead to mass extinction, the CEO has been relentless in pushing it forward.
As the company became a leader in the global race for artificial intelligence supremacy, Altman began modifying the terms of the initial non-profit arrangement: first by creating a for-profit subsidiary, then by partnering with deep-pocketed investors. Days before being fired, he announced customized versions of ChatGPT.
Altman's accelerationism led the board to sack him on November 17. Yet the move backfired: rather than creating space to pause and reflect, it expelled those best placed to voice concerns over artificial intelligence from the chambers of power.
With caution cast out of the decision room, any remaining restraint on the technology must come from within OpenAI.
Some restraining mechanisms remain in place. In July, for instance, chief scientist Ilya Sutskever, who cast the deciding vote to oust Altman but has since changed his mind, created a new team within OpenAI to align "superintelligent" AI systems with what is best for humanity.
There are also people in place. Larry Summers was selected to join the company's new interim board, and the veteran U.S. economist has spoken publicly several times about the dangers of AI, urging caution.
However, these controlling mechanisms and opposing voices have already failed once to contain Altman's ambitions. With the short-lived power crisis having consolidated the CEO's position, his vision will be even harder to challenge.
Governments could also step in to regulate the sector, but with R&D concentrated in the U.S., where the philosophy of free markets reigns, that seems highly unlikely for now.
For AI skeptics, Worldcoin, co-founded by Altman as part of his grand scheme for an AI-powered future, is currently something to cling to. While not meant to stop technological development, it does act to soften its impact.
Hopefully, with the new board still to be confirmed and the debate about the future of AI growing louder, this precarious state of checks and balances will be only temporary.