A place where I can write...

My simple blog of pictures of travel, friends, activities and the Universe we live in as we go slowly around the Sun.



March 26, 2024

AI apocalypse.

The little-known AI group that got $660 million

Powered by a massive cash infusion from a cryptocurrency mogul, the Future of Life Institute is building a network to fixate governments on the AI apocalypse.

By BRENDAN BORDELON

A young nonprofit pushing for strict safety rules on artificial intelligence recently landed more than a half-billion dollars from a single cryptocurrency tycoon — a gift that starkly illuminates the rising financial power of AI-focused organizations.

The Future of Life Institute has only around two dozen employees spread across the U.S. and Europe. But its previously unreported war chest puts it on par with famous nonprofit powerhouses like the Brookings Institution and the American Civil Liberties Union Foundation.

And it virtually guarantees FLI will wield outsized clout in the fast-moving global debate over regulating AI, experts say — though exactly what the group intends to do, and how it will deploy that money, is still riddled with unknowns.

FLI appears to have so far spent just a fraction of its crypto money bomb, mostly on gifts to AI safety researchers and on organizations favoring tight rules on the tech’s development. Four groups receiving money from FLI are now advising Washington’s new AI Safety Institute, and several serve as key players in London’s AI safety plans.

Through its new Future of Life Foundation, FLI also plans to start “three to five new organizations per year” that would steer AI and other transformative technologies “toward benefiting life and away from extreme large-scale risks.”

That mission puts FLI on one side of a contentious debate about how and why to regulate artificial intelligence. Critics have said that long-term AI risk arguments like the ones FLI is pushing are speculative and downplay near-term harms of the technology, like discrimination or job loss. They also warn that provoking fear of AI among policymakers can serve the goals of the tech billionaires who fund many of these groups.

“I worry a lot about the influence on regulation, and how much influence these people have with respect to lobbying government and regulators and people who really don’t understand the technologies very well,” said Melanie Mitchell, an AI researcher at the Santa Fe Institute and a skeptic of FLI’s claim that advanced AI is an existential threat to humanity.

The rise of AI has thrust several young nonprofits into the spotlight, advocating in Washington, Brussels and other capitals for their visions of AI safety laws. All are relatively obscure by global advocacy standards, and most are small.

FLI might have remained as small as those organizations. But in May 2021, it abruptly received a donation of Shiba Inu cryptocurrency worth $665 million from crypto mogul Vitalik Buterin (a massive haul of the volatile “memecoin,” named after a Japanese dog breed, was apparently gifted to Buterin days earlier).

Tax documents provided by FLI show it had just $2.4 million at the start of that year. Now its assets dwarf those amassed by not just most other AI groups, but many high-profile policy organizations.

In the AI world, the Future of Life Institute is best known for last year’s viral letter calling for a “pause” in advanced AI research. The letter carried signatures from tech luminaries like Elon Musk and Apple co-founder Steve Wozniak, and catalyzed debate in Washington, London and elsewhere about AI’s potential threat.

FLI has called on governments to require licenses for AI development and impose other safety rules. It continues to promote those policies on both sides of the Atlantic.

Existential risk groups already enjoy considerable clout and backing from a handful of tech billionaires, and critics fear a massive infusion of FLI cash into that ecosystem could skew the debate even further.

Like many other researchers, Mitchell said government licensing regimes or limits on open-source models — safety regulations favored by FLI — could lock in advantages for leading AI firms and make it harder for startups to compete.

“It’s very beneficial for big companies who are sort of threatened by the open-source AI movement … to portray open-source AI as being extremely dangerous,” Mitchell said.

In an email, FLI spokesperson Ben Cumming noted that his organization — which, as a 501(c)(3), is allowed to make philanthropic donations and engage in “some” lobbying — funds work on other large-scale hazards besides AI, including nuclear war and the loss of biodiversity.

Cumming rejected the notion that FLI is pushing Washington and other capitals to embrace regulations that would benefit leading companies like Microsoft, OpenAI and Anthropic. Other tech companies, including Meta and IBM, are deploying their own vast resources to steer governments away from strict AI safety rules.

“FLI’s modest efforts in support of common sense regulations pale in comparison to the financial backing of Big Tech’s anti-regulation push,” said Cumming.

The Future of Life Institute already plays a high-profile role in global efforts to regulate AI.

Its president and co-founder Max Tegmark testified at a Senate forum on AI last fall, and was a major player at the United Kingdom’s November summit on AI safety. The United Nations appointed Jaan Tallinn, billionaire Skype co-founder and FLI board member, to serve on its new AI Advisory Body in October. FLI lobbyists helped ensure that the European Union’s AI Act included new rules on foundational AI models, and they continue to crisscross Capitol Hill as they work to convince Congress of AI’s cataclysmic potential.

But FLI’s crypto-backed war chest puts it in a different category from other AI safety groups.

The group disclosed Buterin’s donation in a little-noticed EU transparency filing submitted in 2023, where it valued the haul of Shiba Inu coins at €603 million. Cumming said the value of that gift ultimately came to $665.8 million.

Much of that money remains unspent — the FLI spokesperson said the “vast majority ... has been transferred to a donor-advised fund and other nonprofit entities that primarily provide asset management services.” Cumming called it a “routine setup for nonprofits managing large resources.”

Buterin, inventor of the popular cryptocurrency Ethereum, did not respond to an emailed request for comment on his donation to FLI. But in a November blog post, he fretted that AI could become “the new apex species on the planet” and conceivably “end humanity for good.”

FLI may deploy some of Buterin’s endowment to build out its direct lobbying in Washington. The group spent $180,000 last year to lobby Congress on AI safety. But Tallinn — who has invested in leading AI labs Anthropic and Google DeepMind, and serves as both an FLI donor and a member of its five-person board — recently suggested it’s time to boost that spending.

“For the last decade or so, I’ve been really supporting people who are doing AI companies’ homework in terms of AI safety research,” the Skype billionaire said on a November episode of the AI and You podcast. “I think I will continue doing that, but it’s currently becoming increasingly clear that their research is not going to be there in time.”

“So my focus for the foreseeable future will be on regulatory interventions and trying to educate lawmakers, and helping and perhaps hiring lobbyists to try to make the world safer,” Tallinn said.

Cumming noted that as a 501(c)(3) nonprofit, FLI’s lobbying spend is limited to $1 million per year. The FLI spokesperson otherwise did not comment on Tallinn’s remarks.
