    Technology

    AI firms ‘unprepared’ for dangers of building human-level systems, report warns | Artificial intelligence (AI)

By Olivia Carter | July 17, 2025
    The safety index assessed leading AI developers across areas including current harms and existential risk. Photograph: Master/Getty Images

    Artificial intelligence companies are “fundamentally unprepared” for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group.

    The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for “existential safety planning”.

    One of the five reviewers of the FLI’s report said that, despite their stated aim of developing artificial general intelligence (AGI), none of the companies scrutinised had “anything like a coherent, actionable plan” to ensure the systems remained safe and controllable.

    AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI “benefits all of humanity”. Safety campaigners have warned that AGI could pose an existential threat by evading human control and triggering a catastrophic event.

    The FLI’s report said: “The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning.”

    The index evaluates seven AI developers – Google DeepMind, OpenAI, Anthropic, Meta, xAI and China’s Zhipu AI and DeepSeek – across six areas including “current harms” and “existential safety”.

    Anthropic received the highest overall safety score with a C+, followed by OpenAI with a C and Google DeepMind with a C-.

    The FLI is a US-based non-profit that campaigns for safer use of cutting-edge technology and is able to operate independently due to an “unconditional” donation from crypto entrepreneur Vitalik Buterin.

    SaferAI, another safety-focused non-profit, also released a report on Thursday warning that advanced AI companies have “weak to very weak risk management practices” and labelled their current approach “unacceptable”.

    The FLI safety grades were assigned and reviewed by a panel of AI experts, including the British computer scientist Stuart Russell and Sneha Revanur, founder of the AI regulation campaign group Encode Justice.

    Max Tegmark, a co-founder of FLI and a professor at Massachusetts Institute of Technology, said it was “pretty jarring” that cutting-edge AI firms were aiming to build super-intelligent systems without publishing plans to deal with the consequences.

    He said: “It’s as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”

    Tegmark said the technology was continuing to outpace expectations, citing a previously held belief that experts would have decades to address the challenges of AGI. “Now the companies themselves are saying it’s a few years away,” he said.

    He added that progress in AI capabilities had been “remarkable” since the global AI summit in Paris in February, with new models such as xAI’s Grok 4, Google’s Gemini 2.5, and its video generator Veo3, all showing improvements on their forebears.

    A Google DeepMind spokesperson said the reports did not take into account “all of Google DeepMind’s AI safety efforts”. They added: “Our comprehensive approach to AI safety and security extends well beyond what’s captured.”

    OpenAI, Anthropic, Meta, xAI, Zhipu AI and DeepSeek have also been approached for comment.
