Voxa News
Technology

AI firms ‘unprepared’ for dangers of building human-level systems, report warns

By Olivia Carter | July 17, 2025
    The safety index assessed leading AI developers across areas including current harms and existential risk. Photograph: Master/Getty Images

    Artificial intelligence companies are “fundamentally unprepared” for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group.

    The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for “existential safety planning”.

    One of the five reviewers of the FLI’s report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had “anything like a coherent, actionable plan” to ensure the systems remained safe and controllable.

    AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI “benefits all of humanity”. Safety campaigners have warned that AGI could pose an existential threat by evading human control and triggering a catastrophic event.

    The FLI’s report said: “The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning.”

    The index evaluates seven AI developers – Google DeepMind, OpenAI, Anthropic, Meta, xAI and China’s Zhipu AI and DeepSeek – across six areas including “current harms” and “existential safety”.

    Anthropic received the highest overall safety score with a C+, followed by OpenAI with a C and Google DeepMind with a C-.

    The FLI is a US-based non-profit that campaigns for safer use of cutting-edge technology and is able to operate independently due to an “unconditional” donation from crypto entrepreneur Vitalik Buterin.

    SaferAI, another safety-focused non-profit, also released a report on Thursday warning that advanced AI companies have “weak to very weak risk management practices” and labelled their current approach “unacceptable”.

    The FLI safety grades were assigned and reviewed by a panel of AI experts, including British computer scientist Stuart Russell, and Sneha Revanur, founder of AI regulation campaign group Encode Justice.

    Max Tegmark, a co-founder of FLI and a professor at Massachusetts Institute of Technology, said it was “pretty jarring” that cutting-edge AI firms were aiming to build super-intelligent systems without publishing plans to deal with the consequences.

    He said: “It’s as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”

    Tegmark said the technology was continuing to outpace expectations, citing a previously held belief that experts would have decades to address the challenges of AGI. “Now the companies themselves are saying it’s a few years away,” he said.

    He added that progress in AI capabilities had been “remarkable” since the global AI summit in Paris in February, with new models such as xAI’s Grok 4, Google’s Gemini 2.5, and its video generator Veo3, all showing improvements on their forebears.

    A Google DeepMind spokesperson said the reports did not take into account “all of Google DeepMind’s AI safety efforts”. They added: “Our comprehensive approach to AI safety and security extends well beyond what’s captured.”

    OpenAI, Anthropic, Meta, xAI, Zhipu AI and DeepSeek have also been approached for comment.

    Olivia Carter
    Olivia Carter is a staff writer at Verda Post, covering human interest stories, lifestyle features, and community news. Her storytelling captures the voices and issues that shape everyday life.
