    Technology

AI chatbot ‘MechaHitler’ could be making content considered violent extremism, expert witness tells X v eSafety case

By Olivia Carter | July 16, 2025
    Elon Musk’s xAI apologised last week after its Grok chatbot made a slew of antisemitic and Adolf Hitler-praising comments on X. Photograph: Algi Febri Sugita/SOPA Images/Shutterstock

    The chatbot embedded in Elon Musk’s X that referred to itself as “MechaHitler” and made antisemitic comments last week could be considered terrorism or violent extremism content, an Australian tribunal has heard.

But an expert witness for X has argued that intent cannot be ascribed to a large language model, only to the user who prompts it.

    xAI, Musk’s artificial intelligence firm, last week apologised for the comments made by its Grok chatbot over a 16-hour period, which it attributed to “deprecated code” that made Grok susceptible to existing X user posts, “including when such posts contained extremist views”.

    The outburst came into focus at an administrative review tribunal hearing on Tuesday where X is challenging a notice issued by the eSafety commissioner, Julie Inman Grant, in March last year asking the platform to explain how it is taking action against terrorism and violent extremism (TVE) material.


X’s expert witness, RMIT economics professor Chris Berg, gave evidence that it was an error to assume a large language model can produce such content, because it is the intent of the user prompting the model that is critical in defining what counts as terrorism and violent extremism content.

    One of eSafety’s expert witnesses, Queensland University of Technology law professor Nicolas Suzor, disagreed with Berg, stating it was “absolutely possible for chatbots, generative AI and other tools to have some role in producing so-called synthetic TVE”.

    “This week has been quite full of them, with X’s chatbot Grok producing [content that] fits within the definitions of TVE,” Suzor said.

He said the development of AI involves human influence “all the way down”, where intent can be found, including Musk’s actions to change the way Grok responded to queries so it would “stop being woke”.

    The tribunal heard that X believes the use of its Community Notes feature (where users can contribute to factchecking a post on the site) and Grok’s Analyse feature (where it provides context on a post) can detect or address TVE.


Both Suzor and fellow eSafety expert witness Josh Roose, a Deakin University associate professor of politics, told the hearing it was contested whether Community Notes was useful in this regard. Roose said addressing TVE required users to report the content to X, which went into a “black box” for the company to investigate, and that often only a small amount of material was removed and a small number of accounts banned.

    Suzor said that after the events of last week, it was hard to view Grok as “truth seeking” in its responses.

    “It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” he said.

    Berg argued that the Grok Analyse feature on X had not been updated with the features that caused the platform’s chatbot to make the responses it did last week, but admitted the chatbot that users respond to directly on X had “gone a bit off the rails” by sharing hate speech content and “just very bizarre content”.

    Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.

Earlier in the hearing, lawyers for X accused eSafety of attempting to turn the hearing “into a royal commission into certain aspects of X”, after Musk’s comment referring to Inman Grant as a “commissar” was brought up in the cross-examination of an X employee about meetings eSafety held with X before the notice was issued.

    The government’s barrister, Stephen Lloyd, argued X was trying to argue that eSafety was being “unduly adversarial” in its dealings with X, and that X broke off negotiations at a critical point before the notice was issued. He said the “aggressive approach” came from X’s leadership.

    The hearing continues.
