Voxa News
    Technology

    AI chatbot ‘MechaHitler’ could be making content considered violent extremism, expert witness tells X v eSafety case | X

By Olivia Carter | July 16, 2025
    Elon Musk’s xAI apologised last week after its Grok chatbot made a slew of antisemitic and Adolf Hitler-praising comments on X. Photograph: Algi Febri Sugita/SOPA Images/Shutterstock

    The chatbot embedded in Elon Musk’s X that referred to itself as “MechaHitler” and made antisemitic comments last week could be considered terrorism or violent extremism content, an Australian tribunal has heard.

But an expert witness for X has argued that intent cannot be ascribed to a large language model, only to its user.

    xAI, Musk’s artificial intelligence firm, last week apologised for the comments made by its Grok chatbot over a 16-hour period, which it attributed to “deprecated code” that made Grok susceptible to existing X user posts, “including when such posts contained extremist views”.

    The outburst came into focus at an administrative review tribunal hearing on Tuesday where X is challenging a notice issued by the eSafety commissioner, Julie Inman Grant, in March last year asking the platform to explain how it is taking action against terrorism and violent extremism (TVE) material.


    X’s expert witness, RMIT economics professor Chris Berg, provided evidence to the case that it was an error to assume a large language model can produce such content, because it is the intent of the user prompting the large language model that is critical in defining what can be considered terrorism and violent extremism content.

    One of eSafety’s expert witnesses, Queensland University of Technology law professor Nicolas Suzor, disagreed with Berg, stating it was “absolutely possible for chatbots, generative AI and other tools to have some role in producing so-called synthetic TVE”.

    “This week has been quite full of them, with X’s chatbot Grok producing [content that] fits within the definitions of TVE,” Suzor said.

He said the development of AI involves human influence “all the way down”, where intent can be found, including Musk’s actions to change the way Grok responded to queries so it would “stop being woke”.

    The tribunal heard that X believes the use of its Community Notes feature (where users can contribute to factchecking a post on the site) and Grok’s Analyse feature (where it provides context on a post) can detect or address TVE.


    Both Suzor and fellow eSafety expert witness Josh Roose, a Deakin University associate professor of politics, told the hearing that it was contested as to whether Community Notes was useful in this regard. Roose said TVE required users to report the content to X, which went into a “black box” for the company to investigate, and often only a small amount of material was removed and a small number of accounts banned.

    Suzor said that after the events of last week, it was hard to view Grok as “truth seeking” in its responses.

    “It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” he said.

    Berg argued that the Grok Analyse feature on X had not been updated with the features that caused the platform’s chatbot to make the responses it did last week, but admitted the chatbot that users respond to directly on X had “gone a bit off the rails” by sharing hate speech content and “just very bizarre content”.

    Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.

Earlier in the hearing, lawyers for X accused eSafety of attempting to turn the hearing “into a royal commission into certain aspects of X”, after Musk’s comment referring to Inman Grant as a “commissar” was brought up during the cross-examination of an X employee about meetings held between eSafety and X before the notice was issued.

    The government’s barrister, Stephen Lloyd, argued X was trying to argue that eSafety was being “unduly adversarial” in its dealings with X, and that X broke off negotiations at a critical point before the notice was issued. He said the “aggressive approach” came from X’s leadership.

    The hearing continues.
