Neo-Nazis And White Supremacists Globally Look To Artificial Intelligence To Promote Their Message, Spread Misinformation, And Aid Their Cause, January 2023-May 2024

By: Steven Stalinsky, Ph.D., Dr. Simon A. Purdue, R. Sosnow, A. Smith, A. Agron, N. Szerman, Natan Rosenfeld, Heath Sloane, Anatoly Strandberg, J. Hughes, K. Lee, and Lorena Avraham*
June 20, 2024

Since 2022, Artificial Intelligence (AI) technology has advanced meteorically, with fundamental impacts on society, both positive and negative. In addition to its significant contributions to productivity, creativity, and workflow optimization, it is a factor in the continuing erosion of trust online and has further muddied the information landscape. AI is becoming increasingly controversial as its use spreads across all sectors of the population and as the content it generates becomes ever more difficult to distinguish from human-created material.

An area of particular concern in recent months has been the wholesale adoption of AI technology by extremist groups and individuals across the ideological spectrum, and their use of generative AI for disseminating propaganda and misinformation as well as for hatemongering. For neo-Nazis and white supremacists in particular, it is a key weapon in their online arsenal, and they have deployed AI-generated content very effectively as a disruptor both in mainstream online spaces and on their own channels.

TO READ THE FULL REPORT, GOVERNMENT AND MEDIA CAN REQUEST A COPY BY WRITING TO DTTMSUBS@MEMRI.ORG WITH THE REPORT TITLE IN THE SUBJECT LINE. PLEASE INCLUDE FULL ORGANIZATIONAL DETAILS AND AN OFFICIAL EMAIL ADDRESS IN YOUR REQUEST. NOTE: WE ARE ABLE TO PROVIDE A COPY ONLY TO MEMBERS OF GOVERNMENT, LAW ENFORCEMENT, MEDIA, AND ACADEMIA, AND TO SUBSCRIBERS

Neo-Nazis Are Again Early Adopters

Neo-Nazis' early adoption of technology is nothing new. Since the earliest days of the World Wide Web, racist extremists have been among the first to adopt, co-opt, and misuse emerging technologies to advance their hateful agenda.[1] Indeed, Matthew Hale's white supremacist Creativity Movement was among the first organized movements to host its own online message board in the early 1990s, and Stormfront led the pack in transforming its early Bulletin Board System (BBS) into a functional website in 1995.

As technology has advanced, so too has extremists' use of it. From the emergence of social networking and social media in the 2000s to the use of personal drones, laser projectors, cryptocurrency and online encryption in recent years, neo-Nazis have readily shifted their strategies to incorporate advances.[2] They have done so in large part in response to scrutiny and perceived persecution on the part of law enforcement, government, tech companies, and web users.

Extremists have been deplatformed from mainstream sites over the years, often wholesale following major events such as the 2017 Unite the Right rally or the January 6, 2021 storming of the U.S. Capitol. Thus, they are regularly forced to find new technologies for spreading their message unabated and for avoiding detection, deplatforming, or even legal action. Technologies allowing them to operate from behind a veil of anonymity are particularly welcomed; cryptocurrency and encryption have provided this, and now AI does as well. As a result, we are now in a new era of online extremism.

James Mason, neo-Nazi ideologue, author of the accelerationist terror manual Siege, and a 60-year veteran of the movement, said in an April 2022 livestream: "The Internet, for us, has been the greatest thing to ever come along... I am so impressed these days, in recent years, of the capacity of some of our people to produce great propaganda videos, within the computers... It's reaching thousands... and at no risk to ourselves, and at essentially no cost. It's fabulous."[3] Mason thus succinctly articulates the vital role of emerging technologies in facilitating neo-Nazi activism, and why advancements like AI will be a force multiplier for the international racist extremist movement.

MEMRI – At The Forefront Of Monitoring Extremist Uses Of AI

The MEMRI DTTM has been at the forefront of monitoring this early adoption of technologies by extremist groups and individuals in recent years, and has reported extensively on these advancements. DTTM research has included a groundbreaking two-part series on neo-Nazi and white supremacist uses of cryptocurrency; Part I was published in July 2022 and Part II a year later.[4]

The DTTM team's coverage of AI has been no different, and we have reported extensively on extremists' use of these emergent, disruptive technologies since generative AI first appeared on the public online scene. Part I of the DTTM's comprehensive review of extremist uses of AI was published in May 2023, when the generative AI boom was still very much in its nascent stages, and Part II is published herein. Together, the two parts offer an unprecedentedly comprehensive overview of how and why neo-Nazi and white supremacist groups around the world are using AI as a vital tool in their activism.

Extremist Use Of AI Continues To Evolve

Extremist use of AI technology is rapidly evolving and changing, and as new generative capabilities are developed by leading companies such as OpenAI, Google, and Microsoft, so too are new methods of spreading neo-Nazi propaganda.

Image Generation

The core capability that in many ways launched the current AI boom was, and remains, image generation. Tools like OpenAI's DALL-E and Midjourney allow users to convert short text prompts into increasingly advanced and realistic images, ranging from Pixar-style animated movie posters to photorealistic depictions of celebrities or nature scenes. While mainstream platforms place heavy restrictions on the generation of extremist content, the democratization of the technology has allowed extremists to develop their own engines or find loopholes that allow them to create explicitly extremist imagery.

Antisemitic users have used the technology to caricature public figures as stereotypically Jewish. For example, an Irish neo-Nazi channel posted an AI-generated image of Elon Musk as an Orthodox Jew, writing: "Elon Musk's preference given to Jewish accounts on X. 33% of his interactions are with Jews. He has banned the most potent political activists in the US and UK who advocate for European peoples rights. Although he comes out with good comments like '[former Irish PM Leo] Varadkar hates Irish people.' Due to his closeness with Jews and the censorship of those European activists who have the ability to fight back against the destruction of their nations. He is a net negative." The post included a link to a YouTube video titled "I Noticed Something Interesting about Elon Musk's Tweets."

Another user created an AI-generated image of Mexico's new president-elect, Claudia Sheinbaum, depicting her as a heavily caricatured Jewish figure.

Users have also used the technology to caricature other ethnic groups, including Asian Americans, African Americans, the Latinx community, and others. One white supremacist user created a Pixar-style poster featuring George Floyd holding a pill and looking intoxicated, along with the title "Overdose," suggesting that Floyd died from a drug overdose rather than as a result of excessive force at the hands of then-Minneapolis Police Department (MPD) officer Derek Chauvin.

Other white supremacists online have recently used the technology to generate content relating to the white genocide and great replacement conspiracy theories. On X, a neo-Nazi user posted an AI-generated image of a crowd of white women gathered in a square outdoors with the text "We Want Our 'Whites Only' World Back!!" The user wrote: "White Only World Is Coming Back & Staying Permanently. Europeans will be educated on Jews and what they have done to us for over 100 years of lying, censoring & chameleoning their way to manipulate Europeans against our own best interests. Jews won't win. Whites will."

More violent content relating to the same conspiracy theories also abounds, particularly in accelerationist communities online. A Canadian user posted an AI-generated image of two men standing on a rooftop with a pile of guns and ammunition, looking out at a large crowd on the ground and a "China Tire" building in flames. The user wrote, "Time to bring in the rooftop Canadians" – an allusion to the "Rooftop Koreans," the Korean-American business owners who armed themselves and defended their properties from rioters during the 1992 Los Angeles riots. Users have similarly employed the technology to advocate violence against the LGBTQ+ community, including in one image showing a drag artist being thrown out of a helicopter.

Neo-Nazis have also used the technology to glorify the Nazi regime, creating graphics that cast Wehrmacht and SS soldiers as defenders against progressive ideologies. A neo-Nazi X user reposted an AI-generated image showing an SS soldier preparing to stab a large serpent with the colors of the Progress Pride Flag on its underbelly. The original user wrote: "It's time to cut off the head of the snake. No more brainwashing our kids with your disgusting degeneracy." The re-poster replied: "Time to destroy them once and for all! Who's with me?"

Similarly, on March 6 the admin of a neo-Nazi Telegram chat room shared an AI-generated image of a Wehrmacht soldier with a sonnenrad halo preparing to stab a demon wearing a Star of David pendant, with the text "Total Aryan Victory."

Translation

One of the more recent advancements widely adopted by extremists is the AI translation of video and audio content, and even the manipulation of video to sync lip movements with the translated audio. In recent months, a slew of AI-translated speeches by Hitler, Goebbels, and Mussolini has circulated on extremist channels on social media, with many users employing the content to advocate genocide or to claim that Hitler was misunderstood. This advancement has also made it easier for contemporary extremist ideologues to reach broader international audiences – a growing concern in an environment of increased ideological cross-pollination and inter-ideological cooperation among extremist movements, particularly between neo-Nazis and anti-Israel groups in the Middle East.

On April 11, a neo-Nazi X user posted a video of an AI-translated speech by Joseph Goebbels, writing "White Power."

Similarly, a neo-Nazi Telegram channel posted a video featuring AI translations of Hitler speeches, writing "Adolph Hitler's speeches are being translated by AI."

Video Generation

Video generation, and by extension video manipulation, offers another way for extremist groups to use AI to spread misinformation and propaganda. Neo-Nazis have used such tools, including OpenAI's Sora, to produce videos of Hitler dancing in front of a crowded stadium and to generate emotional videos lamenting white replacement. As it develops, this technology will likely present the greatest security threat, particularly because it can be used to generate deepfake videos of celebrities and political figures, further eroding trust and lending itself to information warfare.

The most prominent recent example, which was widely circulated on X, features Hitler dancing before a crowd of thousands. Earlier versions of the video showed the original footage on which it was modeled, a clip of a Lil Yachty concert.

Voice Emulation

Similarly, voice emulation can be, and has been, used to fake audio clips of mainstream political figures saying compromising things or advancing white supremacist or neo-Nazi talking points. Short clips of real audio can be used to clone voices, allowing extremists to make those voices say anything they want.

For example, on April 2, 2024, a neo-Nazi user forwarded a racist video from a neo-Nazi Telegram channel. The video mocks nature documentaries narrated by British natural historian David Attenborough, featuring AI-generated narration in his voice that makes racist comments about Indians, calling them subhuman and saying that they seek to export a substandard way of life to other countries.

A neo-Nazi user known for creating deepfake videos wrote on Gab: "A fine gentleman contacted me on Telegram to let me know he watched my Tutorial on deepfakes! He then posts this BOMB on Odysee!!! I love to see it and I love when White people take control of the narrative to set the record straight! Especially using the images of these usurpers and deceivers! I don't know if he is on Gab, but I hope he is, and that he makes a reply here, so I can follow him!" The deepfake included in the post shows American pastor John Hagee delivering an antisemitic sermon.

Music Generation

Finally, a wave of AI-generated music is now washing across social media, and this naturally includes overtly extremist musical content. Neo-Nazis have used AI music generation to produce racist and conspiratorial songs, spreading this content on X and across other platforms.

In a neo-Nazi accelerationist channel on Telegram, a user shared an AI-generated song called "Doomsday Eclipse," produced using the Suno AI engine.

On April 24, a neo-Nazi Telegram channel posted a video featuring a racist AI-generated song called "Joggers," the lyrics of which advocated violence against Black people, referencing the murder of Ahmaud Arbery in Georgia in 2020. The song is performed in a deepfaked version of Taylor Swift's voice.

The full text of this post is available to DTTM subscribers.


For information on the required credentials to access this material, visit the DTTM subscription page.

Subscribe to DTTM

Join U.S. and other Western government agencies and law enforcement, as well as leading businesses and business organizations, in subscribing to the MEMRI Domestic Terrorism Threat Monitor (DTTM) Project, for the latest alerts, updates, and reports on imminent and potential threats from around the world.

ONLY GOVERNMENT, MEDIA, AND ACADEMIA WITH FULL CREDENTIALS CAN REQUEST ACCESS TO DTTM REPORTS.


The Cyber & Jihad Lab

The Cyber & Jihad Lab monitors, tracks, translates, researches, and analyzes cyber jihad originating from the Middle East, Iran, South Asia, and North and West Africa. It innovates and experiments with possible solutions for stopping cyber jihad, advancing legislation and initiatives at the federal level – including with Capitol Hill and attorneys-general – and at the state level, to draft and enforce measures that will serve as precedents for further action. It works with leaders in business, law enforcement, academia, and families of terror victims to craft and support efforts and solutions to combat cyber jihad, and recruits and works with technology industry leaders toward the same end.
