
Verified Twitter Account Pushes AI-Generated Image of Pentagon 'Explosion'

The image was fake, but it quickly picked up traction online and once again shows how bad actors can exploit Twitter's paid verification system to spread convincing misinformation.

By Michael Kan
May 22, 2023
(Credit: Getty Images/Ivan Cholakov)

A Twitter account with a blue checkmark exploited that verification badge on Monday morning to spread an AI-generated image that it claimed showed an explosion near the Pentagon.

The account posed as the news agency Bloomberg by using the @BloombergFeed handle. It also successfully obtained a blue checkmark through Twitter's verification system, which anyone can buy for $8 or $11 a month.

This helped the account dupe some users when it claimed a “large explosion” had occurred in Washington, D.C., near the Pentagon. As proof, it posted a photo that allegedly showed a large column of smoke next to the federal building.

The photo went viral on Twitter as more users, including Russian media agency RT.com, circulated the apparent news. The only problem? The report about the explosion was all fake. 

Local authorities have since confirmed that no such explosion occurred at the Pentagon. Other users were also quick to notice flaws in the photo showing the blast. These mistakes, including missing details in the lamp post and the Pentagon building itself, are hallmarks of AI-generated images, which can struggle to fully render backgrounds.

In response, Twitter permanently suspended the @BloombergFeed account. Nevertheless, the incident is raising concerns that bad actors could repeat the same hoax again to try to influence an election, crash the stock market, or sow chaos. 

Not helping matters, Twitter has stripped official blue checkmarks from legitimate organizations, including the Pentagon's police force, making it harder to discern real news from fake on the social media platform. Further advances in AI-generated imagery could blur the distinction even more.

"What scares me: people who did today's AI explosion disinformation must've known it would not last," tweeted John Scott-Railton, a researcher at watchdog group Citizen Lab. "But if they picked a more distant area, far from a capitol, debunking would have taken *time*. Expect more of that."

As CNN notes, the stock market took a temporary dip after the images first circulated.

Users exploited Twitter's paid verification system when it first rolled out in November, but the company has given no indication it'll try to stamp out the problem. Twitter didn't respond to a request for comment, automatically replying with a smiling poop emoji.


About Michael Kan

Senior Reporter

I've been with PCMag since October 2017, covering a wide range of topics, including consumer electronics, cybersecurity, social media, networking, and gaming. Prior to working at PCMag, I was a foreign correspondent in Beijing for over five years, covering the tech scene in Asia.
