Though none of them had been politically active before, they became part of a growing national movement that pits the tech industry and its billionaires against a diverse coalition of parent groups, religious leaders, environmentalists and former Tea Party activists. Politically they range from the populist firebrand Stephen K. Bannon to Bernie Sanders, the progressive senator from Vermont.
The reasons they are pushing back against the technology are as varied as their backgrounds. But they all worry that tech companies are more focused on cashing in on A.I. than on how it may affect regular people. They also share a sense that all that money will flow into the hands of Silicon Valley’s ultrawealthy, while the middle and working classes shoulder the costs.
Many of these A.I. critics say they are far from being Luddites having a bad reaction to new, scary technology. They believe that people in Washington, especially President Trump, are protecting Silicon Valley rather than reining it in. They want regulation — or at least a debate before A.I. becomes entrenched in American life.
“Given A.I. and robotics are going to impact every man, woman and child in this country, one might think that there’d be a massive debate in the United States Congress: What does it mean? Where do we go? How do we deal with it?” Mr. Sanders said in an interview with The New York Times. “There has been minimal, minimal discussion.”
A White House spokesman, Davis Ingle, said in a statement that “it is the policy of the Trump administration to sustain American A.I. dominance to protect our national security and ensure we remain the world’s leading economy.”
The White House’s policy framework for A.I., which was issued last month, calls on A.I. services to protect children. This year, Mr. Trump also issued a proclamation that said tech companies “must pay for the full cost of the energy and infrastructure needed to build and operate data centers.”
When OpenAI released ChatGPT in 2022, the chatbot became the fastest-growing software product ever, reaching 100 million users in just two months. It didn’t take long for the industry to bet its future on the new A.I. technology, spending hundreds of billions of dollars to build the massive data centers needed to develop it, which are now popping up around the world.
Even in the early days of the A.I. boom, industry leaders like Elon Musk, OpenAI’s Sam Altman and Anthropic’s Dario Amodei frequently warned that A.I. was a risk to jobs and could have unforeseen, even dangerous consequences.
“If this technology goes wrong, it can go quite wrong,” Mr. Altman told lawmakers in 2023.
The public may have taken those warnings to heart. In a recent Quinnipiac University poll of American adults, 55 percent said they saw A.I. as a force for harm rather than good — a surprisingly negative reaction to a technology that has become a driver of the economy.
Mr. Bannon has said the negativity reflects concerns about how the technology has been introduced. “There’s not clarity, there’s not transparency, and there’s certainly not accountability,” he said on a podcast, “The Last Invention,” in January. “That’s why you’ve seen not just interest but building anger of working-class people.”
People new to this movement are finding a number of already established organizations with ties to effective altruism, a philosophy that, among other things, is concerned about the safety of A.I. Dustin Moskovitz, a Facebook co-founder, and Pierre Omidyar, the founder of eBay, have been funding some of these groups.
A.I.’s reputation with the public hasn’t been helped by the social media era that preceded it. Social media, despite its wild popularity, has been criticized for heightening political polarization and worsening mental health.
In March, Meta and YouTube, which is owned by Google, were found liable by a jury in Los Angeles for creating an addictive product that harmed a young user. The two companies, which together make more than $50 billion in profit each quarter, were fined $6 million. A jury in a separate trial in New Mexico ordered Meta to pay $375 million in damages for failing to protect young users from sexual predators.
Job cuts in the tech industry are also fueling the perception that Silicon Valley is gutting its own work forces with A.I. before turning it on the rest of the economy. Just last week, Meta said it was cutting 10 percent of its workers, while Microsoft targeted up to 7 percent of its veteran employees in the United States with buyout offers. Nationwide, tech jobs declined by about 150,000 from 2022 through 2025, according to data from the Census Bureau.
Amy Kremer, a former Tea Party leader, recently became chair of Humans First, a conservative anti-A.I. group. It was spun out of the Center for A.I. Safety, which has had effective altruism ties. She said the “monster of social media” and lack of regulation had inspired her to get involved.
“This is the battle of our lifetime,” she said.
(The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)
Tech leaders are keenly aware of the backlash. The risks were driven home this month when a man opposed to A.I. threw a Molotov cocktail at the front gate of Mr. Altman’s San Francisco compound.
Not all tech executives have warned that A.I. could be dangerous while they build out their A.I. empires. Jensen Huang, the chief executive of Nvidia, the A.I. chip maker and world’s most valuable publicly traded company, has consistently emphasized the opportunities of A.I. Mr. Huang says A.I. will help people do their jobs better, not replace them.
“More jobs will be created,” he said in January. “Living will be more affordable.”
So far, the industry’s most notable response has been to pour hundreds of millions of dollars into super PACs targeting politicians questioning A.I. The industry has also downplayed the backlash as a product of paranoia peddled by so-called A.I. doomers, who worry the technology could destroy humanity, and NIMBYs, or not-in-my-backyard activists.
But those labels, common in Silicon Valley, are foreign to many of the people pushing back against A.I. “I’ve been called a lot of things over the years working on issues, but doomers is a new one,” said Sandy Bahr, director of the Sierra Club’s Grand Canyon Chapter.
After hearing about a marriage damaged by an A.I. companion, Mr. Grayston, the pastor in Austin, hosted an hourlong discussion at LifeFamily Church with the leader of a local nonprofit dedicated to A.I. education, the Alliance for Secure A.I. Action, which receives donations from some individuals with ties to effective altruism.
Persuaded that there was a dark side to the technology, Mr. Grayston, 42, has since spoken about its dangers at other churches, written an opinion piece for a religious news site run by the conservative outlet RealClearPolitics and helped draft educational materials about A.I. for other faith leaders.
“I’m not advocating for the abolition of A.I.,” Mr. Grayston said. “I want common-sense regulation.”
Mr. and Mrs. Snyder of Wolcott didn’t know what a data center was, they said, until they discovered that one had been approved in their backyard. After learning that the proposed facility would use more than four million gallons of water a day, Mr. Snyder, 59, worried that it would turn the backyard pond where he fishes for largemouth bass into a crater. He sued to halt the project and funded a campaign to unseat the three officials who supported it.
The Snyders, who are self-proclaimed “hard-core Republicans” and support Mr. Trump, said the best thing to come out of the process was the relationships they had built with people of different political stripes.
“Now, I don’t care what affiliation you are,” Mr. Snyder said. “If you’re against data centers, we’ll join forces.”
In the Boise music venues where Mr. Gardner’s rock band, Animus Gem, plays, the 30-year-old bassist is among many artists, musicians and writers troubled by how the technology, which was trained using copyrighted material, can instantaneously create songs, images and books.
He and his wife, Cathryn, started a local affiliate of PauseAI, a U.S. nonprofit that seeks to halt A.I. development and has received some funding from effective altruists. Their Boise chapter is one of 30 active city groups the organization expanded to last year, up from five in 2025. They now have 10 volunteers and 500 signatures on a petition to slow A.I.
“It’s really felt like exponential growth,” Ms. Gardner said. “The artistic community in Boise has been really passionate about it.”