The dark heart of the AI industry
Silicon Valley veteran Anil Dash explains what's going on inside the AI industry.
"Whenever I want to know what the United States is up to, I look into my own black heart."
—Gore Vidal
I'm quite sure an article about the artificial intelligence industry has never started this way, but I think you'll find it's appropriate. I recently spoke with someone who needs only to look into his own heart to know what Silicon Valley is up to.
Anil Dash is a tech entrepreneur and writer who's been running a blog since 1999 and working in tech for over two decades. He was the CEO of a software company known as Glitch for several years, until it was purchased in 2022. He was a technology advisor to the Obama White House's Office of Digital Strategy. Today, he is the principal and cofounder of the firm antitech, which strives to make tech products more ethical.
There's so much going on with AI right now that I thought talking to someone who cares about making tech products ethical and good would be helpful. We're seeing AI used in war, people using it in their daily lives and companies starting to figure out how to replace workers with AI. With all of that in mind, it's worth taking a look at what brought us here and where things are heading.
To start, let's talk about how AI is being used in wars. AI has been used to target bombs and in suicide drones in the war in Iran, and it's been used in the conflicts in Gaza and Ukraine. This is only the beginning, as was made clear when the Pentagon pushed back so hard against Anthropic's refusal to let its AI be used in autonomous weapons.
"The internet exists because of the Department of Defense. Silicon Valley exists because of defense funding. That is intrinsic and inescapable," Dash says. "You cannot sever these things. Once you sit with that, you have to evaluate everything through that lens. There has never been anything 'pure' about the industry, but we were able to do good things at times despite that reality."
AI being used in war is just the most recent example of emerging technology being put to awful purposes, of course. We've seen social media used to undermine elections and to contribute to genocide.
"The majority of OpenAI's product managers are former Facebook employees who were there after the Rohingya genocide," Dash says. "If you belong to a cohort that goes to work at a place after it enabled a genocide that killed tens of thousands, you've already been told, 'Look, we're going to break some eggs here.' When someone points out that ChatGPT is encouraging self-harm in children, that's just a rounding error to them."

Dash says these companies have been allowed to get away with a lot because they're barely regulated, and their workers have come to see the negative effects of their products as normal. This is not something we would tolerate from other industries.
"In the 'normal' world, if you work for a company like John Deere and a kid loses a toe to a tractor, the production line stops," Dash says. "That is the difference. Silicon Valley has reached a point of such extreme desensitization that OpenAI can justify its actions by saying, 'Well, we're not as bad as Grok.' This 'defining down' of standards has been an intentional project for twenty years."
A lot of people are worried about how agentic AI might disrupt the economy in the not-too-distant future. Unlike the LLM chatbots most people know, agentic AI can take instructions from someone and then carry out tasks without supervision. Imagine an AI agent being told to do someone's taxes or read and summarize a bunch of medical research. It behaves kind of like an employee or intern, not just an interface.
While these AI systems are still prone to errors and hallucinations, the companies building them are finding ways to reduce those problems. More importantly, corporate executives are often more than happy to adopt a system that's "good enough" if it allows them to lay some people off.
"The vast majority of corporate output is mediocre by design. If the task is something where you don't have to invent, but simply execute, then a huge percentage of the time, doing it to a 'sufficient' level is okay," Dash says.
Dash says that instead of using these tools to make employees extremely productive and give them room to explore new ideas, companies are using the efficiency as an excuse to reduce headcount.
All of this is quite gloomy, but Dash says it doesn't have to be. He imagines a world where we have smaller-scale AI that isn't killing the environment and actually makes our lives better. It could run on renewable energy and not be trained on stolen content.
"Twenty years ago, people found it unimaginable that everyone wouldn't use Microsoft's browser. Open communities came together and created Firefox, an open alternative," Dash says. "Millions of people used it, it kept the web open and it made the market possible for every other browser that followed. That kind of innovation could happen again."
