OpenAI is trying to protect itself from the harm it's causing
Sam Altman wants to make sure that OpenAI doesn't face any consequences for what it's doing.
There are many serious concerns people have about the future of artificial intelligence. People are worried about AI taking jobs, killer robots, the spread of AI-generated disinformation, the mental health effects of using AI and more. The AI industry is very aware of these fears, and it appears OpenAI—the company behind ChatGPT—is trying to protect itself from future backlash against the industry.
OpenAI recently put out a list of policy proposals that it believes could help address some of the potential negative impacts of AI. This document, titled "Industrial Policy for the Intelligence Age," focuses on how to deal with possible job losses, reductions in tax revenue, how AI should be regulated and the threat of "superintelligence."
Obviously, none of this should be taken at face value. This is not a benevolent tech organization offering up some nifty ideas. This is a highly influential company that, according to insiders who spoke to The New Yorker for a recent profile of Sam Altman, is run by a sociopath. Regardless, let's take a brief look at what's being proposed here.
One of the main proposals in this document looks at how the government could rethink our tax system if the deployment of AI systems leads to major job losses in the economy.
"As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes," the company writes.
A reduction in tax revenue caused by fewer workers collecting a paycheck could threaten Social Security, Medicaid, SNAP, housing assistance and other programs that rely on that revenue. The company suggests the U.S. tax system could shift toward relying more heavily on capital gains taxes and possibly a robot tax.
The document also suggests that the U.S. could create a "Public Wealth Fund" that would invest in companies in the industry, and then the returns on those investments could be distributed to taxpayers. It says this would allow all citizens to "participate directly in the upside of AI-driven growth."
Other proposals in this document include a 32-hour workweek, a rapid expansion of the power grid, a "right to AI" access and more. To deal with the effects of the hypothetical development of superintelligence, it says America must start "building new institutions, technical safeguards, and governance frameworks so that advanced systems remain safe, controllable, and aligned."

These policy proposals are just one part of how OpenAI seems to be thinking about the future and how the negative impacts of AI could come back on the company. For instance, it is also backing a bill in Illinois that would shield it from liability for the ways AI might contribute to mass casualty events and financial disasters.
"The bill, SB 3444, would shield frontier AI developers from liability for 'critical harms' caused by their frontier models as long as they did not intentionally or recklessly cause such an incident," according to Wired.
Experts say it doesn't seem likely that this bill will become law, considering Illinois Gov. JB Pritzker and other Democrats in the state will probably oppose it, but this situation does reveal what OpenAI is trying to accomplish legislatively.
While some of the policy proposals OpenAI put out sound nice, OpenAI CEO Sam Altman has been known to publicly support one position while privately backing something else entirely. I think it's safe to say the man cannot be trusted, and much of what he says is more likely PR than anything based on actual conviction.
I think AI will disrupt the economy. We're already seeing it used in warfare, and disinformation is worse than ever. There are many other problems that lawmakers need to be addressing. However, I wouldn't trust the companies behind this technology to be honest brokers here. I would say that about OpenAI, in particular.
Unfortunately, our legislators have been feckless when it comes to regulating the tech industry and addressing the harms caused by emerging technology for decades now. Maybe that will change if we elect younger representatives and try to reduce the impact of money on politics. In the meantime, AI is rapidly advancing, and few solutions to the problems it is creating seem to be going anywhere.