Thinking about Generative AI
A time capsule journal entry
Long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering. - Ted Chiang
I want to write out some thoughts to revisit later. It can be easy to get revisionist about what you believed and when, so here is a list of things I thought were true in late 2025 after ~2 years of pretty extensive experience with frontier models, especially Anthropic’s Claude.
Some takes
AI is alarmingly good at all sorts of things. It is good at writing code, dealing with legal documents, talking through household repairs, and many other tasks.
We are not in a bubble comparable to previous bubbles, but the market is overheated. Many participants will suffer when their bad bets fail. A sector-wide collapse followed by another “AI winter” is not likely, but a recession that affects regular American families feels very possible.
Environmental concerns about AI and data centers can be pretty misleading. Environmental harms are important to mitigate, but environmental advocates seem overly focused on data centers relative to their actual impact, or to the marginal cost of reducing that impact.
As the US government actively sabotages renewable energy infrastructure projects, power demand from AI companies is both applying positive pressure toward a renewable energy transition and keeping fossil fuel plants online longer than they otherwise would be.
Using AI to produce art, especially that which supplants human artists, is bad. Artists have a unique and important role in a healthy society. It’s clear that this is already happening, and I’m very worried about it.
Playing around and making art for personal use or amusement is mostly benign, though.
AI boosters are pretty universally untrustworthy. Most of the experts on this technology have conflicts of interest, and their incentives are not aligned with public good. We need more independent experts outside the AI industry.
AI skeptics often confuse material debates about the merits and capabilities of AI systems with larger philosophical or moral debates about capitalism. These are both important, but conflating them can result in motivated reasoning.
AI will, in the near term, accelerate wealth inequality. Skeptics are correct about this. Longer-term effects depend on what we decide to do as a society.
AI will cause (mostly psychological) harm to humans at higher rates than other digital technologies. It will make harms from social media or smartphone use seem quaint. Society will mostly tolerate these harms and ignore them, just as it ignores the 1 million annual worldwide deaths from automobiles (40,000 in the US).
Large language models are not on the cusp of becoming self-aware or developing the capacity to suffer, but I am still worried that we might cause machines to experience suffering in the future.
We have avoided nuclear holocaust thus far by luck. Nuclear disarmament should be an urgent priority. All sides of the AI discourse should agree on this.
Disclaimers
Qualifiers like “maybe” or “likely” were omitted because I am uncertain about everything, and otherwise I would have placed them everywhere. My goal was to stake out some positions, not hedge against being wrong.
I am neither a skeptic nor a booster. I think these technologies are exceedingly interesting. The producers of LLMs are mostly untrustworthy, but the nuance between producers and users has been lost. We usually don’t blame Facebook users when Meta does something shitty.
We exist in a dynamic period where challengers across domains could leverage AI against more lethargic incumbents. Civic and social good might result if good people would be willing to selectively wield AI in service of their desired outcomes. There’s plenty of reason to be angry at Meta, for example, but it doesn’t make sense to stubbornly refuse to use Llama, PyTorch, sam3, or React in pursuit of a better future.