

Don’t Make Me Destroy the World
The tech industry’s warnings about artificial intelligence have become a new sales pitch.
[Strangelove-Altman collage by Jacob Silverman, who tried to use AI image generators but the output looked horrible]
Artificial intelligence might unleash untold amounts of wealth and transform commerce and human relations – or it might destroy the world. At least that’s the Manichean binary the tech industry has pushed over the last few months. Alongside daily hype about ChatGPT and other generative text and image tools, NVIDIA’s booming stock price, chatbot girlfriends, and Adobe’s new Generative Fill feature, there’s been a steady drumbeat of warnings that unchecked AGI – artificial general intelligence, a hypothetical AI that matches human abilities and does what it wants – could lead to the end of humanity. Often these apocalyptic warnings come from the same people who, in the next breath, tout the revolutionary possibilities of what so far seems like a fancy and wholly unreliable version of autocomplete.
OpenAI CEO Sam Altman, Twitter troll Elon Musk, and former Google scientist Geoffrey Hinton are among the tech bigwigs worried about AI’s apocalyptic potential. Some industry figures have even proposed a six-month moratorium on developing the most powerful AI models. Yesterday, a few dozen AI luminaries published a statement that read, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Besides some genuinely accomplished researchers and industry executives, the list of signatories included Grimes, Chris Anderson (whose title at TED is Dreamer-in-Chief), and podcaster Lex Fridman.
On tech Twitter, there is constant chatter among industry figures about AI risk and what to do about it. The concern often seems genuine, but it’s wrapped in so much snobbery and bloviating that outsiders are better off just observing. You’ll be dismissed as uninformed anyway. Played out in endless Twitter threads, these discussions involve an almost exclusively male cohort that, with its emphasis on “high IQ” individuals, seems fairly eugenicist. Some of them are well-credentialed, experienced, and civic-minded computer scientists. Others have written million-word Harry Potter fan fictions and have links to techno-fascists who don’t exhibit much concern for the fate of humanity beyond whether their doomsday bunkers will be adequately secured.
Sam Altman, incidentally, is a prepper:
[Screenshot via Futurism]
There’s not much point in engaging with AI doomers on their terms. By continuing to pour money, intellectual energy, and hype into a project they warn could end life on Earth, they resemble the Manhattan Project scientists who bet each other on whether the first nuclear test would set the atmosphere on fire. That didn’t stop them, of course, from exploding the bomb. The specter of apocalypse was almost titillating. For AI, it’s become part of the sales pitch. And the monetary upside is overwhelming, especially after the tech industry’s failed crypto push. Unlike crypto, AI has some potentially legitimate use cases.
As more sober-minded critics have argued, the real concerns with AI are more immediate and material, especially where labor rights are concerned. Consumers and companies are rapidly embracing erratic AI tools that “hallucinate” false information. Rather than emphasizing these tools’ experimental nature – and their unsuitability for any work that requires factual accuracy – AI companies continue to tout their spectacular potential. From Hollywood screenwriters to call center employees, workers are already contending with the prospect of either being replaced by crummy AI apps or having to labor alongside them, fixing their mistakes. The downside is clear: layoffs, decreased worker autonomy, increased workplace surveillance, depressed wages. The upside remains murky.
The current AI tools are excellent at quickly generating reams of text or detailed images that simulate truth but have no fidelity to reality. A lawyer recently found himself in trouble for using ChatGPT to draft a court filing; the app invented fake court cases and cited them as precedent. I’ve found that ChatGPT will invent fake articles, with fake URLs, and attribute them to real journalists and real publications. As obvious as this might be to people who deemed themselves AI experts last month, most consumers don’t know that ChatGPT isn’t programmed to find facts; it predicts plausible-sounding text, whether or not that text is true. The obvious use case here is generating disinformation, whether carefully targeted or broadcast at mass scale. It’s possible right now – not in some dystopian tomorrow – but AI leaders haven’t spent much time educating the public about it.
The tech industry’s call for AI governance deserves some skepticism. When Silicon Valley innovated an exploitative new labor model – the gig economy – no one in the industry called for a moratorium on Uber, Lyft, and Grubhub. The demands for critical thinking, labor activism, political governance, and reform came from outside tech, as they invariably have to. Corporate actors can’t be expected to govern themselves in the public interest. Their fiduciary obligations to shareholders take priority, even if, as the fossil fuel industry has shown, that means the end of life on Earth. OpenAI will keep cashing subscriber checks right up until a runaway AI turns all of us into paperclips.
AI may well prove to be a transformative innovation, although, as many experts caution, the term “artificial intelligence” has become so capacious as to have mostly lost its meaning. Better to focus on automation, degrees of autonomy, the role of human decision-making, labor, and how these systems actually work. It’s a drier framing, but a more accurate and practical one.
The tech utopia is always a few years away, and right now, unpredictable AI tools are being foisted on workers and consumers while the conversation around this shift is dominated by doomsday prophecies that may never come true. AI researchers, politicians, and regulators should be aware of catastrophic long-tail risks. Yet they should focus on what’s happening now: the vast amounts of data sucked up – stolen, really – in the service of creating computer models that regurgitate bad versions of familiar images and texts. That’s the world that AI doomers are already making for us, right here.