The “technology” bucket error
We can’t use “tech=good” or “tech=bad” as a premise to figure out what’s going to happen with AI.
As AI x-risk goes mainstream, lines are being drawn in the broader AI safety debate. One through-line is the disposition toward technology in general. Some people are wary even of AI-gone-right because they are suspicious of societal change, and they fear that greater levels of convenience and artificiality will further alienate us from our humanity. People closer to my own camp often believe that it is bad to interfere with technological progress and that Luddism has been proven wrong by all of the positive technological developments of the past. “Everyone thinks this time is different”, I have been told with a pitying smile, as if it had long ago been proven that technology=good and the matter were closed. But technology is not one thing, and therefore “all tech” is not a valid reference class from which to forecast the future. This use of “technology” is a bucket error.
What is a bucket error?
A bucket error is when multiple different concepts or variables are incorrectly lumped together in one's mind as a single concept/variable, potentially leading to distortions of one's thinking.
(Source)
The term was coined as part of a longer post by Anna Salamon that included an example of a little girl who thinks that being a writer entails spelling words correctly. To her, there’s only one bucket for “being a writer” and “being good at spelling”.
“I did not!” says the kid, whereupon she bursts into tears, and runs away and hides in the closet, repeating again and again: “I did not misspell the word! I can too be a writer!”.
[…]
When in fact the different considerations in the little girl’s bucket are separable. A writer can misspell words.
Why is “technology” a false bucket?
Broadly, there are two versions of the false technology bucket out there: tech=bad and tech=good. Both are wrong.
Why? Simply put: “technology” is not one kind of thing.
The common thread across the set of all technology is highly abstract (“scientific knowledge”, “applied sciences”— in other words, pertaining to our knowledge of the entire natural world), whereas concrete technologies themselves do all manner of things and can have effects that counteract each other. A personal computer is technology. Controlled fire is technology. A butterfly hair clip is technology. A USB-charging vape is technology. A plow is technology. “Tech” today is often shorthand for electronics and software. Some of this kind of technology, like computer viruses, is made to cause harm and violate people’s boundaries. But continuous glucose monitors are made to keep people with diabetes alive and improve their quality of life. It’s not that there are no broad commonalities across technologies— for example, they tend to increase our abilities— but that there aren’t very useful trends in whether “technology” as a whole is good or bad.
People who fear technological development often see technological progress as a whole as a move toward convenience and away from human self-reliance (and possibly into the hands of fickle new regimes or overlords). And I don’t think they are wrong— new tech can screw up our attention spans or disperse communities or exacerbate concentrated power. I just think they underappreciate, or take for granted, how much the older technologies they are used to have enhanced our lives: so much, on balance, that the false bucket of “tech progress as a whole” has so far been worth the costs. But that doesn’t mean that new tech will always be worth the costs.
In fact, we have plenty of examples of successfully banned or restricted technologies, like nuclear bombs and chemical weapons, whose use we had every reason to suspect would represent a change for the worse. The boosters of tech progress often forget to include these technologies in their parade of Luddite-embarrassing technological successes. Have bans on weapons of mass destruction held the world back? If not, shouldn’t that give the lie to the “technology=good” bucket? Sadly, for many who think this way, “weapons” seems to sit in a falsely separate bucket from “technology”.
What does this have to do with AI?
We don’t know what to think about AI. We don’t know when AGI is coming. We don’t know what will happen. Out of that ignorance, we attempt to compare the situation to situations we understand better, and many are falling back on their conflated beliefs about “technology” in general. Those beliefs may be negative or positive. More importantly to me, those beliefs about technology just aren’t that relevant.
AI is, of course, technology. But I think it could just as accurately be called a “weapon” or, as AGI arrives, an “alien mind”.
Do those categories strike you as different buckets, with different implications? Does “weapon” or “alien mind” seem like a different reference class, leading to different predictions about how AI turns out for humanity?
If your instinct is to argue that, actually, AI is a technology and not a weapon or an alien mind (essentially, that the technology bucket is correct and AI belongs in it)— what does that move get you? Do you think it gets you a better reference class for forecasting? Some other predictive power?
Okay, now consider that AGI could well fit into all of these reference classes and more. Every time is a little bit different, but creating a new mind more intelligent than us could be very different indeed. There’s no rule that says “no time is actually different”, just like there’s no rule that says we’ll make it.
There is a place for looking at reference classes, but I would argue that, in this case, it’s at a much finer level1, and we must accept that in many ways we are in new territory. In response to concerns about risks from AI, I am sometimes told, essentially, “the world has never ended before”, which is both substantially false (the world has ended for the majority of species before us, and human civilizations have collapsed many times) and fallacious— if the planet had been destroyed, we wouldn’t be around looking back saying “well, the world only ended once before, but it has happened”.
We aren’t restricted to reasoning about large categories here. We can think about the specifics of this situation. I’ve had my disagreements with the modest epistemology concept, but the case against modesty clearly applies here. We can just reason (on the object level) about how a more intelligent entity could fuck us up, assess whether those things could really happen, and try to prevent them, instead of second-guessing ourselves and worrying that people who worried about superficially similar situations in the past looked stupid to future generations.2
No matter what happens with AI, it won’t mean that technology was truly good or truly bad all along. That means we can’t use tech=good or tech=bad as a premise now to figure out what’s going to happen with AI just because it’s a kind of technology.
(Standard disclaimer.)
1. Some examples:
- When predicting capabilities: Other ML models, possibly mammalian cortex.
- When predicting benchmarks: Available supplies per time in previous years, precedents and case studies for various kinds of applicable regulations, and the fates of VC-funded tech start-ups.
2. As I looked for links, I found that the Luddites have been getting some recognition and vindication lately, for a change.