Wild animal welfare (WAW) is:
A paradigm that sees the wellbeing and experiences of individual animals as the core moral consideration in our interactions with nature
An interdisciplinary field of study, largely incubated by Effective Altruism (EA)
An EA cause area that aims to develop and implement interventions
A crucial consideration is one that warrants a major reassessment of a cause area or an intervention.
WAW is clueless or divided on a bevy of foundational and strategic crucial considerations.
The WAW account of nature
There are A LOT of wild animals
100 billion to 1 trillion mammals,
at least 10 trillion fish,
100 to 400 billion birds (Brian Tomasik)
10 quintillion insects (Rethink Priorities)
Each year, there are 30 trillion wild-caught shrimp alone! (Rethink Priorities, unpublished work)
In contrast, roughly 24 billion animals are alive and being raised for meat at any given time
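The scale comparison above can be made concrete with a quick calculation. This is only a back-of-the-envelope sketch using the rough estimates cited above, not verified figures:

```python
# Rough scale comparison using the estimates cited above.
# All inputs are the cited rough estimates, not verified data.
wild_mammals_low = 100e9   # lower bound: 100 billion mammals
wild_fish_low = 10e12      # "at least" 10 trillion fish
wild_birds_low = 100e9     # lower bound: 100 billion birds
wild_insects = 1e19        # ~10 quintillion insects
farmed_for_meat = 24e9     # ~24 billion farmed animals alive at any time

# Even using only the most conservative lower bounds for vertebrates:
min_wild_vertebrates = wild_mammals_low + wild_fish_low + wild_birds_low
print(f"Wild vertebrates vs. farmed: {min_wild_vertebrates / farmed_for_meat:.0f}x")  # ~425x
print(f"Insects vs. farmed: {wild_insects / farmed_for_meat:.0e}x")  # ~4e+08x
```

Even the lowest vertebrate estimates outnumber the farmed-animal population by hundreds of times, and insects by roughly eight orders of magnitude.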
WAW: Nature is not already optimized for welfare
Though humans do cause wild animals to suffer (for example, polar bears starving as sea ice melts due to anthropogenic climate change), suffering is more fundamentally a part of many wild animals' lives
There are many natural disasters, weather events, and hardships
Sometimes animals have positive sum relationships (like mutualistic symbiosis), but a lot of times animals are in zero sum conflict (like predation)
Nature is an equilibrium of many different lineages, each maximizing reproductive fitness. Because suffering and happiness evolved, they presumably exist to serve the goal of maximizing fitness, not maximizing happiness
Therefore, nature could, in theory, be changed to improve the welfare of wild animals, which is presumably lower than it could be.
Foundational Crucial Considerations
Should we try to affect WAW at all?
Is taking responsibility for wild animals supererogatory?
Do we have the right to intervene in nature?
Can we intervene competently, as we intend, and in ways that don’t ultimately cause more harm than good?
What constitutes “welfare” for wild animals?
What animals are sentient?
What constitutes welfare?
How much welfare makes life worth living?
Negative vs. classical utilitarianism
What are acceptable levels of abstraction?
Species-level generalizations?
“Worth living” on what time scale?
A second?
A lifetime?
The run of the species?
How to weigh intense states
Purely affective welfare or also preference satisfaction?
How much confidence do we need to intervene?
Should irreversible interventions be considered?
Is it okay to intervene if the good effects outweigh some negative effects?
Are we justified in not intervening?
Status quo bias
Naturalistic fallacy
Strategic Crucial Considerations
Emphasis on direct or indirect impact?
Theory of Change: Which effects will dominate in the long run?
Direct impact or values/moral circle expansion? (Direct impact for instrumental reasons?)
How to evaluate impact?
Is WAW competitive with other EA cause areas?
Should we work on WAW if there aren’t direct interventions now that are cost competitive with existing EA interventions?
How much should EA invest in developing welfare science cause areas vs. exploiting existing research?
What is the risk of acting early vs. risk of acting late?
How long is the ideal WAW timeline?
How much time do we have before others act?
How long do we have before AI will take relevant actions?
How will artificial general intelligence (AGI) affect WAW? How should AI affect WAW?
AGI could be the only way we could implement complex solutions to WAW
AGI could also have perverse implementations of our values
WAW value alignment problem:
We don’t know/agree on our own values regarding wild animals
We don’t know how to communicate our values to an AGI
How do we hedge against different takeoff scenarios?
Convergence?
Most views converge… in the short term
“Field-building” for now
Alliances with conservationists, veterinarians, poison-free advocates, etc.
As with every other cause area, those with different suffering:happiness value ratios will want different things
The WAW value alignment problem is fundamental, and it is especially troubling because we can get only limited input from the animals themselves.
How would we know if we got the wrong answer from their perspective?
The long term future of WAW is at stake!
WAW as a field is still young and highly malleable
Prevent value lock-in or pick good values to get locked in
Be transparent about sources of disagreement, separating values from empirical questions from practical questions
Acknowledgments
Thanks to the rest of the WAW team at Rethink Priorities, Will McAuliffe and Kim Cuddington, for help with brainstorming the talk this post was based on, to my practice audience at Rethink Priorities, and to subsequent audiences at University College London and the FTX Fellows office.
This blog practices post-publication editing and updating.
Some loose thoughts from a reader who is not involved in EA, philosophy, or the rationalist community...
EA seems like a self-evidently valuable exercise in clarifying our thinking about ethics. WAW seems like accelerating that process in a vertiginous way to where it's hard to have much confidence in anything. As the shrimp cartoon suggests.
We usually think of the interactions between non-human organisms, each other, and their environment as a space where our ethical principles are completely irrelevant, like a naive category error. Our own interactions with animals are a liminal space where most people feel comfortable with being uncertain and inconsistent. As a vegetarian opposed to factory farming, I oppose this vagueness: we need to take responsibility and wind down our regime of cruelty. At that point, though, thinking about WAW becomes unavoidable. I’m glad you’re doing this work.
In trying to see and map the macro landscape of life, the experience of suffering begins to look like the overwhelming majority of experience, for organisms generally. Thinking too hard about this seems like a possible pathway to mental illness, or Buddhism.
In Stapledon's "Last and First Men," one of the successive human races learns how to obtain direct sensory access to the past. Their explorations in the human past, and its overwhelming sadness, lead to an epidemic of despair and psychosis that almost destroys them.
In one of her stories, James Tiptree Jr. describes a series of scenes in a man's life which each end in terrible disappointment. At the end, it emerges that his consciousness is somehow being held and farmed by an alien species which extracts and feeds on painful emotions.
From some angles, taking responsibility for the welfare of organisms quite different from us looks like hubris, because of the weakness of our understanding. We struggle to build a coherent account of our own “welfare,” but let’s stipulate that we can identify our own welfare in certain respects. Ethical reasoning involves modeling other people’s subjectivity on our own, a theory of mind. Given the variation of human culture and experience, this can be trickier than it seems. Members of previous generations, who tried really hard to identify and act on their ethical responsibilities, look very misguided to us: Victorian missionaries in the Pacific, etc. etc. The further we get from humans, the more uncertain the concept of welfare would seem to be.
In one possible scenario in which we do take responsibility for the welfare of other organisms, and intervene to improve it, we would evolve into something like gods. In that scenario, we would also take responsibility for the welfare of plants, fungi, microorganisms, and inanimate matter at various scales, and for improving the welfare of the past as well as the future.
As you point out, a possible near-future AGI may overtake us and assume responsibility for our welfare as well as other species’. I suspect that aligning an AI’s values with our own is intrinsically impossible.
> Each year, there are 30 trillion wild-caught shrimp alone! (Rethink Priorities)
This statistic both seems unrealistically high and doesn’t seem to match the source.
I’m assuming “wild-caught shrimp” means “shrimp caught in the wild by humans to feed humans”. That means that the average human eats 30 trillion/7 billion = ~4000 shrimp per year or about 10 shrimp per day per human. Given that many humans do not eat any shrimp (either for moral reasons or lack of access), and many people might only eat a few shrimp per month, this implies that some subgroups of people must be eating a huge number of shrimp. This isn’t impossible, but it does seem unlikely. Perhaps the statistic means that many of the shrimp are caught and fed to non-humans, or are caught accidentally in nets? If so, it’s probably worth mentioning that. (And if not, it’s also worth mentioning!)
Beyond that, the source doesn’t seem to mention this statistic at all. It does say “Fishcount.org estimates that between 220 billion and 526 billion decapod crustaceans were slaughtered in aquaculture production in 2015 alone.” However, that refers only to aquaculture of decapod crustaceans (a group that contains shrimp) and not to wild-caught shrimp. It’s not impossible that there are 60x-130x more wild-caught shrimp than aquaculture decapod crustaceans, but it does seem somewhat unlikely.
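The arithmetic in this comment checks out; here is a quick sanity check of both calculations, using only the figures quoted above (the 30 trillion claim and the Fishcount.org aquaculture range):

```python
# Sanity check of the back-of-the-envelope arithmetic in the comment above.
# All inputs are the figures quoted in the comment, not independently verified.
wild_caught_shrimp_per_year = 30e12  # the post's claimed 30 trillion/year
world_population = 7e9               # ~7 billion people
aquaculture_decapods_low = 220e9     # Fishcount.org 2015 estimate (low end)
aquaculture_decapods_high = 526e9    # Fishcount.org 2015 estimate (high end)

# Per-capita consumption implied by the claim:
per_person_per_year = wild_caught_shrimp_per_year / world_population
per_person_per_day = per_person_per_year / 365
print(f"{per_person_per_year:.0f} shrimp/person/year")  # ~4286
print(f"{per_person_per_day:.1f} shrimp/person/day")    # ~11.7

# Ratio of claimed wild catch to the aquaculture estimates:
ratio_low = wild_caught_shrimp_per_year / aquaculture_decapods_high   # ~57x
ratio_high = wild_caught_shrimp_per_year / aquaculture_decapods_low   # ~136x
print(f"Claimed wild catch is {ratio_low:.0f}x to {ratio_high:.0f}x the aquaculture range")
```

So the implied figures are roughly 4,300 shrimp per person per year (about 12 per day) and a wild catch 57x to 136x the aquaculture estimate, matching the rounded numbers in the comment.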
I agree that animal welfare is important though and this post was very informative about the cause. Thank you!