We only notice that journalists aren't experts when we ourselves are experts in that field. But the narcissism of small differences can also make us overestimate the incorrectness of reporting about our field.
And I suspect a lot of the time, people know their writing is not perfect and that an expert could pick holes in it.
Some common reasons being:
a) they're working to a deadline so don't care (so the options are "write about X now" or "don't write about X now". "Write in a 100% technically correct way about X now" is a False Option)
b) the experts are not their audience (as you mention)
c) they actually just don't care that much how correct their writing is, for some other reason
a) and b) being probably more often admirable and justifiable than c).
I've been thinking a lot about the general tradeoffs around (a) with advocacy communication in AI Safety vs technical communications. There are a lot of blanket demands for rigor and precision that aren't responsive to the actual needs of the situation. It's wrong to assume that more precision is always better.
Interesting. I remember reading CEA's EA community building literature for the first time (like 6 years ago now), and there was such a heavy emphasis on fidelity and keeping EA small. I think that did make sense, but it's important to remember that we are limiting access to important ideas by doing this, and that is a *massive giant really-big* cost, even if we think it's one worth bearing.
In the constant battle for hearts and minds, sometimes you kinda have to go through the heart by being lower fidelity than you'd like, in order to get to the mind.
I suspect it may make sense (especially given short AI timelines??) to err towards being slightly less picky in general, and to consider how bad it would really be if people got slightly wrong impressions. I'm worried about better but overly nuanced ideas getting trodden on by people with worse ideas and less concern for detail, or just by less specific ideas that are by their very nature easier to communicate.
I really like Duncan Sabien's writing on relevant things like "Speaking of Stag Hunts" and "Concentration of Force".
This too: https://forum.effectivealtruism.org/posts/CrnFwpNYYSseb6Xt3/we-re-losing-creators-due-to-our-nitpicking-culture
I just think there are concepts that are more robust to less detail and harder to misinterpret in a bad way. Proselytizing alignment is hard (and I think EA has erred a lot by getting people with too much appetite for risk involved in building toward a singularity). But “Pause AI” is a very fail-safe message that works at low resolution. You can talk much more easily and broadly about not doing dangerous things until you know how to do them safely than you can about complicated game theory scenarios about competing to build AGI first and doing it the right way. EA/rationality has long made the mistake of thinking that there’s one correct level of nuance and it’s 100%.
Interesting.
Too much appetite for risk meaning not sufficiently risk-averse? Like e/acc types or?
I mean I'll read about all this myself but having been cut off for a long time I do need to update myself on what's been happening. Because I'm running largely on the models I had two years ago, some of which effectively hadn't updated much since two years before that, which hadn't updated since two years before that, and maybe the original models weren't that great anyway...
Always seemed to me the Venn diagram of 'being longtermist EA' and 'being risk averse about AI dev' was more or less a circle but clearly that's not the case any longer.
On nuance (this is actually all obvious stuff that I've realised totally just restates your point but I'll chuck it in anyway):
Obviously a key thing is that EA/AI in general is likely to attract nuance-obsessed people.
And then build enormous towers of thought based on foundations that themselves are nuanced, and zoom in closer and closer on the Mandelbrot set, and *wow look at this new spiral isn't it so cool and EA and rational*...
(Obviously this dynamic is clear to lots of people.)
It can isolate us from the real world, where people are looking at the set as a whole, jumping around between things that seem interesting. And communicating fully about your own spiral when you're 1000 spirals down seems... hard. Especially when the random-searchers have actually been finding out about other cool spirals, and now you're less good at modelling what they're thinking.
I love this. I have had the same response to contemptuous expert reactions to popularized writing without being able to put it into words.