Discussion about this post

The Column Space:

> You know who is really insane? You and your prissy Frodo obsession. You piece of shit.

🥺👉👈

Honestly though, I'm kind of proud to be part of goading you into writing this. It's great!!

> This seems like a big pickle!

> It seems like, the funders value people ending up with some answers to ‘how to do the most good’ over others.

I agree; this seems bad if EA as an open question is valuable and worth protecting.

To me, another symptom of EA becoming a vehicle for coordination rather than a truth-seeking community is that 80,000 Hours has really demoted "global priorities research" as a top cause area (see https://80000hours.org and Ctrl-F "global priorities").

Do you think Lightcone is doing a better job of fostering a good epistemic environment? Anything from them that EA should copy?

Perhaps the real problem here is the shared elitist assumption about EA elites. By that I mean the belief that impact is extremely heavy-tailed, so the overwhelming majority of arguments worth taking into account will come from a tiny number of people. It follows that it's not worth putting significant resources into having random college students discuss how to do the most good, because if they don't arrive at the 'EA elite consensus', they're overwhelmingly likely to just be wrong. On this view, the thinking about "how to do the most good" should be concentrated among the few global priorities researchers who have proved themselves exceptional.

I too believe in extremely heavy-tailed impact distributions. But without the right environment and culture, the global priorities thinkers just won't emerge. How likely is it, really, that we don't need any further updates? Even if we're certain AI is the most important point of intervention, nobody had written about https://gradual-disempowerment.ai/ before 2025.
