Thoughts on CEA's Strategy [Part 1]
I make one small joke about skipping the Frodo bits when watching LOTR and then receive approximately 4000 messages asking if I’m ok and saying I sound unhinged. What is it with you people.
So now I have to write a boring post with no LOTR, no psychosexual development stages and no dogs, just to demonstrate my sanity.
I can do sane just fine. Look at me, sane, sane, sane. You know who is really insane? You and your prissy Frodo obsession. You piece of shit.
[also please don’t cross post this to the EA Forum thanks bye]
CEA will continue to take a “principles-first” approach to EA
> Rather than recommending a single, fixed answer to the question of how we can best help others, I think the value of EA lies in asking that question in the first place and the tools and principles EA provides to help people approach that question.
I agree! Hurrah!
> Four core principles that I and others think characterize the EA approach to doing good are[2]:
> Scope sensitivity: Saving ten lives is more important than saving one, and saving a thousand lives is a lot more important than saving ten.
> Scout mindset: We can better help others and understand the world if we think clearly and orient towards finding the truth, rather than trying to defend our own ideas and being unaware of our biases.
> Impartiality: With the resources we choose to devote to helping others, we strive to help those who need it the most without being partial to those who are similar to us or immediately visible to us. (In practice, this often means focusing on structurally neglected and disenfranchised groups, like people in low-income countries, animals, and future generations[3].)
> Recognition of tradeoffs: Because we have limited time and money, we need to prioritize when deciding how we might improve the world.
I think my list would be slightly different, and I’d characterise the items slightly differently, but I broadly agree. Hurrah again!
Though, I think it broadly makes sense to think of scope sensitivity, impartiality and recognition of tradeoffs (maybe) as principles. But I don’t think it makes sense to think of scout mindset as one - I see scout mindset as more of a … mindset.
I think the key difference here is that the other three seem chiefly about believing in a thing - eg. one should be impartial. But with scout mindset, it’s less about belief in a thing, and more about how one orients to the world.
The important thing here is that believing in a principle of scout mindset doesn’t guarantee that one will actually approach things with a scout mindset (hmmm… now that I think of it, this also seems true of the other three, but anyway). I think scout mindset is something one can develop over time, and also, importantly, the degree of scout mindsettyness one has is going to be heavily influenced by one’s social environment.
> We currently enact our mission via five main efforts [Events, Groups, Online, Community Health, Communications]. We think these programs all promote EA principles more than they promote any specific answer to how to do the most good, and we’ll continue to prioritize this approach in the future.
It seems right to me that these programs promote EA principles more than they promote any specific answer on how to do the most good.
But, it also seems that these programs promote ‘EA’ cause areas and interventions more than they focus on the principles.
For example, my guess is most of the sessions at EAG are within one of AI Safety, Global Health and Development, and Animal Welfare.
The post gives ‘Examples that demonstrate a commitment to principles’ for each program, and these mostly seem to be examples of being agnostic between EA cause areas.
To me it seems more accurate to characterise this approach as ‘EA causes (but not any particular one)-first’ rather than ‘Principles-first’.
> The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies. While I don’t think it’s necessary for us to share the exact same priorities as our funders, I do feel there are some constraints based on donor intent, e.g. I would likely feel it is wrong for us to use the GCRCB team’s resources to focus on a conference that is purely about animal welfare. There are also practical constraints insofar as we need to demonstrate progress on the metrics our funders care about if we want to be able to successfully secure more funding in the future. I’m interested in doing more to support a broader array of causes (e.g. running more events targeted at animal welfare or global health and development), though I expect there to be some barriers in terms of different willingness-to-pay for community building from different funders, team bandwidth, and in some cases staff interest.
This seems like a big pickle!
It seems like the funders value people ending up with some answers to ‘how to do the most good’ over others.
But as above ‘the value of EA lies in asking that question in the first place and the tools and principles EA provides to help people approach that question.’
It’s not clear to me that one can simultaneously aim to a) meet the requirements of funders who value people taking specific actions (eg. working in AI Safety perhaps) and b) support a community where ‘how to do the most good’ is a genuinely open question.
And if these aims are pursued concurrently, it seems like what will happen is the community is nominally about answering an open question on how to do the most good, and in practice there will be strong incentives for people to reach specific answers (which does not sound like a great place for scout mindset).
Does this happen in practice?
I think yes. I think that often community programmes (events, EA groups and so on) are evaluated and funded on the basis of how well they do at causing people to work in ‘high impact’ careers, which I expect in practice will round to something like roles on 80,000 Hours Jobs Board or organisations related to and endorsed by the EA community. And if funding is involved, this sets pretty strong incentives for people running the events and groups etc.
Does this matter?
I think yes, actually quite a lot. I think it’s reasonable for a community that’s aiming to answer a question like ‘how to do the most good’ to end up forming specific views (I mean, it’s hard to see how it could avoid that).
But once we incentivise community organisers to produce people with specific answers to the question, well, then it’s no longer a question really. And the community isn’t about answering a question but about recruiting for ‘EA recommended’ organisations and areas.
And if the community describes itself as an open question but subtly pushes towards specific answers, this seems pretty bad for the epistemic culture, and broadly a bit naughty imo.
Stewardship: CEA’s 2025-26 strategy to reach and raise EA’s ceiling
> The cornerstone of momentum will be growth: this is Community Building 101, but our renewed focus on it will represent the first concerted effort to grow EA post-FTX.
This seems wrong to me. When I google ‘Community Building 101’ I see things like ‘shared purpose and values’, ‘trust’, ‘interconnection’. But I don’t see ‘growth’.
In general it doesn’t seem to me that growth is community building 101, and most non-EA communities I’m part of aren’t strongly oriented to growth.
> We believe that without actors taking responsibility for setting and steering towards common goals (e.g. growing the community), providing public goods (e.g. improving the brand), and facilitating community-wide coordination (e.g. diversifying funding), EA won’t live up to its potential - and we’re not alone in thinking this.
Growing the community isn’t one of my goals, and it’s not clear to me that it’s a common goal across EA.
> The word “stewardship” is chosen carefully:
> It’s purposefully distinct from “ownership.” I don’t think we’re owners in the sense that we’re the sole people with responsibility, or that we get to make decisions that everyone else has to follow.
I think this seems good. And, as far as I can tell, basically all effective altruism infrastructure is owned by CEA.
> Impact is our north star, and we exist to serve the world, not the EA community.
I’m a bit unsure about this - I think stewardship kind of does imply serving the EA community. This doesn’t necessarily mean doing what the community thinks is best, but rather what’s best for the community as a whole. It’s plausible this is actually what is meant here.
Also, I do think that being a steward for the EA community can come apart from serving the world. If CEA thought that funneling people to AI Safety were the best thing for the world, but that it would also harm the community’s social fabric or something, I think doing so would be counter to their responsibilities as a steward.


> You know who is really insane? You and your prissy Frodo obsession. You piece of shit.
🥺👉👈
Honestly though, I'm kind of proud to be part of goading you into writing this. It's great!!
> This seems like a big pickle!
> It seems like, the funders value people ending up with some answers to ‘how to do the most good’ over others.
I agree, this seems bad if EA as an open question is valuable and worth protecting.
To me another symptom of EA becoming a vehicle for coordination, rather than a truth-seeking community, is that 80000h has really demoted "global priorities research" as a top cause area (see https://80000hours.org and Ctrl-F "global priorities")
Do you think Lightcone are doing a better job at fostering a good epistemic environment? Any things from them EA should copy?
Perhaps the real problem here is the shared assumption of elitism about EA elites. By that I mean the belief that impact is extremely heavy-tailed, so the overwhelming majority of arguments worth taking into account will come from a tiny number of people. Hence believing that it's not worth putting significant resources into having random college students discuss how to do the most good, because overwhelmingly, if they don't arrive at the 'EA elite consensus', then they're just wrong. So the thinking of "how to do the most good" should be concentrated among the few global priorities researchers who have proved themselves exceptional.
I too believe in extremely heavy-tailed impact distributions. But without the right environment and culture, the global priorities thinkers just won't emerge. How likely is it really that we don't need any further updates? Even if we're certain AI is the most important point of intervention, nobody had written about https://gradual-disempowerment.ai/ before 2025.