Promises
[I think there are a lot of important points here, and I think this is pretty important to how EA operates. However, I am very tired and want to go to bed, so you will have to work pretty hard if you want to find those important points]
What are the promises that effective altruism makes?
What’s the deal? What does it whisper in your ear?
You will find belonging here
A common theme among people encountering EA is a reaction of the form ‘I’ve found my people’.
Finding your people is a powerful thing, especially when your people share your values and your worldview.
EA is often described as a community - effectivealtruism.org gives a list of ‘what the community has achieved’. We are a we.
It is something you can belong to, and be part of.
Though in practice, EA values people based on their prospective impact - or rather, their prospective impact as EA judges it. It’s common for group organisers to describe particular members of their group as ‘promising’, and to prioritise engaging with those people. This makes sense on the EA worldview, after all - some people will have much more of an impact than others.
It is straightforward to have a big impact
effectivealtruism.org links to ‘The best charities to donate to in 2025’. EA knows the best way to do good.
And it’s easy to do good - you can save a life for $5000.
Though in practice, well, does EA know? Is it simple?
There are big disagreements across cause areas. Even within cause areas there are big disagreements - in AI safety, it’s relatively common for people to think that others’ approaches are actively harmful.
EA values your opinion
After all, EA is a question, not an answer. It practices the scout mindset and searches for the truth.
Rachel Glennerster is quoted on effectivealtruism.org:
“Many of the concepts in effective altruism will be familiar to economists. What is unusual is to see these tools used to develop a practical guide on how to live an ethical life. It doesn’t tell you what choices to make; instead it sets out a simple framework for how to think through decisions.”
But again, the same website links to ‘The best charities to donate to in 2025’.
You already agree with EA
The website lists ‘four ideas you probably already agree with’ that ‘could mean you’re already on board with effective altruism’.
It describes EA as a philosophy ‘using reason and evidence to find the most effective ways to help others’.
But in practice, EA has a specific worldview and culture: libertarian, utilitarian, rooted in analytic philosophy, and so on.
In principle a communist could be an EA - after all, EA is just asking an open question about how to do the most good. They would just need to make the case for communism using reason and evidence.
In practice no one would listen, and they would find little of value in being part of the community.
EA presents in a certain way: EA is a community where you can find belonging, it values your opinion, you can have a big impact, it’s common sense, and so on.
But in practice these promises are never made good on, as the goalposts shift and shift.
Once you join the community, you realise that it is only once you start working for an EA org that you will really be able to have an impact, and only once you have climbed the hierarchy that you will really be listened to.


> Once you join the community, you realise that it is only once you start working for an EA org that you will really be able to have an impact
fyi i don’t feel this way, never felt pressure to work at an ea org, & always felt it would be better (impact-wise / by ea lights & my own) if i found something else to do.
this in part because ea jobs oversubscribed, so not neglected. also can earn to give better or just do things other eas are not doing elsewhere.
instead, i felt a pull to uncapped-upside paths: research, entrepreneurship, …
best places to do these things generally outside ea