Effective Altruism Should Seek Less Criticism

This is a submission to the Effective Altruism Criticism Contest. It was originally posted here.

Introduction

I do not read the Effective Altruism Forum (although that might change in the future), and so it’s somewhat surprising that I found out about this ongoing Effective Altruism Criticism Contest. I learned about it from a hierarchy of criticisms by Zvi Mowshowitz and Scott Alexander.

I largely agree with Mowshowitz that, even though it overtly requests outsiders’ opinions, there are hints in the contest description that suggest criticism within the paradigm will be better received than criticism of the paradigm.[1] I disagree with Mowshowitz in that I think this is a good thing. Effective Altruist organizations should not be asking for and rewarding paradigmatic criticism as much as they do.

I should make it clear that I do support fact checking and other forms of criticism within the research program. I am instead criticizing Effective Altruism’s request for “philosophical critiques”, while being “actively excited to bring in external ideas and expertise”. I think that Effective Altruism should seek less criticism about the core of its research program.

Full Disclosure: I don’t expect to win the contest. If I thought I might, I would put more time into writing this, use a less casual tone, and put more effort into following the listed criteria. The criterion I am most likely to miss is Aware, because it is much easier to be Aware if you are and have been a member of the community than if you are not. But it sounds like you want to hear my opinion, so here it is.

Key Ideas

  • Paradigmatic criticism leads to value drift.
  • The virtues for individuals, organizations, and societies are not exactly the same.
  • Even if you want to have value drift for individuals and for society, you might not want to have value drift for organizations.
  • A society with narrowly focused & mostly inflexible organizations, and a culture of individuals moving between them as their values shift, could be better than a society with organizations continually looking for paradigmatic criticism.
  • Effective Altruism will probably never be scientific in Kuhn’s sense – and it shouldn’t try to be. It should instead try to be scientific in Lakatos’s sense.

Paradigmatic Criticism Leads to Value Drift

I mean, that’s the point of paradigmatic criticism.

If you continually request and listen to criticism of your core beliefs, eventually you are likely to find some arguments that are persuasive. You then shift your value system in the direction of these new arguments. If your reasoning is good, then your values should improve over time as the result of value drift.

My Present Self thinks that this is moral progress. My values have become increasingly aligned with what I currently think that they should be. But my Past Self might disagree and instead see this as a sign of moral decay. This disagreement is the core challenge of value drift.

This seems to have already happened some for Effective Altruism. My (very limited) understanding of the history is that the Effective Altruism movement began with a focus on Global Poverty and Health. Since then, the movement has been convinced of the importance of other issues, like Existential Risk and Animal Welfare. These are not just extensions of the previous work – they correspond to new value systems that have been incorporated into the movement.

I am aware that there is some concern about value drift that occurs as a result of a community becoming more insular.[2] If the community becomes more inward looking, then it can lose touch with what does the most good for the most people. This is potentially a problem, but actively soliciting paradigmatic criticism is another major source of value drift.

I’m guessing that there has been plenty of discussion about value drift in this community. If this were my only argument, I would not have bothered writing this. It gets more interesting as you look at how value drift occurs at different levels of human organization.

Virtues for Individuals, Organizations, and Societies

What does it mean for an individual to do or be good? What does it mean for an organization to do or be good? What does it mean for a society to do or be good? Are these the same question?

While there is certainly a significant amount of overlap between the good for individuals, organizations, and societies, I think that there are also some important differences.

One example is how narrow of a focus you have. A society has lots of things that it wants to do. An organization is likely to narrow its focus to one or a few things. An individual will likely develop specialized skills to be even more narrowly focused on a few tasks. While the society may want to be generalist, the best way to achieve this is often to have most organizations and individuals in the society be specialists, so the society can gain the most from people’s comparative advantages.

Let’s take a more specific example: research to reduce the mortality rate from cancer. Lung cancer causes the most cancer deaths in the US, accounting for about a quarter of them. Some other cancers, like bone cancer, account for an order of magnitude fewer deaths. Society as a whole would want to prioritize lung cancer research over bone cancer research. This does not mean that organizations like the Anti-Bone-Cancer Triumvirate are wrong for focusing on the less deadly cancer, as long as the Anti-Lung-Cancer Conglomeration has more resources. Society ought to distribute resources in accordance with each cancer’s mortality rate, but that does not mean that any individual organization should. If there are significant benefits to be gained from extremely specialized expertise, it would be better for most organizations to research a single type of cancer. There should also be some organizations working on more general anti-cancer strategies, but there doesn’t have to be any particular organization that distributes its resources in line with the actual mortality rates.

The perspective of the individual is even more complicated. Ideally, everyone would have an accurate model of the mortality of different types of cancer and would understand the benefits of specialization. However, I could also see benefits from having the members of the Anti-Bone-Cancer Triumvirate believe that bone cancer is the biggest problem.
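The allocation logic here can be sketched as a toy calculation. Every number below is hypothetical except the “quarter of deaths” lung figure from the text, and the organization names are just the ones invented above. The point is that society-level funding can track mortality rates even though each organization is fully specialized, because the tracking happens through the relative sizes of the organizations rather than through any one organization splitting its own budget:

```python
# Toy model: society-level allocation via specialized organizations.
# All figures are hypothetical illustrations, not real statistics.
mortality_share = {"lung": 0.25, "colorectal": 0.09, "bone": 0.002}

total_budget = 1_000_000  # hypothetical total cancer-research dollars

# Each organization researches exactly one cancer. Society's desired
# distribution emerges from how large each organization is, not from
# any single organization dividing its budget across cancers.
org_budgets = {
    cancer: total_budget * share for cancer, share in mortality_share.items()
}

for cancer, budget in org_budgets.items():
    print(f"Anti-{cancer.title()}-Cancer org: ${budget:,.0f}")
```

On these made-up numbers, the fully specialized Anti-Lung-Cancer organization ends up with over a hundred times the budget of the bone cancer organization, which is exactly the society-level distribution, with no organization ever facing an internal prioritization decision.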

I don’t think it’s too surprising to claim that individuals and organizations should have a narrower focus and be more specialized than society as a whole. My next claim might be more surprising:

Openness to value drift is more of a virtue for individuals and society than for organizations.

Let’s continue with our cancer research example. Far fewer people smoke now than in earlier decades. The lung cancer death rate is falling, and lung cancer may eventually cease to be the leading cause of cancer mortality. Society wants to shift resources away from lung cancer and towards other kinds of cancer. What should the Anti-Lung-Cancer Conglomeration do? Should it use its resources to branch out into other cancers? Or should it continue to focus on lung cancer, while its donors and employees move to other cancer research organizations? I don’t think this is an obvious choice. As the mortality rate from lung cancer declines, it is clear that individuals should be less concerned about it and that society should shift its resources towards more important things, but you might still want every organization to remain equally committed to its original mission.

Now that we (hopefully) have some intuition, let’s look at some reasons why this would be the case.

If there are large gains from specialization and lots of institutional knowledge, then organizations have more reason to maintain their narrow focus. When they try to move into a new field, they won’t do a very good job of it. It would be better for their employees and resources to move to a different organization that already has the institutional knowledge and can incorporate them more productively.

Organizations have a brand that includes their reputation and what people expect them to do. When an organization chooses to change focus, there will be some people who disagree with the choice. They chose to join or support the organization because they valued the things it was doing, but now it is doing other things. Some of the people involved will change their values at the same time as the organization itself, but not everyone will – some will change their values sooner, some later, and some not at all. An organization that changes its values implicitly assumes that everyone involved changes theirs simultaneously. This never happens, so there will always be some people upset by the change.

This is not to say that organizations should never change their values. There certainly are situations where it’s worth upsetting the people who like the way things are. But we should be more cautious about changing the values of our institutions than changing our own values, especially if there are alternative institutions with different values that people can move to instead.

A Potential Alternative Model

I’m going to pick on Open Philanthropy for a bit, simply because they have a nice website. I have nothing against them in particular, and they’re probably doing great work.

In the upper left corner of their website, there is a place where you can look through the 14 focus areas that Open Philanthropy cares about. My guess is that they started with a focus on Global Health and Development and that the other focus areas have been added since then. Some of these seem like natural developments from their original values, like South Asian air quality. Others do not seem like natural developments from the original values, like Potential Risks from Advanced AI. I am not saying that these new values aren’t important – just that they are significantly different.

Having a list of 14 focus areas implies that they have the resources, both financial and human, to deal with all of them. There is also an implication that there is some similarity in their approach to all of them, because they are all housed in the same organization.

What am I suggesting instead?

The leading Effective Altruistic organizations should have a narrow focus. Each organization should have a specific purpose statement in its charter, and it should be difficult for that organization to change its values.

When donors decide to support one of these organizations, or when employees decide to work for one of them, it should be clear what the goals of that organization are. Those goals shouldn’t change over the long term, so donors and employees who build lasting relationships with the organization know what they’re getting into.

There should be multiple effective altruist organizations with different mission statements. When people change their mind about which is most important to support (either as a donor or as an employee), it should be easy for them to switch to another organization that more closely matches their views.

When someone wants to add a new focus area to the effective altruist community, they should create a new organization which pursues that goal using the techniques of effective altruism. This new organization can become part of the community. This would be better than having them try to convince the leading effective altruist organizations to include their goals as additional projects for them to work on.

Looking at a list of EA-related organizations, maybe this advice is already being followed? There are a lot of effective altruist organizations with a narrow focus. Maybe my impression of the movement comes from being more aware of the meta-EA organizations, which try to coordinate the others, than of the object-level effective altruist organizations themselves. Or maybe what I’m describing is already a real tendency in much of the effective altruist movement.

Kuhn’s Science and Lakatos’s Science

(Quotes below are from the Stanford Encyclopedia of Philosophy articles on Kuhn and Lakatos.)

I think it is fair to say that effective altruism is attempting to make altruism more scientific. The overarching goal is to promote empirical evidence about the consequences of our altruism.

It’s worth looking at what understanding of science we’re attempting to introduce into altruism.

Kuhn seems to be the most widely read of the philosophers of science from the later twentieth and twenty-first centuries. Kuhnian terminology was used by both Mowshowitz and Alexander in their discussion of the contest.

I don’t think that effective altruism should be looking primarily at Kuhn’s description of science. We shouldn’t be calling this a paradigm.

To see why, let’s look at how Kuhn distinguishes science from non-science:

 Kuhn describes an immature science, in what he sometimes calls its ‘pre-paradigm’ period, as lacking consensus. Competing schools of thought possess differing procedures, theories, even metaphysical presuppositions. Consequently there is little opportunity for collective progress. Even localized progress by a particular school is made difficult, since much intellectual energy is put into arguing over the fundamentals with other schools instead of developing a research tradition. However, progress is not impossible, and one school may make a breakthrough whereby the shared problems of the competing schools are solved in a particularly impressive fashion. This success draws away adherents from the other schools, and a widespread consensus is formed around the new puzzle-solutions.

This widespread consensus now permits agreement on fundamentals. For a problem-solution will embody particular theories, procedures and instrumentation, scientific language, metaphysics, and so forth. Consensus on the puzzle-solution will thus bring consensus on these other aspects of a disciplinary matrix also. The successful puzzle-solution, now a paradigm puzzle-solution, will not solve all problems. Indeed, it will probably raise new puzzles.

Altruism is currently in a pre-paradigm period. There is very little consensus between different altruistic organizations. Different schools (arts organizations, universities, foreign aid, etc.) have very different worldviews and so have a hard time building on each others’ progress.

Effective altruism seems to be trying to create a science in a Kuhnian sense. The breakthrough proposed is consequentialist calculations of the impact of charities. This provides a solution to the shared problem of what causes are more urgent and so what causes people should preferentially donate to.

Is the hope that effective altruism will draw in adherents from the other schools to form a widespread consensus around this type of puzzle solution?[3]

If this happens, then there will be a widespread consensus among altruistic organizations on how to solve puzzles in their fields. This would help to align the worldviews of various altruistic organizations, which allows them to focus more on making progress in solving the puzzles proposed by the paradigm.

Is this the direction we want altruism to move towards?

No. We should not want altruism to form a consensus around a single paradigm.

Diversity is one of the key strengths of philanthropy. Different organizations have different goals, different methods of pursuing those goals, and different ways of determining whether their efforts are successful. Success or failure in one part of the ecosystem does not drag down the other parts. In a pluralistic society, it is good to have multiple different value systems being promoted. Even if the single paradigm is a values handshake between multiple ethical systems, it still undermines pluralism, and it reduces the number of independent points of failure – fewer mistakes are needed before the entire community becomes misguided.

This is especially valuable if philanthropy has elements of Extremistan[4] to it. In that case, having a few philanthropic organizations get things very right matters more than having most philanthropic organizations get most things mostly right.

If we reject the Kuhnian idea of science as a paradigm that forms a universally accepted consensus, how should we understand effective altruism as a science?

Lakatos’s idea of science is a much better goal.

Lakatos described science in terms of research programs instead of paradigms – or Popperian theories. A research program is a sequence of theories built from the same core of ideas. It is scientific if it is progressing, and not scientific if it is stagnating or degenerating. A research program is progressing if its theoretical predictions and empirical content grow with each theory in the sequence.

 The unit of scientific evaluation is no longer the individual theory (as with Popper), but the sequence of theories, the research programme. We don’t ask ourselves whether this or that theory is scientific or not, or whether it constitutes good or bad science. Rather we ask ourselves whether the sequence of theories, the research programme, is scientific or non-scientific or constitutes good or bad science. Lakatos’s basic idea is that a research programme constitutes good science – the sort of science it is rational to stick with and rational to work on – if it is progressive, and bad science – the kind of science that is, at least, intellectually suspect – if it is degenerating. What is it for a research programme to be progressive? It must meet two conditions. Firstly it must be theoretically progressive. That is, each new theory in the sequence must have excess empirical content over its predecessor; it must predict novel and hitherto unexpected facts. Secondly it must be empirically progressive. Some of that novel content has to be corroborated, that is, some of the new “facts” that the theory predicts must turn out to be true. 

What is notably missing is any idea of consensus. A research program does not have to dominate its intellectual neighborhood. Instead, it has to be progressing.
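Lakatos’s two conditions can be made concrete with a toy formalization. This is entirely my own framing, not anything from Lakatos or the SEP: model each theory as a set of predicted facts plus the subset of those that have been corroborated, then check the sequence pairwise for excess content (theoretical progress) and confirmed novel content (empirical progress):

```python
# Toy formalization (my own framing) of Lakatos's two conditions for a
# progressive research program, checked over a sequence of theories.
from dataclasses import dataclass, field

@dataclass
class Theory:
    predictions: set                              # all facts the theory predicts
    corroborated: set = field(default_factory=set)  # predictions observed true

def is_progressive(sequence: list[Theory]) -> bool:
    """True if each successor theory (1) predicts novel facts beyond its
    predecessor and (2) has some of that novel content corroborated."""
    for prev, new in zip(sequence, sequence[1:]):
        novel = new.predictions - prev.predictions
        if not novel:                        # no excess empirical content:
            return False                     # theoretically stagnant
        if not (novel & new.corroborated):   # no novel prediction confirmed:
            return False                     # empirically degenerating
    return True
```

For example, a successor theory that predicts a new fact which is then observed keeps the program progressive, while a successor that merely restates its predecessor does not – no consensus with rival programs is required anywhere in the check.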

Effective altruism should strive for something similar. Instead of wanting to become the paradigm for altruism, the movement should think of itself as one research program, or a collection of related research programs, that are progressing in theoretical and empirical content. They don’t have to try to solve the problems of other research programs, and they shouldn’t try to draw everyone else in, because the goal is progress in each research program, and not solving ethics.

So how should a research program seek criticism?

Each individual theory in the research program should be falsifiable, and the research program as a whole should specify the conditions under which its theories are falsified. However, the research program should have a core of ideas that resists falsification. When a discrepancy between experiment and theory occurs, it should falsify the specific theory while preserving the core ideas. Commitment to the core makes progress easier by providing a shared foundation to build on.

Instead of an individual falsifiable theory which ought to be rejected as soon as it is refuted, we have a sequence of falsifiable theories characterized by a shared hard core of central theses that are deemed irrefutable – or, at least, refutation-resistant – by methodological fiat. This sequence of theories constitutes a research programme.

This is what I hope for from effective altruism, but this contest explicitly asks for challenges to the hard core of the research program.

Lakatos suggests that you should not be trying to challenge the hard core; it should be treated as close to irrefutable. Criticism should falsify individual theories, using methods described by the research program itself. Frequently revisiting your core ideas slows the progress that could be made by building on them.

This approach is unlikely to lead to widespread consensus. This is a feature, not a bug. If you’re just going to be one movement within the broader field of altruism, you don’t have to make sure that you get the values handshake right.[5] If you’re going to be multiple allied movements, you may not have to worry about values handshakes at all. Instead, each organization can focus on making progress within its research program.

Conclusion

You probably shouldn’t listen to my advice.

I would hate for my advice to be persuasive but wrong. Effective altruism is making the world a better place, and I definitely do not want to change anything that has made it good and great. If you can use any piece of what I’ve written here to improve what you’re doing, great! If you can’t, then I would much rather you completely ignore me than change something because you feel like you should respond to my criticism.

You’re the people who’ve created an extremely successful altruistic movement. You’ve probably done more good in the world than I ever will. Why would you listen to me about how to make altruism more effective?

References

[1] Although I’d call it a research program instead of a paradigm, for reasons you’ll see below.
[2] I’m not entirely convinced of this as an empirical claim. Do insular communities have more or less value drift than more open communities?
[3] I legitimately don’t know the answer to this question, but I think it’s worth thinking about, even for people who do.
[4] Taleb’s term.
[5] One example of a values handshake: A broad effective altruist organization has to divide the money it receives among its different focus areas. How much money should each receive? If you have narrower organizations, none of them have to decide how important global poverty vs. animal suffering vs. existential risk is.

Thoughts?