Tuesday, June 13, 2017

Remonstration

A recent conversation with some friends has me thinking about the roles we can fruitfully play as philosophers of science. I thought I'd write up in a blog post one thing that came out of that conversation: a role we sometimes play that I feel is not highlighted often enough.

In philosophy we learn about tools and methods of critical thinking and argument construction and evaluation. For instance, a standard part of philosophical training is going through some basic logic. You should learn therein what it takes for an argument to be valid, and, going in the other direction, how one can demonstrate the invalidity of an argument by constructing counter-models. (If this doesn't mean anything to you, I will be going through an example later in this post!) That is just part of basic philosopher training. If you go into philosophy of science you will further specialise, perhaps learning about experimental technique, statistical methods, or theories of confirmation along the way. All of these can put somebody in a decent enough position to evaluate the cogency of arguments that scientists put forward, provided one familiarises oneself with the particular theoretical background within which the scientists one is evaluating are working.

And it matters that scientists are making cogent arguments! Science has a lot of social cachet; with some well-noted exceptions, folk trust scientists and will tend to believe claims that scientists put forward about the world. What scientists conclude is therefore deeply significant to our worldview and senses of self. Further, in many spheres of life we base policies on recommendations from scientific experts. The enterprise of science itself involves moving huge numbers of people and vast resources around, and the opportunity cost of having all these smart folk spend their time in this way rather than on other socially valuable tasks is itself huge. We want scientists to be basing their claims, recommendations, and activities on sound argumentation and good reasoning, so as to ensure that this cachet and those resources are put to the best use we can manage.

So then, putting these two together, we get a natural thought about how philosophers of science should use our skills. We should monitor the arguments scientists make, and where we find that their methods or modes of argument are not capable of supporting the conclusions or recommendations they draw from them, we should bring our expertise in the evaluation of inferences and arguments (broadly construed) to bear on calling this out and suggesting better practice for the future. (I recall reading, though I do not recall where, E.O. Wilson once writing that this is exactly what he thought of as the point of philosophers of science: people looking over his shoulder saying `Oh no I don't think this is good enough, what about such and such counter argument, eh?' He noted that while this could be pretty irritating in the moment, on reflection he thought it valuable.) I call this kind of thing `remonstration'; it's a kind of `speak truth to power!' norm, and I think we should see it as a valuable part of our mission as philosophers of science.

I am going to go through an example from my own work in a bit of detail below, but for some more illustrious examples one might want to check out: Clark Glymour's critique of the statistical reasoning that underlay the famous Bell Curve book and much of the rest of social psychology at the time, Nancy Cartwright's long-running project critically evaluating the limitations of randomised controlled trials for medical or social research, or Roman Frigg's work (discussed, say, at the end of this excellent episode of the generally excellent Sci Phi podcast) on over-confident and over-specific claims made on the basis of models of climate change.

But for the example of remonstration I am most familiar with (and also to allow me to explain and slightly reframe this previously published work of mine) I'd like to go through my paper On Fraud. One of the motivations for that paper was thinking about claims currently being made about how we should deal with the replication crisis in social psychology. Broadly, lots of claims in social psychology that were thought to have been securely established are being found not to stand up to sustained scrutiny when people attempt to replicate the initial experiments which led to their acceptance, or redo the statistical analyses with bigger or better data sets. In thinking about why this is occurring, a number of scientists have come to conclude that one (but not the only) source of the problem is that scientists are not just seeking the truth for its own sake, but are instead encouraged by various features of the incentive structure of science to pursue credit (esteem, reward, glory, social recognition by their peers in the scientific community). This pursuit of credit itself incentivises bad research practices, ranging from the careless to the outright fraudulent. If only we could remove these rival incentives which are causing the misconduct, and instead encourage pure pursuit of the truth, we'd have removed the incentive to engage in such misconduct. Since I had seen some very similar arguments come up before in my more historical scholarship on W.E.B. Du Bois, my interest was very much piqued and I got to thinking about whether this argument should be accepted as a sound basis for science policy.

I came to conclude that the psychologists and sociologists of science making these arguments were making a subtle mistake in how they reasoned about policy in light of scientific evidence. They were doing good empirical work tracing out the causes of much of the research malpractice we witness in science. But on the basis of this they were concluding that if we removed the actual causes of fraud we'd see less fraud. That is to say, they were establishing premises about the causes of fraud in the actual world, and concluding that a policy which intervened on (in fact removed or greatly lessened) these causes would mean that there would be less fraud after our intervention. After all, it's a natural thought: if X was what was causing the fraud and now there's no more (or much less) X, well, you've removed the cause, so the effect should go away too, right? Not so. Such arguments are not valid -- their premises can all be true while their conclusion is false. So I constructed a counter-model, which is to say a model which shows that all of their premises can be true while their conclusion is false.
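
To make the structure of the argument vivid, here is a schematic rendering in possible-worlds terms. This gloss, and the labels in it, are my own invention for this post rather than any formalism from the paper:

```latex
% My own schematic gloss of the argument; not the paper's formalism.
% Let $w_0$ be the actual world and $w_1$ the world the policy would bring about.
\begin{align*}
\text{Premise 1:} \quad & \text{In } w_0 \text{, credit seeking causes (we may even grant: all of) the fraud.}\\
\text{Premise 2:} \quad & \text{The policy brings about } w_1 \text{, in which credit seeking is absent.}\\
\text{Conclusion:} \quad & \text{There is less fraud in } w_1 \text{ than in } w_0 \text{.}
\end{align*}
% A counter-model is any way of filling in w_1 on which both premises hold while
% some other motive generates at least as much fraud in w_1 as credit seeking did
% in w_0: the premises stay true and the conclusion comes out false.
```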

Without going into too much detail, I produced a model of people gathering evidence and deciding whether or not to honestly reveal what evidence they received when they go to publish. Fraud is an extreme form of malpractice, of course, but it would do no harm to my arguments to interpret the agents as deciding whether or not to engage in milder forms of data fudging or other research malpractice. We can model the agents as pure credit seekers: they just want to gain the glory of being seen to make a discovery. Or we can model them as pure truth seekers: they just want the community to believe the truth about nature. (We can also consider mixed agents in the model, but set that aside.) In the model credit seeking can indeed incentivise fraud, and for the sake of the counter-model we may grant that in the actual world all fraud is incentivised in this way. But what I show is that in this model, even if we suppose that there were some policy that could successfully turn all scientists into pure truth seekers, it does not guarantee that there is less fraud -- in fact truth seeking can, in some especially worrying circumstances, actually lead to more fraud!
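
To give a flavour of how that can happen, here is a crude toy simulation. It is emphatically not the model in the paper: the decision rules, parameter values, and names in it are all invented for this post, purely to show that the possibility is coherent.

```python
import random

def fraction_misreporting(agent_type, n_scientists=10000, prior_in_h=0.9,
                          error_rate=0.3, seed=0):
    """Toy sketch: each scientist runs a noisy experiment on a hypothesis H
    (which is in fact true) and decides whether to report what they actually found."""
    rng = random.Random(seed)
    misreports = 0
    for _ in range(n_scientists):
        # With probability error_rate the experiment misleadingly points against H.
        evidence_supports_h = rng.random() > error_rate

        if agent_type == "credit":
            # Stylised credit seeker: splashy positive findings get the glory, so a
            # negative result is sometimes fudged into a positive one.
            misreport = (not evidence_supports_h) and rng.random() < 0.5
        elif agent_type == "truth":
            # Stylised truth seeker: so confident that H is true (high prior) that a
            # negative result is written off as misleading noise and "corrected", lest
            # it lead the community away from the truth.
            misreport = (not evidence_supports_h) and prior_in_h > 0.8
        else:
            raise ValueError(f"unknown agent type: {agent_type}")

        misreports += misreport
    return misreports / n_scientists

if __name__ == "__main__":
    # Compare the "actual" world of credit seekers with the post-policy world of truth seekers.
    print("credit seekers misreport:", fraction_misreporting("credit"))
    print("truth seekers misreport: ", fraction_misreporting("truth"))
```

With these made-up numbers the pure truth seekers end up misreporting roughly twice as often as the credit seekers. That is all a counter-model needs: everything said about the actual world can be true while the post-policy world contains more fraud, not less.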

There is a general lesson here, in fact, that I wish I had done more to bring out in the paper. The point is: if you are basing policy on empirical research, it is tempting to think that what you need to know is whether the policy would be effective in the actual world. That, after all, is where you will be implementing the policy! But that is the wrong causal system for evaluating the effects of your proposed policy. What you need to know is whether the policy would be effective in the world (or causal system) that will exist after the policy is implemented. In the actual world -- sure, credit seeking is causing malpractice. But the fact that you have removed that incentive to commit fraud does not by itself mean you have removed every incentive to commit fraud. It may be that in the world that exists after this intervention there are new temptations to commit fraud. Truth seeking itself may be one of them. Policy-relevant causal information must include counter-factual information, information about the world that will exist after a not-yet-implemented policy has been carried out.

If you want the real details of my argument -- read the paper! But what I want to note here is how this is me trying to be the change I want to see in philosophy of science. I found some scientists making policy recommendations on the basis of their empirical research (in this case, policy affecting science itself). I thought about the structure of their arguments, and realised they were making implicit assumptions about counter-factual reasoning. A general philosophy education gives you tools for reasoning about counter-factuals, so I could bring that to bear. What is more, the general critical thinking (or logic) training that is part of being a philosopher points the way to counter-model construction as a means of critiquing arguments. Finally, discipline-specific training in the philosophy of the social sciences gave me tools for building models of social groups, which was what was of particular relevance here. I was therefore able to remonstrate, to bring my training to bear in calling attention to an error in scientists' reasoning, and what's more an error that (since it was supposed to be the basis of policy) has the potential to carry real social and opportunity costs. I don't claim, of course, that this is the best example of remonstration in the literature (cf. my illustrious colleagues above!) -- but I hope going through in depth an example I am intimately familiar with gives people a better sense of how philosophy of science as remonstration is a good use of our disciplinary tools and expertise.

Now, it is certainly not my claim that only philosophers of science engage in this kind of remonstration. Statisticians often engage in a very similar activity -- Andrew Gelman's blog alone is full of it. There is also a fine tradition of scientific whistleblowers who call foul when misconduct is afoot. Remonstrating with scientists whose reasoning has, for one reason or another, gone astray ought not to be, and fortunately is not in fact, left to philosophers alone. And, in case it needs to be said, nor is this (nor ought it to be) all of what philosophers of science get up to. Most of my own work, for instance, is not remonstration.

But when I see accounts of the tasks of philosophy of science they typically fall into one of three categories. Concept construction or clarification, where the goal is something like producing or improving a tool that might help scientists do their job better. Scientific interpretation, where the goal is to do something like provide an understanding of scientific work that would make sense of the results of scientific activity, and tell us what the world would be like if our best evidenced theories were to be true. And meta-science, where the goal is to do something like provide an explanatory theory which tells us why it is that scientists reason (or ought to reason) in some ways rather than others. All of these can be valuable and I hope philosophers of science keep doing them. And I can even understand why people aren't keen to advertise the disciplinary mission of remonstration: it makes us into the stern humourless prigs of science, somewhat akin to Roosevelt's critic on the sideline hating on the folk actually getting stuff done. But, since I think remonstration can be good and necessary, I hope that, even if it doesn't win us friends, we hold true to that mission, along with our comrades elsewhere in the academy and with our eye on the social good, and remonstrate against scientific overreach, malpractice, or just plain old error, wherever we should see these arise.

Thursday, June 1, 2017

The Diversity of Formal Philosophy

I've just come back from the Formal Epistemology Workshop! It was a lovely conference, and I highly recommend it to up-and-coming formal epistemology folk who want to get a sense of what's going on across the field. I was struck by the diversity of projects, and also by the interesting fact that multiple people said something like ``I feel like I am the least-formal formal-epistemologist here.'' So! I invented a taxonomy of projects in Formal Philosophy, which I'll present here with examples, and then comment on below.

About -- some formal system that touches upon matters of prior philosophical interest is either itself the object of study, or some feature of it or result therein is, or it is useful for stating or reformulating a prior philosophical problem. One does not reason within the system; rather, one either reflects on it or some aspect of it, or draws out morals from it and considers how they bear upon another problem.

Examples of work of this sort: Bertrand Russell's On Denoting, Plato's Meno, Audrey Yap's Idealization, Epistemic Logic, and Epistemology, Jason Stanley's Know How, David Lewis' On The Plurality of Worlds, Kenny Easwaran's Why Physics Uses Second Derivatives, Beall and Restall's Logical Pluralism, Danielle Wenner's The Social Value of Knowledge and the Responsiveness Requirement for Biomedical Research, Michael Weisberg's Who Is A Modeller?

Within -- the author(s) themselves use an established formal framework to prove results which are of philosophical interest. Perhaps the results are taken to be interesting because of what they tell us about the formal system, which is itself taken to be philosophically interesting, or about how various such systems can be related; or perhaps the result is itself intrinsically interesting, or part of a family of results which collectively are taken to be interesting. The point is that Within projects gain whatever philosophical interest they have from the relationship between a result the author has proven and something which is taken to be of philosophical interest.

Examples of work of this sort: Ruth C. Barcan's The Identity of Individuals in a Strict Functional Calculus of Second Order, Christian List and Philip Pettit's Aggregating Sets of Judgments: An Impossibility Result, Robert Stalnaker's On Logics of Knowledge and Belief, Catrin Campbell-Moore's How To Express Self-Referential Probability, Cailin O'Connor's The Evolution of Guilt, Bertrand Russell's eponymous Paradox, the Marquis de Condorcet's Jury Theorem (and related results), Amartya Sen's The Impossibility of a Paretian Liberal, David Lewis' Probabilities of Conditionals and Conditional Probabilities, I. J. Good's On The Principle of Total Evidence, Harsanyi's Utilitarian Theorem (and related results).

Without -- the author invents or constructs a novel formal system that allows us to generate results or extract information about a new area of discourse not previously amenable to formal analysis, or for which any previous formal theory took a markedly different form.

Examples of work of this sort: Aristotle's syllogistic, Russell and Whitehead's Principia Mathematica, Turing and Post on computability, Ruth Barcan's A Functional Calculus of First Order Based on Strict Implication, Kripke's A Completeness Theorem in Modal Logic, Carlos E. Alchourrón, Peter Gärdenfors and David Makinson's On The Logic of Theory Change, David Lewis' Convention, Frank P. Ramsey's Truth and Probability, and Peter Spirtes, Clark Glymour, and Richard Scheines' Causation, Prediction, and Search.

Some comments on this, starting with remarks about the examples.

1) While I didn't put too much thought into constructing the example lists (which probably resulted in a demographic skew, alas, in what I highlighted -- on this see point (7) below), I did want to highlight a couple of points. Formal philosophy as a whole interacts with very diverse areas of philosophy and very diverse sets of formal tools. As has recently been discussed, logic gets the bulk of pedagogical attention in philosophy graduate programmes. But at a glance the above list contains work in ethics, social and political philosophy, metaphysics, philosophy of language, philosophy of the physical, social, and biological sciences and of mathematics, and epistemology. (I don't know of any formal aesthetics, but I would have liked to be able to include that.) And the formal theories touched upon or deployed do indeed include logic, but also probability theory, game and decision theory, statistical reasoning, calculus, social choice theory, and geometry.

2) I also wanted to use the examples to highlight that each of the sections contains work that would presumably be thought of as classical or canonical, as well as recent work by younger scholars. The latter was a bit harder for the third section, since (for, I guess, reasons discussed in (4) below) one hears about such work less. I decided, though, that in the grand scheme of intellectual history the late 20th century is extremely recent philosophy, so it suffices to make my point. Which is that each of these modes of formal philosophy has shown itself capable both of making classic contributions and of generating novel work. This is not a hierarchy of value, and none of these streams are yet dry. (When I reflect on my own work, I think I have some papers in the About category, and some papers in the Within category.) On the flip side, each of these is part of the grand tradition of formal philosophy, and there's no reason to think that some of it is more properly formal philosophy than the rest.

3) These are of course fuzzy categories. Gödel's incompleteness theorems are to some extent Within, but (if I understand the history correctly) the technique of Gödel-numbering was developed in the course of those proofs, and that was probably a novel enough contribution to count as its own Without work. For more recent work, I wanted to include more ethics, but couldn't decide whether this was About or Within. Is this Within or Without? Nothing, I think, really turns on the subtleties here; I just wanted to acknowledge that there are plenty of edge cases. However, just to give people something to disagree with me about: I hereby claim that while this is fuzzy in the sense that some work can plausibly be in multiple categories, anything that could be called Formal Philosophy will recognisably fit into at least one of the categories.

4) While I don't think this is a hierarchy of value, my sense is that in terms of credit or repute the Without category is the high-risk high-reward category. It's the kind of work that is most likely to fail, but most likely to secure one's lasting glory if one can pull it off.

5) Work in the About category is probably the easiest sell to philosophers who don't work in formal philosophy. When formal philosophers are designing introductory lectures, outreach-y summer programmes, presentations for conferences in which there will be mixed company, or just in general interacting with a field that can be territorial and sceptical about things which fall outside the recognised boundaries, I think there is some reason to be cognisant of the distinction between About and Within work, and opt for About work. Nobody is going to argue that Plato's Meno isn't real philosophy.

6) I am less confident here, but I know there are metaphilosophical debates about what counts as experimental philosophy. I feel like a similar taxonomy would work there, with experimental philosophy being work that either is centrally based upon reflections on empirical work, is founded upon novel discoveries made by the authors, or comes up with a new way of testing things or generating results.

7) I didn't max out on demographic diversity in constructing the example lists, since it wasn't really germane to my point here. But I did find when making the lists that white blokes came to mind much more easily in all of the categories, and (an issue especially dear to my heart given previous work) I could scarcely think of any black folk! On reflection I can think of more I didn't include -- for instance Kwasi Wiredu's Logic and Ontology (I cannae find a link!) could have gone in the About section, and I just met Lisa Cassell at the Formal Epistemology Workshop that sparked this very post. Still, even when I try hard, not many brothers and sisters come to mind. This does not tell one much about the actual demographics of the field -- maybe I am just bad at remembering people, and I am myself very much trained in a certain tradition. But it is what it is.