Science for Subjectivists

I read this fascinating paper by Greg Gandenberger. It's an argument to the effect that one can give a good Bayesian rationale for, in some circumstances, paying attention to stopping rules. These are rules for data collection, in particular rules which tell you when you have gathered enough data and should stop gathering more. One might compare (and Greg does compare) rules which tell you to stop after some fixed point - say, after one has performed the test on 300 subjects - with rules which say something like 'Keep going until you either run out of money or your data supports your favourite hypothesis to some given degree.' Classical statisticians typically say that for epistemic reasons one needs to pay attention to these rules, whereas Bayesian statisticians argue that one very often need not pay any attention to them (in these cases they are "noninformative"). GG is here to argue that there is a particular class of cases where Bayesians will indeed say you need to pay attention to noninformative stopping rules, that this is just the class of cases where one really ought to, and so that the Bayesian has successfully rationalised the classical practice.

What sparks this blog post is that GG's point seemed almost to rely on the decision maker having a brute preference for data gathered one kinda way. It's following through the ramifications of this that gives you the central qualitative result. Here's the idea, illustrated through one way (I think!) the situation Greg's results describe could come about. Suppose I am a decision maker, and I would prefer to do a if H and b if ¬H. If I control funding or publication, or there is otherwise social interest in the decision I make regarding a or b, I know that people will conform their behaviour to policies I announce. I might reason thus to myself: somebody who wants a, and thus resolves to stop gathering data the moment the likelihood of H is sufficient, may well "stop on a random high". If I am very concerned about the possibility of doing a only because of a false positive, this possibility of noise in the data being exploited may be very troubling to me indeed. Whereas if I announce a preference for some fixed-n halting rule, this source of false positives at least cannot come about. It may hence be in my interest to announce a preference for particular stopping rules over others before the experiments are carried out. This is true even though I am a good Bayesian: after the fact, had two people done the same experiment and presented me with the same data but under the different stopping rules, I would have thought both equally informative as to whether or not H, and thus whether or not I should do a.
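To make the "random high" worry concrete, here is a minimal simulation sketch. It is my own toy illustration, not anything from Greg's paper, and all the specifics (coin biases of 0.6 under H and 0.4 under ¬H, a likelihood-ratio threshold of 10, a cap of 300 observations) are assumptions chosen for vividness. It compares the fixed-n rule with a rule that halts the moment the data favour H strongly enough, in a world where ¬H is in fact true.

```python
import random

P_H, P_NOT_H = 0.6, 0.4   # coin bias under H and under ~H (illustrative assumptions)
THRESHOLD = 10.0          # likelihood ratio at which the optional stopper declares for H
MAX_N = 300               # the fixed-n sample size, and a cap for the optional stopper

def run_experiment(optional_stopping):
    """One experiment in a world where ~H is true; returns True on a false positive."""
    lr = 1.0  # running likelihood ratio of H over ~H
    for _ in range(MAX_N):
        heads = random.random() < P_NOT_H  # data generated under ~H
        lr *= (P_H / P_NOT_H) if heads else ((1 - P_H) / (1 - P_NOT_H))
        if optional_stopping and lr >= THRESHOLD:
            return True                    # stopped on a random high
    return lr >= THRESHOLD

random.seed(0)
runs = 10_000
for optional, label in ((False, "fixed n = 300    "), (True, "optional stopping")):
    rate = sum(run_experiment(optional) for _ in range(runs)) / runs
    print(f"{label}: {rate:.2%} of runs end declaring for H")
```

Run like this, the fixed-n rule should essentially never end up declaring for H, while the optional stopper does so in a few percent of runs - exactly the false positives the decision maker was worried about, even though any particular data set, once gathered, is equally informative under either rule.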

The point is that my prior announcement of a preference affects what experiments people are actually likely to do, and so what data they are actually likely to gather, and hence what sort of errors they might have made. The decision maker thus cares about the stopping rule in order to get the researchers to provide them with certain sorts of data, ones that avoid particular sources of error they are very concerned about, and so implements some policy which makes it easier for people using the fixed-n stopping rule to publish. The experimenters then care about the differences among noninformative stopping rules in order to get whatever it is they want from the decision maker. Hence a community of Bayesians may indeed come to care which stopping rule one uses even though the differences among the options are noninformative.

I hope that illustrates the sort of thing GG had in mind. He summarises the general moral as follows:
[T]he stopping rule affects the expected-utility calculations through the regulator’s utilities rather than his or her probabilities, reflecting the fact that he or she has preferences about what experiments scientists will perform in the future. There are many kinds of decision-makers who engage in repeated interactions with scientists whose interests may not align with their own, including not only government regulators, but also other scientists, journal editors, science journalists, evidence-based practitioners, and even the general public. Any of these agents may need to attend to differences among noninformative stopping rules in their decisions in order to avoid incentivizing the use of stopping rules that they regard as undesirable. Thus, the fact that Bayesians sometimes need to attend to differences among noninformative stopping rules in making decisions is not an idle curiosity, but a key to understanding how Bayesian methods should be used in science.
This is a good paper and a clever idea, so I want to point out some of its troubling consequences.

Consider the following argument that a Bayesian can explain why a rational community of scientists could indeed come to learn - as in, exhibit consensus on the purported fact - that the future will resemble the past, and thus solve the problem of induction. Suppose the community were set up such that I am editor of the only journal, and I resolve to publish things only if the author reports priors and an update rule that model the future as resembling the past in appropriate ways. Gradually the community comes to accommodate itself to me; say, because only people with credences I approve of can get publications, and only published people can get jobs and teach the next generation of students. Before long scientists have come to a consensus that the problem of induction is solved!

I take it this doesn't count as a real solution to the problem of induction in any interesting sense. I've just brute-forced a particular epistemic conclusion through my gatekeeper position and an arbitrary preference for one kinda epistemic outcome. The difference between this scenario and GG's is, I think, meant to be that the utilities of the decision maker in GG's scenario are much more reasonable. We can think of good moral or social reasons why a decision maker might be very concerned about acting on the basis of certain errors. This is, basically, Rudner's problem of inductive risk applied to the social decision maker. But note that this is a fortunate contingency - had the gatekeeper more idiosyncratic tastes, GG's argument could just as easily justify a community coming to prefer halting rules of just the sort classical statisticians say make inquiry less reliable.

Compare and contrast with the simulation study found here. The nub of the results therein (stated very, very, very loosely - read the actual paper!) is that if scientists reason on the basis of "priors" that don't reflect their actual beliefs about the underlying phenomena, and have halting rules based upon these pseudo-priors, their research can end up being much less reliable than it might otherwise have been. To establish this the authors run simulations of various kinds of situations of inquiry. To describe some of the simpler scenarios loosely: suppose some scientists decide they will halt inquiry when their posterior odds in favour of one of two competing hypotheses reach a certain ratio, and as they test may peek at the data to decide whether they've reached that point yet. These scientists' posterior odds are formed on the basis of a prior that does not encode what they actually know about what is occurring. In this situation they can end up being less reliable than if they had bound themselves not to halt until they had done a certain fixed amount of inquiry. (Seriously, read the actual paper; a lot more is going on, and the sense of 'reliability' used is worth taking note of.) As the authors note, this is basically to be expected - people who throw away information can end up doing worse science, of course. Nor is it as if nobody has pointed out before the problems Bayesians face in adopting priors that don't really reflect their beliefs; the authors cite this paper by CMU philosopher Teddy Seidenfeld, for instance. But they also note that in fact scientists, even the self-professed Bayesians of their acquaintance (and even, they note, some of the founders of subjective Bayesianism), rarely use their real priors and in fact do something like this problematic process. So scientists, even Bayesian ones, ought to be worrying about the optional stopping rules used, given our actual beliefs and preferences.
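For a flavour of that mechanism - and only a flavour; this is my own toy construction, not a reproduction of the paper's simulations - here is a sketch in which a scientist knows that effects of the H1 sort are rare (a base rate of 0.1) but reasons from a 50/50 pseudo-prior anyway, peeking after every observation and halting the moment the nominal posterior odds reach 9:1 either way. All the particular numbers are illustrative assumptions.

```python
import random

# Toy illustration (my own, not from the paper). H1: coin bias 0.6; H0: bias 0.4.
# H1 is in fact rare (base rate 0.1), but the scientist reasons from a 50/50
# pseudo-prior, so their nominal posterior odds just equal the likelihood ratio.

P1, P0 = 0.6, 0.4        # coin bias under H1 and under H0 (illustrative)
BASE_RATE = 0.1          # how often H1 is actually true - what the scientist really knows
ODDS_STOP = 9.0          # nominal posterior odds at which the peeker halts
MAX_N = 100              # cap for the peeker, and the fixed-n sample size

def one_study(peek):
    """Run one study; return (whether H1 was true, whether H1 was reported)."""
    h1_true = random.random() < BASE_RATE
    bias = P1 if h1_true else P0
    odds = 1.0                                 # nominal posterior odds for H1 over H0
    for _ in range(MAX_N):
        heads = random.random() < bias
        odds *= (P1 / P0) if heads else ((1 - P1) / (1 - P0))
        if peek and (odds >= ODDS_STOP or odds <= 1 / ODDS_STOP):
            break                              # halt as soon as the odds look decisive
    return h1_true, odds > 1.0                 # report whichever hypothesis is favoured

random.seed(0)
for peek, label in ((True, "peek at odds 9:1"), (False, "fixed n = 100   ")):
    results = [one_study(peek) for _ in range(20_000)]
    accuracy = sum(true == reported for true, reported in results) / len(results)
    h1_reports = [true for true, reported in results if reported]
    print(f"{label}: overall accuracy {accuracy:.1%}, "
          f"H1 reports that are true {sum(h1_reports) / len(h1_reports):.1%}")
```

On numbers like these one should find that the peeker's conclusions in favour of H1, announced at nominal 9:1 odds, are correct only a little over half the time, and that overall accuracy falls below what the fixed-n rule achieves on the same maximum budget - roughly the "less reliable by one's own lights" phenomenon the paper dissects far more carefully.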

I am reminded of the Tractatus (propositions 6.371–6.372), wherein Wittgenstein says:
At the basis of the whole modern view of the world lies the illusion that the so-called laws of nature are the explanations of natural phenomena.... So people stop short at natural laws as at something unassailable, as did the ancients at God and Fate. And they both are right and wrong. But the ancients were clearer, in so far as they recognized one clear conclusion, whereas in the modern system it should appear as though everything were explained.
The rationalisation GG offers of epistemic concern for halting rules does indeed rest on something, but it is in the end just sheer preference. If this is the sort of rationalisation of scientific practices one is inclined to prefer, I think one shall find that this goes all the way down. There is something groundless about the way we carry on: we just happen to prefer things one way rather than another. Our formal machinery to regiment and rationalise this is necessary, but in the end only because (as the simulation work shows) it's just so easy to get in a muddle by our own lights unless we are careful to keep track of precisely what we're up to. That guards us against some error, but it does not seem to me to justify our practice in any properly epistemic sense. Rather, if anything it's the justifiability of our preferences that decides whether or not we are doing the right thing, and I suppose that is a matter for ethics and theology more than epistemology and philosophy of science. For a certain kind of subjectivist, I thus think part of what one learns by investigating the foundations of science is that it rests on caprice, and the conventions that codify that caprice.

The choice of how to inquire is ours. Of course it should be a reasoned choice that we make after due consideration, in concert with others. But there's a sense in which we must each alone make our choice.

Automat - Edward Hopper
This painting has always for me evoked the feeling of someone contemplating an extraordinarily weighty matter, at least judged by its personal significance, but having to do so utterly alone and in the most mundane of circumstances.
