Method in the Humanities

Back when I was in grad school I kept a previous blog, which I ended up replacing with this one. The other day I wanted to share Reichenbach's soliloquy that I had previously posted there, so I searched through my old archives to repost it. Along the way I found this old piece, which, considering that I was a first- or second-year grad student when I posted it, rather shocked me with its effrontery. I reproduce it below, then follow up with some reflections on what I now think of young me's presuming to give methodological advice to a field he'd made precisely zero contributions to.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The humanities are often attacked for fairly silly reasons. People don't really understand what goes on therein, and they say ignorant things about the scholarship that workers in the humanities produce. What is more, the "How will you get a job with that?" line is popular. This despite the fact that, while a degree in the humanities isn't the profit-maximising degree, it still makes you (significantly) more likely than others in your age group to be employed, and raises average lifetime earnings. But there is, I feel, a grain of truth in some of the critiques.

A Mug - "If I could speak, I would assure you that Liam really does heart the humanities. But I can't. Because I'm a mug"

What is the grain of truth? When it comes to scholarship in the humanities, it is often unnecessarily difficult to tell what counts as success or failure. You read an essay or a book, and it's not clear whether this is good scholarship or bad scholarship, because it's not actually clear what the essay or book was trying to achieve at all. Without knowing what the aim of the work was, you can't know whether it has achieved its aim, or even whether you actually share that aim. This is, I think, easily fixable - scholars should get in the habit of explicitly stating at some point in their work "This work will count as a success if..." and "This work will count as a failure if..." - perhaps with some words of explanation as to why those success and failure criteria have been chosen. Further, the success and failure criteria should be such that we can feasibly check whether or not those criteria have been met.

This wouldn't be an issue if scholars in the humanities didn't debate with each other. If ours were just a game wherein everyone states whatever is on their mind and goes on their way then who'd care what counts as the success or failure of a hypothesis, or how to check whether a thesis is on the mark or not? But it is precisely because we disagree with each other without having any generally acceptable criteria of success in the humanities that being explicit in this way is necessary. For these debates won't be productive, and will be highly likely to involve us just talking past each other, if we don't make it clear what we hope to achieve.

Note that I am not requiring scholars to come up with a general standard of adequacy for the entire humanities (or their little subfield). That is far too grand a task. I'm just asking people to be clear and explicit about what they intend to achieve, in such a way that we can check whether or not they have achieved it. This could feasibly differ with each new piece, to say nothing of differing across workers and fields in the humanities. Further, this means that I am not suggesting that you attempt to issue diktats to the rest of your field about what they should count as good research. Indeed, an important part of why I think this is a good idea is that it allows other people to see whether or not they actually think what you are trying to do is worth doing. Finally, I am not suggesting that the reader has to agree with what the author takes their aim to be (death of the author and all that) - again, one advantage of this is that it will let us pinpoint disagreements with the author with greater ease.

And we have seen this explicitness produce progressive debate, even on very controversial topics, in other fields. In mathematical logic, for instance, there are widely agreed upon criteria about what counts as a proof and what as a refutation, so authors rarely need to be fully explicit. But sometimes logicians try to analyse more controversial concepts where there are no such agreed conventions. In this scenario logicians do just what I say here: they announce "material adequacy" conditions for their work, which make it clear what they hope to achieve in their analysis and allow us to judge whether the work has succeeded, failed, or was aiming at the wrong target in the first place. Mathematical logic seems to be getting things done in a productive fashion, so I say we borrow this trick in the rest of the humanities.

I find that humanities scholars tend to get antsy about comparisons to the sciences and suggestions that we borrow methodological tricks from there, so perhaps that little point about logic was rhetorically unwise. And I can't deny it: some points from the philosophy of science are in the back of my mind as I talk about explicit success and failure criteria, where those criteria can be feasibly checked up on. So, two points on this. Firstly: sure, the sciences really are pretty damned successful when it comes to getting things done, and I don't mind saying that I think we should try and learn from that. People in the humanities (and social sciences) sneer about "physics envy" - but, well, physics really is pretty cool, and I really do wish my field were as good at progressively generating knowledge as that. I don't see any reason to be ashamed of this. Secondly, I am not saying here that people in the humanities must make explicit empirical predictions and declare the level of statistical significance they will require in order to reject the null hypothesis.

(Although I don't share the attitude that if somebody does these things they should therefore be discounted from the humanities. The snobbish disdain for rolling up one's sleeves and getting down to mathematical or empirical work that many in the humanities have seems to me exactly parallel to the snobbish disdain the British aristocracy have for trade.)

I am just saying: whatever it is you think your work is good for, you should share that with us, and let us know how we can see whether it is getting that good thing done. Assuming you really do believe your work will achieve its stated purpose (and that the purpose is worth achieving), I don't see what in-principle objection there could be to this. It seems almost to be a requirement of sincerity. Further, this won't just be good for readers; it will be good for authors too. Having to think hard about what would prove you wrong is often an extremely useful didactic process, and helps you realise where you need to target your arguments. Finally, I suspect it would be good for university administrators too. Based on discussion with many people, I think that a lot of humanities scholars would say their work is aimed at increasing cultural appreciation for certain art forms. Once this was made explicit, I hope that it would lead university administrators (and tenure committees!) to give a lot more support (and respect) to teaching, and to work aimed at reaching a popular audience.
Rudolf Carnap - "Apparently I was one of those people who looked progressively more badass as I grew older"

In my own thought, I adhere to a generalised version of the principle of tolerance. Formulated by Rudolf Carnap, it initially only applied to logic. To quote him:
In logic there are no morals. Everyone is at liberty to build up their own logic, i.e. their own form of language, as they wish. All that is required of them is that, if they wish to discuss it, they must state their methods clearly.
Later on he became even more liberal, giving a more generalised version which doesn't apply only to logic. As he put it:
Let us learn from the lessons of history. Let us grant to those who work in any special field of investigation the freedom to use any form of expression which seems useful to them; the work in the field will sooner or later lead to the elimination of those forms which have no useful function. Let us be cautious in making assertions and critical in examining them, but tolerant in permitting linguistic forms.
There is much wisdom in this attitude, and I hope that I evince it in my own scholarly interactions. But while I think Carnap was right to generalise the principle of tolerance beyond logic, I think he was wrong to drop the requirement that people "state their methods clearly". For what I worry occurs in the humanities is precisely that, because people are not doing this, it is unnecessarily difficult for work in the field to eliminate styles and theories which are not serving a useful function.

I strongly suspect that many of those who write attack pieces on the humanities do not share this tolerant attitude - they have rigid ideals of what Serious Scholarship should look like, and will accept nothing else. Many in the humanities, though, do have something like the tolerant attitude. So when they perceive that their attackers don't share this tolerance, they come to believe they can safely dismiss their critiques as those of a needlessly dogmatic thinker. But for the reasons stated above, I think tolerance does require this level of clarity and explicitness from us. I hope scholars in the humanities come to adopt this practice, and thus let us better reap the rewards of our tolerance.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

1) Ok, credit to young me where it's due: while the fake-quote gags are cheesy, I do actually miss them, and I might bring that bit back. I feel that they embody being at ease with being an unfunny nerd, and maybe that's a good thing for me to model.

2) The natural thing to wonder is whether I, in the decade or so since this was posted, actually did the thing I said we should all be doing. As far as I can tell the answer is: sometimes but not never. Of my published papers, I think the following do:

Causally Interpreting Intersectionality Theory - if I had to guess (given that this blog post was from 2015 and CIIT came out in January 2016), I was probably writing this paper at the same time as making this blog post. For section 2 is a very explicit "let me lay out the criteria for a good explication, starting with clarifying the explicanda" in good Carnapian fashion - and that was one of the sections of this paper I had the biggest hand in. So to some extent this blog post was just me trying to render as a universal norm what I happened to be doing in my first original research paper.

Is Peer Review A Good Idea? - the introductory section is basically just announcing in advance how we propose to resolve our question, and what scope of conclusion could be drawn from our method if successful. We return to and reiterate this point at the end.

Why Do Scientists Lie? - is an instance of the "intended to be public edification" genre favourably mentioned in the blog post and I think makes that fairly clear throughout, albeit probably not as explicitly as a strict reading of my own blog post would have required of me!

Publication Bias is Bad for Scientists If Not Necessarily Science - similar to Is Peer Review a Good Idea?, the introduction is very largely just trying to be clear about what the argument will achieve if it succeeds. And even this, as Remco reminded me after the blog post went up, only got anything like its current level of detail and length because reviewer 2 insisted upon it - oh, the irony! It is maybe notable that this is a paper whose success condition is basically undermining an argument we have seen proffered; it is meant to be a defeater, or a counter-model, to block an inference we believe fallacious. I have another paper which does this, and I ended up writing a blog post here after it was published precisely to clarify that fact, because I realised I had not actually made it clear in the paper! Maybe only Remco Heesen keeps me honest.

And as far as I can tell that's it. So in a decade I have only really managed four papers that actually do the thing I said everyone should do all the time; five if you allow me the do-over of a subsequent clarificatory blog post.

It is perhaps worth noting that (with the exception of the one only just published) these are among my most popular papers, as judged by citations and/or the vague sense I have from people talking to me. 

3) Do I think I learned something new which convinces me the blog post's argument was false, such that my later behaviour embodies greater wisdom? No. I simply think I am a worse scholar than I set out to be.

I think it often happens that grad students hit a stage wherein they are well trained enough to see flaws in the published literature, but haven't yet had the soul-crushing experience of trying to do better and failing. You eventually realise that you are not gonna fix much at all. You're just as bad as, if not worse than, the villains of your educational youth. Instead, your efforts to be the change you wanted to see in the world will just be fodder for the next generation of grad students to get unreasonably angry at; because clearly they would never make the foolish errors you are making -- yet here you are with a permanent job while they languish in obscurity! Those that end up published authors themselves will inevitably find their own work enters the churn, and the great cycle perpetuates itself. This blog post was, I think, a reflection of me being at the early stage of this dialectic.

For what it's worth, I do think this is a progressive cycle, more akin to Condorcet's sketch than the great wheel of saṃsāra. The next generation are never as good as they think they are going to be; they are always a disappointment to themselves. But part of aging with grace in academia is, I think, realising that they are also usually correct in their critiques of you, and that by striving to show you up they will at least do a bit better than you did in turn. And, crucially, there's some institutional memory here - some of the corrections get "locked in" to the field, as the youth at least get some papers/books/ideas out there embodying the corrections they wished to make, and these become a resource for others to learn from.

So while it is not nice to find yourself the object of the Young Turks' corrective ambitions, you can at least be reconciled to it. Just remember that by crushing you and showing you up, they incrementally advance a project you were once passionate enough about to want to do the same to your elders in their turn.
