This chapter is a detailed case study of epistemology articles in the 12 journals. I’m including this partially for self-interested reasons: I work in epistemology and I wanted to see what the field looked like. But I’m also including it because the data here makes a striking point.
- Main Thesis
- The Gettier Problem literature is not that big.
Now I don’t mean to say it’s small. There is a plausible case that it is (or at least was circa 2013) the largest sub-literature within epistemology. And for a while it was a huge proportion of what went on in epistemology. But that time has passed, and I suspect a lot of people haven’t updated their view of the field.
Within these 12 journals, the literature on the Gettier Problem is round about 100 articles. That’s a third of a percent of all the articles in those journals. And given the importance of the question it addresses, What is Knowledge?, a third of a percent seems fine to me. I think it’s a widespread view in philosophy that the Gettier Problem literature was much bigger than it should have been. And I think that’s false; a third of a percent is a perfectly reasonable proportion of the available journal space.
To analyse the epistemology literature, it would help to know which articles are the epistemology articles. Since I’ve already said that I’m treating 55.1 - Arguments, 74 - Knowledge, 76 - Justification, and 84 - Formal Epistemology as the epistemology topics, you might think the thing to do would be to just take the articles from those topics. But this doesn’t work, for a couple of reasons. In theory, which single topic gets the highest probability isn’t that significant; whether an article’s highest probability is in one of the epistemology topics depends on just how the model carves up the space of non-epistemology topics, and that seems like the wrong thing for the classification to turn on. In practice, this method declares that Is Knowledge Justified True Belief? (Gettier 1963) is not an epistemology article, and that isn’t something we can live with.
A better approach is to sum the probabilities that the model gives to an article being in one of the four epistemology topics, call that its epistemology probability, and then say that an article is an epistemology article if its epistemology probability is above a threshold. But where should the threshold go? Since I want to convince you that the Gettier Problem literature isn’t that big, I want to have a very inclusive definition of epistemology, so I capture all the articles in that literature. So that militates in favour of a low threshold.
There is also the fact that epistemology articles tend to naturally slide into a lot of different topics. If they discuss scepticism at all, the model thinks they might be talking about Hume. If there is any probability talk, the model thinks they might be doing theory of confirmation or theory of chance. If they talk about which propositions a subject does or doesn’t know, the model thinks they might be talking about propositions. If they talk about values, or norms, or obligations, or permissions, the model thinks they might be doing ethics. And the house style of Anglophone epistemology is close enough to the style of the ordinary language philosophers that the model constantly thinks they might just be ordinary language philosophy.
Which is all to say that the cut-off ended up being much lower than I expected. I ended up setting it at just 0.2. This seems absurdly low, but rather than arguing about it in the abstract, let’s look at how it plays out in practice. Here are the last eight articles that are classified as epistemology under this measure, i.e., the eight articles with an epistemology probability just above 0.2.
- Kirk Ludwig (1992) “Skepticism And Interpretation” Philosophy and Phenomenological Research 52:317-339.
- Cindy D. Stern (1990) “On Justification Conditional Models Of Linguistic Competence” Mind 99:441-445.
- Catherine J. L. Talmage and Mark Mercer (1991) “Meaning Holism And Interpretability” The Philosophical Quarterly 41:301-315.
- David E. Nelson (1996) “Confirmation, Explanation, And Logical Strength” British Journal for the Philosophy of Science 47:399-413.
- Richard Schantz (2001) “The Given Regained. Reflections On The Sensuous Content Of Experience” Philosophy and Phenomenological Research 62:167-180.
- N. M. L. Nathan (2004) “Stoics And Sceptics: A Reply To Brueckner” Analysis 64:264-268.
- Hastings Berkeley (1912) “The Kernel Of Pragmatism” Mind 21:84-88.
- Michael Martin (1973) “The Objectivity Of A Methodology” Philosophy of Science 40:447-450.
Those aren’t all epistemology articles, but some of them are. The Schantz clearly is, and the Nathan, and plausibly several others. What about the eight that just missed the cut, i.e., the eight articles with an epistemology probability just below 0.2?
- Irving Thalberg (1974) “Evidence And Causes Of Emotion” Mind 83:108-110.
- Sanford C. Goldberg (2000) “Word-Ambiguity, World-Switching, And Semantic Intentions” Analysis 60:260-264.
- James Van Cleve (1992) “Semantic Supervenience And Referential Indeterminacy” Journal of Philosophy 89:344-361.
- Alvin Plantinga (1998) “Degenerate Evidence And Rowe’s New Evidential Argument From Evil” Noûs 32:531-544.
- Curtis Brown (1992) “Direct And Indirect Belief” Philosophy and Phenomenological Research 52:289-316.
- Elazar Weinryb (1978) “Construction Vs. Discovery In History” Philosophy and Phenomenological Research 39:227-239.
- Robert K. Shope (1973) “Remembering, Knowledge, And Memory Traces” Philosophy and Phenomenological Research 33:303-322.
- David E. Cooper (1977) “Lewis On Our Knowledge Of Conventions” Mind 86:256-261.
That’s pretty good - we don’t seem to be excluding any articles that should be included. So the threshold is at 0.2.
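The selection procedure just described can be sketched in code. This is a minimal sketch with invented data; the document-topic matrix and the column indices for the four epistemology topics are placeholders, not the real model output.

```python
import numpy as np

# Placeholder document-topic matrix: one row per article, one column per
# topic, each row summing to 1 (invented data, not the real model output).
rng = np.random.default_rng(0)
doc_topic = rng.dirichlet(np.ones(90), size=1000)

# Hypothetical column indices for the four epistemology topics.
EPISTEMOLOGY_TOPICS = [3, 17, 42, 61]
THRESHOLD = 0.2

# An article's epistemology probability is the total probability mass the
# model assigns to the four epistemology topics.
epist_prob = doc_topic[:, EPISTEMOLOGY_TOPICS].sum(axis=1)

# The article counts as epistemology if that sum clears the threshold.
is_epistemology = epist_prob > THRESHOLD

# To audit the threshold, inspect the articles closest to the cut-off,
# as in the two eight-article lists above.
borderline = np.argsort(np.abs(epist_prob - THRESHOLD))[:8]
```

The audit step is the important design choice: rather than defending 0.2 in the abstract, you look at what lands just above and just below it.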
The result of all this is that we get 1842 articles to work with. They are not distributed evenly across the years, to put it mildly. Here is how many articles there are in each year.
The gaps are for the cases where there are 0 papers. Through 1945, there are only 24 papers. So from now on I’m going to start graphs after WWII. And I’m not going to divide things up into journals; you don’t need a graph to know that Philosophy and Public Affairs doesn’t publish much epistemology. But those 24 papers will stay in the analysis; I just won’t present them on graphs.
So the next step was to take the 1842 articles and, as you might have guessed, build an LDA model for them. After a little bit of trial and error, I decided to set the number of topics to 40. I wanted to get the number to be as small as possible, while still having a topic that in some plausible sense had only Gettier Problem papers.
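For concreteness, a fit along these lines might look as follows. Scikit-learn is my assumption (the chapter doesn’t say what software was used), and the three-document corpus is invented; the real model used the 1842 articles and 40 topics.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented toy corpus standing in for the 1842 epistemology articles.
docs = [
    "knowledge justified true belief gettier case luck",
    "perception experience sense data given content",
    "belief justification evidence reasons knowledge",
]

# Bag-of-words document-term matrix, then an LDA fit with a chosen
# number of topics (40 in the chapter; 2 here for the toy corpus).
dtm = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)

# Each row of doc_topic is one article's distribution over topics.
doc_topic = lda.fit_transform(dtm)
```

The number of topics is a free parameter; the trial and error described above is the usual way of settling it when there is a concrete target (here, isolating a Gettier-only topic).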
It divided the 1842 papers up into the following 40 subjects. The subject column is my subjective description of what area the papers there seem to be about. The ‘count’ column is how many articles are in that subject; i.e., the model gives more probability to them being in that subject than it gives to any other subject. The ‘weighted count’ is the expected number of articles in that subject; i.e., the sum across all articles of the probability of the article being in that subject. And the ‘year’ column is the average publication date of the papers that are ‘in’ the subject. As you can see, I’ve renumbered the topics so they are arranged by year.
| Topic | Subject | Count | Weighted count | Year |
|-------|---------|-------|----------------|------|
| 1 | Knowledge of Mind | 82 | 77.53 | 1968.0 |
| 12 | Degree of Belief | 29 | 32.54 | 1989.3 |
| 13 | Logic and Paradoxes | 49 | 50.60 | 1990.4 |
| 17 | Infinity and Regresses | 30 | 32.63 | 1992.3 |
| 22 | Ethics of Belief | 52 | 50.77 | 1994.4 |
| 29 | Aim of Belief | 43 | 40.57 | 1997.2 |
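All three statistics fall straight out of the document-topic matrix. Here is a minimal sketch with invented numbers; the topic names and probabilities are placeholders, not values from the model.

```python
import pandas as pd

# Toy stand-in for the model output: each article's topic probabilities
# and its publication year (invented data).
doc_topic = pd.DataFrame(
    {"Gettier": [0.7, 0.1, 0.5], "Perception": [0.3, 0.9, 0.5]},
    index=["a1", "a2", "a3"],
)
years = pd.Series([1965, 1980, 1972], index=doc_topic.index)

# 'count': articles whose single highest probability is in that topic.
modal = doc_topic.idxmax(axis=1)
count = modal.value_counts()

# 'weighted count': the expected number of articles in the topic,
# i.e. the column sums of the document-topic matrix.
weighted = doc_topic.sum()

# 'year': mean publication date of the articles 'in' the topic.
mean_year = years.groupby(modal).mean()
```

Note that the weighted count can differ noticeably from the raw count, as it does in the table above, whenever a topic attracts lots of middling probabilities without being many articles’ modal topic.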
There are several weirdnesses here that are worth listing. (I don’t think this eliminates the usefulness of the model, but we should be up front about the shortcomings.)
- The larger LDA had put epistemic modals in with epistemology (reasonably enough), then followed that up by putting indicative conditionals in as well. Indicative conditionals are a tricky subject to classify, and different models treated them differently. But given their links to work on probability, and to epistemic modals, it isn’t surprising they end up in epistemology. Still, it means topics 7 and 36 are more philosophy of language topics than epistemology topics.
- Similarly you could easily put topic 19 (Perception), and even topic 1 (Knowledge of Minds), in philosophy of mind. As you can see, I’m working in this chapter with a fairly broad conception of epistemology.
- I don’t know why this model split topics 14 and 38, which both look like norms of assertion. I think what’s going on is topic 14 is pre-Williamsonian and topic 38 is Williamsonian. But it does look a bit like a pretty arbitrary split.
- I do know what’s going on with topics 8 and 18, and it’s a little hilarious. The model just does string recognition, so it doesn’t know that ‘sceptic’ and ‘skeptic’ are stylistic variants. But it does know they are super important words. So each of them gets its own topic.
- Splitting off Topic 16 (Experts) from Topic 32 (Testimony) was a bit weird.
- Topic 24 includes both process reliabilism work, and work from recent cognitive science on mental processes. This isn’t terrible, but it’s not how I would have carved things up.
- I’ve called Topic 4 the Surprise Exam, but there is also a lot of work here on the Sly Pete example. I’m not entirely sure what the model saw that put these puzzles together.
In the remaining sections of this chapter I include (automatically generated) statistics, graphs, keywords and key papers from these topics, so you can investigate them more at leisure. For now I want to talk about the broad trends, some highlights, and then especially Topic 6 (Gettier).
First the overall graphs of the raw count and the weighted count. I’ve included trend lines for the raw count because otherwise there are a lot of overlapping dots. And I’ve capped the graph at 6 to make everything clear.
The ‘missing’ data points are the ones above that cap; they are not shown on the graph, but they are still influencing the trend curves.
The graph is a lot of stuff, but the basic picture is fairly straightforward.
The Gettier Problem was a big deal through the late 1970s and early 1980s. It’s perhaps worth noting here that the model treats work on Nozick’s theory of knowledge as part of the Gettier Problem literature, which is fair enough, and explains a bit of the literature’s longevity. Then there is a bunch of work on conditionals. Then a lot of modern topics become significant, and several of them seem to be as significant to the philosophical literature in the 2010s as the Gettier Problem was in the 1970s.
The picture doesn’t change enough if we use weighted counts rather than counts, though this does let us remove the trendlines.
In this case there are just two missing data points.
It’s much easier to see what’s happening here with the subjects separated out. Again, I’ve left off those two data points so they don’t throw off the scale of the whole graph.
The model makes three weird divisions: splitting experts from testimony, having two norms-of-assertion topics, and dividing ‘sceptic’ from ‘skeptic’. Let’s put each of those pairs together, alongside the two big topics from the theory of knowledge: contextualism and Gettier.
I think that gives you a pretty good sense of what the central parts of epistemology have looked like over the last fifty or so years. The Gettier Problem was the central question, for a while by far the central question. (Note that the loess curve here is well under some of the dots, so it understates the trend.) Scepticism keeps being taken more and more seriously, even if still as something haunting the land. And issues about language, and about social epistemology, are now as important as the Gettier Problem ever was.
So why was the Gettier Problem so widely thought to be dominating epistemology? The following four graphs might help explain this perception. I’ll eventually produce these four graphs for each of the 40 topics, though these are the only ones I’ll comment on. First, here’s the graph (with trendline) of the raw number of articles about the Gettier Problem each year.
There were a few articles, especially in the late 1970s and early 1980s, but it doesn’t look so huge. It’s even less dramatic if we use weighted counts.
We can also look at that as a percentage of all philosophy articles published in that year.
At its height, it’s about 1.3% of all the philosophy being done in a year. That’s not a small number, but there are only four years where it is above 1%, and only four more where it is between 0.75% and 1%. So why is it remembered as taking over everything? The next graph, I think, is part of the explanation. It expresses these articles as a percentage of all the epistemology being published.
For several years it was 15-20% of all epistemology that was being published. And remember that we have a very inclusive conception of epistemology, so there’s a decent case that these numbers are on the low side, and it’s really more like 20-25%. And that does seem excessive. So there’s a reasonable case that for a while the Gettier Problem literature was a rather excessive proportion of the epistemology literature. And maybe that’s why it looms so large in a lot of people’s impression of what happens in epistemology.
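The two baselines behind these graphs are just ratios of yearly counts, and the choice of denominator is doing all the work. A sketch with invented yearly numbers (not the book’s data):

```python
import pandas as pd

# Invented yearly counts: Gettier Problem articles, all epistemology
# articles, and all philosophy articles in the 12 journals.
df = pd.DataFrame(
    {"gettier": [4, 6], "epistemology": [25, 30], "philosophy": [400, 450]},
    index=[1975, 1976],
)

# The same raw count reads very differently against the two baselines.
df["pct_of_philosophy"] = 100 * df["gettier"] / df["philosophy"]
df["pct_of_epistemology"] = 100 * df["gettier"] / df["epistemology"]
```

With numbers in this invented range, the literature looks modest as a share of all philosophy (around 1%) and enormous as a share of epistemology (15-20%), which is the shape of the real data described above.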
The rest of this chapter is automatically generated. Each section covers one of these 40 topics, and displays:
- The keywords for each topic.
- The raw and weighted counts of articles in that topic.
- The four graphs I just showed.
- The 10 articles that have the highest probability of being in that category. (These aren’t weighted by length, because so many of the significant articles are short.)
- If any of the 600 highly cited articles are in the topic, those as well.