1.8 The Output
The result of all this is a model with two giant probability functions. In this section I’ll walk through what those functions look like with a worked example, and then present some graphs showing how well the model performs at its intended task.
The worked example involves David Makinson’s article “The Paradox of the Preface” (Makinson 1965). The input to the model looks like this.
That is, the word rational appears fourteen times, beliefs appears eleven times, and so on. This is a list of all the words in the article, excluding the stop words described above and the words that appear only one to three times.
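The book doesn’t reproduce the preprocessing code, but the filtering step described above can be sketched as follows. The stop-word list here is a tiny hypothetical stand-in for the much longer one described earlier, and the minimum count of four is taken from the rule that words appearing one to three times are dropped.

```python
from collections import Counter

# Hypothetical mini stop-word list; the actual list used in the book is much longer.
STOP_WORDS = {"the", "a", "of", "is", "that", "it", "and"}

def filtered_counts(tokens, min_count=4):
    """Count word tokens, dropping stop words and any word that
    appears fewer than min_count times (i.e., one to three times)."""
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return {w: c for w, c in counts.items() if c >= min_count}

# Toy input: 'rational' (4 occurrences) survives; 'belief' (3) and
# the stop word 'the' are filtered out.
toy_tokens = ["rational"] * 4 + ["belief"] * 3 + ["the"] * 10
print(filtered_counts(toy_tokens))  # {'rational': 4}
```

The output of this step, a dictionary of surviving words and their counts, is the bag-of-words input the model actually sees.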
The model gives a probability to the article being in each of ninety topics. For this article, as for most articles, it just gives a residual probability to the vast majority of topics. For eighty-three topics, the probability it gives to the article being in that topic is about 0.0003. The seven topics it gives a serious probability to are:
I’m going to spend a lot of time in the next chapter on what these topics are. For now, I’ll just refer to them by number.
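The shape of that topic distribution, mostly residual probabilities with a few serious ones, can be illustrated with a short sketch. The seven topic numbers are the ones from the text; the weights assigned to them are made up for illustration, since the real values come from the fitted model.

```python
# Hypothetical 90-topic distribution: 83 topics get only a residual
# probability (about 0.0003), and seven get serious weight.
RESIDUAL = 0.0003
topic_probs = [RESIDUAL] * 90

# Topic numbers from the text; the weights themselves are invented.
serious_weights = {4: 0.05, 15: 0.08, 37: 0.30, 39: 0.12,
                   59: 0.20, 76: 0.10, 81: 0.12}
for topic, prob in serious_weights.items():
    topic_probs[topic] = prob

# Recover the topics with more than a residual probability.
serious_topics = [t for t, p in enumerate(topic_probs) if p > 10 * RESIDUAL]
print(serious_topics)  # [4, 15, 37, 39, 59, 76, 81]
```

Thresholding like this is how one would pull the handful of serious topics out of the model’s ninety-entry distribution for an article.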
The model also gives a probability to each word turning up in a paradigm article for each of the topics. For those nineteen words that the model saw as input, we can look at how frequently the model thinks a word should turn up in each of these seven topics.
| Word | Topic 4 | Topic 15 | Topic 37 | Topic 39 | Topic 59 | Topic 76 | Topic 81 |
|---|---|---|---|---|---|---|---|
But the model doesn’t think that “The Paradox of the Preface” is a paradigm case of any one of these topics; it thinks it is a mix of all seven. So what it thinks the word frequencies in that article should be can be worked out by taking weighted means of these columns, with the weights given by the topic probabilities. That gives the following results:
| Word | Wordcount | Measured Frequency | Modeled Frequency |
|---|---|---|---|
The modeled frequency of rational is given by multiplying, for each of the seven topics, the probability of the article being in that topic by the expected frequency of the word given that it is in that topic, and then summing the results. The same goes for the other words. What I’m giving here as the measured frequency of a word is not its frequency in the original article; it is its frequency among the words that survive the various filters I described above. In general that will be two to three times as large as its original frequency.
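That weighted-sum calculation is simple enough to show directly. All of the probabilities below are made up for illustration; in the real model both the topic weights and the per-topic word frequencies come from the fitted topic model.

```python
# P(article is in topic t), for the seven serious topics (invented weights).
topic_weights = {4: 0.05, 15: 0.08, 37: 0.30, 39: 0.12,
                 59: 0.20, 76: 0.10, 81: 0.15}

# P(word | topic t) for one word, e.g. 'rational' (invented frequencies).
word_freq_in_topic = {4: 0.001, 15: 0.004, 37: 0.020, 39: 0.002,
                      59: 0.010, 76: 0.001, 81: 0.003}

# Modeled frequency = sum over topics of P(topic) * P(word | topic).
modeled = sum(topic_weights[t] * word_freq_in_topic[t] for t in topic_weights)
print(round(modeled, 5))  # 0.00916
```

Repeating this for every surviving word type produces the “Modeled Frequency” column of the table above.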
The aim is for the two columns here to line up. And, of course, they don’t. In fact, the model doesn’t end up doing very well with this article; it is still a long way from equilibrium.
On that graph, every dot is a word type. The x axis represents the frequency of that word type in the article (after excluding the stop words and so on), and the y axis represents how frequently the model thinks the word ‘should’ appear, given its classification of the article into ninety topics, and the frequency of words in those topics. Ideally, all the dots would be on the forty-five degree line coming northeast out of the origin. Obviously, that doesn’t happen. It can’t really, because, to a very rough approximation, I’ve only given the model ninety degrees of freedom, and I’ve asked it to approximate over 32,000 data points.
Actually, this is one of the least impressive jobs the model does. I measured the correlation between measured and modeled word frequency, i.e., what this graph represents, for six hundred highly cited articles. Among those six hundred, this article had the twenty-third lowest correlation between measured and modeled frequency. But in many cases that correlation was very strong. For example, here are the graphs for three more articles where the model manages to understand what’s happening.
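The correlation statistic behind these graphs is an ordinary Pearson correlation between the two frequency columns. Here is a minimal sketch; the frequency values are invented, standing in for a real article’s measured and modeled columns.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up measured and modeled frequencies for a handful of word types.
measured = [0.050, 0.020, 0.010, 0.004, 0.001]
modeled  = [0.040, 0.025, 0.008, 0.005, 0.002]
print(pearson(measured, modeled))
```

A value near 1 means the dots hug the forty-five degree line; articles like “The Paradox of the Preface” score much lower.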
There are some articles that it doesn’t manage as well—typically articles with unusual words. (It also does poorly with short articles, like “The Paradox of the Preface”.)
A few different things are going on here. In Elster’s article, the model doesn’t expect any philosophy article to use the word revenge as much as the author does. In Fara’s article, the model lumps articles about modality (especially possible worlds) in with articles on dispositions. (This ends up being topic 80.) And so it expects that Fara will talk about worlds, given that he is also talking about dispositions, but he doesn’t. Thomson’s article has both of these features. The model is surprised that anyone is talking about clay so much. And it expects that a metaphysics article like Thomson’s will talk about properties more than Thomson does.
It isn’t perfect, but as the examples above show, it does pretty well in some cases. The papers I’ve shown so far are outliers, though; here are some more typical examples:
The general picture, then, is that the model does a pretty good job of modeling 32,000 articles given the tools it has. And, more importantly from the perspective of this book, the way it models them ends up grouping like articles together. That’s what I’ll use for describing trends in the journals over their first 138 years.