We provide a training corpus that covers several common attribution and clustering scenarios.
In last year’s competition, the corpus consisted of several thousand relatively small documents, with distractor sets comprising hundreds of authors. This proved impractical for many participants, especially those who relied on machine-aided rather than fully automatic analysis. This year we have instead focused on a smaller group of larger documents, more typical of the kind of cases usually handled by “traditional” close reading.
Last year’s corpus was drawn from the Enron email corpus; this year’s was instead collected from the free fiction collection published by Feedbooks.com, including both classic fiction that is now out of copyright and original fiction published on the site. This of course introduces the standard problem of analysis-by-Google, but that is very difficult to avoid short of commissioning content to order.