Authorship attribution is an important problem in information retrieval and computational linguistics but also in applied areas such as law and journalism where knowing the author of a document (such as a ransom note) may enable e.g. law enforcement to save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given a sample of reference documents from a restricted and finite set of candidate authors, the task is to determine the most likely author of a previously unseen document of unknown authorship. This task may be quite challenging in cross-domain conditions, when documents of known and unknown authorship come from different domains (e.g., thematic area, genre). In addition, it is often more realistic to assume that the true author of a disputed document is not necessarily included in the list of candidates.
Many approaches have been proposed recently to identify the author of a given document. In doing so, one fact is often silently assumed: that the given document is indeed written by a single author. For a realistic author identification system, it is therefore crucial to first determine whether a document is single- or multi-authored.
To this end, previous PAN editions aimed to analyze multi-authored documents. As it has been shown that reliably identifying individual authors and their contributions within a single document is a hard problem (Author Diarization, 2016; Style Breach Detection, 2017), last year's task substantially relaxed the problem by asking only for a binary decision (single- or multi-authored). Considering the promising results achieved by the submitted approaches (see the overview paper for details), we continue last year's task and additionally ask participants to predict the number of involved authors.
Given a document, participants thus should apply intrinsic style analyses to hierarchically answer the following questions:
Is the document written by one or more authors, i.e., do style changes exist or not?
If it is multi-authored, how many authors have collaborated?
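The two questions above can be sketched as a hierarchical decision in Python. This is a minimal illustration, not a reference implementation: `count_style_changes` stands in for a hypothetical style-analysis model that the participant would have to build.

```python
def predict_author_count(document: str, count_style_changes) -> int:
    """Hierarchical sketch: first decide single- vs. multi-authored,
    then estimate how many authors collaborated.

    `count_style_changes` is a hypothetical model that estimates the
    number of style changes (authorship switches) in the document.
    """
    changes = count_style_changes(document)
    if changes == 0:
        return 1  # no style change detected -> single-authored
    # k switches bound the author count by k + 1, but authors may recur
    # (see the example below: 8 switches, 4 authors), so a real system
    # would cluster the segments by style to refine this upper bound.
    return changes + 1

# Trivial placeholder model for illustration:
print(predict_author_count("some text", lambda doc: 0))  # 1
```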
All documents are provided in English and may contain anywhere from zero to arbitrarily many style changes, resulting from arbitrarily many authors.
The training set contains 50% of the whole dataset and includes the ground-truth solutions. Use this set to train your models.
Like last year, the whole dataset is based on user posts from various sites of the StackExchange network, covering different topics and containing approximately 300 to 2000 tokens per document.
For each problem instance X, two files are provided:
problem-X.txt contains the actual text
problem-X.truth contains the ground truth, i.e., the correct solution in JSON format:
{ "authors": number_of_authors, "structure": [author_segment_1, ..., author_segment_n], "switches": [character_pos_switch_segment_1, ..., character_pos_switch_segment_n-1] }
An example for a multi-author document could look as follows:
{ "authors": 4, "structure": ["A1", "A2", "A4", "A2", "A4", "A2", "A3", "A2", "A4"], "switches": [805, 1552, 2827, 3584, 4340, 5489, 7564, 8714] }
whereas a single-author document would have exactly the following form:
{ "authors": 1, "structure": ["A1"], "switches": [] }
Note that authors within the structure correspond only to the respective document, i.e., they are not the same over the whole dataset. For example, author A1 in document 1 is most likely not the same author as A1 in document 2 (it could be, but as there are hundreds of authors the chances are very small that this is the case). Further, please consider that the structure and the switches are provided only as additional resources for the development of your algorithms, i.e., they are not expected to be predicted.
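As a minimal sketch, the truth format above can be loaded and sanity-checked in Python. The helper name is illustrative; the consistency rules it asserts follow directly from the format: n segments imply n − 1 switch positions, and the declared author count must match the distinct labels in the structure.

```python
import json

def load_truth(truth_json: str) -> dict:
    """Parse the contents of a problem-X.truth file and sanity-check it."""
    truth = json.loads(truth_json)
    structure = truth["structure"]
    switches = truth["switches"]
    # Each switch marks a boundary between two consecutive segments,
    # so n segments imply exactly n - 1 switch positions.
    assert len(switches) == len(structure) - 1
    # The declared author count must match the distinct labels used.
    assert truth["authors"] == len(set(structure))
    # A single-author document has one segment and no switches.
    if truth["authors"] == 1:
        assert structure == ["A1"] and switches == []
    return truth

# The multi-author example from above:
example = '''{ "authors": 4,
  "structure": ["A1", "A2", "A4", "A2", "A4", "A2", "A3", "A2", "A4"],
  "switches": [805, 1552, 2827, 3584, 4340, 5489, 7564, 8714] }'''
truth = load_truth(example)
print(truth["authors"])  # 4
```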
To tackle the problem, you can develop novel approaches, extend existing algorithms from last year's task or adapt approaches from related problems such as intrinsic plagiarism detection or text segmentation. You are also free to additionally evaluate your approaches on last year's training/validation/test dataset (for the number of authors use the corresponding meta data).
Social media bots pose as humans to influence users for commercial, political or ideological purposes. For example, bots could artificially inflate the popularity of a product by promoting it and/or writing positive ratings, as well as undermine the reputation of competing products through negative reviews. The threat is even greater when the purpose is political or ideological (see the Brexit referendum or the US presidential elections). Fearing the effect of this influence, the German political parties rejected the use of bots in their electoral campaign for the general elections. Furthermore, bots are commonly linked to the spreading of fake news. Approaching the identification of bots from an author profiling perspective is therefore of high importance from the point of view of marketing, forensics and security.
After having addressed several aspects of author profiling in social media from 2013 to 2018 (age and gender, also together with personality; gender and language variety; and gender from a multimodality perspective), this year we aim to investigate whether the author of a Twitter feed is a bot or a human and, in the case of a human, to profile the author's gender.
The uncompressed dataset consists of one folder per language (en, es). Each folder contains:
An XML file per author (Twitter user) with 100 tweets. The name of the XML file corresponds to the unique author id.
A truth.txt file with the list of authors and the ground truth.
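A minimal sketch of reading these two file types in Python follows. Note the assumptions: the `:::`-separated layout of truth.txt lines and the `<author>/<documents>/<document>` XML structure are not stated on this page and are inferred from earlier PAN author-profiling editions, so adjust them against the actual dataset.

```python
import xml.etree.ElementTree as ET

def parse_truth_line(line: str):
    """Split one truth.txt line into (author_id, bot_or_human, gender).
    The ':::' separator is an assumption based on earlier PAN editions."""
    author_id, bot_or_human, gender = line.strip().split(":::")
    return author_id, bot_or_human, gender

def parse_author_xml(xml_text: str):
    """Extract the tweets from one author file. The tag layout below is
    assumed, not confirmed by the task description."""
    root = ET.fromstring(xml_text)
    return [doc.text for doc in root.iter("document")]

# Illustrative inputs (not real dataset content):
truth_line = "1a2b3c:::bot:::bot"
xml_sample = """<author lang="en">
  <documents>
    <document>first tweet</document>
    <document>second tweet</document>
  </documents>
</author>"""

print(parse_truth_line(truth_line))       # ('1a2b3c', 'bot', 'bot')
print(len(parse_author_xml(xml_sample)))  # 2
```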