This is the dataset for the Style Change Detection task of PAN 2022.
Task
The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Hence, a fundamental question is the following: if multiple authors have written a text together, can we find evidence for this fact, i.e., do we have a means to detect variations in the writing style? Answering this question is one of the most difficult and most interesting challenges in author identification: style change detection is the only means of detecting plagiarism in a document if no comparison texts are given; likewise, style change detection can help to uncover gift authorship, to verify a claimed authorship, or to develop new technology for writing support.
Previous editions of the Style Change Detection task aimed at, e.g., detecting whether a document is single- or multi-authored (2018), determining the actual number of authors within a document (2019), detecting whether there was a style change between two consecutive paragraphs (2020, 2021), and locating the actual style changes (2021). Based on the progress made towards this goal in previous years, we again extend the set of challenges to entice novices and experts alike:
Given a document, we ask participants to solve the following three tasks:
[Task1] Style Change Basic: for a text written by two authors that contains a single style change only, find the position of this change (i.e., cut the text into the two authors' texts at the paragraph level).
[Task2] Style Change Advanced: for a text written by two or more authors, find all positions of writing style change (i.e., assign all paragraphs of the text uniquely to some author out of the number of authors assumed for the multi-author document).
[Task3] Style Change Real-World: for a text written by two or more authors, find all positions of writing style change, where style changes now occur not only between paragraphs but also at the sentence level.
All documents are provided in English and may contain an arbitrary number of style changes, resulting from at most five different authors.
Data
To develop and then test your algorithms, three datasets including ground truth information are provided (dataset1 for task 1, dataset2 for task 2, and dataset3 for task 3).
Each dataset is split into three parts:
training set: Contains 70% of the whole dataset and includes ground truth data. Use this set to develop and train your models.
validation set: Contains 15% of the whole dataset and includes ground truth data. Use this set to evaluate and optimize your models.
test set: Contains 15% of the whole dataset; no ground truth data is given. This set is used for evaluation (see below).
You are free to use additional external data for training your models. However, we ask you to make the additional data utilized freely available under a suitable license.
Input Format
The datasets are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem. We provide one folder for train, validation, and test data for each dataset, respectively.
For each problem instance X (i.e., each input document), two files are provided:
problem-X.txt contains the actual text, where paragraphs are denoted by \n for tasks 1 and 2. For task 3, we provide one sentence per paragraph (again, split by \n).
truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format. An example file is listed in the following (note that we list the keys for all three tasks here):
{ "authors": NUMBER_OF_AUTHORS, "site": SOURCE_SITE, "changes": RESULT_ARRAY_TASK1 or RESULT_ARRAY_TASK3, "paragraph-authors": RESULT_ARRAY_TASK2 }
The result for task 1 (key "changes") is represented as an array, holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). For task 2 (key "paragraph-authors"), the result is the order of authors contained in the document (e.g., [1, 2, 1] for a two-author document), where the first author is "1", the second author appearing in the document is referred to as "2", etc. Furthermore, we provide the total number of authors and the StackExchange site the texts were extracted from (i.e., the topic). The result for task 3 (key "changes") is structured like the results array for task 1; however, for task 3, the changes array holds a binary value for each pair of consecutive sentences, and there may be multiple style changes in the document.
An example of a multi-author document with a style change between the third and fourth paragraph (or sentence for task 3) could be described as follows (we only list the relevant key/value pairs here):
{ "changes": [0,0,1,...], "paragraph-authors": [1,1,1,2,...] }
Output Format
To evaluate the solutions for the tasks, the results have to be stored in a single file for each of the input documents and each of the datasets. Please note that we require a solution file to be generated for each input problem for each dataset. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.
For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object holding the solution to the respective task. The solution for tasks 1 and 3 is an array containing a binary value for each pair of consecutive paragraphs (task 1) or sentences (task 3). For task 2, the solution is an array containing the order of authors contained in the document (as in the truth files).
An example solution file for tasks 1 and 3 is featured in the following (note again that for task 1, changes are captured on the paragraph level, whereas for task 3, changes are captured on the sentence level):
{ "changes": [0,0,1,0,0,...] }
For task 2, the solution file looks as follows:
{ "paragraph-authors": [1,1,2,2,3,2,...] }
This is the dataset for the Style Change Detection task of PAN 2021.
The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches.
Tasks
Given a document, we ask participants to answer the following three questions:
Single vs. Multiple. Given a text, find out whether the text is written by a single author or by multiple authors (task 1).
Style Change Basic. Given a text written by two or more authors that contains a number of style changes, find the positions of the changes (task 2).
Style Change Real-World. Given a text written by two or more authors, find all positions of writing style change, i.e., assign all paragraphs of the text uniquely to some author out of the number of authors you assume for the multi-author document (task 3).
All documents are provided in English and may contain an arbitrary number of style changes, resulting from at most five different authors. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and does not contain any style changes).
Data
The dataset is split into three parts:
training set: Contains 70% of the whole data set and includes ground truth data. Use this set to develop and train your models.
validation set: Contains 15% of the whole data set and includes ground truth data. Use this set to evaluate and optimize your models.
test set: Contains 15% of the whole data set. For the documents in the test set, you are not given ground truth data. This set is used for evaluation.
The dataset is based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem. We provide one folder for train, validation, and test data.
For each problem instance X (i.e., each input document), two files are provided:
problem-X.txt contains the actual text, where paragraphs are denoted by \n.
truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format:
{ "authors": NUMBER_OF_AUTHORS, "site": SOURCE_SITE, "multi-author": RESULT_TASK1, "changes": RESULT_ARRAY_TASK2, "paragraph-authors": RESULT_ARRAY_TASK3 } The result for task 1 (key "multi-author") is a binary value (1 if the document is multi-authored, 0 if the document is single-authored). The result for task 2 (key "changes") is represented as an array, holding a binary for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). If the document is single-authored, the solution to task 2 is an array filled with 0s. For task 3 (key "paragraph-authors"), the result is the order of authors contained in the document (e.g., [1, 2, 1] for a two-author document), where the first author is "1", the second author appearing in the document is referred to as "2", etc. Furthermore, we provide the total number of authors and the Stackoverflow site the texts were extracted from (i.e., topic).
An example of a multi-author document with a style change between the third and fourth paragraph could look as follows (we only list the relevant key/value pairs here):
{ "multi-author": 1, "changes": [0,0,1,...], "paragraph-authors": [1,1,1,2,...] } A single-author document would have the following form (again, only listing the relevant key/value pairs):
{ "multi-author": 0, "changes": [0,0,0,...], "paragraph-authors": [1,1,1,...] }
This is the data set for the Style Change Detection task of PAN 2020.
The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process, and for multi-author document analysis in general. Note that, for this task, we make the assumption that a change in writing style always signifies a change in author.
Previous editions of the Style Change Detection task aimed at, e.g., detecting whether a document is single- or multi-authored (2018) or determining the actual number of authors within a document (2019). Considering the promising results achieved by the submitted approaches, we aim to steer the task back to its original goal: detecting the exact positions of authorship changes. Therefore, the task for PAN'20 is to detect whether a document was authored by one or multiple authors and to find the positions of style changes at the paragraph level. For each pair of consecutive paragraphs of a document, we ask participants to estimate whether there is indeed a style change between those two paragraphs.
Tasks
Given a document, we ask participants to answer the following two questions:
Was the given document written by multiple authors? (task 1)
For each pair of consecutive paragraphs in the given document: is there a style change between these paragraphs? (task 2)
In other words, the goal is to determine whether the given document contains style changes and, if it indeed does, to find the positions of the changes in the document (between paragraphs).
All documents are provided in English and may contain zero up to ten style changes, resulting from at most three different authors. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and does not contain any style changes).
Data
To develop and then test your algorithms, two data sets including ground truth information are provided. Those data sets differ in their topical breadth (i.e., the number of different topics covered in the contained documents). dataset-narrow contains texts from a relatively narrow set of subject matters (all related to technology), whereas dataset-wide adds additional subject areas (travel, philosophy, economics, history, etc.).
Both of those data sets are split into three parts:
training set: Contains 50% of the whole data set and includes ground truth data. Use this set to develop and train your models.
validation set: Contains 25% of the whole data set and includes ground truth data. Use this set to evaluate and optimize your models.
test set: Contains 25% of the whole data set. For the documents in the test set, you are not given ground truth data. This set is used for evaluation (see below).
You are free to use additional external data for training your models. However, we ask you to make the additional data utilized freely available under a suitable license.
Please cite the following paper when using the provided dataset:
@InProceedings{zangerle:2020,
author = {Eva Zangerle and Maximilian Mayerl and G{\"u}nther Specht and Martin Potthast and Benno Stein},
booktitle = {{CLEF 2020 Labs and Workshops, Notebook Papers}},
editor = {Linda Cappellato and Carsten Eickhoff and Nicola Ferro and Aur{\'e}lie N{\'e}v{\'e}ol},
month = sep,
publisher = {CEUR-WS.org},
title = {{Overview of the Style Change Detection Task at PAN 2020}},
year = 2020
}
Input Format
Both dataset-narrow and dataset-wide are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem.
The structure of the provided datasets is as follows:
train/
    dataset-narrow/
    dataset-wide/
validation/
    dataset-narrow/
    dataset-wide/
test/
    dataset-narrow/
    dataset-wide/
For each problem instance X (i.e., each input document), two files are provided:
problem-X.txt contains the actual text, where paragraphs are denoted by \n.
truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format:
{
"authors": NUMBER_OF_AUTHORS,
"structure": ORDER_OF_AUTHORS,
"site": SOURCE_SITE,
"multi-author": RESULT_TASK1,
"changes": RESULT_ARRAY_TASK2
}
The result for task 1 (key "multi-author") is a binary value (1 if the document is multi-authored, 0 if the document is single-authored). The result for task 2 (key "changes") is represented as an array, holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). If the document is single-authored, the solution to task 2 is an array filled with 0s. Furthermore, we provide the order of authors contained in the document (e.g., [A1, A2, A1] for a two-author document), the total number of authors, and the StackExchange site the texts were extracted from (i.e., the topic).
An example of a multi-author document with a style change between the third and fourth paragraph could look as follows (we only list the two relevant key/value pairs here):
{
"multi-author": 1,
"changes": [0,0,1,...]
}
A single-author document would have the following form (again, only listing the two relevant key/value pairs):
{
"multi-author": 0,
"changes": [0,0,0,...]
}
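A minimal loading sketch for this format is shown below (assuming \n as the paragraph delimiter and the folder layout described above; the assertions encode the invariants just described):

import json

def load_pan20(problem_id, folder="train/dataset-narrow"):
    with open(f"{folder}/problem-{problem_id}.txt", encoding="utf-8") as f:
        paragraphs = [p for p in f.read().split("\n") if p.strip()]
    with open(f"{folder}/truth-problem-{problem_id}.json", encoding="utf-8") as f:
        truth = json.load(f)
    # "changes" holds one binary value per pair of consecutive paragraphs.
    assert len(truth["changes"]) == len(paragraphs) - 1
    # A single-authored document implies an all-zero changes array.
    if truth["multi-author"] == 0:
        assert not any(truth["changes"])
    return paragraphs, truth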
Output Format
To evaluate the solutions for the two tasks, the classification results have to be stored in a single file for each of the input documents. Please note that we require a solution file to be generated for each input problem. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.
For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object with two properties, one for each task. The actual solution for task 1 is a binary value (0 or 1). For task 2, the solution is an array containing a binary value for each pair of consecutive paragraphs.
An example solution file for a multi-authored document is featured in the following:
{
"multi-author": 1,
"changes": [0,0,1,...]
}
For a single-authored document the solution file may look as follows:
{
"multi-author": 0,
"changes": [0,0,0,...]
}
We provide you with a script to check the validity of the solution files.
Evaluation
Submissions are evaluated using the F1 score; the two tasks are evaluated independently. For task 1, we compute the average F1 score across all documents, and for task 2, we use the micro-averaged F1 score across all documents. The submissions for the two datasets are evaluated independently, and the resulting F1 scores for the two tasks are averaged across the two datasets.
We provide you with a script to compute those measures based on the produced output files.
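Our reading of these measures can be sketched as follows using scikit-learn (an interpretation for illustration, not the official evaluation script):

from sklearn.metrics import f1_score

def task1_f1(y_true, y_pred):
    # One binary label per document; F1 computed across all documents.
    return f1_score(y_true, y_pred)

def task2_f1(changes_true, changes_pred):
    # Micro-averaged F1 over all consecutive-paragraph pairs of all documents.
    flat_true = [c for doc in changes_true for c in doc]
    flat_pred = [c for doc in changes_pred for c in doc]
    return f1_score(flat_true, flat_pred, average="micro")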
Submission
Once you finished tuning your approach on the validation set, your software will be tested on the test set. During the competition, the test set will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.
We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the test corpus and (ii) an absolute path to an empty output directory:
mySoftware -i INPUT-DIRECTORY -o OUTPUT-DIRECTORY
Within OUTPUT-DIRECTORY, we require two subfolders: dataset-narrow and dataset-wide, holding the solutions for the two datasets, respectively. As the provided output directory is guaranteed to be empty, your software needs to create those subfolders. Within INPUT-DIRECTORY, you will find one folder for each dataset, holding a set of problem instances (i.e., problem-[id].txt files). For each problem instance you should produce the solution file solution-problem-[id].json in the OUTPUT-DIRECTORY. For instance, you read INPUT-DIRECTORY/dataset-narrow/problem-12.txt, process it, and write your results to OUTPUT-DIRECTORY/dataset-narrow/solution-problem-12.json.
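A skeleton matching this interface could look as follows (a hedged sketch: predict is a trivial stand-in for your own model, and \n is assumed as the paragraph delimiter):

import argparse
import json
from pathlib import Path

def predict(paragraphs):
    # Trivial stand-in for a real model: predict "single author, no changes".
    changes = [0] * (len(paragraphs) - 1)
    return {"multi-author": int(any(changes)), "changes": changes}

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="input_dir", required=True)
    parser.add_argument("-o", dest="output_dir", required=True)
    args = parser.parse_args()
    for dataset in ("dataset-narrow", "dataset-wide"):
        out_dir = Path(args.output_dir) / dataset
        out_dir.mkdir(parents=True, exist_ok=True)  # output directory starts empty
        for problem in sorted((Path(args.input_dir) / dataset).glob("problem-*.txt")):
            paragraphs = [p for p in problem.read_text(encoding="utf-8").split("\n") if p.strip()]
            # problem.stem is "problem-<id>", yielding solution-problem-<id>.json
            (out_dir / f"solution-{problem.stem}.json").write_text(
                json.dumps(predict(paragraphs)), encoding="utf-8")

if __name__ == "__main__":
    main()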