The Webis Crowd Paraphrase Corpus 2011 (Webis-CPC-11) contains 7,859 candidate paraphrases obtained via Mechanical Turk crowdsourcing. The corpus comprises 4,067 accepted paraphrases, 3,792 rejected non-paraphrases, and the original texts. These samples formed part of the PAN 2010 international plagiarism detection competition, but were not previously available separately from the rest of the competition data. We provide the dataset as a single folder in a Zip archive. Each paraphrase is represented by three files containing the original text (e.g., "1-original.txt"), the paraphrase text (e.g., "1-paraphrase.txt"), and the metadata (e.g., "1-metadata.txt"), which records the task identifier, task author identifier, time taken, and whether the paraphrase was accepted or rejected.

Reference: Steven Burrows, Martin Potthast, and Benno Stein. Paraphrase Acquisition via Crowdsourcing and Machine Learning. ACM Transactions on Intelligent Systems and Technology (TIST), 4(3):43:1-43:21, June 2013.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
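Given the three-files-per-paraphrase layout described above, one way to read a single candidate paraphrase from the extracted archive is sketched below. The filename pattern ("1-original.txt", "1-paraphrase.txt", "1-metadata.txt") comes from the description; the `load_pair` function name and the demo's metadata content are illustrative assumptions, not part of the corpus specification.

```python
from pathlib import Path
import tempfile

def load_pair(folder, idx):
    """Load one candidate paraphrase by its numeric id.

    Follows the corpus filename convention:
    "<n>-original.txt", "<n>-paraphrase.txt", "<n>-metadata.txt".
    """
    folder = Path(folder)
    return {
        "original": (folder / f"{idx}-original.txt").read_text(encoding="utf-8"),
        "paraphrase": (folder / f"{idx}-paraphrase.txt").read_text(encoding="utf-8"),
        "metadata": (folder / f"{idx}-metadata.txt").read_text(encoding="utf-8"),
    }

# Self-contained demo with synthetic files; with the real corpus,
# point load_pair at the folder extracted from the Zip archive.
# The metadata line here is made up for illustration only.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "1-original.txt").write_text("The quick brown fox.", encoding="utf-8")
    (root / "1-paraphrase.txt").write_text("A fast brown fox.", encoding="utf-8")
    (root / "1-metadata.txt").write_text("Accepted: yes", encoding="utf-8")
    pair = load_pair(root, 1)
    print(pair["original"])  # -> The quick brown fox.
```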