2 datasets found
  1. Webis-QSeC-10

    • anthology.aicmu.ac.cn
    • webis.de
    Updated 2010
    Cite
    Matthias Hagen; Martin Potthast; Benno Stein (2010). Webis-QSeC-10 [Dataset]. http://doi.org/10.5281/zenodo.3256198
    Dataset updated
    2010
    Dataset provided by
    Leipzig University
    Bauhaus-Universität Weimar
    The Web Technology & Information Systems Network
    Friedrich Schiller University Jena
    Authors
    Matthias Hagen; Martin Potthast; Benno Stein
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Webis Query Segmentation Corpus 2010 (Webis-QSeC-10) contains segmentations for 53,437 web queries obtained from Mechanical Turk crowdsourcing (4,850 used for training in our CIKM 2012 paper). For each query, at least 10 MTurk workers were asked to segment the query. The corpus represents the distribution of their decisions.

  2. Webis Query Segmentation Corpus 2010 (Webis-QSeC-10)

    • explore.openaire.eu
    • live.european-language-grid.eu
    • +2 more
    Updated Jul 23, 2010
    Cite
    Matthias Hagen; Martin Potthast; Benno Stein; Christof Bräutigam; Anna Beyer (2010). Webis Query Segmentation Corpus 2010 (Webis-QSeC-10) [Dataset]. http://doi.org/10.5281/zenodo.3256197
    Dataset updated
    Jul 23, 2010
    Authors
    Matthias Hagen; Martin Potthast; Benno Stein; Christof Bräutigam; Anna Beyer
    Description

    The Webis Query Segmentation Corpus 2010 (Webis-QSeC-10) contains segmentations for 53,437 web queries obtained from Mechanical Turk crowdsourcing (4,850 used for training in our CIKM 2012 paper). The original queries were extracted from the AOL query log and range from 3 to 10 keywords in length. For each query, at least 10 MTurk workers were asked to segment it; their decisions are accumulated in the corpus, which thus represents the distribution of their decisions.

    We provide the training and test sets as single folders in Zip archives containing several files:

    • The files "...-queries.txt" contain the query strings and a unique ID for each query.
    • The files "...-segmentations-crowdsourced.txt" contain the crowdsourced segmentations with their number of votes per query ID (see below for an example).
    • The "data" folders contain all the data (n-gram frequencies, PMI values, POS tags, etc.) needed to replicate the evaluation results of our proposed segmentation algorithms.
    • For convenience, the folder "segmentations-of-algorithms" contains the segmentations that our proposed algorithms compute.

    The examples below demonstrate two different cases.

    Sample queries with internal ID (as in "Webis-QSeC-10-training-set-queries.txt"):

    2315313155 harvard community credit union
    1858084875 women's cycling tops

    Sample segmentations (as in "webis-qsec-10-training-set-segmentations-crowdsourced.txt"):

    2315313155 [(6, 'harvard community credit union'), (2, 'harvard community|credit union'), (1, 'harvard|community|credit union'), (1, 'harvard|community credit union')]
    1858084875 [(5, "women's|cycling tops"), (2, "women's|cycling|tops"), (2, "women's cycling|tops"), (1, "women's cycling tops")]

    Each query has a unique internal ID (e.g., 2315313155 in the first example), and the segmentations file records at least 10 worker decisions for that query. In the first example, 6 workers kept all 4 keywords in one segment, 2 workers decided to break after the second word (segment boundaries are denoted by |), and so on. Note that the apostrophe in the second example (query ID 1858084875) is accommodated by wrapping the segmentation strings in double quotes.

    References: Matthias Hagen, Martin Potthast, Benno Stein, and Christof Bräutigam. The Power of Naïve Query Segmentation. In Fabio Crestani et al., editors, 33rd International ACM Conference on Research and Development in Information Retrieval (SIGIR 10), pages 797-798, July 2010. ACM. ISBN 978-1-4503-0153-4.

  3. Not seeing a result you expected?
    Learn how you can add new datasets to our index.
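
The per-query vote lists in the "...-segmentations-crowdsourced.txt" files are written as Python literals, so a line can be parsed with the standard library alone. A minimal sketch (the helper name parse_segmentations is our own, not part of the corpus), using the sample line quoted in the description above:

```python
import ast

# Sample line from a "...-segmentations-crowdsourced.txt" file: a query ID
# followed by a Python-literal list of (vote count, segmentation) pairs,
# where '|' marks a segment boundary inside a segmentation string.
line = ("2315313155 [(6, 'harvard community credit union'), "
        "(2, 'harvard community|credit union'), "
        "(1, 'harvard|community|credit union'), "
        "(1, 'harvard|community credit union')]")

def parse_segmentations(line):
    """Split a corpus line into (query_id, [(votes, segmentation), ...])."""
    query_id, _, literal = line.partition(" ")
    return query_id, ast.literal_eval(literal)

query_id, votes = parse_segmentations(line)
total_votes = sum(count for count, _ in votes)   # at least 10 workers per query
majority = max(votes, key=lambda v: v[0])[1]     # most frequently chosen segmentation
segments = majority.split("|")                   # the individual segments
```

Using ast.literal_eval (rather than eval) keeps the parsing safe, since it only accepts Python literals; the double-quoted strings in apostrophe-containing queries (e.g. "women's|cycling tops") parse the same way.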

7 scholarly articles cite the Webis-QSeC-10 dataset (per Google Scholar).