9 datasets found
  1. Data from: Webis-Web-Archive-17

    • webis.de
    • zenodo.org
    Cite
    Johannes Kiesel; Martin Potthast; Matthias Hagen; Benno Stein; Florian Kneist (2017). Webis-Web-Archive-17 [Dataset]. http://doi.org/10.5281/zenodo.1002203
    Dataset updated
    2017
    Dataset provided by
    GESIS - Leibniz Institute for the Social Sciences
    University of Kassel, hessian.AI, and ScaDS.AI
    The Web Technology & Information Systems Network
    Friedrich Schiller University Jena
    Bauhaus-Universität Weimar
    Authors
    Johannes Kiesel; Martin Potthast; Matthias Hagen; Benno Stein; Florian Kneist
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Webis-Web-Archive-17 comprises 10,000 web page archives from mid-2017, carefully sampled from the Common Crawl to include a mixture of high-ranking and low-ranking web pages. The dataset contains the web archive files, HTML DOM, and screenshots of each web page, as well as per-page annotations of visual web archive quality.
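    Since the per-page archives are WARC files, they can be inspected with any WARC reader. Below is a minimal sketch using the warcio library; the file name is a placeholder, not the dataset's actual naming scheme.

```python
# Minimal sketch: iterate over one of the dataset's WARC files and print
# the target URL and HTTP status of every response record.
# 'example-page.warc.gz' is a placeholder file name.
from warcio.archiveiterator import ArchiveIterator

with open('example-page.warc.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == 'response':
            url = record.rec_headers.get_header('WARC-Target-URI')
            status = record.http_headers.get_statuscode()
            print(status, url)
```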

  2. Webis-Web-Archive-Quality-22

    • anthology.aicmu.ac.cn
    • webis.de
    Cite
    Martin Potthast; Johannes Kiesel; Benno Stein (2022). Webis-Web-Archive-Quality-22 [Dataset]. http://doi.org/10.5281/zenodo.6881334
    Dataset updated
    2022
    Dataset provided by
    Leipzig University
    The Web Technology & Information Systems Network
    Bauhaus-Universität Weimar
    Authors
    Martin Potthast; Johannes Kiesel; Benno Stein
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Webis-Web-Archive-Quality-22 comprises 6,500 pairs of web page screenshots, one taken as the page was archived and one as it was reproduced from that archive, along with archive-quality annotations and information on the DOM elements visible in each screenshot.
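    A natural first step with these screenshot pairs is a crude pixel-level comparison of the archived and reproduced renderings. The sketch below does this with Pillow and NumPy; the file names are hypothetical and the dataset's actual layout may differ.

```python
# Minimal sketch: mean absolute pixel difference between the archived and
# the reproduced screenshot of one page. File names are hypothetical.
import numpy as np
from PIL import Image

archived = np.asarray(Image.open('page-0001-archived.png').convert('RGB'), dtype=np.float32)
reproduced = np.asarray(Image.open('page-0001-reproduced.png').convert('RGB'), dtype=np.float32)

# Crop to the common region in case the two screenshots differ in size.
h = min(archived.shape[0], reproduced.shape[0])
w = min(archived.shape[1], reproduced.shape[1])
print('mean absolute pixel difference:',
      float(np.abs(archived[:h, :w] - reproduced[:h, :w]).mean()))
```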

  3. Webis-Web-Errors-19

    • data.niaid.nih.gov
    • webis.de
    Cite
    Kiesel, Johannes (2024). Webis-Web-Errors-19 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_2549837
    Dataset updated
    Jul 24, 2024
    Dataset provided by
    Stein, Benno
    Potthast, Martin
    Hubricht, Fabienne
    Kiesel, Johannes
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Webis-Web-Errors-19 comprises various annotations for the 10,000 web page archives of the Webis-Web-Archive-17. The annotations state whether the page is (1) mostly advertisement, (2) cut off, (3) still loading, or (4) pornographic; and to what degree (not / a bit / very) it shows (5) pop-ups, (6) CAPTCHAs, or (7) error messages. If you use this dataset in your research, please cite the accompanying paper.
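    Assuming the annotations ship as a tabular file, loading and tabulating them with pandas might look like the sketch below. The file name and column name are hypothetical; check the dataset's documentation for the real schema.

```python
# Minimal sketch: load the per-page annotations and tabulate one of the
# three-level labels. File name and column names are hypothetical.
import pandas as pd

annotations = pd.read_csv('webis-web-errors-19.csv')
print(annotations.columns.tolist())                   # inspect the real schema first
print(annotations['error-messages'].value_counts())   # e.g. not / a bit / very
```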

  4. Webis-Web-Segments-20

    • data.niaid.nih.gov
    Cite
    Meyer, Lars (2023). Webis-Web-Segments-20 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3354902
    Dataset updated
    Feb 16, 2023
    Dataset provided by
    Komlossy, Kristof
    Kneist, Florian
    Meyer, Lars
    Stein, Benno
    Potthast, Martin
    Kiesel, Johannes
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset of crowdsourced annotations for web page segmentations. The web pages are taken from the Webis-Web-Archive-17.

  5. Webis-WebSeg-20

    • webis.de
    • anthology.aicmu.ac.cn
    Cite
    Johannes Kiesel; Lars Meyer; Benno Stein; Martin Potthast (2020). Webis-WebSeg-20 [Dataset]. http://doi.org/10.5281/zenodo.3354902
    Dataset updated
    2020
    Dataset provided by
    Enginsight GmbH
    GESIS - Leibniz Institute for the Social Sciences
    University of Kassel, hessian.AI, and ScaDS.AI
    The Web Technology & Information Systems Network
    Bauhaus-Universität Weimar
    Authors
    Johannes Kiesel; Lars Meyer; Benno Stein; Martin Potthast
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Webis-WebSeg-20 dataset comprises 42,450 crowdsourced segmentations for 8,490 web pages from the Webis-Web-Archive-17. For each page, a ground-truth segmentation was fused from the segmentations of five crowd workers.
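    Per-page web segmentations are commonly stored as JSON with polygon coordinates. A heavily hedged sketch of reading one such file follows; the file name and JSON layout are assumptions, not the documented format.

```python
# Minimal sketch: count the segments in one page's segmentation file.
# 'ground-truth.json' and the 'segmentations' key are assumptions about
# the layout; consult the dataset documentation for the real structure.
import json

with open('ground-truth.json') as f:
    page = json.load(f)

# Assumed layout: a dict mapping segmentation names (e.g. per worker or
# fused) to lists of segments, each segment given as polygon coordinates.
for name, segments in page['segmentations'].items():
    print(f'{name}: {len(segments)} segments')
```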

  6. Webis Clickbait Corpus 2017 (Webis-Clickbait-17)

    • live.european-language-grid.eu
    • data.niaid.nih.gov
    Cite
    (2022). Webis Clickbait Corpus 2017 (Webis-Clickbait-17) [Dataset]. https://live.european-language-grid.eu/catalogue/corpus/7559
    Available download formats
    html
    Dataset updated
    May 29, 2022
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Webis Clickbait Corpus 2017 (Webis-Clickbait-17) comprises a total of 38,517 Twitter posts from 27 major US news publishers. In addition to the posts, information about the articles linked in the posts is included. The posts were published between November 2016 and June 2017. To avoid publisher and topical biases, a maximum of ten posts per day and publisher were sampled. All posts were annotated on a 4-point scale [not click baiting (0.0), slightly click baiting (0.33), considerably click baiting (0.66), heavily click baiting (1.0)] by five annotators from Amazon Mechanical Turk. A total of 9,276 posts are considered clickbait by the majority of annotators. In terms of size, this corpus exceeds the Webis Clickbait Corpus 2016 by one order of magnitude.

    The corpus is divided into two logical parts, a training and a test dataset. The training dataset has been released in the course of the Clickbait Challenge and a download link is provided below. To allow for an objective evaluation of clickbait detection systems, the test dataset is currently available only through the Evaluation-as-a-Service platform TIRA. On TIRA, developers can deploy clickbait detection systems and execute them against the test dataset. The performance of the submitted systems can be viewed on the TIRA page of the Clickbait Challenge.

    To make working with the Webis Clickbait Corpus 2017 convenient, and to allow for its validation and replication, we are developing and sharing a number of software tools:

    Corpus Viewer. Our Django web service for exploring corpora. For importing the Webis Clickbait Corpus 2017 into the corpus viewer, we provide an appropriate configuration file.

    MTurk Manager. Our Django web service for conducting sophisticated crowdsourcing tasks on Amazon Mechanical Turk. The service allows you to manage projects, upload batches of HITs, apply custom reviewing interfaces, and more. To make the clickbait crowdsourcing task replicable, we share the worker template that we used to instruct the workers and to display the tweets. Also shared is a reviewing template that can be used to quickly accept/reject assignments and assess the quality of the received annotations.

    Web Archiver. Software for archiving web pages as WARC files and reproducing them later on. This software can be used to open the WARC archives provided above.

    In addition to the corpus "clickbait17-train-170630.zip", we provide the original WARC archives of the articles that are linked in the posts. They are split into five archives that can be extracted separately.
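    For orientation, a minimal reader sketch follows, assuming the training archive contains JSON-lines files of post instances and truth labels keyed by a shared id, as in the Clickbait Challenge; the file and field names should be verified against the actual data.

```python
# Minimal sketch: join posts with their clickbait scores, assuming the
# training archive contains JSON-lines files 'instances.jsonl' and
# 'truth.jsonl' keyed by a shared 'id' field (verify against the data).
import json

def read_jsonl(path):
    with open(path, encoding='utf-8') as f:
        return {obj['id']: obj for obj in map(json.loads, f)}

instances = read_jsonl('instances.jsonl')
truth = read_jsonl('truth.jsonl')

for post_id in list(truth)[:3]:
    print(instances[post_id]['postText'], '->', truth[post_id]['truthMean'])
```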

  7. geohist.ca website files/fichiers du site web geohist.ca

    • search.dataone.org
    • borealisdata.ca
    Cite
    Fortin, Marcel (2023). geohist.ca website files/fichiers du site web geohist.ca [Dataset]. http://doi.org/10.5683/SP2/OWEBOJ
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Fortin, Marcel
    Description

    Archive of the Geohistory/Géohistoire website and related files. Captured July 17, 2020.

  8. COCO 2014 Dataset (for YOLOv3)

    • kaggle.com
    Cite
    Jeff Faudi (2021). COCO 2014 Dataset (for YOLOv3) [Dataset]. https://www.kaggle.com/datasets/jeffaudi/coco-2014-dataset-for-yolov3
    Available download formats
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Sep 9, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jeff Faudi
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context

    The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 164K images.

    This is the original version from 2014, made available here for easy access on Kaggle and because it no longer seems to be available on the COCO dataset website. It has been retrieved from the mirror that Joseph Redmon set up on his own website.

    Content

    The 2014 version of the COCO dataset is an excellent object detection dataset with 80 classes, 82,783 training images, and 40,504 validation images. This dataset contains all of this imagery in two folders, as well as annotations with the class and location (bounding box) of the objects contained in each image.

    The initial split provides training (83K), validation (41K), and test (41K) sets. Since the split between training and validation was not optimal in the original dataset, there are also two text (.part) files with a new split that reserves only 5,000 images for validation and uses the rest for training. The test set has no labels and can be used for visual validation or pseudo-labelling.
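    Since COCO annotations follow a standard JSON format, they can be read with pycocotools. A minimal sketch follows; the annotation path is an assumption about where you extracted the files.

```python
# Minimal sketch: print the labelled bounding boxes of one validation image,
# assuming the standard COCO annotation JSON is included in the download.
from pycocotools.coco import COCO

coco = COCO('annotations/instances_val2014.json')  # assumed path
img_id = coco.getImgIds()[0]
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
    category = coco.loadCats(ann['category_id'])[0]['name']
    print(category, ann['bbox'])  # bbox is [x, y, width, height]
```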

    Acknowledgements

    This dataset is mostly inspired by Erik Linder-Norén and [Joseph Redmon](https://pjreddie.com/darknet/yolo).

  9. Data from Transnational Mod Languages (09-2018)/05 TML Website/TML Website...

    • data.wu.ac.at
    Cite
    Arts (2018). Data from Transnational Mod Languages (09-2018)/05 TML Website/TML Website Storage/5 TML NEWS/TML is involved in the new archive exhibition, Edinburgh December 2015 – January 2016) [Dataset]. https://data.wu.ac.at/schema/data_bris_ac_uk_data_/YWRmMzUzMWYtZjM1OS00OWRkLWE2YjUtYjRmOWQ5YWZjNzM3
    Available download formats
    jpeg (31534), pdf (829537), jpeg (77833), docx (79806), png (666062)
    Dataset updated
    Oct 2, 2018
    Dataset provided by
    Arts
    License

    Non-Commercial Government Licence: http://www.nationalarchives.gov.uk/doc/non-commercial-government-licence/non-commercial-government-licence.htm

    Description

    Data from Transnationalizing Modern Languages (09-2018)

    Transnationalizing Modern Languages: Mobility, Identity and Translation in Modern Italian Cultures (TML) (funded by the AHRC under the ‘Translating Cultures’ theme, 2014-17)

    PI Charles Burdett, University of Bristol. CIs Jenny Burns (Warwick), Loredana Polezzi (Warwick/Cardiff), Derek Duncan (St Andrews), Margaret Hills de Zarate (QMU)

    RAs: Barbara Spadaro (Bristol), Carlo Pirozzi (St Andrews), Marco Santello (Warwick), Naomi Wells (Warwick), Luisa Percopo (Cardiff)

    PhD students: Iacopo Colombini (St Andrews), Georgia Wall (Warwick)

    Below is a short description of the project. Within the repository, there is a longer description of TML and each folder is accompanied by an explanatory text.

    The project investigates practices of linguistic and cultural interchange within communities and individuals and explores the ways in which cultural translation intersects with linguistic translation in the everyday lives of people. The project has used as its primary object of enquiry the 150-year history of Italy as a nation state and its patterns of emigration and immigration. TML has concentrated on a series of exemplary cases, representative of the geographic, historical and linguistic map of Italian mobility. Focussing on the cultural associations that each community has formed, it examines the wealth of publications and materials that are associated with these organizations.

    Working closely with researchers from across Modern Languages, the project has sought to demonstrate the principle that language is most productively apprehended in the frame of translation and the national in the frame of the transnational. TML is contributing to the development of a new framework for the disciplinary field of MLs, one which puts the interaction of languages and cultures at its core.

    The principles of co-production and co-research lie at the core of the project, and TML has worked closely with an extensive range of partners, including Castlebrae and Drummond Community High Schools and cultural associations across the world. The project exhibition, featuring the research of the project and including the work of photographer Mario Badagliacca, was curated by Viviana Gravano and Giulia Grechi of Routes Agency. Project events in the UK have drawn on the expertise of Rita Wilson (Monash), the writer Shirin Ramzanali Fazel, and all members of the Advisory Board. The project, in close collaboration with the University of Namibia (UNAM) and the Phoenix Project (Cardiff), has been followed by 'TML: Global Challenges'.

