4 datasets found
  1. GoogleNews-vectors-negative300

    • figshare.com
    application/gzip
    Updated Jun 29, 2023
    Cite
    Gleb Sokolov (2023). GoogleNews-vectors-negative300 [Dataset]. http://doi.org/10.6084/m9.figshare.23601195.v1
    313 scholarly articles cite this dataset (view in Google Scholar)
    Available download formats: application/gzip
    Dataset updated: Jun 29, 2023
    Dataset provided by: Figshare (http://figshare.com/)
    Authors: Gleb Sokolov
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)

    Description: GoogleNews-vectors-negative300
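
    Judging by the name, this is the widely used archive of 300-dimensional Word2Vec vectors trained on Google News. A minimal sketch of loading it with gensim; the exact file name is an assumption based on the usual gzipped binary distribution:

    from gensim.models import KeyedVectors

    # Load the pre-trained 300-dimensional Google News vectors. The gzipped
    # binary file name is an assumption based on the usual distribution.
    wv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin.gz", binary=True)

    print(wv["computer"].shape)                 # (300,)
    print(wv.most_similar("computer", topn=3))  # nearest neighbours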

  2. Data from: Negative Sampling Improves Hypernymy Extraction Based on Projection Learning

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Arefyev, Nikolay (2020). Negative Sampling Improves Hypernymy Extraction Based on Projection Learning [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_290524
    Dataset updated: Jan 24, 2020
    Dataset provided by: Panchenko, Alexander; Ustalov, Dmitry; Arefyev, Nikolay; Biemann, Chris
    License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/ (license information was derived automatically)

    Description

    We present a new approach to the extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction has not been studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach on three datasets from different languages.

    The Russian model was trained in the following environment:

    $ python -V; pip show tensorflow numpy scipy scikit-learn gensim | egrep -i '(name|version)'
    Python 3.5.2 :: Continuum Analytics, Inc.
    Name: tensorflow
    Version: 0.12.1
    Name: numpy
    Version: 1.12.0
    Name: scipy
    Version: 0.18.1
    Name: scikit-learn
    Version: 0.18.1
    Name: gensim
    Version: 0.13.4.1

    The english-combined model was trained on the EVALution, BLESS, K&H+N, and ROOT09 datasets combined, using the well-known word embeddings trained on Google News (GoogleNews-vectors-negative300.bin). The english-evalution model was trained on EVALution only. They were trained in the following environment:

    $ python -V; pip show tensorflow numpy scipy scikit-learn gensim | egrep -i '(name|version)'
    Python 3.5.2 :: Anaconda custom (64-bit)
    Name: tensorflow
    Version: 0.12.1
    Name: numpy
    Version: 1.11.3
    Name: scipy
    Version: 0.18.1
    Name: scikit-learn
    Version: 0.18.1
    Name: gensim
    Version: 0.13.4.1
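
    For intuition, below is a toy sketch of the projection-learning idea with explicit negative examples, on random stand-in data. It is an illustration only, not the released TensorFlow models; the hinge-style push term and all hyperparameters are assumptions:

    import numpy as np

    # Learn a projection matrix Phi that maps a hyponym vector close to its
    # hypernym vector, while a hinge-style penalty pushes projections away
    # from negative (non-hypernym) examples that fall inside a margin.
    rng = np.random.default_rng(0)
    d, n = 50, 200
    X = rng.normal(size=(n, d))   # hyponym embeddings (toy data)
    Y = rng.normal(size=(n, d))   # corresponding hypernym embeddings
    Z = rng.normal(size=(n, d))   # negative examples for the same hyponyms

    Phi = np.eye(d)               # start from the identity projection
    lr, lam, margin = 0.01, 0.5, 100.0   # margin is on squared distance
    for _ in range(200):
        P = X @ Phi.T                       # projected hyponyms, shape (n, d)
        grad = 2 * (P - Y).T @ X / n        # pull projections toward hypernyms
        close = np.sum((P - Z) ** 2, axis=1) < margin
        if close.any():                     # push away negatives inside margin
            grad += lam * 2 * (Z[close] - P[close]).T @ X[close] / n
        Phi -= lr * grad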

  3. Data from: Agricultural Research Word Vectors

    • datasets.ai
    • catalog.data.gov
    Updated Mar 30, 2024
    Cite
    Department of Agriculture (2024). Agricultural Research Word Vectors [Dataset]. https://datasets.ai/datasets/agricultural-research-word-vectors-e02a7
    Available download formats: 57
    Dataset updated: Mar 30, 2024
    Dataset authored and provided by: Department of Agriculture
    Description

    This model was originally trained for use in a recommendation system for the Ag Data Commons that will automatically link viewers of one dataset to other directly relevant datasets and research papers they may be interested in. It was also used to determine the similarities and differences between projects within ARS’ National Programs and to create a visualization layer that lets leaders explore and manage their programs easily.

    This model was generated using the Word2Vec algorithm, starting from a set of word vectors trained on Google News articles and further training them on the titles+abstracts from PubAg and the titles+descriptions from Ag Data Commons. It was trained with a vector length of 300, using the Continuous Bag of Words version of the algorithm with negative sampling.

    This word vector model could be used for any natural-language-processing application involving text with a large amount of agricultural research vocabulary.
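
    A hedged sketch of the described setup in gensim (4.x): CBOW (sg=0) with negative sampling and 300-dimensional vectors, seeded from the Google News vectors and then trained further on domain text. The placeholder corpus, the negative=10 setting, and the lock-factor workaround are all assumptions; the exact preprocessing and software versions used by ARS are not stated:

    import numpy as np
    from gensim.models import Word2Vec

    # Placeholder corpus standing in for tokenized PubAg titles+abstracts
    # and Ag Data Commons titles+descriptions.
    corpus = [["soil", "moisture", "sensor", "calibration"],
              ["wheat", "yield", "nitrogen", "fertilizer"]]

    model = Word2Vec(vector_size=300, sg=0, negative=10, min_count=1)
    model.build_vocab(corpus)
    # Seed the overlapping vocabulary from the Google News vectors; a per-word
    # lock-factor array with lockf=1.0 lets the imported vectors keep training.
    model.wv.vectors_lockf = np.ones(len(model.wv), dtype=np.float32)
    model.wv.intersect_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True, lockf=1.0)
    model.train(corpus, total_examples=model.corpus_count, epochs=5)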


    Resources in this dataset:

    • Resource Title: Agricultural Word Vectors.

      File Name: AgWordVectors-300.zip

      Resource Description: Word vectors trained on the full titles/abstracts in PubAg and titles/abstracts in Ag Data Commons. (Part A)


    • Resource Title: Agricultural Word Vectors Trainables.

      File Name: AgWordVectors-300.model.trainables.syn1neg.zip

      Resource Description: Word vectors trained on the full titles/abstracts in PubAg and titles/abstracts in Ag Data Commons. (Part B)


    • Resource Title: Agricultural Word Vector Model.

      File Name: AgWordVectors-300.model.wv_.vectors.zip

      Resource Description: Word vectors trained on the full titles/abstracts in PubAg and titles/abstracts in Ag Data Commons. (Part C)
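
    The three archives look like one gensim model saved across multiple files (the main .model file plus auxiliary arrays). Assuming they all unpack into the same directory, loading would plausibly be:

    from gensim.models import Word2Vec

    # Assumption: the extracted files sit side by side, e.g.
    # AgWordVectors-300.model, AgWordVectors-300.model.trainables.syn1neg.npy,
    # AgWordVectors-300.model.wv.vectors.npy; gensim reattaches the auxiliary
    # arrays automatically on load.
    model = Word2Vec.load("AgWordVectors-300.model")
    print(model.wv.most_similar("soybean", topn=5))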

  4. Data from: Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types

    • data.niaid.nih.gov
    • datasetcatalog.nlm.nih.gov
    • +1 more
    zip
    Updated Apr 7, 2020
    Cite
    David Rozado (2020). Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types [Dataset]. http://doi.org/10.5061/dryad.rbnzs7h7w
    Available download formats: zip
    Dataset updated: Apr 7, 2020
    Dataset provided by: Otago Polytechnic
    Authors: David Rozado
    License: CC0 1.0, https://spdx.org/licenses/CC0-1.0.html

    Description

    Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other bias types, however, have received less scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding models along the lines of gender and ethnicity, but also along the less frequently studied dimensions of socioeconomic status, age, physical appearance, sexual orientation, religious sentiment, and political leanings. Consistent with previous scholarly literature, this work found systemic bias against given names popular among African-Americans in most embedding models examined. Gender bias in embedding models, however, appears to be multifaceted and often reversed in polarity to what has been regularly reported. Interestingly, using the common operationalization of the term bias in the fairness literature, novel, so far unreported types of bias in word embedding models have also been identified. Specifically, the popular embedding models analyzed here display negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and intellectual phenomena such as Islamic religious faith, non-religiosity, and conservative political orientation. The reasons for the paradoxical underreporting of these bias types in the relevant literature are probably manifold, but widely held blind spots when searching for algorithmic bias and a lack of widespread technical jargon to unambiguously describe a variety of algorithmic associations could conceivably be playing a role. The causal origins of the multiplicity of loaded associations attached to distinct demographic groups within embedding models are often unclear, but the heterogeneity of said associations and their potential multifactorial roots raise doubts about the validity of grouping them all under the umbrella term bias. Richer and more fine-grained terminology, as well as a more comprehensive exploration of the bias landscape, could help the fairness epistemic community characterize and neutralize algorithmic discrimination more efficiently.

    Methods: This dataset collects several popular pre-trained word embedding models.

    - Word2vec Skip-Gram trained on the Google News corpus (100B tokens): https://code.google.com/archive/p/word2vec/

    - GloVe trained on Wikipedia 2014 + Gigaword 5 (6B tokens): http://nlp.stanford.edu/data/glove.6B.zip

    - GloVe trained on a 2B-tweet Twitter corpus (27B tokens): http://nlp.stanford.edu/data/glove.twitter.27B.zip

    - GloVe trained on Common Crawl (42B tokens): http://nlp.stanford.edu/data/glove.42B.300d.zip

    - GloVe trained on Common Crawl (840B tokens): http://nlp.stanford.edu/data/glove.840B.300d.zip

    - FastText trained with subword information on Wikipedia 2017, the UMBC webbase corpus, and the statmt.org news dataset (16B tokens): https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M-subword.vec.zip

    - FastText trained with subword information on Common Crawl (600B tokens): https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip
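
    As a minimal illustration of measuring sentiment associations in one of the listed models (not the author's actual pipeline; the tiny word lists below stand in for the large sentiment lexicons):

    import numpy as np
    from gensim.models import KeyedVectors

    # Load the 6B-token GloVe vectors after unzipping glove.6B.zip;
    # no_header=True handles the GloVe text format (gensim >= 4.0).
    wv = KeyedVectors.load_word2vec_format(
        "glove.6B.300d.txt", binary=False, no_header=True)

    positive = ["good", "excellent", "pleasant", "wonderful"]  # placeholder
    negative = ["bad", "terrible", "unpleasant", "awful"]      # lexicons

    def sentiment_association(words):
        # Mean similarity to the positive lexicon minus the negative one;
        # a positive score means the words lean toward positive sentiment.
        pos = np.mean([wv.similarity(w, p) for w in words for p in positive])
        neg = np.mean([wv.similarity(w, n) for w in words for n in negative])
        return pos - neg

    print(sentiment_association(["engineer", "scientist"]))
    print(sentiment_association(["elderly", "senior"]))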

