This dataset was created by Megha Kapoor
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Description
This is the dataset repository used in the pyiqa toolbox. Please refer to Awesome Image Quality Assessment for details of each dataset. Example command-line script with huggingface-cli:
huggingface-cli download chaofengc/IQA-PyTorch-Datasets live.tgz --local-dir ./datasets --repo-type dataset
cd datasets
tar -xzvf live.tgz
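The extraction step above can also be done portably with Python's standard-library tarfile module. The sketch below is self-contained: it builds a tiny stand-in live.tgz first, since in practice the real archive comes from the huggingface-cli command; all paths are illustrative.

```python
import os
import tarfile
import tempfile

# Stand-in for the downloaded archive: build a tiny live.tgz so the
# sketch runs on its own (normally the file comes from huggingface-cli).
workdir = tempfile.mkdtemp()
member = os.path.join(workdir, "live", "readme.txt")
os.makedirs(os.path.dirname(member))
with open(member, "w") as f:
    f.write("LIVE IQA dataset placeholder\n")
archive = os.path.join(workdir, "live.tgz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(member, arcname="live/readme.txt")

# Equivalent of `tar -xzvf live.tgz` run inside the datasets directory
out_dir = os.path.join(workdir, "datasets")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path=out_dir)

extracted = os.path.join(out_dir, "live", "readme.txt")
print(open(extracted).read().strip())  # -> LIVE IQA dataset placeholder
```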
Disclaimer for This Dataset Collection
This collection of datasets is compiled and maintained for academic, research, and educational… See the full description on the dataset page: https://huggingface.co/datasets/chaofengc/IQA-PyTorch-Datasets.
Forked from https://github.com/huggingface/transformers
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
pytorch-image-models metrics
This dataset contains metrics about the huggingface/pytorch-image-models package.
Number of repositories in the dataset: 3615
Number of packages in the dataset: 89
Package dependents
This contains the data available in the used-by tab on GitHub.
Package & Repository star count
This section shows the package and repository star count, individually.
There are 18 packages that have more than 1000… See the full description on the dataset page: https://huggingface.co/datasets/open-source-metrics/pytorch-image-models-dependents.
!python -m pip install --upgrade /kaggle/input/pytorchhuggingface-wheels-cuda-116/*.whl
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
SloNER is a model for Slovenian Named Entity Recognition. It is a PyTorch neural network model, intended for use with the HuggingFace transformers library (https://github.com/huggingface/transformers).
The model is based on the Slovenian RoBERTa contextual embeddings model SloBERTa 2.0 (http://hdl.handle.net/11356/1397). The model was trained on the SUK 1.0 training corpus (http://hdl.handle.net/11356/1747). The source code of the model is available in the GitHub repository https://github.com/clarinsi/SloNER.
!python -m pip install --upgrade /kaggle/input/pytorchhuggingface-wheels/*.whl
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
kye/all-pytorch-code dataset hosted on Hugging Face and contributed by the HF Datasets community
CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/
DeBERTa v3 PyTorch model from Hugging Face:
https://huggingface.co/microsoft/deberta-v3-base/tree/main
Meant to be used for competitions where internet access is disallowed.
Accelerate is a Python library that makes it easy to run raw PyTorch training scripts on any kind of device, and it integrates into existing code with minimal changes. More details here: https://huggingface.co/blog/accelerate-library
pytorch-survival/gene_annotations dataset hosted on Hugging Face and contributed by the HF Datasets community
This dataset was created by RAHUL BAJAJ
To load pretrained transformer weights, specify the local path in the corresponding from_pretrained call:
from transformers import AutoModel  # tokenizers load the same way via AutoTokenizer

model_path = '../input/transformers/roberta-base'
model = AutoModel.from_pretrained(model_path)
See this notebook for a more detailed example.
The dataset includes the following weights, configs and tokenizers:
- albert-large-v2
- bert-base-uncased
- bert-large-uncased
- distilroberta-base
- distilbert-base-uncased
- google/electra-base-discriminator
- facebook/bart-base
- facebook/bart-large
- funnel-transformer/small
- funnel-transformer/large
- roberta-base
- roberta-large
- t5-base
- t5-large
- xlnet-base-cased
- xlnet-large-cased
All files were downloaded from the Hugging Face Model Hub at https://huggingface.co/models. License: Apache License 2.0
CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/
Crayon2023/pytorch-llama dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Pretrained language models for detecting and classifying the presence of sex education concepts in Slovene curriculum documents. The models are PyTorch neural network models, intended for use with the HuggingFace transformers library (https://github.com/huggingface/transformers).
The models are based on the Slovenian RoBERTa contextual embeddings model SloBERTa 2.0 (http://hdl.handle.net/11356/1397) and on the CroSloEngual BERT model (http://hdl.handle.net/11356/1330). The source code of the models and example usage is available in the GitHub repository https://github.com/TimotejK/SemSex. The models and tokenizers can be loaded with the AutoModelForSequenceClassification.from_pretrained() and AutoTokenizer.from_pretrained() functions from the transformers library. An example of such usage is available at https://github.com/TimotejK/SemSex/blob/main/Concept%20detection/Classifiers/full_pipeline.py.
The corpus on which these models have been trained is available at http://hdl.handle.net/11356/1895.
This dataset was created by Hari Prasath V
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
kye/pytorch-repo-code dataset hosted on Hugging Face and contributed by the HF Datasets community
Crayon2023/pytorch-Qwen-7B dataset hosted on Hugging Face and contributed by the HF Datasets community