Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset Card for aeroBERT-NER
Dataset Summary
This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme.
There are a total of 1,432 sentences. The creation of this dataset is aimed at:
(1) Making available an open-source dataset for aerospace requirements, which are often proprietary
(2) Fine-tuning language models for token identification… See the full description on the dataset page: https://huggingface.co/datasets/archanatikayatray/aeroBERT-NER.
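As a quick illustration of working with the BIO tags, here is a minimal sketch using the Hugging Face datasets library; the split and column names are assumptions, so check the dataset page for the actual schema.

# Minimal sketch: load aeroBERT-NER and print its BIO tags.
# Assumptions: a "train" split with "tokens"/"ner_tags" columns.
from datasets import load_dataset

ds = load_dataset("archanatikayatray/aeroBERT-NER", split="train")
example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    # BIO scheme over five categories: SYS, VAL, ORG, DATETIME, RES
    print(token, tag)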
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Annotated dataset for training named entity recognition models for medieval charters in Latin, French, and Spanish.
The original raw texts for all charters were collected from four charter collections:
- HOME-ALCAR corpus : https://zenodo.org/record/5600884
- CBMA : http://www.cbma-project.eu
- Diplomata Belgica : https://www.diplomata-belgica.be
- CODEA corpus : https://corpuscodea.es/
We include (i) the annotated training datasets, (ii) the contextual and static embeddings trained on medieval multilingual texts, and (iii) the named entity recognition models trained using two architectures: Bi-LSTM-CRF with stacked embeddings, and fine-tuning of BERT-based models (mBERT and RoBERTa).
Code, datasets, and notebooks used to train the models can be consulted in our GitLab repository: https://gitlab.com/magistermilitum/ner_medieval_multilingual
Our best RoBERTa model is also available on the Hugging Face Hub: https://huggingface.co/magistermilitum/roberta-multilingual-medieval-ner
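As a usage sketch, the published model can be queried through the transformers token-classification pipeline; the aggregation setting and the sample charter text are our assumptions, not part of the original release.

from transformers import pipeline

# Model id taken from the release above; the aggregation choice is ours.
ner = pipeline("token-classification",
               model="magistermilitum/roberta-multilingual-medieval-ner",
               aggregation_strategy="simple")
print(ner("Ego Guillelmus de Montepessulano dono ecclesiae Sancti Petri decimam meam."))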
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Dataset Card for FDA CDRH Device Recalls NER Dataset
This is an FDA Medical Device Recalls dataset created for medical device Named Entity Recognition (NER).
Dataset Details
Dataset Description
This dataset was created for performing NER tasks. It is based on the OpenFDA Device Recalls dataset, which has been processed and annotated for NER. The Device Recalls data has been further processed to extract the recall action element, which… See the full description on the dataset page: https://huggingface.co/datasets/mfarrington/biobert-ner-fda-recalls-dataset.
naorm/dnrti-securebert-ner-512 dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This contains the merged dataset described in the work "Multi-head CRF classifier for biomedical multi-class Named Entity Recognition on Spanish clinical notes".
This dataset consists of 4 separate datasets:
The dataset contains two tasks:
Task 1: This task is related to multi-class Named Entity Recognition. This dataset contains 5 possible classes: SYMPTOM, PROCEDURE, DISEASE, CHEMICAL and PROTEIN.
Task 2: This task is related to Named Entity Linking, where each code corresponds to a code within the SNOMED-CT corpus. The exact corpus used can be obtained here. Furthermore, for the MedProcNER, SympTEMIST, and DisTEMIST datasets, a gazetteer is provided in the original datasets.
For more information on the construction of the dataset, as well as the dataloaders, we refer you to our GitHub repository.
This release also contains the embeddings from the SapBERT model.
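Since the release mentions SapBERT embeddings, here is a hedged sketch of how such mention embeddings are commonly computed; the checkpoint name and [CLS] pooling are assumptions, and the paper's exact setup may differ.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the widely used SapBERT checkpoint; the paper may use another.
name = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

mentions = ["fiebre", "apendicectomia"]  # example Spanish clinical mentions
batch = tokenizer(mentions, padding=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**batch).last_hidden_state[:, 0]  # [CLS] pooling
print(embeddings.shape)  # one vector per mention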
Please cite:
@article{jonker2024a,
  title     = {Multi-head {{CRF}} classifier for biomedical multi-class named entity recognition on {{Spanish}} clinical notes},
  author    = {Jonker, Richard A. A. and Almeida, Tiago and Antunes, Rui and Almeida, Jo{\~a}o R. and Matos, S{\'e}rgio},
  year      = {2024},
  journal   = {Database},
  publisher = {Oxford University Press}
}
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
naorm/malware-text-db-securebert-ner-512 dataset hosted on Hugging Face and contributed by the HF Datasets community
CC0 1.0 Universal (Public Domain Dedication) https://creativecommons.org/publicdomain/zero/1.0/
By Babelscape (from Hugging Face)
The Babelscape/wikineural NER Dataset is a comprehensive and diverse collection of multilingual text data specifically designed for the task of Named Entity Recognition (NER). It offers an extensive range of labeled sentences in nine different languages: French, German, Portuguese, Spanish, Polish, Dutch, Russian, English, and Italian.
Each sentence in the dataset contains tokens (words or characters) that have been labeled with named entity recognition tags. These tags provide valuable information about the type of named entity each token represents. The dataset also includes a language column to indicate the language in which each sentence is written.
This dataset serves as an invaluable resource for developing and evaluating NER models across multiple languages. It encompasses various domains and contexts to ensure diversity and representativeness. Researchers and practitioners can utilize this dataset to train and test their NER models in real-world scenarios.
By using this dataset for NER tasks, users can enhance their understanding of how named entities are recognized across different languages. Furthermore, it enables benchmarking performance comparisons between various NER models developed for specific languages or trained on multiple languages simultaneously.
Whether you are an experienced researcher or a beginner exploring multilingual NER tasks, the Babelscape/wikineural NER Dataset provides a highly informative and versatile resource that can contribute to advancements in natural language processing and information extraction applications on a global scale.
Understand the Data Structure:
- The dataset consists of labeled sentences in nine different languages: French (fr), German (de), Portuguese (pt), Spanish (es), Polish (pl), Dutch (nl), Russian (ru), English (en), and Italian (it).
- Each sentence is represented by three columns: tokens, ner_tags, and lang.
- The tokens column contains the individual words or characters in each labeled sentence.
- The ner_tags column provides named entity recognition tags for each token, indicating their entity types.
- The lang column specifies the language of each sentence.
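A minimal sketch of inspecting this structure with the datasets library; the per-language split name used below is an assumption, so verify it against the dataset page.

from datasets import load_dataset

# Assumption: per-language splits such as "train_en".
ds = load_dataset("Babelscape/wikineural", split="train_en")
row = ds[0]
print(row["tokens"])    # list of word tokens
print(row["ner_tags"])  # one NER tag per token
print(row["lang"])      # language code, "en" here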
Explore Different Languages:
- Since this dataset covers multiple languages, you can choose to focus on a specific language or perform cross-lingual analysis.
- Analyzing multiple languages can help uncover patterns and differences in named entities across various linguistic contexts.
Preprocessing and Cleaning:
- Before training your NER models or applying any NLP techniques to this dataset, it's essential to preprocess and clean the data.
- Consider removing any unnecessary punctuation marks or special characters unless they carry significant meaning in certain languages.
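One subtlety when cleaning token-level data is keeping the tags aligned with the tokens you keep. A hedged sketch (the punctuation test is ASCII-only, an assumption that may not suit every language):

import string

def clean(tokens, tags, keep=frozenset()):
    # Drop pure-punctuation tokens while keeping tags aligned; `keep`
    # preserves characters that carry meaning in some languages.
    pairs = [(tok, tag) for tok, tag in zip(tokens, tags)
             if tok in keep or not all(c in string.punctuation for c in tok)]
    return [t for t, _ in pairs], [g for _, g in pairs]

tokens = ["Barack", "Obama", ",", "b", ".", "1961"]
tags = ["B-PER", "I-PER", "O", "O", "O", "O"]
print(clean(tokens, tags))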
Training Named Entity Recognition Models:
- 4a. Data Splitting: Divide the dataset into training, validation, and testing sets based on your requirements, using appropriate ratios.
- 4b. Feature Extraction: Prepare input features from the tokenized text, such as word embeddings or character-level representations, depending on your model choice (see the alignment sketch after this list).
- 4c. Model Training: Train state-of-the-art NER models (e.g., LSTM-CRF, Transformer-based models) on the labeled sentences and ner_tags columns.
- 4d. Evaluation: Evaluate your trained model's performance using the provided validation dataset or test datasets specific to each language.
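Step 4b hides most of the practical work: subword tokenizers split words apart, so word-level tags must be re-aligned to subword positions. A minimal sketch of the usual alignment trick, assuming a generic multilingual BERT tokenizer; special tokens and word continuations get the label -100 so the loss ignores them.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize_and_align(tokens, word_tags):
    enc = tokenizer(tokens, is_split_into_words=True, truncation=True)
    labels, prev = [], None
    for word_id in enc.word_ids():
        if word_id is None or word_id == prev:
            labels.append(-100)  # special token or continuation: ignored
        else:
            labels.append(word_tags[word_id])
        prev = word_id
    enc["labels"] = labels
    return enc

print(tokenize_and_align(["Angela", "Merkel"], [1, 2]))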
Applying Pretrained Models:
- Instead of training a model from scratch, you can leverage existing pretrained NER models such as BERT, GPT-2, or spaCy's named entity recognition capabilities.
- Fine-tune these pretrained models on your specific NER task using the labeled sentences and ner_tags provided in this dataset.
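For the spaCy route, a pretrained pipeline already exposes entities without any fine-tuning. A minimal sketch, assuming the small English model has been downloaded separately:

# Assumes: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple was founded by Steve Jobs in California.")
for ent in doc.ents:
    print(ent.text, ent.label_)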
- Training NER models: This dataset can be used to train NER models in multiple languages. By providing labeled sentences and their corresponding named entity recognition tags, the dataset can help train models to accurately identify and classify named entities in different languages.
- Evaluating NER performance: The dataset can be used as a benchmark to evaluate the performance of pre-trained or custom-built NER models. By using the labeled sentences as test data, developers and researchers can measure the accuracy, precision, recall, and F1-score of their models across multiple languages (a metric sketch follows this list).
- Cross-lingual analysis: With labeled sentences available in nine different languages, researchers can perform cross-lingual analysis...
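As a hedged sketch of the entity-level metric computation mentioned above, using the seqeval library (which scores whole entity spans rather than individual tokens):

from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted tag sequences (one sentence each).
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))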
nabin2004/location-ner-4-BERT dataset hosted on Hugging Face and contributed by the HF Datasets community
juliadollis/stf_ner_pierreguillou-ner-bert-large-cased-pt-lenerbr dataset hosted on Hugging Face and contributed by the HF Datasets community
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Overview
This data was used to train the model https://huggingface.co/mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2. There are 19 different entity types in this dataset: "chemical", "complex_assembly", "evidence", "experimental_method", "gene", "mutant", "oligomeric_state", "protein", "protein_state", "protein_type", "ptm", "residue_name", "residue_name_number", "residue_number", "residue_range", "site", "species", "structure_element", "taxonomy_domain". The data is prepared as IOB… See the full description on the dataset page: https://huggingface.co/datasets/mevol/protein_structure_NER_model_v1.2.
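As a usage sketch for the trained model (the aggregation setting and the example sentence are our assumptions):

from transformers import pipeline

ner = pipeline("token-classification",
               model="mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2",
               aggregation_strategy="simple")
print(ner("The crystal structure of lysozyme was solved at 1.5 Angstrom resolution."))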
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Overview
This data was used to train the model https://huggingface.co/PDBEurope/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2. There are 19 different entity types in this dataset: "chemical", "complex_assembly", "evidence", "experimental_method", "gene", "mutant", "oligomeric_state", "protein", "protein_state", "protein_type", "ptm", "residue_name", "residue_name_number", "residue_number", "residue_range", "site", "species", "structure_element", "taxonomy_domain". The data is prepared as… See the full description on the dataset page: https://huggingface.co/datasets/PDBEurope/protein_structure_NER_model_v1.2.
Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
Task: Token Classification
Model: Ravindra001/bert-finetuned-ner
Dataset: wikiann
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
Contributions
Thanks to @lewtun for evaluating this model.
Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
Task: Token Classification
Model: AJGP/bert-finetuned-ner
Dataset: conll2003
Config: conll2003
Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
Contributions
Thanks to @hrezaeim for evaluating this model.
Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
Task: Token Classification
Model: Abelll/bert-finetuned-ner
Dataset: conll2003
Config: conll2003
Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
Contributions
Thanks to @chnlyi for evaluating this model.
BSD License https://choosealicense.com/licenses/bsd/
Chinese resume NER dataset; source: https://github.com/luopeixiang/named_entity_recognition . The data format is as follows: each line consists of one character and its corresponding tag, the tag set is BIOES, and sentences are separated by a blank line.
美 B-LOC
国 E-LOC
的 O
华 B-PER
莱 I-PER
士 E-PER

我 O
跟 O
他 O
谈 O
笑 O
风 O
生 O
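A minimal sketch of parsing this one-character-per-line BIOES format into sentences; the file name is a placeholder:

def read_bioes(path):
    # Parse "char tag" lines; blank lines separate sentences.
    sentences, chars, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # sentence boundary
                if chars:
                    sentences.append((chars, tags))
                    chars, tags = [], []
                continue
            ch, tag = line.split()
            chars.append(ch)
            tags.append(tag)
    if chars:
        sentences.append((chars, tags))
    return sentences

# Example (placeholder file name): read_bioes("resume_train.txt")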
Results
Comparison of different models:
BERT-tiny results

model      precision  recall  f1-score  training data
BERT-tiny  0.9490     0.9538  0.9447    all
BERT-tiny  0.9278     0.9251  0.9313    100 samples

Note:
In later testing, BERT-tiny (softmax) + 100 training samples did not reproduce the 0.9313 result; the best result so far is 0.8612. BERT-tiny +… See the full description on the dataset page: https://huggingface.co/datasets/ttxy/resume_ner.
Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
Task: Token Classification
Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
Dataset: lener_br
Config: lener_br
Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
Contributions
Thanks to @Luciano for evaluating this model.
Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
Task: Token Classification
Model: siddharthtumre/biobert-ner
Dataset: jnlpba
Config: jnlpba
Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
Contributions
Thanks to @siddharthtumre for evaluating this model.