Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data sets are used in a controlled experiment in which two classifiers are compared. train_a.csv and explain.csv are slices of the original data set. train_b.csv contains the same instances as train_a.csv, but with feature x1 set to 0 to make it unusable to classifier B.
The original data set was created and split using this Python code:
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Generate a two-feature synthetic classification problem and scale it up
X, y = make_classification(n_samples=300, n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1, class_sep=0.75, random_state=0)
X *= 100

# Classifier A: trained on both features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
lm = LogisticRegression()
lm.fit(X_train, y_train)
clf_a = lm

# Classifier B: trained on a copy with feature x1 zeroed out
clf_b = LogisticRegression()
X2 = X.copy()
X2[:, 0] = 0
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y, test_size=0.5, random_state=0)
clf_b.fit(X2_train, y2_train)

# The held-out half serves as the explanation set
X_explain = X_test
y_explain = y_test
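A minimal sketch of loading the published CSV slices with pandas; the presence of a header row is our assumption, so adjust to the actual files:

import pandas as pd

# train_b.csv holds the same rows as train_a.csv, but with x1 set to 0
train_a = pd.read_csv("train_a.csv")
train_b = pd.read_csv("train_b.csv")
explain = pd.read_csv("explain.csv")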
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the Zip, spectral.npy is the average spectral data of red ginseng, mycotoxins, and interference impurities, and label.npy contains the corresponding labels. The spectral data have shape [1200, 510] and the label data have shape [1200, 1]. An example of data usage (the scikit-learn Python library is used to establish the classification model) follows:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, accuracy_score
# Load spectral data and labels
x = np.load('.../spectral.npy')[:,1:-1]
y = np.load('.../label.npy')
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
# Data standardization
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
# Train the KNN model
knn_model = KNeighborsClassifier(n_neighbors=5)
knn_model.fit(x_train, y_train)
# Predict
y_pred = knn_model.predict(x_test)
# Print classification reports and accuracy rates
print("Classification Report:")
print(classification_report(y_test, y_pred))
print("Accuracy Score:")
print(accuracy_score(y_test, y_pred))
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Example of usage:

from datasets import load_dataset

dataset = load_dataset("Andron00e/CIFAR100-custom")
splitted_dataset = dataset["train"].train_test_split(test_size=0.2)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
🔒 Collection of Privacy-Sensitive Conversations between Care Workers and Care Home Residents in a Residential Care Home 🔒
The dataset is useful to train and evaluate models to identify and classify privacy-sensitive parts of conversations from text, especially in the context of AI assistants and LLMs.
The provided data format is .jsonl, the JSON Lines text format, also called newline-delimited JSON. An example entry looks as follows:

{
  "text": "CW: Have you ever been to Italy? CR: Oh, yes... many years ago.",
  "taxonomy": 0,
  "category": 0,
  "affected_speaker": 1,
  "language": "en",
  "locale": "US",
  "data_type": 1,
  "uid": 16,
  "split": "train"
}
The data fields are:

text: a string feature. The abbreviations of the speakers refer to the care worker (CW) and the care recipient (CR).
taxonomy: a classification label, with possible values including informational (0), invasion (1), collection (2), processing (3), dissemination (4), physical (5), personal-space (6), territoriality (7), intrusion (8), obtrusion (9), contamination (10), modesty (11), psychological (12), interrogation (13), psychological-distance (14), social (15), association (16), crowding-isolation (17), public-gaze (18), solitude (19), intimacy (20), anonymity (21), reserve (22). The taxonomy is derived from Rueben et al. (2017). The classifications were manually labeled by an expert.
category: a classification label, with possible values including personal-information (0), family (1), health (2), thoughts (3), values (4), acquaintance (5), appointment (6). The privacy category affected in the conversation. The classifications were manually labeled by an expert.
affected_speaker: a classification label, with possible values including care-worker (0), care-recipient (1), other (2), both (3). The speaker whose privacy is impacted during the conversation. The classifications were manually labeled by an expert.
language: a string feature. Language code as defined by ISO 639.
locale: a string feature. Regional code as defined by ISO 3166-1 alpha-2.
data_type: a classification label, with possible values including real (0), synthetic (1).
uid: an int64 feature. A unique identifier within the dataset.
split: a string feature. Either train, validation, or test.

The dataset has 2 subsets:
split: with a total of 95 examples split into train, validation, and test (70%-15%-15%)
unsplit: with a total of 95 examples in a single train split

name | train | validation | test |
---|---|---|---|
split | 66 | 14 | 15 |
unsplit | 95 | n/a | n/a |
The files follow the naming convention subset-split-language.jsonl. The following files are contained in the dataset:
split-train-en.jsonl
split-validation-en.jsonl
split-test-en.jsonl
unsplit-train-en.jsonl
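A minimal sketch of reading one of these files with Python's standard json module, using the file names and field names listed above:

import json

# Read one JSON object per line (JSON Lines format)
with open("split-train-en.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

# Example: count entries whose data_type marks them as synthetic (1)
synthetic = [ex for ex in examples if ex["data_type"] == 1]
print(len(examples), len(synthetic))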
Recording audio of care workers and residents during care interactions, which include partial and full body washing, giving of medication, and wound care, is a highly privacy-sensitive use case. Therefore, a dataset was created that includes privacy-sensitive parts of conversations, synthesized from real-world data. This dataset serves as a basis for fine-tuning a local LLM to highlight and classify privacy-sensitive sections of transcripts created in care interactions, so that they can be masked to protect privacy.
The initial data was collected in the project Caring Robots of TU Wien in cooperation with Caritas Wien. One project track aims to use Large Language Models (LLMs) to support the documentation work of care workers, with LLM-generated summaries of audio recordings of interactions between care workers and care home residents. The initial data are the transcriptions of those care interactions.
The transcriptions were thoroughly reviewed, and sections containing privacy-sensitive information were identified and marked by two experts using qualitative data analysis software. Subsequently, the accessible portions of the interviews were translated from German to US English using the locally executed LLM icky/translate. In the next step, another locally run LLM, llama3.1:70b, was used to synthesize the conversation segments. This process involved generating similar, yet distinct and new, conversations that are not linked to the original data. The dataset was split using the train_test_split function from scikit-learn (https://scikit-learn.org/1.5/modules/generated/sklearn.model_selection.train_test_split.html).
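For illustration, here is one way a 70%-15%-15% split can be obtained with train_test_split; the two-step call and the seed are our assumptions, not the authors' documented parameters:

from sklearn.model_selection import train_test_split

# 'examples' is the list of dataset entries loaded earlier.
# First split off 70% for training, then halve the remaining 30%.
train, rest = train_test_split(examples, test_size=0.3, random_state=42)
validation, test = train_test_split(rest, test_size=0.5, random_state=42)
print(len(train), len(validation), len(test))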
This dataset contains the bounding box annotations of the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset for object detection in XML format.
It has been adapted from 200 Bird Species with 11,788 Images.
The train/test split is according to the information provided in the original train_test_split.txt file.
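For orientation, a hedged sketch of applying that split file; it assumes the standard CUB-200-2011 layout, where each line of train_test_split.txt holds an image id and a 0/1 training flag:

# Parse train_test_split.txt: "<image_id> <is_training_image>" per line
train_ids, test_ids = set(), set()
with open("train_test_split.txt") as f:
    for line in f:
        image_id, is_train = line.split()
        (train_ids if is_train == "1" else test_ids).add(image_id)
print(len(train_ids), "train images,", len(test_ids), "test images")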
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Authors: Hongxin Li, Jingfan Chen, Jingran Su, Yuntao Chen, Qing Li, Zhaoxiang Zhang
Authors' Paper: arXiv
Authors' Project Website: AutoGUI: Scaling GUI Grounding with Autonomous Functionality Annotations from LLMs
Original dataset: AutoGUI/AutoGUI-v1-702k
Changes made:
- excluded data with image resolutions of 427x745 and 590x1008 using Dataset.filter(lambda e: e["image_size"] not in ["427x745","590x1008"])
- sliced 40% of the total dataset using Dataset.train_test_split(train_size=0.4… See the full description on the dataset page: https://huggingface.co/datasets/4lspace/AutoGUI-v1-280k.
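A minimal sketch of how the described filtering and slicing could be reproduced, assuming the original dataset exposes a single train split and an image_size column as named above:

from datasets import load_dataset

# Load the original dataset (assumes all examples live in a "train" split)
ds = load_dataset("AutoGUI/AutoGUI-v1-702k", split="train")

# Drop the two excluded resolutions
ds = ds.filter(lambda e: e["image_size"] not in ["427x745", "590x1008"])

# Keep 40% of what remains; train_test_split returns a DatasetDict
ds = ds.train_test_split(train_size=0.4)["train"]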
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
MIDIstral
MIDI images with detailed captions for MIDI description and classification
Install and use
from PIL import Image
import io
from datasets import load_dataset

def deserialize_image(byte_data):
    # Wrap the raw bytes in a buffer and decode them with Pillow
    img_byte_arr = io.BytesIO(byte_data)
    img = Image.open(img_byte_arr)
    return img

dataset = load_dataset("asigalov61/MIDIstral", split='train').train_test_split(test_size=0.01)
dataset_split = 'train'… See the full description on the dataset page: https://huggingface.co/datasets/asigalov61/MIDIstral.
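A hypothetical usage sketch of deserialize_image; the column name "image" is an assumption, so check the dataset schema for the field that actually stores the image bytes:

dataset_split = 'train'
sample = dataset[dataset_split][0]
img = deserialize_image(sample["image"])  # "image" column name is an assumption
img.show()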
A training dataset for the NLI task released by Kakao. It consists of three columns in total: two sentences and a description of their relationship. There are 7,128 sentence pairs in total. Reference: (https://github.com/kakaobrain/kor-nlu-datasets)
Usage example:
from datasets import load_dataset

dataset_repo = "x2bee/Korean_STS_all"
dataset = load_dataset(dataset_repo)
dataset = dataset['train']

test_size = 0.1
test_split_seed = 42

split_dataset = dataset.train_test_split(test_size=test_size, seed=test_split_seed)
train_dataset = split_dataset["train"]
test_dataset = split_dataset["test"]
def… See the full description on the dataset page: https://huggingface.co/datasets/CocoRoF/Korean_STS_all.