License: CC0 1.0 Public Domain (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset was collected to support the development of intelligent digital media art systems enhanced by multimodal perception fusion. It contains synchronized physiological, visual, and audio data from 250 participants interacting with various digital artworks in a controlled environment.
Modalities Included:
Physiological Signals:
Heart Rate Variability (HRV)
Electrodermal Activity (EDA)
Visual Data:
Facial expression video frames
Audio Data:
Voice tone and emotional speech cues
Data Collection Environment: Participants were exposed to immersive digital art pieces while their reactions were recorded using wearable biosensors, HD video cameras, and high-quality microphones. Each session lasted approximately 5–7 minutes, and all signals were time-synchronized onto a common timeline.
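Because the physiological, visual, and audio streams are recorded at different sampling rates, time synchronization typically means resampling one stream onto another's timeline. A minimal sketch of nearest-timestamp alignment, using illustrative sampling rates and values (the dataset's actual rates and field names are not specified above):

```python
# Hypothetical sketch: aligning two signal streams recorded at different
# rates onto a common timeline by nearest-timestamp matching. Sampling
# rates and values below are illustrative, not from the dataset itself.
from bisect import bisect_left

def align_nearest(timestamps, values, target_ts):
    """For each target timestamp, pick the sample whose timestamp is closest."""
    out = []
    for t in target_ts:
        i = bisect_left(timestamps, t)
        # Candidates are the samples just before and just after t.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(timestamps)),
            key=lambda j: abs(timestamps[j] - t),
        )
        out.append(values[best])
    return out

# Example: EDA sampled at 4 Hz, HRV at 1 Hz; resample EDA onto the HRV timeline.
eda_ts = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
eda = [0.31, 0.32, 0.35, 0.37, 0.40, 0.41, 0.39, 0.38, 0.36]
hrv_ts = [0.0, 1.0, 2.0]

aligned = align_nearest(eda_ts, eda, hrv_ts)
print(aligned)  # one EDA value per HRV timestamp → [0.31, 0.40, 0.36]
```

In practice, a dedicated time-series library (e.g. interpolation rather than nearest-sample picking) may give smoother alignment; nearest matching is shown here only because it is the simplest scheme consistent with "time-synchronized" signals.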
This CSV dataset contains 1,000 rows capturing multimodal indicators of English teaching effectiveness in university courses. It integrates textual, audio, visual, and behavioral features to provide a comprehensive view of student engagement and instructional quality.
Key Features (Columns):
Text_Feedback_Score: Student feedback on teaching quality (1–5 scale)
Audio_Clarity_Score: Speech clarity and pronunciation score (0–1 scale)
Visual_Attention_Score: Visual engagement score (0–1 scale)
Behavior_Participation: Student participation in class (0–10 scale)
Behavior_Homework_Completion: Homework completion rate (0–1 scale)
Teaching_Effectiveness: Overall teaching effectiveness score (0–5 scale)
Rows and Columns:
Rows: 1,000
Columns: 6
Usability: The dataset is suitable for research in educational data mining, multimodal evaluation of teaching effectiveness, and intelligent assessment systems.
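Since each column has a documented numeric range, a natural first step before any analysis is to load the file and validate that every value falls within its stated scale. A minimal stdlib-only sketch (the sample rows and in-memory file are illustrative stand-ins for the real CSV; column names and ranges follow the description above):

```python
# Hypothetical sketch: load rows in the described schema and check that
# each score lies in its documented range. SAMPLE stands in for the real
# CSV file; column names and ranges come from the dataset description.
import csv
import io

SAMPLE = """Text_Feedback_Score,Audio_Clarity_Score,Visual_Attention_Score,Behavior_Participation,Behavior_Homework_Completion,Teaching_Effectiveness
4.2,0.88,0.76,7.5,0.93,4.1
3.1,0.72,0.64,5.0,0.81,3.4
"""

RANGES = {
    "Text_Feedback_Score": (1, 5),
    "Audio_Clarity_Score": (0, 1),
    "Visual_Attention_Score": (0, 1),
    "Behavior_Participation": (0, 10),
    "Behavior_Homework_Completion": (0, 1),
    "Teaching_Effectiveness": (0, 5),
}

def load_and_validate(fp):
    """Parse the CSV and reject any value outside its documented scale."""
    rows = []
    for row in csv.DictReader(fp):
        parsed = {k: float(v) for k, v in row.items()}
        for col, (lo, hi) in RANGES.items():
            if not lo <= parsed[col] <= hi:
                raise ValueError(f"{col}={parsed[col]} outside [{lo}, {hi}]")
        rows.append(parsed)
    return rows

rows = load_and_validate(io.StringIO(SAMPLE))
print(len(rows), "rows validated")
```

To use the actual dataset, replace `io.StringIO(SAMPLE)` with an opened file handle for the CSV; the validated rows can then feed directly into educational data-mining or multimodal-assessment pipelines.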