Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Card for the AMI dataset for speaker diarization
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings… See the full description on the dataset page: https://huggingface.co/datasets/diarizers-community/ami.
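As a minimal sketch of how this diarization variant might be loaded and inspected with the `datasets` library (the config name "ihm" and the feature names `timestamps_start`, `timestamps_end`, and `speakers` are assumptions based on common diarization schemas, not confirmed by this card):

```python
from datasets import load_dataset

# Config name "ihm" (individual headset microphones) is an assumption.
ds = load_dataset("diarizers-community/ami", "ihm", split="train")

example = ds[0]
audio = example["audio"]  # dict with 'array' and 'sampling_rate'
print(audio["sampling_rate"], len(audio["array"]))

# Assumed annotation layout: parallel lists of segment boundaries and labels.
for start, end, spk in zip(
    example["timestamps_start"][:5],
    example["timestamps_end"][:5],
    example["speakers"][:5],
):
    print(f"{spk}: {start:.2f}s - {end:.2f}s")
```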
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Card for "AMIsum"
Dataset Summary
AMIsum is a meeting summarization dataset based on the AMI Meeting Corpus (https://groups.inf.ed.ac.uk/ami/corpus/). The dataset uses the meeting transcripts as the source data and the abstractive summaries as the target data.
Supported Tasks and Leaderboards
More Information Needed
Languages
English
Dataset Structure
Data Instances
{'transcript': '
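The example instance above is truncated. A minimal sketch of inspecting one instance, assuming the dataset is published on the Hugging Face Hub (the repository path and the name of the summary field are hypothetical):

```python
from datasets import load_dataset

# Hypothetical Hub path; substitute the actual AMIsum repository name.
ds = load_dataset("your-org/AMIsum", split="train")

example = ds[0]
print(sorted(example.keys()))        # the card implies a 'transcript' source field
print(example["transcript"][:300])   # beginning of the meeting transcript
# The abstractive summary target is assumed to live under a key such as 'summary'.
```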
ICSI Meeting Corpus in JSON format.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
DSFL Dataset - AMI Disfluency and Laughter Events
This dataset contains segmented audio and video clips from the AMI Meeting Corpus, consisting only of disfluencies and laughter events, segmented in both the audio and visual modalities. This dataset, together with hhoangphuoc/ami-av, was created for my research on Audio-Visual Speech Recognition, currently developed at https://github.com/hhoangphuoc/AVSL. To reproduce the work I've done to create this dataset, check out the… See the full description on the dataset page: https://huggingface.co/datasets/hhoangphuoc/ami-disfluency.
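To illustrate the kind of event segmentation described here, a sketch of cutting one clip out of a full meeting recording with ffmpeg (the file names and timestamps are placeholders; the actual pipeline is in the linked repository):

```python
import subprocess

def cut_clip(src: str, dst: str, start: float, end: float) -> None:
    """Extract the [start, end] window (in seconds) from src into dst via ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ss", str(start), "-to", str(end), dst],
        check=True,
    )

# Placeholder meeting recording and a hypothetical laughter-event window.
cut_clip("ES2002a.Mix-Headset.wav", "ES2002a_laughter_001.wav", 12.4, 14.1)
```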
Hydrographic and Impairment Statistics (HIS) is a National Park Service (NPS) Water Resources Division (WRD) project established to track certain goals created in response to the Government Performance and Results Act of 1993 (GPRA). One water resources management goal established by the Department of the Interior under GPRA requires NPS to track the percent of its managed surface waters that are meeting Clean Water Act (CWA) water quality standards. This goal requires an accurate inventory that spatially quantifies the surface water hydrography that each bureau manages and a procedure to determine and track which waterbodies are or are not meeting water quality standards as outlined by Section 303(d) of the CWA. This project helps meet this DOI GPRA goal by inventorying and monitoring in a geographic information system for the NPS: (1) CWA 303(d) quality impaired waters and causes; and (2) hydrographic statistics based on the United States Geological Survey (USGS) National Hydrography Dataset (NHD). Hydrographic and 303(d) impairment statistics were evaluated based on a combination of 1:24,000 (NHD) and finer scale data (frequently provided by state GIS layers).
Dataset Summary
This is the processed audio-visual dataset derived from the AMI Meeting Corpus. The data was segmented into sentence-level audio/video segments based on the individual [meeting_id]-[speaker_id] transcripts. The purpose of this dataset is the audio-visual speech recognition (AVSR) task, particularly for spontaneous conversational speech. General information about the dataset: Total #segments: 83,438 (including either audio/video or both) Dataset({ features: ['id', 'meeting_id'… See the full description on the dataset page: https://huggingface.co/datasets/hhoangphuoc/ami-av.
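A minimal sketch of loading these segments and restricting them to a single meeting, assuming the dataset loads directly with `datasets` and exposes the feature names shown above (the split name and the example meeting ID are assumptions):

```python
from datasets import load_dataset

ds = load_dataset("hhoangphuoc/ami-av", split="train")  # split name assumed
print(ds.features)  # expected to include 'id', 'meeting_id', ... per the card

# Keep only the segments belonging to one meeting (meeting ID is a placeholder).
one_meeting = ds.filter(lambda ex: ex["meeting_id"] == "ES2002a")
print(len(one_meeting), "segments in this meeting")
```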
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We introduce ChannelSet, a dataset which provides a launchpad for exploring the extraneous acoustic information typically suppressed or ignored in audio tasks such as automatic speech recognition. We combined components of existing publicly available datasets to encompass broad variability in recording equipment, microphone position, room or surrounding acoustics, event density (i.e., how many audio events are present), and proportion of foreground and background sounds. Source datasets include: the CHiME-3 background dataset, CHiME-5 evaluation dataset, AMI meeting corpus, Freefield1010, and Vystadial2016.
ChannelSet includes 13 classes spanning various acoustic environments: Indoor_Commercial_Bus, Indoor_Commercial_Cafe, Indoor_Domestic, Indoor_Meeting_Room1, Indoor_Meeting_Room2, Indoor_Meeting_Room3, Outdoor_City_Pedestrian, Outdoor_City_Traffic, Outdoor_Nature_Birds, Outdoor_Nature_Water, Outdoor_Nature_Weather, Telephony_CZ, and Telephony_EN. Each sample is between 1 and 10 seconds in duration. Each class contains 100 minutes of audio, for a total of 21.6 hours, split into separate test (20%) and train (80%) partitions.
Download includes scripts, metadata, and instructions for producing ChannelSet from source datasets.
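As a rough sketch of how the advertised per-class balance could be verified once ChannelSet has been produced locally (the directory layout, one folder per class, and the use of the `soundfile` library are assumptions):

```python
import pathlib
import soundfile as sf

root = pathlib.Path("ChannelSet/train")  # assumed layout: one folder per class

for class_dir in sorted(root.iterdir()):
    seconds = sum(sf.info(str(f)).duration for f in class_dir.glob("*.wav"))
    print(f"{class_dir.name}: {seconds / 60:.1f} min")
# Each class should total about 80 min in train (80% of its 100 min of audio).
```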
AMI Disfluency-Laughter (DSFL) Dataset
This dataset contains segmented audio and video clips extracted from the AMI Meeting Corpus. The segments in this dataset are mainly disfluencies and laughter events extracted from the original recordings. General information about this dataset:
Number of recordings: 35,731
Has audio: True
Has video: True
Has lip video: True
Dataset({ features: ['id', 'meeting_id', 'speaker_id', 'start_time', 'end_time', 'duration'… See the full description on the dataset page: https://huggingface.co/datasets/hhoangphuoc/ami-dsfl-av.
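Given the features listed above, a minimal sketch of summing the total clip duration ('duration' is taken from the card's feature list; the split name is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("hhoangphuoc/ami-dsfl-av", split="train")  # split name assumed

total_seconds = sum(ds["duration"])  # 'duration' appears in the card's feature list
print(f"{len(ds)} clips, {total_seconds / 3600:.1f} hours in total")
```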
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
STGkM is our method, CC uses dynamic connected components, k-medoids compresses a dynamic graph into a single static one and uses k-medoids, and DCDID is a heuristic method [4]. The best performance is bolded.
The Advanced Manufacturing Investment Strategy focused on manufacturing companies that were investing in leading-edge technologies and processes to increase their productivity and competitiveness in Ontario. Projects must have had a minimum total project value of $10 million or have created/retained 50 or more high-value jobs within 5 years. Ontario's Advanced Manufacturing Investment Strategy is no longer accepting applications, but has been very successful to date in meeting its objectives. This data set contains a list of recipients of the Advanced Manufacturing Investment Strategy from 2006 to 2012. The list includes the following details:
* funding program
* name of company
* location
* fiscal year contract signed
* government loan commitment
* total project jobs created and retained as in the contract