2 datasets found
  1. Voice Conversion Challenge 2018 Dataset

    • paperswithcode.com
    • opendatalab.com
    Updated Apr 10, 2018
    Cite
    (2018). Voice Conversion Challenge 2018 Dataset [Dataset]. https://paperswithcode.com/dataset/voice-conversion-challenge-2018
    Explore at:
    Dataset updated
    Apr 10, 2018
    Description

    Voice conversion (VC) is a technique for transforming the speaker identity of a source speech waveform into that of a different speaker while preserving the linguistic information of the source speech. The Voice Conversion Challenge (VCC) 2016 was launched at Interspeech 2016. Its objective was to better understand different VC techniques built on a freely available common dataset, to work toward a common goal, and to share views about unsolved problems and challenges faced by current VC techniques. The VCC 2016 focused on the most basic VC task: constructing VC models that automatically transform the voice identity of a source speaker into that of a target speaker using a parallel clean training database, in which source and target speakers read out the same set of utterances in a professional recording studio. 17 research groups participated in the 2016 challenge. The challenge was successful, and it established new standard evaluation methodology and protocols for benchmarking the performance of VC systems.

    The second edition, the VCC 2018, was launched in 2018. In this second edition, three aspects of the challenge were revised. First, the amount of speech data used for constructing participants' VC systems was reduced by half, based on feedback from participants in the previous challenge; this is also essential for practical applications. Second, in addition to a task similar to the first edition (the Hub task), a more challenging task referred to as the Spoke task was introduced, in which participants build their VC systems from a non-parallel database where source and target speakers read out different sets of utterances. Both parallel and non-parallel voice conversion systems were evaluated via the same large-scale crowdsourced listening test. Third, bridging the gap between the automatic speaker verification (ASV) and VC communities was also attempted: since new VC systems developed for the VCC 2018 may be strong candidates for enhancing the ASVspoof 2015 database, the spoofing performance of the VC systems was assessed based on anti-spoofing scores.

    Description from: https://datashare.ed.ac.uk/handle/10283/3061
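The parallel setup described above (both speakers reading the same sentences) can be contrasted with the non-parallel Spoke setting in a few lines. This is a minimal sketch: the utterance IDs and file layout below are hypothetical, not the actual VCC 2018 release structure.

```python
# Sketch: assembling training pairs for a *parallel* VC task, where the
# source and target speakers read out the same set of utterances.
# Speaker directories and utterance IDs here are illustrative only.
source_utts = {"10001": "source/10001.wav", "10002": "source/10002.wav"}
target_utts = {"10001": "target/10001.wav", "10003": "target/10003.wav"}

# Parallel training uses only utterance IDs recorded by both speakers;
# each pair holds the same sentence spoken by source and target.
shared_ids = sorted(source_utts.keys() & target_utts.keys())
pairs = [(source_utts[u], target_utts[u]) for u in shared_ids]

# In the non-parallel (Spoke) setting this intersection is empty by
# construction, so systems must train without utterance-level
# correspondence between the two speakers.
```

The design point is simply that parallel VC can rely on this ID intersection for frame alignment, while non-parallel VC cannot.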

  2. Voice Conversion Challenge 2020 database v1.0

    • explore.openaire.eu
    • data.niaid.nih.gov
    • +1 more
    Updated Dec 18, 2020
    Cite
    Zhao Yi; Wen-Chin Huang; Xiaohai Tian; Junichi Yamagishi; Rohan Kumar Das; Tomi Kinnunen; Zhenhua Ling; Tomoki Toda (2020). Voice Conversion Challenge 2020 database v1.0 [Dataset]. http://doi.org/10.5281/zenodo.4345689
    Explore at:
    Dataset updated
    Dec 18, 2020
    Authors
    Zhao Yi; Wen-Chin Huang; Xiaohai Tian; Junichi Yamagishi; Rohan Kumar Das; Tomi Kinnunen; Zhenhua Ling; Tomoki Toda
    Description

    Voice conversion (VC) is a technique for transforming the speaker identity of a source speech waveform into that of a different speaker while preserving the linguistic information of the source speech. In 2016, we launched the Voice Conversion Challenge (VCC) 2016 [1][2] at Interspeech 2016. The objective of the 2016 challenge was to better understand different VC techniques built on a freely available common dataset, to work toward a common goal, and to share views about unsolved problems and challenges faced by current VC techniques. The VCC 2016 focused on the most basic VC task: constructing VC models that automatically transform the voice identity of a source speaker into that of a target speaker using a parallel clean training database, in which source and target speakers read out the same set of utterances in a professional recording studio. 17 research groups participated in the 2016 challenge. The challenge was successful, and it established new standard evaluation methodology and protocols for benchmarking the performance of VC systems.

    In 2018, we launched the second edition of VCC, the VCC 2018 [3]. In the second edition, we revised three aspects of the challenge. First, we reduced the amount of speech data used for constructing participants' VC systems by half, based on feedback from participants in the previous challenge; this is also essential for practical applications. Second, in addition to a task similar to the first edition (the Hub task), we introduced a more challenging task referred to as the Spoke task, in which participants build their VC systems from a non-parallel database where source and target speakers read out different sets of utterances. We then evaluated both parallel and non-parallel voice conversion systems via the same large-scale crowdsourced listening test. Third, we also attempted to bridge the gap between the ASV and VC communities: since new VC systems developed for the VCC 2018 may be strong candidates for enhancing the ASVspoof 2015 database, we also assessed the spoofing performance of the VC systems based on anti-spoofing scores.

    In 2020, we launched the third edition of VCC, the VCC 2020 [4][5]. In this third edition, we constructed and distributed a new database for two tasks: intra-lingual semi-parallel VC and cross-lingual VC. The dataset for intra-lingual VC consists of a smaller parallel corpus and a larger non-parallel corpus, both in the same language. The dataset for cross-lingual VC consists of one corpus of the source speakers speaking the source language and another corpus of the target speakers speaking the target language. As a more challenging task than the previous ones, we focused on cross-lingual VC, in which the speaker identity is transformed between two speakers uttering different languages; this requires handling completely non-parallel training across different languages. This repository contains the training and evaluation data released to participants, the target speakers' speech data in English for reference purposes, and the transcriptions of the evaluation data. For more details about the challenge and the listening-test results, please refer to [4] and the README file.

    [1] Tomoki Toda, Ling-Hui Chen, Daisuke Saito, Fernando Villavicencio, Mirjam Wester, Zhizheng Wu, Junichi Yamagishi, "The Voice Conversion Challenge 2016," in Proc. Interspeech 2016, San Francisco.
    [2] Mirjam Wester, Zhizheng Wu, Junichi Yamagishi, "Analysis of the Voice Conversion Challenge 2016 Evaluation Results," in Proc. Interspeech 2016.
    [3] Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Tomi Kinnunen, Zhenhua Ling, "The Voice Conversion Challenge 2018: Promoting Development of Parallel and Nonparallel Methods," in Proc. Speaker Odyssey 2018, June 2018.
    [4] Yi Zhao, Wen-Chin Huang, Xiaohai Tian, Junichi Yamagishi, Rohan Kumar Das, Tomi Kinnunen, Zhenhua Ling, Tomoki Toda, "Voice Conversion Challenge 2020: Intra-lingual Semi-parallel and Cross-lingual Voice Conversion," in Proc. Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020, pp. 80-98, DOI: 10.21437/VCC_BC.2020-14.

    If you publish using any of the data in this dataset, please cite [4]. A BibTeX entry for [4]:

    @inproceedings{Yi2020,
      author={Zhao Yi and Wen-Chin Huang and Xiaohai Tian and Junichi Yamagishi and Rohan Kumar Das and Tomi Kinnunen and Zhen-Hua Ling and Tomoki Toda},
      title={{Voice Conversion Challenge 2020 -- Intra-lingual semi-parallel and cross-lingual voice conversion --}},
      year=2020,
      booktitle={Proc. Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020},
      pages={80--98},
      doi={10.21437/VCC_BC.2020-14},
      url={http://dx.doi.org/10.21437/VCC_BC.2020-14}
    }
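The Zenodo DOI cited above (10.5281/zenodo.4345689) encodes the record ID, which maps onto Zenodo's public records API. A minimal sketch, using string handling only and no network access; the API URL pattern is Zenodo's standard records endpoint:

```python
# Derive the Zenodo record ID and API URL from the dataset DOI.
doi = "10.5281/zenodo.4345689"  # DOI of the VCC 2020 database v1.0
record_id = doi.rsplit("zenodo.", 1)[1]
api_url = f"https://zenodo.org/api/records/{record_id}"
# Fetching api_url (e.g. with urllib.request) returns JSON metadata
# for the record, including per-file download links.
```

This avoids hard-coding download URLs: resolving the DOI or querying the records endpoint always points at the current location of the deposited files.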


266 scholarly articles cite the Voice Conversion Challenge 2018 Dataset (per Google Scholar).