20 datasets found
  1. Government Debt in the EU: interest rate on select Euro members' debt...

    • statista.com
    Updated Jan 24, 2025
    Cite
    Statista (2025). Government Debt in the EU: interest rate on select Euro members' debt 1993-2023 [Dataset]. https://www.statista.com/statistics/1380613/government-debt-eu-interest-rate-select-eurozone-members/
    Dataset updated
    Jan 24, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Jan 1993 - Mar 2023
    Area covered
    European Union
    Description

    The long-term interest rate on government debt is a key indicator of the economic health of a country. The rate reflects financial market actors' perceptions of the creditworthiness of the government and the health of the domestic economy, with a strong and robust economic outlook allowing governments to borrow for essential investments in their economies, thereby boosting long-term growth.

    The Euro and converging interest rates in the early 2000s

    In the case of many Eurozone countries, the early 2000s were a time when a virtuous cycle of economic growth and improved creditworthiness reduced the interest rates they paid on government debt to less than 5 percent, a dramatic change from the pre-Euro era of the 1990s. With the outbreak of the Global Financial Crisis and the subsequent deep recession, however, the economies of Greece, Italy, Spain, Portugal, and Ireland proved much weaker than lenders had previously assumed. Interest rates on their debt began to rise gradually during the crisis, then climbed rapidly from 2010, as first Greece and then Ireland and Portugal lost the faith of financial markets.

    The Eurozone crisis

    This market adjustment was initially triggered by revelations from the Greek government that the country's budget deficit was much larger than previously reported, leading investors to see the country as an unreliable debtor. The crisis, which became known as the Eurozone crisis, spread to Ireland and then Portugal, as lenders cut off lending to highly indebted Eurozone members with weak fundamentals. During this period there was also intense speculation that, due to unsustainable debt loads, some countries would have to leave the Euro currency area, which further increased the interest rates on their debt. Rates began to come back down after ECB President Mario Draghi signaled to markets that the central bank would intervene to keep these states within the currency area in his famous "whatever it takes" speech in the summer of 2012.

    The return of higher interest rates in the post-COVID era

    Since this period of extremely high interest rates on government debt for these member states, the rates they are charged for borrowing have shrunk considerably, as financial markets were flooded with "cheap money" by the policy measures central banks adopted in the aftermath of the financial crisis, such as near-zero policy rates and quantitative easing. As interest rates have risen to combat inflation since 2022, interest rates on Eurozone government debt have risen as well; these increases, however, are modest compared with those seen during the Eurozone crisis.

  2. Mean scores of speech intelligibility for different strategies, where three...

    • plos.figshare.com
    xls
    Updated Jun 9, 2023
    Cite
    Ying-Hui Lai; Yu Tsao; Fei Chen (2023). Mean scores of speech intelligibility for different strategies, where three factors [types of masker (masker), SNR levels (SNR), and processing method (F1)] were included in the three-way ANOVA and Tukey post-hoc testing. [Dataset]. http://doi.org/10.1371/journal.pone.0133519.t002
    Available download formats: xls
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Ying-Hui Lai; Yu Tsao; Fei Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    F1 group variable: 1, Wiener+SEC; 2, Wiener+AEC; 3, KLT+SEC; 4, KLT+AEC. Dependent variable: speech intelligibility scores. R² = 0.669 (adjusted R² = 0.624). Mean scores of speech intelligibility for different strategies, where three factors [types of masker (masker), SNR levels (SNR), and processing method (F1)] were included in the three-way ANOVA and Tukey post-hoc testing.
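
    To make this design concrete, here is a minimal sketch of a three-way ANOVA with Tukey post-hoc testing in Python using statsmodels; the file name and column names (score, masker, SNR, F1) are hypothetical placeholders, not the dataset's actual layout.

    ```python
    # Minimal sketch: three-way ANOVA + Tukey HSD (hypothetical file/columns)
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("intelligibility_scores.csv")  # hypothetical file

    # Three-way ANOVA: masker type x SNR level x processing method (F1)
    model = ols("score ~ C(masker) * C(SNR) * C(F1)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Tukey HSD post-hoc comparison across the four F1 processing strategies
    tukey = pairwise_tukeyhsd(endog=df["score"], groups=df["F1"], alpha=0.05)
    print(tukey.summary())
    ```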

  3. Voice to Text on Mobile Devices Market Report | Global Forecast From 2025 To...

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 22, 2024
    Cite
    Dataintelo (2024). Voice to Text on Mobile Devices Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-voice-to-text-on-mobile-devices-market
    Available download formats: pptx, pdf, csv
    Dataset updated
    Sep 22, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Voice to Text on Mobile Devices Market Outlook



    The global market size for Voice to Text on Mobile Devices was valued at approximately USD 2.5 billion in 2023, with an expected compound annual growth rate (CAGR) of 18% from 2024 to 2032, projecting the market to reach around USD 13.6 billion by 2032. This robust growth is primarily driven by advancements in artificial intelligence technologies and increasing adoption of voice-activated features across various applications.
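
    As a quick sanity check of the quoted figures, the snippet below compounds the 2023 base value at the stated CAGR; exactly 18% over nine years yields roughly USD 11 billion, so the USD 13.6 billion projection implies an effective CAGR closer to 21%.

    ```python
    # Compound-growth check of the market projection quoted above
    base_2023 = 2.5          # USD billion, 2023 valuation
    cagr = 0.18
    years = 2032 - 2023      # nine years of compounding

    projected = base_2023 * (1 + cagr) ** years
    print(f"USD {projected:.1f}B at exactly 18% CAGR")   # ~USD 11.1B

    implied = (13.6 / base_2023) ** (1 / years) - 1
    print(f"Implied CAGR for USD 13.6B by 2032: {implied:.1%}")  # ~20.7%
    ```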



    One of the significant growth factors for the Voice to Text on Mobile Devices market is the rapid evolution of automatic speech recognition (ASR) technology. Enhanced accuracy and reduced latency in ASR systems have made voice-to-text applications more reliable and user-friendly, encouraging widespread adoption. Additionally, the integration of natural language processing (NLP) and machine learning algorithms has significantly improved the contextual understanding of spoken words, making these applications more effective and efficient.



    Another crucial driver is the growing demand for hands-free operations in mobile devices. As consumers seek convenience and efficiency, voice-to-text capabilities enable users to perform tasks without manual input, which is particularly beneficial while driving, during workouts, or when multitasking. The increasing prevalence of virtual assistants like Google Assistant, Siri, and Amazon Alexa on mobile devices has further fueled the demand for accurate and responsive voice-to-text technologies.



    The surge in remote working and online education due to the COVID-19 pandemic has also accelerated the adoption of voice-to-text applications. Enterprises and educational institutions are increasingly relying on transcription services and virtual assistants to facilitate communication and productivity in a virtual environment. This trend is expected to continue post-pandemic, driving sustained growth in the market.



    Regionally, North America dominates the Voice to Text on Mobile Devices market, owing to the high penetration of advanced mobile technologies and the presence of key market players. However, the Asia Pacific region is anticipated to witness the highest growth rate during the forecast period, driven by the rapidly expanding smartphone user base and increasing investments in AI technologies. Europe also presents significant growth opportunities due to the rising adoption of smart devices and growing emphasis on accessibility tools.



    Technology Analysis



    The Voice to Text on Mobile Devices market is segmented by technology into Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Machine Learning. ASR technology is at the forefront of this market, enabling the conversion of spoken language into text with high accuracy. The continuous advancements in ASR algorithms have significantly reduced error rates, making voice-to-text applications more reliable for everyday use. ASR technology is widely used in virtual assistants, transcription services, and customer service applications, driving its substantial market share.



    Natural Language Processing (NLP) plays a critical role in understanding and interpreting human language, allowing voice-to-text applications to comprehend context, intent, and sentiment. The integration of NLP with ASR systems has enhanced the overall user experience by enabling more natural and intuitive interactions. NLP is particularly important in developing sophisticated virtual assistants and accessibility tools that cater to diverse user needs, including those with disabilities.



    Machine Learning algorithms are employed to continuously improve the performance of voice-to-text applications. By analyzing vast amounts of speech data, machine learning models can learn and adapt to different accents, dialects, and speech patterns, further enhancing the accuracy and efficiency of these systems. The combination of machine learning with ASR and NLP technologies has led to significant advancements in real-time transcription and voice-activated features.



    Moreover, the convergence of these technologies has enabled the development of multi-language support in voice-to-text applications, catering to a global user base. As more languages and dialects are incorporated into these systems, the market is expected to witness increased adoption across different regions. The ongoing research and development in artificial intelligence will continue to drive innovations in voice-to-text technologies, creating new opportunities for market growth.



  4. Mandarin matrix sentence test recordings: Lombard and plain speech with...

    • zenodo.org
    bin
    Updated Mar 12, 2025
    Cite
    Hongmei Hu; Hongmei Hu; Anna Warzybok; Anna Warzybok; Maximilian Scharf; Maximilian Scharf; Birger Kollmeier; Fei Chen; Fei Chen; Sabine Hochmuth; Birger Kollmeier; Sabine Hochmuth (2025). Mandarin matrix sentence test recordings: Lombard and plain speech with different speakers [Dataset]. http://doi.org/10.5281/zenodo.7063030
    Available download formats: bin
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Hongmei Hu; Hongmei Hu; Anna Warzybok; Anna Warzybok; Maximilian Scharf; Maximilian Scharf; Birger Kollmeier; Fei Chen; Fei Chen; Sabine Hochmuth; Birger Kollmeier; Sabine Hochmuth
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was recorded within the Deutsche Forschungsgemeinschaft (DFG) project "Experiments and models of speech recognition across tonal and non-tonal language systems" (EMSATON, project number 415895050).

    The Lombard effect, or Lombard reflex, is the involuntary tendency of speakers to increase their vocal effort when speaking in loud noise to enhance the audibility of their voice. To date, the Lombard effect has been observed in many languages. The present database aims to provide recordings for studying the Lombard effect in Mandarin speech.

    Eleven native Mandarin talkers (six female and five male) were recruited; both Lombard and plain speech were recorded from each talker on the same day.

    All speakers produced fluent standard Mandarin speech (North China). All listeners were normal-hearing with pure tone thresholds of 20 dB hearing level or better at audiometric octave frequencies between 125 and 8000 Hz. All listeners provided written informed consent, approved by the Ethics Committee of Carl von Ossietzky University of Oldenburg. Listeners received an hourly compensation for their participation.

    The recorded sentences were the same as those of the official Mandarin Chinese matrix sentence test (CMNmatrix; Hu et al., 2018).

    One hundred sentences (ten base lists of ten sentences) of the CMNmatrix were recorded from each speaker in both plain and Lombard speaking styles (each base list containing all 50 words). The 100 sentences were divided into 10 blocks of 10 sentences each, and the plain and Lombard blocks were presented in an alternating order. The recording took place in a double-walled, sound-attenuated booth fulfilling ISO 8253-3 (ISO 8253-3, 2012), using a Neumann 184 microphone with a cardioid characteristic (Georg Neumann GmbH, Berlin, Germany) and a Fireface UC soundcard (with a sampling rate of 44100 Hz and a resolution of 16 bits). The recording procedure generally followed that of Alghamdi et al. (2018). A native Mandarin speaker and a phonetician participated in the recording session and listened to the sentences to control pronunciation, intonation, and speaking rate. During the recording, the speaker was instructed to read the sentence presented on a frontal screen. In case of any mispronunciation or change in intonation, the speaker was asked via the screen to repeat the sentence; on average, each sentence was recorded twice. In the Lombard condition, the speaker was also regularly prompted to repeat a sentence, to keep the speaker in the Lombard communication situation. For the plain-speech recording blocks, the speakers were asked to pronounce the sentences with natural intonation and accentuation, and at an intermediate speaking rate, which was facilitated by a progress bar on the screen. Furthermore, the speakers were asked to keep their speaking effort constant and to avoid any exaggerated pronunciations that could lead to unnatural speech cues. For the Lombard speech recording blocks, speakers were instructed to imagine a conversation with another person in a pub-like situation. During the whole recording session, speakers wore headphones (Sennheiser HDA200) that provided the audio signal of the speaker. In the Lombard condition, the stationary speech-shaped noise ICRA1 (Dreschler et al., 2001) was mixed with the speaker's audio signal at a level of 80 dB SPL (calibrated with a Brüel & Kjær (B&K) 4153 artificial ear, a B&K 4134 0.5-inch microphone, a B&K 2669 preamplifier, and a B&K 2610). Previous studies showed that this level induces robust Lombard speech without the danger of inducing hearing damage (Alghamdi et al., 2018).

    The sentences were cut from the recording, high-pass filtered (60 Hz cut-off frequency) and set to the average root-mean-square level of the original speech material of the Mandarin Matrix test (Hu et al., 2018). Then the best version of each sentence was chosen by native-Mandarin speakers regarding pronunciation, tempo, and intonation.
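
    For illustration, here is a minimal sketch of that post-processing chain (60 Hz high-pass filtering followed by RMS level matching) in Python with SciPy; the file names and the fourth-order Butterworth design are assumptions, not the authors' documented settings.

    ```python
    # Minimal sketch: 60 Hz high-pass + RMS level matching (hypothetical files)
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    fs, x = wavfile.read("sentence_raw.wav")  # hypothetical mono recording
    x = x.astype(np.float64)

    # 4th-order Butterworth high-pass at 60 Hz, applied forward-backward
    # (zero phase); the order is an illustrative assumption
    sos = butter(4, 60, btype="highpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)

    def rms(sig):
        """Root-mean-square level of a signal."""
        return np.sqrt(np.mean(sig ** 2))

    # Scale to the average RMS of the reference speech material
    _, ref = wavfile.read("cmn_matrix_reference.wav")  # hypothetical reference
    y *= rms(ref.astype(np.float64)) / rms(y)

    wavfile.write("sentence_processed.wav", fs,
                  np.clip(y, -32768, 32767).astype(np.int16))
    ```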

    For more detailed information, please contact hongmei.hu@uni-oldenburg.de or sabine.hochmuth@uni-oldenburg.de.

    Hu H, Xi X, Wong LLN, Hochmuth S, Warzybok A, Kollmeier B (2018) Construction and evaluation of the mandarin chinese matrix (cmnmatrix) sentence test for the assessment of speech recognition in noise. International Journal of Audiology 57:838-850. https://doi.org/10.1080/14992027.2018.1483083

  5. Rate modulation abilities & motor speech disorders (Utianski et al., 2023)

    • asha.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    Rene L. Utianski; Joseph R. Duffy; Peter R. Martin; Heather M. Clark; Julie A. G. Stierwalt; Hugo Botha; Farwa Ali; Jennifer L. Whitwell; Keith A. Josephs (2023). Rate modulation abilities & motor speech disorders (Utianski et al., 2023) [Dataset]. http://doi.org/10.23641/asha.22044632.v1
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    American Speech–Language–Hearing Association (http://www.asha.org/)
    Authors
    Rene L. Utianski; Joseph R. Duffy; Peter R. Martin; Heather M. Clark; Julie A. G. Stierwalt; Hugo Botha; Farwa Ali; Jennifer L. Whitwell; Keith A. Josephs
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Purpose: The purpose of this study was to describe, compare, and understand speech modulation capabilities of patients with varying motor speech disorders (MSDs) in a paradigm in which patients made highly cued attempts to speak faster or slower. Method: Twenty-nine patients, 12 with apraxia of speech (AOS; four phonetic and eight prosodic subtype), eight with dysarthria (six hypokinetic and two spastic subtype), and nine patients without any neurogenic MSD completed a standard motor speech evaluation in which they were asked to repeat words and sentences, which served as their "natural" speaking rate. They were then asked to repeat lower complexity (counting 1–5; repeating "cat" and "catnip" 3 times each) and higher complexity stimuli (repeating "catastrophe" and "stethoscope" 3 times each and "My physician wrote out a prescription" once) as fast/slow as possible. Word durations and interword intervals were measured. Linear mixed-effects models were used to assess differences related to MSD subtype and stimuli complexity on bidirectional rate modulation capacity as indexed by word duration and interword interval. Articulatory accuracy was also judged and compared. Results: Patients with prosodic AOS demonstrated a reduced ability to go faster. While they performed similarly to patients with spastic dysarthria when counting, patients with spastic dysarthria were able to increase rate similarly to controls during sentence repetition, whereas patients with prosodic AOS could not, and made more articulatory errors when attempting to increase rate. AOS patients made more articulatory errors relative to other groups, regardless of condition; however, their percentage of errors decreased with an intentionally slowed speaking rate. Conclusions: The findings suggest that comparative rate modulation abilities, in conjunction with their impact on articulatory accuracy, may support differential diagnosis between healthy and abnormal speech and among subtypes of MSDs (i.e., type of dysarthria or AOS). Findings need to be validated in a larger, more representative cohort encompassing several types of MSDs.

    Supplemental Material S1. Numerical results of linear mixed effects models.

    Supplemental Material S2. The fastest and slowest durations for each group.

    Utianski, R. L., Duffy, J. R., Martin, P. R., Clark, H. M., Stierwalt, J. A. G., Botha, H., Ali, F., Whitwell, J. L., & Josephs, K. A. (2023). Rate modulation abilities in acquired motor speech disorders. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2022_JSLHR-22-00286

    Publisher Note: This article is part of the Special Issue: Select Papers From the 2022 Conference on Motor Speech.
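
    For readers unfamiliar with this modeling approach, below is a minimal sketch of a linear mixed-effects model of word duration fit with statsmodels; the file name and columns (word_duration, group, complexity, patient_id) are hypothetical stand-ins for the study's actual variables.

    ```python
    # Minimal sketch (not the authors' code): word duration predicted by
    # group and stimulus complexity, with a random intercept per patient
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("rate_modulation.csv")  # hypothetical file

    model = smf.mixedlm(
        "word_duration ~ C(group) * C(complexity)",  # fixed effects
        data=df,
        groups=df["patient_id"],  # random intercept per patient
    )
    print(model.fit().summary())
    ```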

  6. AI Voice Commerce Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 23, 2024
    Cite
    Dataintelo (2024). AI Voice Commerce Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-ai-voice-commerce-market
    Available download formats: pdf, csv, pptx
    Dataset updated
    Sep 23, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    AI Voice Commerce Market Outlook



    The global AI Voice Commerce market size was valued at approximately USD 4.2 billion in 2023 and is projected to reach around USD 24.5 billion by 2032, growing at a compound annual growth rate (CAGR) of 21.8% during the forecast period. This significant growth is driven by increasing consumer reliance on voice-activated technologies, rising penetration of smart devices, and the convenience offered by voice-enabled shopping experiences.



    One of the primary growth factors for the AI Voice Commerce market is the rapid adoption of smart speakers and voice assistants. Devices like Amazon Echo, Google Home, and Apple's Siri have become household staples, simplifying tasks and providing seamless voice-based interactions. The growing popularity of these devices is fueling the demand for voice commerce, as consumers are becoming more comfortable using their voices for shopping, making inquiries, and completing transactions.



    The integration of artificial intelligence (AI) and machine learning (ML) algorithms into voice commerce platforms is also a significant growth driver. These technologies enable systems to understand and process natural language more effectively, providing personalized shopping experiences and recommendations. Advanced AI capabilities, such as sentiment analysis and predictive analytics, are enhancing user engagement and satisfaction, contributing to market growth.



    Additionally, the ongoing advancements in natural language processing (NLP) and speech recognition technologies are crucial to the market's expansion. Improved accuracy in voice recognition and language understanding has reduced the error rate, making voice commerce more reliable and user-friendly. This technological progress is encouraging more businesses to adopt voice commerce solutions, further propelling market growth.



    The regional outlook for the AI Voice Commerce market highlights North America as a dominant player, driven by high consumer adoption rates and technological advancements. However, the Asia Pacific region is expected to witness the fastest growth due to the rising penetration of smartphones, increasing internet connectivity, and a growing tech-savvy population. Europe and Latin America are also significant markets, with steady growth anticipated over the forecast period.



    Component Analysis



    The AI Voice Commerce market can be segmented by component into software, hardware, and services. Each of these components plays a distinct role in the ecosystem and contributes to the overall market dynamics. The software segment, which includes voice recognition and NLP algorithms, is critical for enabling voice interactions and ensuring accurate comprehension of user commands. Companies are investing heavily in developing sophisticated software solutions capable of handling complex queries and providing personalized responses.



    The hardware segment primarily comprises smart speakers, voice-enabled devices, and other peripheral equipment necessary for voice commerce operations. The proliferation of devices like Amazon Echo, Google Home, and Apple's HomePod demonstrates the growing importance of hardware in this market. Consumers' increasing preference for smart home devices is driving the demand for voice commerce hardware, which is also evolving to include more advanced features such as high-definition sound quality and integrated home automation capabilities.



    Services, including implementation, maintenance, and support, are essential for the smooth functioning of voice commerce platforms. These services ensure that businesses can adopt and integrate voice commerce solutions seamlessly into their existing systems. Professional services also include training and consultancy to help companies maximize the benefits of voice commerce. As more businesses recognize the potential of voice commerce, the demand for specialized services is expected to grow, contributing to market expansion.



    Each component of the AI Voice Commerce market is interconnected and plays a crucial role in the overall ecosystem. The continuous advancements in software and hardware technologies are creating opportunities for new services, which in turn drive further adoption and innovation in the market. By focusing on all three components, businesses can develop comprehensive voice commerce strategies that meet the evolving needs of consumers and stay competitive in the market.





  7. Results of a linear mixed effects model for speech rate.

    • plos.figshare.com
    xls
    Updated May 7, 2025
    Cite
    Muge Ozker; Peter Hagoort (2025). Results of a linear mixed effects model for speech rate. [Dataset]. http://doi.org/10.1371/journal.pone.0323201.t001
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Muge Ozker; Peter Hagoort
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of a linear mixed effects model for speech rate.

  8. Data Extraction Table.

    • figshare.com
    • plos.figshare.com
    xlsx
    Updated Jul 24, 2025
    Cite
    Lilien Schewski; Mathew Magimai Doss; Guido Beldi; Sandra Keller (2025). Data Extraction Table. [Dataset]. http://doi.org/10.1371/journal.pone.0328833.s002
    Available download formats: xlsx
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Lilien Schewski; Mathew Magimai Doss; Guido Beldi; Sandra Keller
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Speech analysis offers a non-invasive method for assessing emotional and cognitive states through acoustic correlates, including spectral, prosodic, and voice quality features. Despite growing interest, research remains inconsistent in identifying reliable acoustic markers, providing limited guidance for researchers and practitioners in the field. This review identifies key acoustic correlates for detecting negative emotions, stress, and cognitive load in speech. A systematic search was conducted across four electronic databases: PubMed, PsycInfo, Web of Science, and Scopus. Peer-reviewed articles reporting studies conducted with healthy adult participants were included. Thirty-eight articles were reviewed, encompassing 39 studies, as one article reported on two studies. Among all features, prosodic features were the most investigated and showed the greatest accuracy in detecting negative emotions, stress, and cognitive load. Specifically, anger was associated with elevated fundamental frequency (F0), increased speech volume, and faster speech rate. Stress was associated with increased F0 and intensity, and reduced speech duration. Cognitive load was linked to increased F0 and intensity, although the results for F0 were overall less clear than those for negative emotions and stress. No consistent acoustic patterns were identified for fear or anxiety. The findings support speech analysis as a useful tool for researchers and practitioners aiming to assess negative emotions, stress, and cognitive load in experimental and field studies.
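
    As an illustration of how such acoustic correlates can be extracted in practice, the sketch below computes mean F0 and a frame-wise intensity proxy with librosa; the file name is a placeholder, and librosa's pyin and RMS defaults stand in for whatever toolchains the reviewed studies actually used.

    ```python
    # Minimal sketch: extracting F0 and intensity from a recording
    import librosa
    import numpy as np

    y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical file

    # Fundamental frequency (F0) via probabilistic YIN; NaN where unvoiced
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    mean_f0 = np.nanmean(f0[voiced_flag])

    # Intensity proxy: frame-wise RMS energy converted to dB
    rms = librosa.feature.rms(y=y)[0]
    mean_db = np.mean(librosa.amplitude_to_db(rms, ref=1.0))

    print(f"Mean F0: {mean_f0:.1f} Hz, mean RMS level: {mean_db:.1f} dB")
    ```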

  9. Source text complexity control description.

    • figshare.com
    xls
    Updated Jul 3, 2025
    Cite
    Shanshan Yang; Defeng Li; Victoria Lai Cheng Lei (2025). Source text complexity control description. [Dataset]. http://doi.org/10.1371/journal.pone.0326527.t004
    Available download formats: xls
    Dataset updated
    Jul 3, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Shanshan Yang; Defeng Li; Victoria Lai Cheng Lei
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Simultaneous interpreting (SI) with text, a hybrid modality combining auditory and visual inputs, presents greater cognitive complexity than traditional SI. This study investigates multimodal processing in Chinese-English SI with text by examining how source speech rate and professional experience modulate interpreters’ Ear-Eye-Voice Span (EIVS)—a temporal measure reflecting the cognitive coordination among auditory input, visual processing, and verbal output—and interpreting performance. Using eye-tracking technology, we analyzed EIVS patterns in 15 professional interpreters and 30 interpreting trainees performing three SI with text tasks at slow, intermediate and fast speech rates. EIVS measures, including Ear-Eye Span (EIS), Eye-Voice Span (IVS), and Ear-Voice Span (EVS), were analyzed to assess temporal coordination of listening, reading and interpreting processes. Results indicate that faster speech rates significantly reduced EIVS across all measures, suggesting accelerated information processing and strategic cognitive adaptation. A significant interaction effect between speech rate and professional experience was observed. Professionals maintained more stable and efficient EIVS patterns, particularly under accelerated speech rates, reflecting an advantage in cross-modal attention allocation and cognitive resource management. In contrast, trainees exhibited greater reliance on visual input, and struggled more with multimodal demands, manifested in longer EIVS values and greater individual variation. Both groups exhibited an ear-lead-eye coordination pattern during the fast speech rate task, though professionals achieved more efficient auditory-visual synchronization. Despite a decline in interpreting performance with increasing speech rates, professionals consistently outperformed trainees. These findings underscore the critical role of experience in enhancing multimodal coordination, and highlight the importance of dedicated skill-specific practice in enhancing auditory-visual coordination and optimizing interpreting performance under cognitively demanding conditions.
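
    A minimal sketch of how the three span measures can be derived from per-word event timestamps is shown below; the field names and the assumed event ordering (source word heard, text fixated, translation voiced) are illustrative assumptions, not the authors' exact operationalization.

    ```python
    # Minimal sketch: span measures from per-word event times (hypothetical)
    from dataclasses import dataclass

    @dataclass
    class WordEvents:
        heard_at: float    # source word onset in the audio (s)
        fixated_at: float  # first fixation on the word in the text (s)
        voiced_at: float   # onset of the interpreter's rendition (s)

    def spans(w: WordEvents) -> dict:
        return {
            "EIS": w.fixated_at - w.heard_at,   # Ear-Eye Span
            "IVS": w.voiced_at - w.fixated_at,  # Eye-Voice Span
            "EVS": w.voiced_at - w.heard_at,    # Ear-Voice Span
        }

    print(spans(WordEvents(heard_at=1.20, fixated_at=1.45, voiced_at=2.80)))
    ```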

  10. Results of a linear mixed effects model for the peak amplitude of the...

    • plos.figshare.com
    xls
    Updated May 7, 2025
    Cite
    Muge Ozker; Peter Hagoort (2025). Results of a linear mixed effects model for the peak amplitude of the compensatory responses to all F0 perturbations. [Dataset]. http://doi.org/10.1371/journal.pone.0323201.t006
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Muge Ozker; Peter Hagoort
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of a linear mixed effects model for the peak amplitude of the compensatory responses to all F0 perturbations.

  11. Results of a linear mixed effects model for voice intensity.

    • plos.figshare.com
    xls
    Updated May 7, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Muge Ozker; Peter Hagoort (2025). Results of a linear mixed effects model for voice intensity. [Dataset]. http://doi.org/10.1371/journal.pone.0323201.t002
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Muge Ozker; Peter Hagoort
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of a linear mixed effects model for voice intensity.

  12. Don’t speak too fast! Processing of fast rate speech in children with...

    • figshare.com
    zip
    Updated Jun 2, 2023
    Cite
    Hélène Guiraud; Nathalie Bedoin; Sonia Krifi-Papoz; Vania Herbillon; Aurélia Caillot-Bascoul; Sibylle Gonzalez-Monge; Véronique Boulenger (2023). Don’t speak too fast! Processing of fast rate speech in children with specific language impairment [Dataset]. http://doi.org/10.1371/journal.pone.0191808
    Available download formats: zip
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Hélène Guiraud; Nathalie Bedoin; Sonia Krifi-Papoz; Vania Herbillon; Aurélia Caillot-Bascoul; Sibylle Gonzalez-Monge; Véronique Boulenger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast rate speech as compared with typically developing (TD) children. Method: Sixteen French children with SLI (8–13 years old), with mainly expressive phonological disorders and preserved comprehension, and 16 age-matched TD children performed a judgment task on sentences produced 1) at normal rate, 2) at fast rate, or 3) time-compressed. The sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results: Overall, children with SLI performed significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI found it more challenging than TD children to process both naturally and artificially accelerated speech. The two groups did not significantly differ in normal rate speech processing. Conclusion: In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception.
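
    For reference, the sensitivity index d′ is the z-transformed hit rate minus the z-transformed false-alarm rate. Below is a minimal sketch with a standard log-linear correction so that rates of 0 or 1 stay finite; the counts are made-up illustration values, and the correction choice is an assumption rather than the authors' documented method.

    ```python
    # Minimal sketch: d' = Z(hit rate) - Z(false-alarm rate)
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction: add 0.5 to each cell to avoid rates of 0 or 1
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Made-up counts for illustration only
    print(d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16))
    ```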

  13. Results of a linear mixed effects model for the peak amplitude of the...

    • plos.figshare.com
    xls
    Updated May 7, 2025
    Cite
    Muge Ozker; Peter Hagoort (2025). Results of a linear mixed effects model for the peak amplitude of the compensatory response. [Dataset]. http://doi.org/10.1371/journal.pone.0323201.t005
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Muge Ozker; Peter Hagoort
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of a linear mixed effects model for the peak amplitude of the compensatory response.

  14. Table_1_Sensitivity of Speech Output to Delayed Auditory Feedback in Primary...

    • frontiersin.figshare.com
    docx
    Updated Jun 2, 2023
    Cite
    Chris J. D. Hardy; Rebecca L. Bond; Kankamol Jaisin; Charles R. Marshall; Lucy L. Russell; Katrina Dick; Sebastian J. Crutch; Jonathan D. Rohrer; Jason D. Warren (2023). Table_1_Sensitivity of Speech Output to Delayed Auditory Feedback in Primary Progressive Aphasias.DOCX [Dataset]. http://doi.org/10.3389/fneur.2018.00894.s001
    Available download formats: docx
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Frontiers
    Authors
    Chris J. D. Hardy; Rebecca L. Bond; Kankamol Jaisin; Charles R. Marshall; Lucy L. Russell; Katrina Dick; Sebastian J. Crutch; Jonathan D. Rohrer; Jason D. Warren
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Delayed auditory feedback (DAF) is a classical paradigm for probing sensori-motor interactions in speech output and has been studied in various disorders associated with speech dysfluency and aphasia. However, little information is available concerning the effects of DAF on degenerating language networks in primary progressive aphasia: the paradigmatic “language-led dementias.” Here we studied two forms of speech output (reading aloud and propositional speech) under natural listening conditions (no feedback delay) and under DAF at 200 ms, in a cohort of 19 patients representing all major primary progressive aphasia syndromes vs. healthy older individuals and patients with other canonical dementia syndromes (typical Alzheimer's disease and behavioral variant frontotemporal dementia). Healthy controls and most syndromic groups showed a quantitatively or qualitatively similar profile of reduced speech output rate and increased speech error rate under DAF relative to natural auditory feedback. However, there was no group effect on propositional speech output rate under DAF in patients with nonfluent primary progressive aphasia and logopenic aphasia. Importantly, there was considerable individual variation in DAF sensitivity within syndromic groups and some patients in each group (though no healthy controls) apparently benefited from DAF, showing paradoxically increased speech output rate and/or reduced speech error rate under DAF. This work suggests that DAF may be an informative probe of pathophysiological mechanisms underpinning primary progressive aphasia: identification of “DAF responders” may open up an avenue to novel therapeutic applications.
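
    The signal manipulation behind DAF is simple to illustrate offline: shift the speaker's own signal by the delay. The sketch below shows only that manipulation (a real DAF setup runs it in a low-latency audio loop so the speaker hears themselves delayed while talking); the file name is a placeholder.

    ```python
    # Minimal offline sketch of a 200 ms delayed-auditory-feedback signal
    import numpy as np
    from scipy.io import wavfile

    fs, speech = wavfile.read("speech.wav")  # hypothetical recording
    delay_samples = int(0.200 * fs)          # 200 ms delay, as in the study

    # Delayed feedback: silence for the first 200 ms, then the original signal
    feedback = np.zeros_like(speech)
    feedback[delay_samples:] = speech[:-delay_samples]

    wavfile.write("speech_daf_200ms.wav", fs, feedback)
    ```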

  15. Results of a linear mixed effects model for voice pitch.

    • figshare.com
    xls
    Updated May 7, 2025
    Cite
    Muge Ozker; Peter Hagoort (2025). Results of a linear mixed effects model for voice pitch. [Dataset]. http://doi.org/10.1371/journal.pone.0323201.t003
    Available download formats: xls
    Dataset updated
    May 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Muge Ozker; Peter Hagoort
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of a linear mixed effects model for voice pitch.

  16. Dual-task effects on speech production in children (Eichorn & Pirutinsky,...

    • asha.figshare.com
    pdf
    Updated May 30, 2023
    Cite
    Naomi Eichorn; Steven Pirutinsky (2023). Dual-task effects on speech production in children (Eichorn & Pirutinsky, 2022) [Dataset]. http://doi.org/10.23641/asha.19945838.v1
    Available download formats: pdf
    Dataset updated
    May 30, 2023
    Dataset provided by
    American Speech–Language–Hearing Association (http://www.asha.org/)
    Authors
    Naomi Eichorn; Steven Pirutinsky
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Purpose: Contemporary motor theories indicate that well-practiced movements are best performed automatically, without conscious attention or monitoring. We applied this perspective to speech production in school-age children and examined how dual-task conditions that engaged sustained attention affected speech fluency, speech rate, and language productivity in children with and without stuttering disorders. Method: Participants included 47 children (19 children who stutter, 28 children who do not stutter) from 7 to 12 years of age. Children produced speech in two baseline conditions with no concurrent task and under a dual-task condition requiring sustained attention to on-screen stimuli. Measures of speech fluency, speech rate, and language productivity were obtained for each trial and compared across conditions and groups. Results: Dual-task conditions resulted in a reduction in stutter-like disfluencies relative to the initial baseline speaking condition. Effects were similar for both groups of children and could not be attributed to decreases in language productivity or a simple order effect. Conclusions: Findings suggest that diverting attention during the process of speech production enhances speech fluency in children, possibly by increasing the automaticity of motor speech sequences. Further research is needed to clarify neurophysiological mechanisms underlying these changes and to evaluate potential clinical applications of such effects.

    Supplemental Material S1. Model fit of sequential generalized multilevel regression models of stuttering-like disfluencies.

    Supplemental Material S2. Model fit of sequential generalized multilevel regression models of non-stuttering-like disfluencies.

    Supplemental Material S3. Model fit of sequential generalized multilevel regression models of stuttering-like disfluencies excluding bilingual participants (n = 3).

    Supplemental Material S4. Model fit of sequential generalized multilevel regression models of non-stuttering-like disfluencies excluding bilingual participants (n = 3).

    Supplemental Material S5. Results of final models of stuttering-like disfluencies (SLDs; M4) and non-stuttering-like disfluencies (non-SLDs; M4) excluding bilingual participants (n = 3).

    Eichorn, N., & Pirutinsky, S. (2022). Dual-task effects on concurrent speech production in school-age children with and without stuttering disorders. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2022_JSLHR-21-00426

  17. Predicting Speech Intelligibility Decline in Amyotrophic Lateral Sclerosis...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    docx
    Updated May 31, 2023
    Cite
    Panying Rong; Yana Yunusova; Jun Wang; Lorne Zinman; Gary L. Pattee; James D. Berry; Bridget Perry; Jordan R. Green (2023). Predicting Speech Intelligibility Decline in Amyotrophic Lateral Sclerosis Based on the Deterioration of Individual Speech Subsystems [Dataset]. http://doi.org/10.1371/journal.pone.0154971
    Available download formats: docx
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Panying Rong; Yana Yunusova; Jun Wang; Lorne Zinman; Gary L. Pattee; James D. Berry; Bridget Perry; Jordan R. Green
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Purpose: To determine the mechanisms of speech intelligibility impairment due to neurologic impairments, intelligibility decline was modeled as a function of co-occurring changes in the articulatory, resonatory, phonatory, and respiratory subsystems. Method: Sixty-six individuals diagnosed with amyotrophic lateral sclerosis (ALS) were studied longitudinally. The disease-related changes in articulatory, resonatory, phonatory, and respiratory subsystems were quantified using multiple instrumental measures, which were subjected to a principal component analysis and mixed effects models to derive a set of speech subsystem predictors. A stepwise approach was used to select the best set of subsystem predictors to model the overall decline in intelligibility. Results: Intelligibility was modeled as a function of five predictors that corresponded to velocities of lip and jaw movements (articulatory), number of syllable repetitions in the alternating motion rate task (articulatory), nasal airflow (resonatory), maximum fundamental frequency (phonatory), and speech pauses (respiratory). The model accounted for 95.6% of the variance in intelligibility, among which the articulatory predictors showed the most substantial independent contribution (57.7%). Conclusion: Articulatory impairments characterized by reduced velocities of lip and jaw movements and resonatory impairments characterized by increased nasal airflow served as the subsystem predictors of the longitudinal decline of speech intelligibility in ALS. Declines in maximum performance tasks such as the alternating motion rate preceded declines in intelligibility, thus serving as early predictors of bulbar dysfunction. Following the rapid decline in speech intelligibility, a precipitous decline in maximum performance tasks subsequently occurred.
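
    A minimal sketch of this kind of pipeline, PCA over instrumental measures followed by forward stepwise selection of intelligibility predictors, appears below; the file name, column names, five-component choice, and AIC criterion are assumptions, and plain OLS stands in for the paper's mixed-effects models.

    ```python
    # Minimal sketch: PCA + forward stepwise predictor selection (hypothetical data)
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    df = pd.read_csv("als_speech_measures.csv")      # hypothetical file
    measures = df.drop(columns=["intelligibility"])  # instrumental measures

    # Reduce correlated instrumental measures to a few components
    # (in practice the measures would usually be standardized first)
    components = pd.DataFrame(
        PCA(n_components=5).fit_transform(measures),
        columns=[f"PC{i + 1}" for i in range(5)],
    )

    # Forward stepwise selection of predictors by AIC
    selected, remaining, best_aic = [], list(components.columns), float("inf")
    while remaining:
        aics = {
            p: sm.OLS(df["intelligibility"],
                      sm.add_constant(components[selected + [p]])).fit().aic
            for p in remaining
        }
        p, aic = min(aics.items(), key=lambda kv: kv[1])
        if aic >= best_aic:
            break  # no remaining candidate improves the model
        selected.append(p)
        remaining.remove(p)
        best_aic = aic

    print("Selected predictors:", selected)
    ```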

  18. Speech fluency in apraxia of speech (Harmon et al., 2019)

    • asha.figshare.com
    • figshare.com
    pdf
    Updated May 30, 2023
    Cite
    Tyson G. Harmon; Adam Jacks; Katarina L. Haley (2023). Speech fluency in apraxia of speech (Harmon et al., 2019) [Dataset]. http://doi.org/10.23641/asha.8847845.v1
    Available download formats: pdf
    Dataset updated
    May 30, 2023
    Dataset provided by
    American Speech–Language–Hearing Association (http://www.asha.org/)
    Authors
    Tyson G. Harmon; Adam Jacks; Katarina L. Haley
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Purpose: Slowed speech and interruptions to the flow of connected speech are common in aphasia. These features are also observed during dual-task performance for neurotypical adults. The purposes of this study were to determine (a) whether indices of fluency related to cognitive–linguistic versus motor processing would differ between speakers with aphasia plus apraxia of speech (AOS) and speakers with aphasia only and (b) whether cognitive load reduces fluency in speakers with aphasia with and without AOS. Method: Fourteen speakers with aphasia (7 with AOS) and 7 neurotypical controls retold short stories alone (single task) and while simultaneously distinguishing between a high and a low tone (dual task). Their narrative samples were analyzed for speech fluency according to sample duration, speech rate, pause/fill time, and repetitions per syllable. Results: As expected, both speaker groups with aphasia spoke slower and with more pauses than the neurotypical controls. The speakers with AOS produced more repetitions and longer samples than controls, but they did not differ on these measures from the speakers with aphasia without AOS. Relative to the single-task condition, the dual-task condition increased the duration of pauses and fillers for all groups but reduced speaking rate only for the control group. Sample duration and frequency of repetitions did not change in response to cognitive load. Conclusions: Speech output in aphasia becomes less fluent when speakers have to engage in simultaneous tasks, as is typical in everyday conversation. Although AOS may lead to more sound and syllable repetitions than normal, speaking tasks other than narrative discourse might better capture this specific type of disfluency. Future research is needed to confirm and expand these preliminary findings.

    Supplemental Material S1. Discrimination between high- and low-frequency tones across three groups.

    Supplemental Material S2. Two-way comparisons for speech fluency and tone discrimination variables.

    Harmon, T. G., Jacks, A., & Haley, K. L. (2019). Speech fluency in acquired apraxia of speech during narrative discourse: Group comparisons and dual-task effects. American Journal of Speech-Language Pathology, 28, 905–914. https://doi.org/10.1044/2018_AJSLP-MSC18-18-0107

    Publisher Note: This article is part of the Special Issue: Selected Papers From the 2018 Conference on Motor Speech—Clinical Science and Innovations.

  19. Classification of acoustic correlates used in these studies.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jul 24, 2025
    Cite
    Lilien Schewski; Mathew Magimai Doss; Guido Beldi; Sandra Keller (2025). Classification of acoustic correlates used in these studies. [Dataset]. http://doi.org/10.1371/journal.pone.0328833.t001
    Available download formats: xls
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Lilien Schewski; Mathew Magimai Doss; Guido Beldi; Sandra Keller
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Classification of acoustic correlates used in these studies.

  20. Characteristics of the studies included in the systematic review (N = 38).

    • plos.figshare.com
    xls
    Updated Jul 24, 2025
    Cite
    Lilien Schewski; Mathew Magimai Doss; Guido Beldi; Sandra Keller (2025). Characteristics of the studies included in the systematic review (N = 38). [Dataset]. http://doi.org/10.1371/journal.pone.0328833.t002
    Available download formats: xls
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Lilien Schewski; Mathew Magimai Doss; Guido Beldi; Sandra Keller
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Characteristics of the studies included in the systematic review (N = 38).

