License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The CompArg: Comparative Sentences 2019 dataset for comparative argument mining consists of sentences annotated with BETTER / WORSE markers (the first object is better / worse than the second object) or NONE (the sentence does not contain a comparison of the target objects). BETTER sentences represent a pro-argument in favor of the first compared object; WORSE sentences represent a con-argument and favor the second object.
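As a quick illustration of how such a label scheme might be consumed, the sketch below tallies the label distribution of a CompArg-style file. The file name and column names are assumptions for illustration only; the dataset's actual release format is not described here.

```python
# Minimal sketch: tally BETTER / WORSE / NONE labels in a CompArg-style file.
# The file name and the "label" column name are assumed, not the documented schema.
import csv
from collections import Counter

def label_counts(path: str) -> Counter:
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row["label"] for row in csv.DictReader(f))

print(label_counts("comparg_sentences.csv"))  # e.g. Counter({'NONE': ..., 'BETTER': ..., 'WORSE': ...})
```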
19-page typed manuscript. Draft prepared by Capell. Part IV: Malekula Comparative Grammar. Foreword and sound laws of Malekula. Date of recording unknown. Language as given: Malekula.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Sample sentences from the CompSent-19 dataset [9], with preference indications. Note that order matters: each preference refers to the first-mentioned entity relative to the second.
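Because each label is anchored to the first-mentioned entity, any preprocessing that swaps the two compared entities must also flip BETTER and WORSE. The sketch below shows this order-dependence; the field names are assumptions, not the dataset's actual schema.

```python
# Sketch: labels are relative to the first-mentioned entity, so swapping the
# two entities flips BETTER and WORSE (NONE is unaffected). Field names are assumed.
FLIP = {"BETTER": "WORSE", "WORSE": "BETTER", "NONE": "NONE"}

def swap_entities(example: dict) -> dict:
    return {
        "entity_a": example["entity_b"],
        "entity_b": example["entity_a"],
        "sentence": example["sentence"],
        "label": FLIP[example["label"]],
    }
```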
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The extraction of subjective comparative relations is essential in the field of question answering systems, playing a crucial role in accurately interpreting and addressing complex questions. To tackle this challenge, we propose the SCQRE model, specifically designed to extract subjective comparative relations from questions by focusing on entities, aspects, constraints, and preferences. Our approach leverages multi-task learning, the Natural Language Inference (NLI) paradigm, and a specialized adapter integrated into RoBERTa_base_go_emotions to enhance performance in Element Extraction (EE), Compared Elements Identification (CEI), and Comparative Preference Classification (CPC). Key innovations include handling X- and XOR-type preferences, capturing implicit comparative nuances, and the robust extraction of constraints often neglected in existing models. We also introduce the Smartphone-SCQRE dataset, along with another domain-specific dataset, Brands-CompSent-19-SCQRE, both structured as subjective comparative questions. Experimental results demonstrate that our model outperforms existing approaches across multiple question-level and sentence-level datasets and surpasses recent language models, such as GPT-3.5-turbo-0613, Llama-2-70b-chat, and Qwen-1.5-7B-Chat, showcasing its effectiveness in question-based comparative relation extraction.
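The SCQRE model itself is not reproduced here, but the NLI paradigm it builds on can be illustrated with an off-the-shelf entailment model: each candidate preference is verbalized as a hypothesis and scored against the input text. The model checkpoint, example sentence, and label verbalizations below are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch of the NLI paradigm for comparative preference
# classification. This is NOT the SCQRE model: it uses a generic MNLI-trained
# checkpoint and hypothetical label verbalizations.
from transformers import pipeline

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "Python is much easier to learn than Java."  # hypothetical example
candidate_labels = [
    "the first entity is preferred",
    "the second entity is preferred",
    "no preference is expressed",
]

result = nli(
    text,
    candidate_labels,
    hypothesis_template="In this comparison, {}.",
)
print(result["labels"][0], result["scores"][0])  # highest-scoring verbalized preference
```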
Objective: We aimed to systematically review recidivism rates in individuals given community sentences internationally. We sought to explore sources of variation between these rates and how reporting practices may limit their comparability across jurisdictions. Finally, we aimed to adapt previously published guidelines on recidivism reporting to include community sentenced populations.
Methods: We searched MEDLINE, PsycINFO, SAGE and Google Scholar for reports and studies of recidivism rates, using non-specific and targeted searches for the 20 countries with the largest prison populations worldwide. We identified 28 studies with data from 19 countries. Of the 20 countries with the largest prison populations, only 2 reported recidivism rates for individuals given community sentences.
Results: The most commonly reported recidivism information across countries was 2-year reconviction, which ranged widely from 14% to 43% in men and 9% to 35% in women. Explanations for variation in recidivism rates between countries include when the follow-up period started and whether technical violations were taken into account.
Conclusion: Recidivism rates in individuals receiving community sentences are typically lower than those reported in released prisoners, although these two populations differ in their baseline characteristics. Direct comparisons of recidivism rates in community sentenced cohorts across jurisdictions are currently not possible, but simple changes to existing reporting practices could facilitate them. We propose recommendations to improve reporting practices.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Purpose: This study evaluated the efficacy of an explicit, combined metalinguistic training and grammar facilitation intervention aimed at improving regular past tense marking for nine children aged 5;10–6;8 (years;months) with developmental language disorder.
Method: This study used an ABA across-participant multiple-baseline single-case experimental design. Participants were seen one-on-one twice a week for 20- to 30-min sessions for 10 weeks and received explicit grammar intervention combining metalinguistic training using the SHAPE CODING system with grammar facilitation techniques (a systematic cueing hierarchy). In each session, 50 trials to produce the target form were completed, resulting in a total of 1,000 trials over 20 individual therapy sessions. Repeated measures of morphosyntax were collected using probes, including trained past tense verbs, untrained past tense verbs, third-person singular verbs as an extension probe, and possessive ’s as a control probe. Probing contexts included expressive morphosyntax and grammaticality judgment. Outcome measures also included pre–post standardized measures of expressive and receptive grammar.
Results: Analyses of repeated measures demonstrated significant improvement in past tense production on trained verbs (eight of nine children) and untrained verbs (seven of nine children), indicating efficacy of the treatment. These gains were maintained for 5 weeks. The majority of children made significant improvement on standardized measures of expressive grammar (eight of nine children). Only five of nine children improved on grammaticality judgment or receptive measures.
Conclusion: Results continue to support the efficacy of explicit grammar interventions to improve past tense marking in early school-aged children. Future research should aim to evaluate the efficacy of similar interventions with group comparison studies and determine whether explicit grammar interventions can improve other aspects of grammatical difficulty for early school-aged children with developmental language disorder.
Supplemental Materials:
S1. Expressive raw scores of participants on trained past tense verbs within-session.
S2. Expressive raw scores of participants on trained past tense verbs between-session.
S3. Expressive raw scores of participants on untrained past tense verbs.
S4. Expressive scores of participants on third-person singular (extension).
S5. Summary of Tau-U analyses for expressive repeated measures baseline versus treatment phase contrasts on untrained third-person singular targets (extension).
S6. Graph of % correct on expressive third-person singular repeated measures (extension).
S7. Expressive raw scores of participants on possessive ’s (control).
S8. Summary of expressive repeated measures baseline versus treatment phase contrasts on untrained possessive ’s targets (control).
S9. Graph of % correct on expressive possessive ’s repeated measures (control).
S10. Grammaticality judgment raw scores of participants on trained past tense verbs within-session.
S11. Grammaticality judgment raw scores of participants on trained past tense verbs between-session.
S12. Grammaticality judgment raw scores of participants on untrained past tense verbs.
S13. Summary of grammaticality judgment repeated measures baseline versus treatment phase contrasts on trained and untrained targets.
S14. Graph of % correct on grammaticality judgment within-session repeated measures.
S15. Graph of % correct on grammaticality judgment between-session repeated measures.
S16. Graph of % correct on expressive untrained repeated measures.
S17. Grammaticality judgment raw scores of participants on third-person singular (extension).
S18. Summary of grammaticality judgment repeated measures baseline versus treatment phase contrasts on untrained third-person singular targets (extension).
S19. Graph of % correct on grammaticality judgment third-person singular repeated measures (extension).
S20. Grammaticality judgment raw scores of participants on possessive ’s (control).
S21. Summary of grammaticality judgment repeated measures baseline versus treatment phase contrasts on untrained possessive ’s targets (control).
S22. Graph of % correct on grammaticality judgment possessive ’s repeated measures (control).
Calder, S. D., Claessen, M., Ebbels, S., & Leitão, S. (2020). Explicit grammar intervention in young school-aged children with developmental language disorder: An efficacy study using single-case experimental design. Language, Speech, and Hearing Services in Schools, 51(2), 298-316. https://doi.org/10.1044/2019_LSHSS-19-00060 Publisher Note: This article is part of the Forum: Morphosyntax Assessment and Intervention for Children.
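The repeated-measures contrasts in the supplemental materials rely on Tau-U. As a rough illustration only (not the authors' analysis code), the basic A-versus-B nonoverlap component of Tau-U, without baseline-trend correction, can be computed as below; the scores are hypothetical.

```python
# Minimal sketch of the basic A-vs-B nonoverlap component of Tau-U
# (no baseline-trend correction). Illustrative only.
def tau_ab(baseline, treatment):
    """Compare every baseline point with every treatment point."""
    pos = sum(1 for a in baseline for b in treatment if b > a)
    neg = sum(1 for a in baseline for b in treatment if b < a)
    return (pos - neg) / (len(baseline) * len(treatment))

# Hypothetical % correct scores: complete nonoverlap gives Tau = 1.0
print(tau_ab([10, 0, 5, 10], [40, 55, 60, 70, 80]))
```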
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Purpose: The aim of this study was to develop and validate a large Korean sentence set with varying degrees of semantic predictability that can be used for testing speech recognition and lexical processing.
Method: Sentences differing in the degree of final-word predictability (predictable, neutral, and anomalous) were created with words selected to be suitable for both native and nonnative speakers of Korean. Semantic predictability was evaluated through a series of cloze tests in which native (n = 56) and nonnative (n = 19) speakers of Korean participated. This study also used a computer language model to evaluate final-word predictabilities; this is a novel approach that the current study adopted to reduce human effort in validating a large number of sentences, and it produced results comparable to those of the cloze tests. In a speech recognition task, the sentences were presented to native (n = 23) and nonnative (n = 21) speakers of Korean in speech-shaped noise at two noise levels.
Results: The results of the speech-in-noise experiment demonstrated that the intelligibility of the sentences was similar to that of related English corpora. That is, intelligibility was significantly different depending on the semantic condition, and the sentences had the right degree of difficulty for assessing intelligibility differences depending on noise levels and language experience.
Conclusions: This corpus (1,021 sentences in total) adds to the target languages available in speech research and will allow researchers to investigate a range of issues in speech perception in Korean.
Supplemental Material S1. Full list of sentences.
Song, J., Kim, B., Kim, M., & Iverson, P. (2023). The Korean speech recognition sentences: A large corpus for evaluating semantic context and language experience in speech perception. Journal of Speech, Language, and Hearing Research, 66(9), 3399–3412. https://doi.org/10.1044/2023_JSLHR-23-00137
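A language-model check on final-word predictability can be approximated with causal-LM surprisal. The sketch below is a generic illustration, not the authors' pipeline; it uses the English gpt2 checkpoint as a stand-in, and a Korean-trained causal language model would have to be substituted for this corpus.

```python
# Generic sketch: final-word predictability as causal-LM surprisal.
# NOT the authors' method or model; "gpt2" is a placeholder checkpoint
# (a Korean causal LM would be needed for the actual corpus).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def final_word_surprisal(context: str, final_word: str) -> float:
    """Return -log p(final_word | context) in nats (lower = more predictable)."""
    ctx = tok(context, return_tensors="pt").input_ids
    word = tok(" " + final_word, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([ctx, word], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    # predictions for the final-word tokens come from the positions just before them
    log_probs = torch.log_softmax(logits[0, ctx.shape[1] - 1 : -1], dim=-1)
    return -log_probs.gather(1, word[0].unsqueeze(1)).sum().item()

# A predictable ending should score lower surprisal than an anomalous one.
print(final_word_surprisal("She stirred her coffee with a", "spoon"))
print(final_word_surprisal("She stirred her coffee with a", "cloud"))
```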