Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data dictionary describes the coding system applied to the data extracted from systematic reviews included in the paper:
Cumpston MS, Brennan SE, Ryan R, McKenzie JE. 2023. Statistical synthesis methods other than meta-analysis are commonly used but seldom specified: survey of systematic reviews of interventions
Associated files:
1. Synthesis methods data file: Cumpston_et_al_2023_other_synthesis_methods.xlsx (https://doi.org/10.26180/20785396)
2. Synthesis methods Stata code: Cumpston_et_al_2023_other_synthesis_methods.do (https://doi.org/10.26180/20786251)
3. Study protocol: Cumpston MS, McKenzie JE, Thomas J and Brennan SE. The use of ‘PICO for synthesis’ and methods for synthesis without meta-analysis: protocol for a survey of current practice in systematic reviews of health interventions. F1000Research 2021, 9:678. (https://doi.org/10.12688/f1000research.24469.2)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Logic synthesis is a challenging and widely researched combinatorial optimization problem during integrated circuit (IC) design. It transforms a high-level description of hardware in a programming language like Verilog into an optimized digital circuit netlist, a network of interconnected Boolean logic gates, that implements the function. Spurred by the success of ML in solving combinatorial and graph problems in other domains, there is growing interest in the design of ML-guided logic synthesis tools. Yet, there are no standard datasets or prototypical learning tasks defined for this problem domain. Here, we describe OpenABC-D, a large-scale, labeled dataset produced by synthesizing open-source designs with a leading open-source logic synthesis tool, and illustrate its use in developing, evaluating, and benchmarking ML-guided logic synthesis. OpenABC-D has intermediate and final outputs in the form of 870,000 And-Inverter Graphs (AIGs) produced from 1500 synthesis runs, plus labels such as optimized node counts and delay. We define a generic learning problem on this dataset and benchmark existing solutions for it. The code for dataset creation and the benchmark models is available at https://github.com/NYU-MLDA/OpenABC.git.
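As a rough illustration of the AIG labels mentioned above (optimized node count and delay), the sketch below computes both from a toy AIG; the data layout is hypothetical and is not the OpenABC-D file format.

```python
# Toy AIG: primary inputs plus two-input AND gates; inverters live on edges
# in a real AIG and do not affect these structural labels.
def aig_labels(inputs, and_gates):
    """Return (AND-node count, delay), where delay is the longest
    input-to-output path measured in logic levels."""
    depth = {i: 0 for i in inputs}

    def level(node):
        if node not in depth:
            a, b = and_gates[node]
            depth[node] = 1 + max(level(a), level(b))
        return depth[node]

    for gate in and_gates:
        level(gate)
    return len(and_gates), max(depth.values())
```

For the two-gate chain g1 = AND(a, b), g2 = AND(g1, c), this yields a node count of 2 and a delay of 2 levels.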
This digital dataset compiles a 3-layer geologic model of the conterminous United States by mapping the altitude of three surfaces: land surface, top of bedrock, and top of basement. These surfaces are mapped through the compilation and synthesis of published stratigraphic horizons from numerous topical studies. The mapped surfaces create a 3-layer geologic model with three geomaterials-based subdivisions: unconsolidated to weakly consolidated sediment; layered consolidated rock strata that constitute bedrock; and crystalline basement, consisting of either igneous, metamorphic, or highly deformed rocks. Compilation of subsurface data from published reports involved standard techniques within a geographic information system (GIS), including digitizing contour lines, gridding the contoured data, sampling the resultant grids at regular intervals, and attributing the dataset. However, data compilation and synthesis are highly dependent on the definitions of the informal terms “bedrock” and “basement”, terms which may describe different ages or types of rock in different places. The digital dataset consists of a single polygon feature class that contains an array of square polygonal cells that are 2.5 km in the x and y dimensions. These polygonal cells have multiple attributes, including x-y location, the altitude of the three mapped layers at each x-y location, the published data source from which each surface altitude was compiled, and an attribute that allows for spatially varying definitions of the bedrock and basement units. The spatial data are linked through unique identifiers to nonspatial tables that describe the sources of geologic information and a glossary of terms used to describe bedrock and basement type.
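One of the GIS steps described above, sampling the resultant grids at regular intervals, can be sketched as follows; the grid values and cell size are toy numbers rather than the dataset's 2.5 km cells.

```python
# Nearest-cell sampling of a row-major elevation grid whose origin is at
# (0, 0); each cell is `cellsize` units on a side.
def sample_grid(grid, cellsize, xs, ys):
    samples = []
    for x, y in zip(xs, ys):
        col = int(x // cellsize)   # which column the x coordinate falls in
        row = int(y // cellsize)   # which row the y coordinate falls in
        samples.append(grid[row][col])
    return samples
```

Each sampled value would then be stored as an attribute of the polygonal cell at that x-y location.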
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
Cross-electrophile coupling (XEC), defined by us as the cross-coupling of two different σ-electrophiles driven by catalyst reduction, has seen rapid progression in recent years. As such, this review aims to summarize the field from its beginnings up until mid-2023 and to provide comprehensive coverage of synthetic methods and the current state of mechanistic understanding. Chapters are split by type of bond formed, which include C(sp3)–C(sp3), C(sp2)–C(sp2), C(sp2)–C(sp3), and C(sp2)–C(sp) bond formation. Additional chapters cover alkene difunctionalization, alkyne difunctionalization, and formation of carbon-heteroatom bonds. Each chapter is generally organized with an initial summary of mechanisms, followed by detailed figures and notes on methodological developments, and ends with application notes in synthesis. While XEC is becoming an increasingly utilized approach in synthesis, its early stage of development means that optimal catalysts, ligands, additives, and reductants are still in flux. This review has collected data on these and various other aspects of the reactions to capture the state of the field. Finally, the data collected on the papers in this review are offered as Supporting Information for readers.
Biological nitrogen fixation converts inert dinitrogen gas into bioavailable nitrogen and can be an important source of bioavailable nitrogen to organisms. This dataset synthesizes aquatic nitrogen fixation rate measurements across inland and coastal waters. Data were derived from papers and datasets published by April 2022 and include rates measured using the acetylene reduction assay (ARA), 15N2 labeling, or the N2/Ar technique. The dataset comprises 4793 nitrogen fixation rate measurements from 267 studies and is structured into four tables: 1) a reference table with sources from which data were extracted; 2) a rates table with nitrogen fixation rates that includes habitat, substrate, geographic coordinates, and method of measuring N2 fixation rates; 3) a table with supporting environmental and chemical data for a subset of the rate measurements, when data were available; and 4) a data dictionary with definitions for each variable in each data table. This dataset was compiled and curated by the NSF-funded Aquatic Nitrogen Fixation Research Coordination Network (award number 2015825).
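The four-table structure can be illustrated with a minimal join of the rates table to the reference table through a shared study identifier; the column names and values below are invented for illustration and are not the dataset's actual schema.

```python
# Invented miniature of the reference (1) and rates (2) tables; real column
# names come from the data dictionary (table 4), not from this sketch.
references = [
    {"study_id": "S001", "citation": "Example et al. 2020"},
]
rates = [
    {"rate_id": 1, "study_id": "S001", "habitat": "lake",
     "method": "ARA", "rate": 12.5},
]

def rates_with_citations(rates, references):
    """Attach each rate's source citation via the shared study_id key."""
    by_id = {ref["study_id"]: ref["citation"] for ref in references}
    return [dict(row, citation=by_id[row["study_id"]]) for row in rates]
```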
Systematic reviews are the method of choice to synthesize research evidence. To identify main topics (so-called hot spots) relevant to large corpora of original publications in need of a synthesis, one must address the “three Vs” of big data (volume, velocity, and variety), especially in loosely defined or fragmented disciplines. For this purpose, text mining and predictive modeling are very helpful. Thus, we applied these methods to a compilation of documents related to digitalization in aesthetic, arts, and cultural education, as a prototypical, loosely defined, fragmented discipline, and particularly to quantitative research within it (QRD-ACE). By broadly querying the abstract and citation database Scopus with terms indicative of QRD-ACE, we identified a corpus of N = 55,553 publications for the years 2013–2017. As the result of an iterative approach of text mining, priority screening, and predictive modeling, we identified n = 8,304 potentially relevant publications of which n = 1,666 were included after priority screening. Analysis of the subject distribution of the included publications revealed video games as a first hot spot of QRD-ACE. Topic modeling resulted in aesthetics and cultural activities on social media as a second hot spot, related to 4 of k = 8 identified topics. This way, we were able to identify current hot spots of QRD-ACE by screening less than 15% of the corpus. We discuss implications for harnessing text mining, predictive modeling, and priority screening in future research syntheses and avenues for future original research on QRD-ACE. Dataset for: Christ, A., Penthin, M., & Kröner, S. (2019). Big Data and Digital Aesthetic, Arts, and Cultural Education: Hot Spots of Current Quantitative Research. Social Science Computer Review, 089443931988845. https://doi.org/10.1177/0894439319888455
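The priority-screening idea above can be loosely sketched as ranking documents by their overlap with seed terms so that the most promising publications are screened first; this toy ranker is a stand-in for the paper's text-mining and predictive-modeling pipeline, not its actual method.

```python
# Rank documents by how many seed terms they contain; screen from the top.
def priority_rank(docs, seed_terms):
    seeds = {t.lower() for t in seed_terms}

    def score(doc):
        return len(set(doc.lower().split()) & seeds)

    return sorted(docs, key=score, reverse=True)
```

Screening the ranked list from the top front-loads the relevant publications, which is how a small screened fraction of a large corpus can still capture most hot spots.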
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
This dataset is associated with the paper 'Artificial Personality and Disfluency' by Mirjam Wester, Matthew Aylett, Marcus Tomalin and Rasmus Dall, published at Interspeech 2015, Dresden. The focus of this paper is artificial voices with different personalities. Previous studies have shown links between an individual's use of disfluencies in their speech and their perceived personality. Here, filled pauses (uh and um) and discourse markers (like, you know, I mean) have been included in synthetic speech as a way of creating an artificial voice with different personalities. We discuss the automatic insertion of filled pauses and discourse markers (i.e., fillers) into otherwise fluent texts. The automatic system is compared to a ground truth of human "acted" filler insertion. Perceived personality (as defined by the big five personality dimensions) of the synthetic speech is assessed by means of a standardised questionnaire. Synthesis without fillers is compared to synthesis with either spontaneous or synthetic fillers. Our findings explore how the inclusion of disfluencies influences the way in which subjects rate the perceived personality of an artificial voice.
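A rule-based toy version of filler insertion might look like the following; the filler inventory and the random insertion probability are illustrative assumptions, not the automatic system evaluated in the paper.

```python
import random

# Illustrative filler inventory; insertion positions are random here,
# whereas the paper's automatic system decides them from data.
FILLERS = ["uh", "um", "like", "you know", "I mean"]

def insert_fillers(tokens, p=0.3, rng=None):
    """Insert a random filler before each token with probability p."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if rng.random() < p:
            out.append(rng.choice(FILLERS))
        out.append(tok)
    return out
```

The fluent token sequence is preserved in order; only fillers are interleaved, mirroring how the paper adds disfluencies to otherwise fluent texts.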
The United States Geological Survey (USGS) - Science Analytics and Synthesis (SAS) - Gap Analysis Project (GAP) manages the Protected Areas Database of the United States (PAD-US), an Arc10x geodatabase that includes a full inventory of areas dedicated to the preservation of biological diversity and to other natural, recreation, historic, and cultural uses, managed for these purposes through legal or other effective means (www.usgs.gov/core-science-systems/science-analytics-and-synthesis/gap/science/protected-areas). The PAD-US is developed in partnership with many organizations, including coordination groups at the [U.S.] Federal level, lead organizations for each State, and a number of national and other non-governmental organizations whose work is closely related to the PAD-US. Learn more about the USGS PAD-US partners program here: www.usgs.gov/core-science-systems/science-analytics-and-synthesis/gap/science/pad-us-data-stewards. The United Nations Environmental Program - World Conservation Monitoring Centre (UNEP-WCMC) tracks global progress toward biodiversity protection targets enacted by the Convention on Biological Diversity (CBD) through the World Database on Protected Areas (WDPA) and World Database on Other Effective Area-based Conservation Measures (WD-OECM), available at: www.protectedplanet.net. See the Aichi Target 11 dashboard (www.protectedplanet.net/en/thematic-areas/global-partnership-on-aichi-target-11) for official protection statistics recognized globally and developed for the CBD, or here for more information and statistics on the United States of America's protected areas: www.protectedplanet.net/country/USA.
It is important to note that statistics published by the National Oceanic and Atmospheric Administration (NOAA) Marine Protected Areas (MPA) Center (www.marineprotectedareas.noaa.gov/dataanalysis/mpainventory/) and the USGS-GAP (www.usgs.gov/core-science-systems/science-analytics-and-synthesis/gap/science/pad-us-statistics-and-reports) differ from statistics published by the UNEP-WCMC, as methods to remove overlapping designations differ slightly and U.S. Territories are reported separately by the UNEP-WCMC (e.g. the largest MPA, "Pacific Remote Islands Marine Monument", is attributed to the United States Minor Outlying Islands statistics). At the time of PAD-US 2.1 publication (USGS-GAP, 2020), NOAA reported 26% of U.S. marine waters (including the Great Lakes) as protected in an MPA that meets the International Union for Conservation of Nature (IUCN) definition of biodiversity protection (www.iucn.org/theme/protected-areas/about). USGS-GAP released PAD-US 3.0 Statistics and Reports in the summer of 2022. The relationship between the USGS, the NOAA, and the UNEP-WCMC is as follows:
- USGS manages and publishes the full inventory of U.S. marine and terrestrial protected areas data in the PAD-US, representing many values, developed in collaboration with a partnership network in the U.S.;
- USGS is the primary source of U.S. marine and terrestrial protected areas data for the WDPA, developed from a subset of the PAD-US in collaboration with the NOAA, other agencies and non-governmental organizations in the U.S., and the UNEP-WCMC;
- UNEP-WCMC is the authoritative source of global protected area statistics from the WDPA and WD-OECM;
- NOAA is the authoritative source of MPA data in the PAD-US and MPA statistics in the U.S.;
- USGS is the authoritative source of PAD-US statistics (including areas primarily managed for biodiversity, multiple uses including natural resource extraction, and public access).
The PAD-US 3.0 Combined Marine, Fee, Designation, Easement feature class (GAP Status Code 1 and 2 only) is the source of protected areas data in this WDPA update. Tribal areas and military lands represented in the PAD-US Proclamation feature class as GAP Status Code 4 (no known mandate for biodiversity protection) are not included, as spatial data representing internal protected areas are not available at this time. The USGS submitted more than 51,000 protected areas from PAD-US 3.0, including all 50 U.S. States and 6 U.S. Territories, to the UNEP-WCMC for inclusion in the WDPA, available at www.protectedplanet.net. The NOAA is the sole source of MPAs in PAD-US, and the National Conservation Easement Database (NCED, www.conservationeasement.us/) is the source of conservation easements. The USGS aggregates authoritative federal lands data directly from managing agencies for PAD-US (https://ngda-gov-units-geoplatform.hub.arcgis.com/pages/federal-lands-workgroup), while a network of State data stewards provides state and local government lands and some land trust preserves. National nongovernmental organizations contribute spatial data directly (www.usgs.gov/core-science-systems/science-analytics-and-synthesis/gap/science/pad-us-data-stewards). The USGS translates the biodiversity-focused subset of PAD-US into the WDPA schema (UNEP-WCMC, 2019) for efficient aggregation by the UNEP-WCMC. The USGS maintains WDPA Site Identifiers (WDPAID, WDPA_PID), a persistent identifier for each protected area, provided by UNEP-WCMC. Agency partners are encouraged to track WDPA Site Identifier values in source datasets to improve the efficiency and accuracy of PAD-US and WDPA updates. The IUCN protected areas in the U.S. are managed by thousands of agencies and organizations across the country and include over 51,000 designated sites such as National Parks, National Wildlife Refuges, National Monuments, Wilderness Areas, some State Parks, State Wildlife Management Areas, Local Nature Preserves, City Natural Areas, The Nature Conservancy and other Land Trust Preserves, and Conservation Easements. The boundaries of these protected places (some overlap) are represented as polygons in the PAD-US, along with informative descriptions such as Unit Name, Manager Name, and Designation Type. As the WDPA is a global dataset, its data standards (UNEP-WCMC 2019) require simplification to reduce the number of records included, focusing on the protected area site name and management authority, as described in the Supplemental Information section of this metadata record. Given the numerous organizations involved, sites may be added or removed from the WDPA between PAD-US updates. These differences may reflect actual change in protected area status; however, they also reflect the dynamic nature of spatial data and Geographic Information Systems (GIS). Many agencies and non-governmental organizations are working to improve the accuracy of protected area boundaries, the consistency of attributes, and inventory completeness between PAD-US updates. In addition, USGS continually seeks partners to review and refine the assignment of conservation measures in the PAD-US.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Introduction: Foundational to a well-functioning health system is a strong routine health information system (RHIS) that informs decisions and actions at all levels of the health system. In the context of decentralization across low- and middle-income countries, RHIS has the promise of supporting sub-national health staff to take data-informed actions to improve health system performance. However, there is wide variation in how “RHIS data use” is defined and measured in the literature, impeding the development and evaluation of interventions that effectively promote RHIS data use.
Methods: An integrative review methodology was used to: (1) synthesize the state of the literature on how RHIS data use in low- and middle-income countries is conceptualized and measured; (2) propose a refined RHIS data use framework and develop a common definition for RHIS data use; and (3) propose improved approaches to measure RHIS data use. Four electronic databases were searched for peer-reviewed articles published between 2009 and 2021 investigating RHIS data use.
Results: A total of 45 articles, including 24 articles measuring RHIS data use, met the inclusion criteria. Less than half of included articles (42%) explicitly defined RHIS data use. There were differences across the literature whether RHIS data tasks such as data analysis preceded or were a part of RHIS data use; there was broad consensus that data-informed decisions and actions were essential steps within the RHIS data use process. Based on the synthesis, the Performance of Routine Information System Management (PRISM) framework was refined to specify the steps of the RHIS data use process.
Conclusion: Conceptualizing RHIS data use as a process that includes data-informed actions emphasizes the importance of actions in improving health system performance. Future studies and implementation strategies should be designed with consideration for the different support needs for each step of the RHIS data use process.
This digital GIS dataset and accompanying nonspatial files synthesize the model outputs from a regional-scale volumetric 3-D geologic model that portrays the generalized subsurface geology of western South Dakota from a wide variety of input data sources. The study area includes all of western South Dakota from west of the Missouri River to the Black Hills uplift and Wyoming border. The model data released here consist of the stratigraphic contact elevation of major Phanerozoic sedimentary units that broadly define the geometry of the subsurface, the elevation of Tertiary intrusive and Precambrian basement rocks, and point data representing the three-dimensional geometry of fault surfaces. The presence of folds and unconformities is implied by the 3D geometry of the stratigraphic units, but these are not included as discrete features in this data release. The 3D geologic model was constructed from a wide variety of publicly available surface and subsurface geologic data; none of these input data are part of this data release, but data sources are thoroughly documented such that a user could obtain them from other sources if desired. This model was created as part of the U.S. Geological Survey’s (USGS) National Geologic Synthesis (NGS) project, a part of the National Cooperative Geologic Mapping Program (NCGMP). The WSouthDakota3D geodatabase contains twenty-five (25) subsurface horizons in raster format that represent the tops of modeled subsurface units, and a feature dataset “GeologicModel”. The GeologicModel feature dataset contains a feature class of thirty-five (35) faults served in elevation grid format (FaultPoints). The feature class “ModelBoundary” describes the footprint of the geologic model and was included to meet the NCGMP’s GeMS data schema. Nonspatial tables define the data sources used (DataSources), define terms used in the dataset (Glossary), and provide a description of the modeled surfaces (DescriptionOfModelUnits).
Separate file folders contain the vector data in shapefile format, the raster data in ASCII format, and the nonspatial tables as comma-separated values. In addition, a tabular data dictionary describes the entity and attribute information for all attributes of the geospatial data and the accompanying nonspatial tables (EntityAndAttributes). An included READ_ME file documents the process of manipulating and interpreting publicly available surface and subsurface geologic data to create the model. It additionally contains critical information about model units, and uncertainty regarding their ability to predict true ground conditions. Accompanying this data release is the “WSouthDakotaInputSummaryTable.csv”, which tabulates the global settings for each fault block, the stratigraphic horizons modeled in each fault block, the types and quantity of data inputs for each stratigraphic horizon, and then the settings associated with each data input.
The synthetic lesion library was generated by manually defining lesions with the Lesion Synthesis Toolbox (LST). The lesions are spherical with well-defined characteristics (diameter and intensity). This dataset is intended to be used to test lesion-detection AI. The ground truth data are intentionally not included; please contact the authors (rklein@toh.ca) to request them. Real patient positron emission tomography (PET) and x-ray computed tomography (CT) data were used. The patients were screened for low likelihood of disease. The LST was used to manually place fake lesions in anatomically realistic locations. The lesions were then simulated into PET sinogram space (with time of flight) and then reconstructed. Lesions were also painted into the corresponding CT space.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
Melting point (Tm) is one of the defining characteristics of ionic liquids (ILs) and is often one of the most important factors in their selection for applications in separation processes, lubrication, or thermal energy storage. Due to the almost limitless number of theoretically possible ILs, each with incrementally different physicochemical properties, there is significant scope for designing ILs for specific applications. However, the need for extensive synthesis and experimental characterization to find the optimum IL is a major barrier. Therefore, it is essential that predictive tools are developed for estimating the physicochemical properties of ILs. The starting point for any such approach should be the prediction of Tm, since most other property models will be based on the assumption that the IL is in the liquid phase at the application temperature. While several attempts have previously been made at developing group contribution methods (GCMs) for estimating IL Tm, the complex relationship between IL structure and Tm has resulted in only limited success. In this study, an extensive database of IL Tm has been compiled and used as the basis for a top-down structure–property analysis. Based on the findings, a new hybrid GCM has been developed, which combines functional group parameters with simple, indirect structural parameters derived from the structure–property analysis. The new hybrid GCM has a mean absolute percentage error (MAPE) of 8.6% over the dataset of around 1700 data points and performs quantitatively and qualitatively better than the standard GCM approach.
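The group-contribution idea and the MAPE metric quoted above can be sketched as follows; the group identities and parameter values are invented for the example and are not the paper's fitted parameters.

```python
# A melting-point estimate as a weighted sum of functional-group counts,
# plus the MAPE metric. Group names and parameter values are invented.
def gcm_predict(group_counts, params):
    return sum(n * params[g] for g, n in group_counts.items())

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)
```

A hybrid GCM of the kind described would add further terms for indirect structural parameters on top of this purely additive group sum.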
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Under the direction and funding of the National Cooperative Mapping Program, with guidance and encouragement from the United States Geological Survey (USGS), a digital database of three-dimensional (3D) vector data was compiled, displayed as two-dimensional (2D) data-extent bounding polygons. This geodatabase acts as a virtual and digital inventory of 3D structure contour and isopach vector data for the USGS National Geologic Synthesis (NGS) team. The data will be available visually through a USGS web application and can be queried using complementary nonspatial tables associated with each data-harboring polygon. This initial publication contains 60 datasets collected directly from USGS publications and federal repositories. Further publications of dataset collections in versioned releases will be annotated in additional appendices. These datasets can be identified by their specific version through their nonspatial tables. This digital dataset contains spatial extents of the 2D geologic vector data as polygon features that are attributed with unique identifiers that link the spatial data to nonspatial tables that define the data sources used and describe various aspects of each published model. The nonspatial DataSources table includes the full citation and URL for published model reports and for any digital model data released as a separate publication, and the input type of vector data, using several classification schemes. A tabular glossary defines terms used in the dataset. A tabular data dictionary describes the entity and attribute information for all attributes of the geospatial data and the accompanying nonspatial tables.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
The anthropometric datasets presented here are virtual datasets. The unweighted virtual dataset was generated using a synthesis and subsequent validation algorithm (Ackermann et al., 2023). The underlying original dataset used in the algorithm was collected within a regional epidemiological public health study in northeastern Germany (SHIP, see Völzke et al., 2022). Important details regarding the collection of the anthropometric dataset within SHIP (e.g. sampling strategy, measurement methodology & quality assurance process) are discussed extensively in the study by Bonin et al. (2022). To approximate nationally representative values for the German working-age population, the virtual dataset was weighted with reference data from the first survey wave of the Study on health of adults in Germany (DEGS1, see Scheidt-Nave et al., 2012). Two different algorithms were used for the weighting procedure: (1) iterative proportional fitting (IPF), which is described in more detail in the publication by Bonin et al. (2022), and (2) a nearest neighbor approach (1NN), which is presented in the study by Kumar and Parkinson (2018). Weighting coefficients were calculated for both algorithms, and it is left to the practitioner which coefficients are used in practice. Therefore, the weighted virtual dataset has two additional columns containing the calculated weighting coefficients with IPF ("WeightCoef_IPF") or 1NN ("WeightCoef_1NN"). Unfortunately, due to the sparse data coverage at the distribution edges of SHIP compared to DEGS1, values below the 5th and above the 95th percentile should be considered with caution. In addition, the following characteristics describe the weighted and unweighted virtual datasets: According to ISO 15535, values for "BMI" are in [kg/m2], values for "Body mass" are in [kg], and values for all other measures are in [mm]. Anthropometric measures correspond to measures defined in ISO 7250-1.
Offset values were calculated for seven anthropometric measures because there were systematic differences in the measurement methodology between SHIP and ISO 7250-1 regarding the definition of two bony landmarks: the acromion and the olecranon. Since these seven measures rely on one of these bony landmarks, and it was not possible to modify the SHIP methodology regarding landmark definitions, offsets had to be calculated to obtain ISO-compliant values. In the presented datasets, two columns exist for these seven measures. One column contains the measured values with the landmarking definitions from SHIP, and the other column (marked with the suffix "_offs") contains the calculated ISO-compliant values (for more information concerning the offset values, see Bonin et al., 2022). The sample size is N = 5000 for the male and female subsets. The original SHIP dataset has a sample size of N = 1152 (women) and N = 1161 (men). Because of this discrepancy between the original SHIP dataset and the virtual datasets, users may get a false sense of statistical confidence when using the virtual data. A virtual sample size of N = 5000 gives the best possible representation of the original dataset, as confirmed in pre-tests with varying sample sizes, but it must be kept in mind that the statistical properties of the virtual data are based on an original dataset with a much smaller sample size.
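The IPF weighting mentioned above (Bonin et al., 2022) can be illustrated on a toy 2x2 table: cell counts are alternately rescaled until row and column sums match reference margins. The margins below are invented, not the actual SHIP/DEGS1 values.

```python
# Iterative proportional fitting on a 2-D count table: alternately rescale
# rows and columns until their sums match the target margins.
def ipf(table, row_targets, col_targets, iters=100):
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, rt in enumerate(row_targets):
            s = sum(t[i])
            t[i] = [v * rt / s for v in t[i]]
        for j, ct in enumerate(col_targets):
            s = sum(t[i][j] for i in range(len(t)))
            for i in range(len(t)):
                t[i][j] *= ct / s
    return t
```

The ratio of the fitted cell value to the original one plays the role of a weighting coefficient for observations in that cell.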
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Purpose: Multimodal therapy is a frequent term in aphasia literature, but it has no agreed upon definition. Phrases such as “multimodal therapy” and “multimodal treatment” are applied to a range of aphasia interventions as if mutually understood, and yet, the interventions reported in the literature differ significantly in methodology, approach, and aims. This inconsistency can be problematic for researchers, policy makers, and clinicians accessing the literature and potentially compromises data synthesis and meta-analysis. A literature review was conducted to examine what types of aphasia treatment are labeled multimodal and determine whether any patterns are present.
Method: A systematic search was conducted to identify literature pertaining to aphasia that included the term multimodal therapy (and variants). Sources included literature databases, dissertation databases, textbooks, professional association websites, and Google Scholar.
Results: Thirty-three original review articles were identified, as well as another 31 sources referring to multimodal research, all of which used a variant of the term multimodal therapy. Treatments had heterogeneous aims, underlying theories, and methods. The rationale for using more than 1 modality was not always clear, nor was the reason each therapy was considered to be multimodal when similar treatments had not used the title. Treatments were noted to differ across 2 key features. The 1st was whether the ultimate aim of intervention was to improve total communication, as in augmentative and alternative communication approaches, or to improve 1 specific modality, as when gesture is used to improve word retrieval. The 2nd was the point in the treatment that the nonspeech modalities were employed.
Discussion: Our review demonstrated that references to “multimodal” treatments represent very different therapies with little consistency.
We propose a framework to define and categorize multimodal treatments, which is based both on our results and on current terminology in speech-language pathology.Supplemental Material S1. Secondary sources referring to multimodal treatments. Supplemental Material S2. Data extraction table for original research on "multimodal therapy."Pierce, J. E., O'Halloran, R., Togher, L., & Rose, M. L. (2019). What is meant by "multimodal therapy" for aphasia? American Journal of Speech-Language Pathology, 28, 706–716. https://doi.org/10.1044/2018_AJSLP-18-0157
An efficient solid-phase method has been reported to prepare well-defined lysine defect dendrimers. Using orthogonally protected lysine residues, pure G2 to G4 lysine defect dendrimers were prepared in 48–95% yields within 13 h. Remarkably, high-purity products were collected via precipitation without further purification steps. This method was applied to prepare a pair of 4-carboxyphenylboronic acid-decorated defect dendrimers (16 and 17), which possessed the same number of boronic acids. The binding affinity of 16, in which the ε-amines of G1 lysine are fractured, for glucose and sorbitol was 4 times that of 17. This investigation indicates the role that the placement and distribution of peripheral groups play in a dendrimer's properties and activity.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
A palladium-catalyzed C–H bond functionalization of acrylamides was developed to stereoselectively construct trifluoromethylated 1,3-butadienes. Using a tertiary amide as a directing group, olefins were selectively functionalized with 2-bromo-3,3,3-trifluoropropene to access these important fluorinated compounds. The methodology was extended to the construction of pentafluoroethyl-substituted 1,3-dienes. Mechanistic studies supported by density functional theory calculations suggested a redox-neutral mechanism for this transformation.
seed desiccation response: 1 = desiccation-tolerant, 0 = desiccation-sensitive; blank = lack of information.
growth form: 1 = woody, 0 = herbaceous.
fruit type: 1 = fleshy, 0 = dry; blank = lack of information.
nondormant: 1 = nondormant, 0 = dormant; blank = lack of information.
physical dormant: 1 = physically dormant, 0 = not physically dormant; blank = lack of information.
other dormant: 1 = other dormancy (includes physiological, morphological, and morphophysiological dormancy), 0 = nondormant or physically dormant; blank = lack of information.
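The binary coding above can be applied mechanically when reading the data. A minimal Python sketch of a decoder follows; the underscored variable names are hypothetical stand-ins for the dataset's actual column headers, and blank cells are treated as missing information, per the coding notes.

```python
# Hypothetical codebook mirroring the trait coding described above.
# Keys are illustrative column names, not taken from the dataset itself.
CODEBOOK = {
    "seed_desiccation_response": {1: "desiccation-tolerant", 0: "desiccation-sensitive"},
    "growth_form": {1: "woody", 0: "herbaceous"},
    "fruit_type": {1: "fleshy", 0: "dry"},
    "nondormant": {1: "nondormant", 0: "dormant"},
    "physical_dormant": {1: "physically dormant", 0: "not physically dormant"},
    "other_dormant": {1: "other dormancy", 0: "nondormant or physically dormant"},
}

def decode(trait, value):
    """Translate one coded cell into its label; blanks mean lack of information."""
    if value in ("", None):
        return "no information"
    return CODEBOOK[trait][int(value)]

print(decode("growth_form", 1))   # -> woody
print(decode("fruit_type", ""))   # -> no information
```

Accepting both integer and string codes (`int(value)`) is deliberate, since CSV readers often deliver coded columns as strings.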