A common problem in clinical trials is the missing data that occur when patients do not complete the study and drop out without further measurements. Missing data cause the usual statistical analyses of complete or all-available data to be subject to bias. There are no universally applicable methods for handling missing data. We recommend the following: (1) Report reasons for dropout and dropout proportions for each treatment group; (2) Conduct sensitivity analyses to encompass different scenarios of assumptions and discuss consistency or discrepancy among them; (3) Pay attention to minimizing the chance of dropout at the design stage and during trial monitoring; (4) Collect post-dropout data on the primary endpoints, if at all possible; and (5) Consider the dropout event itself an important endpoint in studies with many dropouts.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Pharmaceutical obesity RCTs used to evaluate the scope of the missing data problem.pdf (0.27 MB DOC)
https://www.icpsr.umich.edu/web/ICPSR/studies/39492/terms
Clinical trials study the effects of medical treatments, like how safe they are and how well they work. But most clinical trials don't get all the data they need from patients. Patients may not answer all questions on a survey, or they may drop out of a study after it has started. The missing data can affect researchers' ability to detect the effects of treatments. To address the problem of missing data, researchers can make different guesses based on why and how data are missing. Then they can look at results for each guess. If results based on different guesses are similar, researchers can have more confidence that the study results are accurate. In this study, the research team created new methods to do these tests and developed software that runs these tests. To access the sensitivity analysis methods and software, please visit the MissingDataMatters website.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Missing data are a common problem in applied studies generally, and especially in clinical trials. Several multiple imputation methods exist for implementing sensitivity analysis, such as sequential imputation, which is restricted to monotone missingness, and Bayesian imputation, in which the imputation and analysis models differ, entailing overestimation of the variance. Full conditional specification, in turn, provides only a conditional interpretation of the sensitivity parameters, requiring further calibration to obtain the desired marginal interpretation. In this paper we propose a multiple imputation procedure, based on a multivariate linear regression model, that maintains compatibility in sensitivity analysis under intermittent missingness while providing a marginal interpretation of the elicited parameters. Simulation studies show that the method behaves well with longitudinal data and remains robust under demanding constraints. We conclude that there are situations not covered by existing methods and well suited to our proposal, which allows more efficient handling of a given multivariate linear regression structure. Its use is illustrated in a real case study, where a sensitivity analysis is carried out.
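The abstract describes sensitivity analysis via multiple imputation in general terms. As a rough illustration of the delta-adjustment idea that underlies such analyses (not the paper's multivariate linear regression procedure), the sketch below imputes a follow-up outcome under MAR and then shifts the imputed values by an elicited sensitivity parameter delta to probe MNAR departures; `delta_adjusted_mi` and all data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy longitudinal data: baseline y0 fully observed; follow-up y1 missing
# for roughly 30% of subjects (all values simulated for illustration).
n = 500
y0 = rng.normal(10, 2, n)
y1 = 0.8 * y0 + rng.normal(0, 1, n)
y1_obs = np.where(rng.random(n) < 0.3, np.nan, y1)

def delta_adjusted_mi(y0, y1_obs, delta, m=20):
    """Impute y1 from y0 under MAR, then shift each imputed value by
    `delta` to probe an MNAR departure; return the pooled mean of y1."""
    obs = ~np.isnan(y1_obs)
    X = np.column_stack([np.ones(obs.sum()), y0[obs]])
    beta, *_ = np.linalg.lstsq(X, y1_obs[obs], rcond=None)
    resid_sd = np.std(y1_obs[obs] - X @ beta, ddof=2)
    means = []
    for _ in range(m):
        y1_imp = y1_obs.copy()
        pred = beta[0] + beta[1] * y0[~obs]
        y1_imp[~obs] = pred + rng.normal(0, resid_sd, (~obs).sum()) + delta
        means.append(y1_imp.mean())
    return float(np.mean(means))

# delta = 0 reproduces the MAR analysis; negative deltas assume dropouts
# fared worse than the observed data suggest.
for delta in (0.0, -0.5, -1.0):
    print(delta, round(delta_adjusted_mi(y0, y1_obs, delta), 3))
```

If conclusions are stable over a plausible range of delta, the MAR result gains credibility; a full analysis would also combine within- and between-imputation variances with Rubin's rules.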
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Heart failure (HF) affects at least 26 million people worldwide, so predicting adverse events in HF patients represents a major target of clinical data science. However, achieving large sample sizes sometimes represents a challenge due to difficulties in patient recruiting and long follow-up times, increasing the problem of missing data. To overcome the issue of a narrow dataset cardinality (in a clinical dataset, the cardinality is the number of patients in that dataset), population-enhancing algorithms are crucial. The aim of this study was to design a random shuffle method that enhances the cardinality of an HF dataset in a statistically legitimate way, without the need for specific hypotheses or regression models. The cardinality enhancement was validated against an established random repeated-measures method with regard to correctness in predicting clinical conditions and endpoints. In particular, machine learning and regression models were employed to highlight the benefits of the enhanced datasets. The proposed random shuffle method was able to enhance the HF dataset cardinality (711 patients before dataset preprocessing) circa 10 times, and circa 21 times when followed by a random repeated-measures approach. We believe that the random shuffle method could be used in the cardiovascular field and in other data science problems where missing data and narrow dataset cardinality are an issue.
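The abstract does not spell out the shuffle algorithm, so the sketch below is only one plausible reading of a cardinality-enhancing shuffle: synthetic rows are built by independently permuting each feature column, which multiplies the row count while preserving every feature's marginal distribution. The function name and data are invented; the paper's actual scheme may differ.

```python
import numpy as np

def random_shuffle_augment(X, factor, rng):
    """Multiply dataset cardinality by `factor`: each synthetic block draws
    every feature from an independent within-column shuffle, preserving the
    per-feature marginal distributions (between-feature correlations are
    not preserved by this naive variant)."""
    blocks = [X]
    for _ in range(factor - 1):
        blocks.append(np.column_stack(
            [rng.permutation(X[:, j]) for j in range(X.shape[1])]))
    return np.vstack(blocks)

rng = np.random.default_rng(42)
X = rng.normal(size=(711, 5))            # 711 patients, as in the HF dataset
X_big = random_shuffle_augment(X, 10, rng)
print(X_big.shape)                       # → (7110, 5)
```

Any such augmentation should be validated, as the paper does, against predictive performance on the clinical endpoints of interest.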
Background: While a number of reviews of homeopathic clinical trials have been done, all have used methods dependent on allopathic diagnostic classifications foreign to homeopathic practice. In addition, no review has used established and validated quality criteria allowing direct comparison of the allopathic and homeopathic literature.
Methods: In a systematic review, we compared the quality of clinical-trial research in homeopathy to a sample of research on conventional therapies using a validated and system-neutral approach. All clinical trials on homeopathic treatments with parallel treatment groups published between 1945 and 1995 in English were selected. All were evaluated with an established set of 33 validity criteria previously validated on a broad range of health interventions across differing medical systems. Criteria covered statistical-conclusion, internal, construct and external validity. Reliability of criteria application was greater than 0.95.
Results: 59 studies met the inclusion criteria. Of these, 79% were from peer-reviewed journals, 29% used a placebo control, 51% used random assignment, and 86% failed to consider potentially confounding variables. The main validity problems were in measurement, where 96% did not report the proportion of subjects screened and 64% did not report the attrition rate. 17% of subjects dropped out in studies where this was reported. There was practically no replication of, or overlap in, the conditions studied, and most studies were relatively small and done at a single site. Compared to research on conventional therapies, the overall quality of studies in homeopathy was worse and only slightly improved in more recent years.
Conclusions: Clinical homeopathic research is clearly in its infancy, with most studies using poor sampling and measurement techniques, few subjects, single sites and no replication. Many of these problems are correctable even within a "holistic" paradigm given sufficient research expertise, support and methods.
https://www.icpsr.umich.edu/web/ICPSR/studies/39158/terms
The cost and challenge of running clinical trials in each subpopulation and each setting results in a patchwork of evidence. Often, some interventions are evaluated with trial samples that are distinct in the distribution of potential treatment modifiers. These gaps in understanding can be filled using transportability methods, in which the subset of trials that evaluate an intervention is used to transport the potential outcome mean for that intervention to the target population. However, transportability analysis suffers from systematic missingness, a missing data problem unique to settings where data come from multiple sources. This harmonized dataset was created to evaluate novel transportability methods. It includes data from six trials of adolescent sexual risk prevention. Interventions evaluated include emotion regulation, skills training, family-based, and health promotion programs. Populations studied include adolescents in mental health care (aged 13-18), those in alternative educational placement (aged 12-19), those indicated by school personnel as having mental or behavioral health issues (aged 12-14), and Black and African American adolescents (aged 14-17) who lived in urban areas of four major U.S. cities.
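As a sketch of the transportability idea described above, one standard estimator reweights trial outcomes by the odds of membership in the target population, standardizing the trial's outcome mean to the target covariate distribution. The data and membership model below are toy assumptions, not the harmonized trials.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy data: trial and target populations differ in an effect modifier x
# (all values simulated; not the harmonized trial data).
n = 2000
x_trial = rng.normal(0.0, 1.0, n)
x_target = rng.normal(1.0, 1.0, n)
y_trial = 1.0 + 0.5 * x_trial + rng.normal(0, 1, n)   # outcome seen in trial only

# Model P(target | x) from the stacked samples, then weight each trial
# subject by the odds of target membership (inverse-odds weighting).
X = np.concatenate([x_trial, x_target]).reshape(-1, 1)
s = np.concatenate([np.zeros(n), np.ones(n)])          # 1 = target sample
p = LogisticRegression().fit(X, s).predict_proba(x_trial.reshape(-1, 1))[:, 1]
w = p / (1 - p)

naive = y_trial.mean()                        # ≈ 1 + 0.5 * E[x_trial] = 1.0
transported = np.average(y_trial, weights=w)  # ≈ 1 + 0.5 * E[x_target] = 1.5
print(round(naive, 2), round(transported, 2))
```

Systematic missingness complicates this picture because some trials never measure a needed modifier at all; the weighting above assumes the modifier is observed everywhere.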
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Medical Student Dataset is a simulated dataset containing 100,000 rows and 12 columns. The dataset is designed to mimic real-world data commonly encountered in medical education and research. It includes various preprocessing issues commonly observed in data, such as missing values, duplicates, and inconsistencies.
The dataset consists of the following columns:
StudentID: Unique identifier for each medical student.
Gender: Gender of the student (e.g., Male, Female).
Age: Age of the student in years.
Ethnicity: Ethnicity of the student.
Year: Academic year of the student.
University: Name of the university where the student is enrolled.
GPA: Grade Point Average of the student.
MCAT Score: Medical College Admission Test (MCAT) score of the student.
Clinical Experience: Indicator of whether the student has previous clinical experience (Yes/No).
Research Experience: Indicator of whether the student has previous research experience (Yes/No).
Publication Count: Number of publications attributed to the student.
Exam Score: Performance score on a standardized medical examination.

The dataset has been intentionally created to include various preprocessing issues, such as missing values, duplicates, and inconsistencies.
This dataset can be used for various purposes, including data cleaning and preprocessing exercises, exploring data analysis techniques, and evaluating machine learning algorithms. It provides an opportunity to practice handling real-world data challenges often encountered in the field of medical education and research.
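A first cleaning pass over such a dataset might look like the following; the rows are a tiny invented stand-in using column names from the description above.

```python
import numpy as np
import pandas as pd

# Tiny invented stand-in exhibiting the three issue types: a duplicate row,
# inconsistent Gender labels, and missing values.
df = pd.DataFrame({
    "StudentID": [1, 2, 2, 3, 4],
    "Gender":    ["Male", "female", "female", "F", np.nan],
    "Age":       [23, 25, 25, np.nan, 24],
    "GPA":       [3.6, 3.9, 3.9, 3.2, 3.8],
})

df = df.drop_duplicates()                          # exact duplicate rows
df["Gender"] = (df["Gender"].str.strip().str.lower()
                  .map({"male": "Male", "female": "Female", "f": "Female"}))
df["Age"] = df["Age"].fillna(df["Age"].median())   # simple median imputation
print(df.reset_index(drop=True))
```

Real cleaning would also cover the remaining columns (e.g., range checks on MCAT Score and GPA) and document every imputation decision.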
Background: Patient-reported outcomes (PROs) are increasingly assessed in clinical trials, and guidelines are available to inform the design and reporting of such trials. However, researchers involved in PRO data collection report that specific guidance on ‘in-trial’ activity (recruitment, data collection and data inputting) and the management of ‘concerning’ PRO data (i.e., data which raise concern for the well-being of the trial participant) appears to be lacking. The purpose of this review was to determine the extent and nature of published guidelines addressing these areas.
Methods and Findings: A systematic review of 1,362 articles identified 18 eligible papers containing ‘in-trial’ guidelines. Two independent authors undertook a qualitative content analysis of the selected papers. Guidelines presented in each of the articles were coded according to an a priori defined coding frame, which demonstrated reliability (pooled kappa 0.86–0.97) and validity (<2% residual category coding). The majority of guidelines were concerned with ‘pre-trial’ activities (72%), for example outcome measure selection and study design issues, or ‘post-trial’ activities (16%) such as data analysis, reporting and interpretation. ‘In-trial’ guidelines represented 9.2% of all guidance across the papers reviewed, with content primarily focused on compliance, quality control, proxy assessment and reporting of data collection. There were no guidelines covering the management of concerning PRO data.
Conclusions: The findings highlight that there are minimal published ‘in-trial’ guidelines on PRO data collection and management in clinical trials. No guidance appears to exist for researchers involved in the handling of concerning PRO data. Guidelines are needed which support researchers to manage all PRO data appropriately and which facilitate unbiased data collection.
Expanded methods used to generate the data in the following sections. Sections 7–16 provide details on: (7) the 1331 categories of Type of Study in 12,673 trials, year-wise from 2007–2018; (8) determining the truly foreign trials; (9) determining the unambiguously Indian trials, and error rates over time; (10) determining the unambiguously multinational trials, and error rates over time; (11) identifying the actual trials in the categories Foreign, Indian and Multinational: a summary; (12) 55 interventional Indian cases with Phase listed as PMS, and error rates over time; (13) for the redefined Indian and Multinational sets, (i) cases of confusion between PMS and Phase 4 trials, (ii) cases where Type of Trial is BA/BE but Phase is 1–4, and (iii) sites of study with incorrect listing of cities, with error rates over time for some of these; (14) missing data, in terms of (i) cases where the name of the PI was missing, (ii) for the redefined Indian and Multinational sets, cases where Type of Study was not available but Phase of Trial was Phase 1, 1/2, 2, 2/3, 3, 3/4 or 4, (iii) cases where the name of the Primary Sponsor was missing, and (iv) cases where the state hosting a trial was not listed, with error rates over time for some of these; (15) examples of types of variations in PIs’ names, and examples, or the entire listing, of problems with ethics committees; and (16) a brief on the 47 trials conducted at the Malpani Multispeciality Hospital, Jaipur, Rajasthan. (ZIP 596 kb)
According to our latest research, the global Data-Independent Acquisition (DIA) Software market size reached USD 457 million in 2024, reflecting robust adoption across the life sciences and healthcare sectors. The market is expected to grow at a CAGR of 12.3% from 2025 to 2033, reaching a forecasted value of USD 1,287 million by 2033. This growth trajectory is primarily driven by the increasing demand for high-throughput, reproducible, and accurate proteomics and metabolomics data analysis, alongside the growing need for advanced clinical diagnostics and drug discovery solutions. The DIA software market continues to evolve rapidly, fueled by technological advancements and expanding applications across diverse end-user segments.
A primary growth factor for the Data-Independent Acquisition Software market is the escalating need for advanced data analysis tools in proteomics and metabolomics research. Traditional data-dependent acquisition (DDA) methods often suffer from issues such as limited reproducibility and missing values, which can hinder comprehensive biological insights. In contrast, DIA software offers a more systematic and unbiased approach to data collection, enabling researchers to capture a broader spectrum of analytes with higher reproducibility and sensitivity. This capability is particularly valuable in biomarker discovery and systems biology, where the integrity and completeness of data are critical for downstream analysis. The continuous innovation in DIA algorithms and user-friendly interfaces has further accelerated the adoption of these platforms, making them indispensable in modern omics research workflows.
Another significant driver is the expanding role of DIA software in clinical diagnostics and drug development. The pharmaceutical and biotechnology industries are increasingly leveraging DIA methodologies to enhance the accuracy and throughput of their analytical pipelines. This is particularly evident in drug discovery, where DIA facilitates the rapid screening of potential therapeutic targets and the identification of novel biomarkers. Additionally, regulatory agencies are emphasizing data quality and traceability, prompting clinical laboratories to adopt advanced DIA solutions that can deliver reliable and reproducible results. The integration of DIA software with laboratory information management systems (LIMS) and electronic health records (EHRs) is further streamlining clinical workflows, reducing manual intervention, and ensuring compliance with industry standards.
The proliferation of cloud-based deployment models has also contributed substantially to the growth of the DIA Software market. Cloud-based solutions offer several advantages, including scalability, remote accessibility, and cost-effectiveness, which are particularly appealing to academic and research institutes with limited IT infrastructure. The ability to process large datasets in real time and collaborate across geographies has democratized access to advanced proteomics and metabolomics analysis tools. Moreover, cloud platforms facilitate seamless integration with other bioinformatics software and databases, enhancing the overall efficiency and productivity of research teams. As data volumes continue to surge, cloud-based DIA solutions are expected to play an increasingly pivotal role in meeting the computational demands of next-generation omics research.
From a regional perspective, North America currently dominates the Data-Independent Acquisition Software market, driven by a strong presence of leading pharmaceutical companies, well-established healthcare infrastructure, and significant investments in life sciences research. Europe follows closely, supported by robust government funding and a thriving academic research community. The Asia Pacific region is emerging as a high-growth market, fueled by increasing R&D expenditure, expanding healthcare infrastructure, and a growing pool of skilled researchers. Latin America and the Middle East & Africa are witnessing steady growth, albeit from a smaller base, as local governments and private entities ramp up investments in healthcare and life sciences. The global landscape is characterized by a dynamic interplay of technological innovation, regulatory developments, and evolving end-user needs, all of which are shaping the future trajectory of the DIA software market.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
Importance: The efficacy of physical activity interventions among individuals with type 2 diabetes has been established; however, practical approaches to translate and extend these findings into community settings have not been well explored. Objective: To test the effectiveness of providing varying frequencies of weekly structured exercise sessions to improve diabetes control. Design: The IMPACT study was a randomized controlled clinical trial (randomization: October 2016 to April 2019) that included a 6-month, structured exercise intervention either once or thrice-weekly versus usual care (advice only). Statistical analysis was performed in 2022. Setting: The exercise intervention was conducted at community-based fitness centers. Follow-up visits were conducted in a university research clinic. Participants: Participants included 357 adults with type 2 diabetes (HbA1c 6.5-13.0%, not taking insulin, no precluding health issues). Interventions: 119 participants were randomized to the usual care (UC) group, 119 to the once-weekly structured exercise group, and 119 to the thrice-weekly structured exercise group. Main Outcomes and Measures: The primary outcome was HbA1c at 3 and 6 months. Results: 357 participants with mean age of 57.4 (SD 11.1) years and 40.1% females were randomized. No difference in HbA1c change was observed by study group in the intention-to-treat (ITT) analysis (P=.17). 54.6% of the once-weekly group and 48.7% of the thrice-weekly group were at least 50% adherent to the assigned structured exercise regimen. Per-protocol analysis (PP) showed HbA1c was lower by 0.35% (95% CI, 0.10% – 0.60%) at 3 months (P=.005) and by 0.38% (95% CI, 0.12% – 0.65%) at 6 months in the thrice-weekly group compared to UC (P=.005), with no statistically significant decrease in HbA1c in the once-weekly group. 
The exercise intervention was effective in improving self-reported MET min/week for participants in the thrice-weekly structured exercise program (both overall and in the PP analysis). Conclusions and Relevance: Only participants in the thrice-weekly structured exercise group who attended at least 50% of the sessions during the 6-month exercise intervention program improved HbA1c levels at 3 and 6 months. No statistically significant improvement in HbA1c was observed among participants in UC or the once-weekly structured exercise group, in either the ITT or PP analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Patient health information is collected routinely in electronic health records (EHRs) and used for research purposes; however, many health conditions are known to be under-diagnosed or under-recorded in EHRs. In research, missing diagnoses result in under-ascertainment of true cases, which attenuates estimated associations between variables and results in a bias toward the null. Bayesian approaches allow the specification of prior information in the model, such as the likely rates of missingness in the data. This paper describes a Bayesian analysis approach which aimed to reduce attenuation of associations in EHR studies focussed on conditions characterized by under-diagnosis.
Methods: Study 1: We created synthetic data, produced to mimic structured EHR data in which diagnoses were under-recorded. We fitted logistic regression (LR) models with and without Bayesian priors representing rates of misclassification in the data, and examined the LR parameters estimated by each. Study 2: We used EHR data from UK primary care in a case-control design with dementia as the outcome. We fitted LR models examining risk factors for dementia, with and without generic prior information on misclassification rates. We examined the LR parameters estimated by models with and without the priors, and estimated classification accuracy using the area under the receiver operating characteristic curve.
Results: Study 1: In synthetic data, estimates of LR parameters were much closer to the true parameter values when Bayesian priors were added to the model; with no priors, parameters were substantially attenuated by under-diagnosis. Study 2: The Bayesian approach ran well on real-life clinical data from UK primary care, with the addition of prior information increasing LR parameter values in all cases. In multivariable regression models, Bayesian methods showed no improvement in classification accuracy over traditional LR.
Conclusions: The Bayesian approach showed promise but had implementation challenges in real clinical data: prior information on rates of misclassification was difficult to find. Our simple model made a number of assumptions, such as diagnoses being missing at random. Further development is needed to integrate the method into studies using real-life EHR data. Our findings nevertheless highlight the importance of developing methods to address missing diagnoses in EHR data.
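The attenuation this paper targets is easy to reproduce in simulation. The sketch below (invented data; it demonstrates the problem, not the paper's Bayesian correction) fits ordinary logistic regression to a fully observed outcome and to an "EHR-recorded" version in which 40% of true cases are missed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-1.0 + 1.0 * x)))   # true log-odds slope = 1.0
y = rng.binomial(1, p)

# Under-diagnosis: 40% of true cases never receive a recorded diagnosis
recorded = y * rng.binomial(1, 0.6, n)

def fit_beta(outcome):
    """Slope from a near-unpenalized logistic regression of outcome on x."""
    model = LogisticRegression(C=1e6, max_iter=1000).fit(x.reshape(-1, 1), outcome)
    return float(model.coef_[0, 0])

print(round(fit_beta(y), 2), round(fit_beta(recorded), 2))  # second is attenuated
```

The recorded-outcome slope is biased toward the null, which is exactly the attenuation that informative priors on the misclassification rates aim to undo.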
Data collection was conducted across three workstreams. In workstream (WS) 1, 17 ICU clinicians/researchers and eight patient and public involvement (PPI) contributors with experience of working on ICU studies took part in semi-structured telephone interviews about the problems and potential solutions in recruitment and consent to ICU studies. This informed the development of the survey for WS2. In WS2, 1453 participants from 14 ICUs in England took part in the survey, which explored experiences and views of the ICU research recruitment and consent process. Forty-four surveys were either duplicates or had substantial missing data, so 1409 surveys were included in the analysis. Of these, 333 surveys were from ICU patients, 488 from family members (of whom 63 were bereaved) and 588 from healthcare practitioners. Thirty-five percent (115/333) of patient surveys and 32% (157/488) of family member surveys were from individuals who reported having been approached about research in the ICU, while 44% (260/588) of healthcare practitioner surveys were from those who indicated they had a role in research. For WS3, a purposive sample of 60 participants, 54 of whom had completed the WS2 survey, took part in semi-structured interviews to explore their survey responses and their wider perspectives on ICU research in more depth. This included 13 patients, 30 family members (of whom 4 were bereaved before completing the survey, and 5 were bereaved after they or another family member completed the survey), and 17 healthcare practitioners. Of interviewed patients and family members, 25 had been approached about a study while in the ICU. Of healthcare practitioners, 12 had research roles at the time of their interview (3 doctors, 7 research nurses and 2 pharmacists).
The six additional interviewees comprised four family members of surveyed patients who had visited the patient during their ICU stay, and two ICU patients whose family members had completed a survey. Although these six interviewees had not completed the WS2 survey, the protocol permitted interviews with such individuals if they had close ties to WS2 participants.
Clinical research in intensive care units (ICUs) is essential for improving treatments for critically ill patients. However, invitations to participate in clinical research in this situation pose numerous challenges. ICU studies frequently take place within a narrow time window, and patients will often be unconscious and unable to consent. Consent must, therefore, be sought from representatives or proxies of a patient, usually the patient's relatives. Conversations about research participation in this setting can be difficult, as relatives are often overwhelmed and some will feel uneasy about making decisions on behalf of their loved ones. In some circumstances, legislation allows doctors to act as representatives so that patients can be enrolled in research. Despite these and other distinctive practices in recruitment and consent to ICU research, prior to the Perspectives study there was little good-quality evidence and guidance on stakeholders’ perspectives to inform how recruitment and consent are carried out in ICU studies. Knowledge of stakeholder perspectives was needed to avoid basing recruitment and consent processes on presumptions about people’s experience of ICU research.
The Perspectives study explored the views of stakeholders with recent first-hand experience of ICU treatment and research to inform approaches to recruitment and consent. Established social science methods and empirical ethics were employed to balance the interests of the various stakeholders and justify recommendations. The findings were used to inform good practice guidance on recruitment and consent for future ICU studies. Researchers and an expert Advisory Group of key stakeholders (including patients, relatives, ICU doctors, nurses and research regulators) contributed throughout the process of developing the guidance, bringing different viewpoints to interpreting the evidence and informing the guidance.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Notes to table: Variables entered are listed in Tables S1 and S2. Categories were collapsed to avoid low numbers, as necessary. A backwards likelihood ratio criterion was used to select predictor variables. Reports of asthma and eczema were tested separately and together. Models were checked by two of us and found to be robust: removal of outliers made no overall difference. Not all participants responded to all questions; missing data in some variables reduced the number of cases in the analyses. We did not impute values.
1 Most adverse events related to symptoms typical of common problems in routine clinical practice. None were attributed to the trial intervention by the trial’s data monitoring committee [35].
2 Not all rashes had been diagnosed by a professional: carer’s report was the variable of interest.
3 Testing was not offered to 5 infants, who were excluded from this analysis.
NS represents ‘not significant’. * denotes reference category.
Background: With the onset of prevention trials for individuals at high risk for Alzheimer disease, there is increasing need for accurate risk prediction to inform study design and enrollment, but available risk estimates are limited. We developed estimates of the incidence of mild cognitive impairment (MCI) or dementia among cognitively unimpaired individuals by APOE-e4 dose for the genetic disclosure process of the Alzheimer’s Prevention Initiative Generation Study, a prevention trial in cognitively unimpaired APOE-e4/e4 homozygous individuals.
Methods and Findings: We included cognitively unimpaired individuals aged 60–75 y, consistent with Generation Study eligibility criteria, from the National Alzheimer’s Coordinating Center (NACC) (n = 5,073, 158 APOE-e4/e4), the Rotterdam Study (n = 6,399, 156 APOE-e4/e4), the Framingham Heart Study (n = 4,078, 67 APOE-e4/e4), and the Sacramento Area Latino Study on Aging (SALSA) (n = 1,294, 11 APOE-e4/e4). We computed stratified cumulative incidence curves by age (60–64, 65–69, 70–75 y) and APOE-e4 dose, adjusting for the competing risk of mortality, and determined risk of MCI and/or dementia by genotype and baseline age. We also used subdistribution hazard regression to model relative hazard based on age, APOE genotype, sex, education, family history of dementia, vascular risk, subjective memory concerns, and baseline cognitive performance. The four cohorts varied considerably in age, education, ethnicity/race, and APOE-e4 allele frequency. Overall, cumulative incidence was uniformly higher in NACC than in the population-based cohorts. Among APOE-e4/e4 individuals, 5-y cumulative incidence was as follows: in the 60–64-y age stratum, it ranged from 0% to 5.88% in the three population-based cohorts versus 23.06% in NACC; in the 65–69-y age stratum, from 9.42% to 10.39% versus 34.62%; and in the 70–75-y age stratum, from 18.64% to 33.33% versus 38.34%.
Five-year incidence of dementia was negligible except for APOE-e4/e4 individuals and those over 70 y. Lifetime incidence (to age 80–85 y) of MCI or dementia for APOE-e4/e4 individuals in the long-term Framingham and Rotterdam cohorts was 34.69%–38.45% at age 60–64 y, 30.76%–40.26% at 65–69 y, and 33.3%–35.17% at 70–75 y. Confidence limits for these estimates are often wide, particularly for APOE-e4/e4 individuals and for the dementia outcome at 5 y. In regression models, APOE-e4 dose and age both consistently increased risk, as did lower education, subjective memory concerns, poorer baseline cognitive performance, and family history of dementia. We discuss several limitations of the study, including the small numbers of APOE-e4/e4 individuals, missing data and differential dropout, limited ethnic and racial diversity, and differences in definitions of exposure and outcome variables.
Conclusions: Estimates of the absolute risk of MCI or dementia, particularly over short time intervals, are sensitive to sampling and a variety of methodological factors. Nonetheless, such estimates were fairly consistent across the population-based cohorts, and lower than those from a convenience cohort and those estimated in prior studies, with implications for informed consent and design of clinical trials targeting high-risk individuals.
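The competing-risk adjustment described above (cumulative incidence of MCI/dementia with death as a competing event) can be sketched with a small Aalen-Johansen-style estimator on simulated data; the hazards and sample below are invented, not estimates from the cohorts.

```python
import numpy as np

def cumulative_incidence(times, events, t_grid):
    """Aalen-Johansen-style cumulative incidence of event type 1 with a
    competing event type 2 (events: 0 = censored, 1 = MCI/dementia,
    2 = death before MCI/dementia).  Assumes untied event times."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk, surv, cuminc = len(times), 1.0, 0.0
    out, i = [], 0
    for t in t_grid:
        while i < len(times) and times[i] <= t:
            if events[i] == 1:
                cuminc += surv / at_risk      # S(t-) * hazard of event 1
            if events[i] != 0:
                surv *= 1.0 - 1.0 / at_risk   # any event ends event-free time
            at_risk -= 1
            i += 1
        out.append(cuminc)
    return np.array(out)

# Toy check against the closed-form truth for constant hazards
rng = np.random.default_rng(7)
n = 4000
t1 = rng.exponential(10, n)               # latent time to MCI/dementia
t2 = rng.exponential(20, n)               # latent time to death (competing)
t = np.minimum(np.minimum(t1, t2), 5.0)   # administrative censoring at 5 y
ev = np.where((t1 <= t2) & (t1 < 5.0), 1,
              np.where((t2 < t1) & (t2 < 5.0), 2, 0))
ci5 = cumulative_incidence(t, ev, np.array([5.0]))[0]
theory = (0.1 / 0.15) * (1 - np.exp(-0.15 * 5))   # ≈ 0.352
print(round(ci5, 3), round(theory, 3))
```

Note that 1 minus the Kaplan-Meier estimate that censors deaths would overstate this risk, which is why the competing-risk form is used in the paper.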
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
One method for demonstrating disease modification is a delayed-start design, consisting of a placebo-controlled period followed by a delayed-start period wherein all patients receive active treatment. To address methodological issues in previous delayed-start approaches, we propose a new method that is robust across conditions of drug effect, discontinuation rates, and missing data mechanisms. We propose a modeling approach and test procedure to test the hypothesis of noninferiority, comparing the treatment difference at the end of the delayed-start period with that at the end of the placebo-controlled period. We conducted simulations to identify the optimal noninferiority testing procedure to ensure the method was robust across scenarios and assumptions, and to evaluate the appropriate modeling approach for analyzing the delayed-start period. We then applied this methodology to Phase 3 solanezumab clinical trial data for mild Alzheimer’s disease patients. Simulation results showed a testing procedure using a proportional noninferiority margin was robust for detecting disease-modifying effects; conditions of high and moderate discontinuations; and with various missing data mechanisms. Using all data from all randomized patients in a single model over both the placebo-controlled and delayed-start study periods demonstrated good statistical performance. In analysis of solanezumab data using this methodology, the noninferiority criterion was met, indicating the treatment difference at the end of the placebo-controlled studies was preserved at the end of the delayed-start period within a pre-defined margin. The proposed noninferiority method for delayed-start analysis controls Type I error rate well and addresses many challenges posed by previous approaches. Delayed-start studies employing the proposed analysis approach could be used to provide evidence of a disease-modifying effect. 
This method has been communicated to the FDA and has been successfully applied to actual clinical trial data accrued from the Phase 3 solanezumab trials.
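The core of the proposed test can be sketched as a one-sided comparison of the delayed-start treatment difference against a proportional fraction of the placebo-controlled difference. All numbers below are hypothetical, and the margin fraction theta = 0.5 is purely illustrative; in practice the estimates and their covariance would come from the prespecified model fit to both study periods.

```python
import math

def proportional_ni_test(delta_pc, delta_ds, se_pc, se_ds, cov, theta=0.5):
    """One-sided noninferiority test with a proportional margin (sketch).

    H0: delta_ds <= theta * delta_pc  (effect not preserved)
    H1: delta_ds >  theta * delta_pc  (at least fraction theta preserved)

    delta_pc : estimated treatment difference, end of placebo-controlled period
    delta_ds : estimated treatment difference, end of delayed-start period
    cov      : covariance of the two estimates (the same patients contribute
               to both, so the estimates are correlated)
    """
    stat = delta_ds - theta * delta_pc
    var = se_ds**2 + (theta**2) * se_pc**2 - 2 * theta * cov
    z = stat / math.sqrt(var)
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)
    return z, p_one_sided

# Hypothetical estimates, for illustration only
z, p = proportional_ni_test(delta_pc=1.5, delta_ds=1.4,
                            se_pc=0.4, se_ds=0.5, cov=0.1, theta=0.5)
print(f"z = {z:.3f}, one-sided p = {p:.4f}")
```

Because the same patients contribute to both period estimates, a positive covariance term shrinks the variance of the contrast; ignoring it would make the test conservative in this sketch.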
Background: Schizophrenia is a leading cause of disability, and a shift from facility- to community-based care has been proposed to meet the resource challenges of mental healthcare in low- and middle-income countries. We hypothesized that the addition of mobile texting would improve schizophrenia care in a resource-poor community setting compared with a community-based free-medicine program alone. Methods and findings: In this 2-arm randomized controlled trial, 278 community-dwelling villagers (patient participants) were randomly selected from people with schizophrenia from 9 townships of Hunan, China, and were randomized 1:1 into 2 groups. The program participants were recruited between May 1, 2015, and August 31, 2015, and the intervention and follow-up took place between December 15, 2015, and July 1, 2016. Baseline characteristics of the 2 groups were similar. The patients were on average 46 years of age, had 7 years of education, had a duration of schizophrenia of 18 years with minimal to mild symptoms and nearly one-fifth loss of functioning, and were mostly living with family (95%) and had low incomes. Both the intervention and the control groups received a nationwide community-based mental health program that provided free antipsychotic medications. The patient participants in the intervention group also received LEAN (Lay health supporters, E-platform, Award, and iNtegration), a program that featured recruitment of a lay health supporter and text messages for medication reminders, health education, monitoring of early signs of relapses, and facilitated linkage to primary healthcare. The primary outcome was medication adherence (proportion of dosages taken) assessed by 2 unannounced home-based pill counts 30 days apart at the 6-month endpoint.
The secondary and other outcomes included patient symptoms, functioning, relapses, re-hospitalizations, death for any reason, wandering away without notifying anyone, violence against others, damaging goods, and suicide. Intent-to-treat analysis was used, and missing data were handled with multiple imputation. In total, 271 of 278 patient participants were successfully followed up for outcome assessment. Medication adherence was 0.48 in the control group and 0.61 in the intervention group (adjusted mean difference [AMD] 0.12 [95% CI 0.03 to 0.22]; p = 0.013; effect size 0.38). Among the secondary and other outcomes, we noted a substantial reduction in the risk of relapse (26 [21.7%] of 120 intervention participants versus 40 [34.2%] of 117 controls; relative risk 0.63 [95% CI 0.42 to 0.97]; number needed to treat [NNT] 8.0) and of re-hospitalization (9 [7.3%] of 123 intervention participants versus 25 [20.5%] of 122 controls; relative risk 0.36 [95% CI 0.17 to 0.73]; NNT 7.6). The program showed no statistically significant difference in any of the other outcomes. During the course of the program, 2 participants in the intervention group and 1 in the control group died. The limitations of the study include the lack of a full economic analysis, the lack of individual tailoring of the text messages, the relatively short 6-month follow-up, and the generalizability constraints of the Chinese context. Conclusions: The addition of texting to patients and their lay health supporters in a resource-poor community setting was more effective than a free-medicine program alone in improving medication adherence and reducing relapses and re-hospitalizations. Future studies may test the effectiveness of customizing the texts for individual patients. Trial registration: Chinese Clinical Trial Registry ChiCTR-ICR-15006053.
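The relapse and re-hospitalization effect sizes reported above can be reproduced directly from the event counts in the abstract. The sketch below recomputes the relative risks and numbers needed to treat; the reported 95% CIs would additionally require the standard error of the log relative risk.

```python
def rr_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """Relative risk and number needed to treat from 2x2 trial counts."""
    risk_tx = events_tx / n_tx
    risk_ctrl = events_ctrl / n_ctrl
    rr = risk_tx / risk_ctrl
    nnt = 1 / (risk_ctrl - risk_tx)  # inverse of the absolute risk reduction
    return rr, nnt

# Relapse: 26/120 intervention vs 40/117 control
rr_relapse, nnt_relapse = rr_nnt(26, 120, 40, 117)
# Re-hospitalization: 9/123 intervention vs 25/122 control
rr_rehosp, nnt_rehosp = rr_nnt(9, 123, 25, 122)
print(f"relapse: RR = {rr_relapse:.2f}, NNT = {nnt_relapse:.1f}")  # RR 0.63, NNT 8.0
print(f"re-hosp: RR = {rr_rehosp:.2f}, NNT = {nnt_rehosp:.1f}")    # RR 0.36, NNT 7.6
```

Both pairs match the published values, which is a quick consistency check on the abstract's arithmetic.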
Cultural beliefs, personal experiences, and historic abuses within the healthcare system—rooted in structural racism—all contribute to community distrust in science and medicine. This lack of trust, particularly within underserved communities, contributes to decreased participation in clinical trials and a lack of representation in the data. Open dialogue about community concerns and experiences related to research participation and medical care processes can help build trust and change attitudes and behaviors that affect community health. This protocol outlines an approach to increase trust in science and clinical trials among communities in the Bronx, New York that are typically underrepresented in research data. Bridging Research, Accurate Information and Dialogue (BRAID) is a two-phased, evidence-based community engagement model that creates safe spaces for bilateral dialogues between trusted community messengers, and clinicians and scientists. The team will conduct a series of BRAID Conversation Circles on the topic of clinical trials with local trusted community messengers. Participants will be members of the community who are perceived as “trusted messengers” and can represent the community’s voice because they have insight into “what matters” locally. Conversation Circles will be audiotaped, transcribed, and analyzed to identify emergent challenges and opportunities surrounding clinical trial participation. These key themes will subsequently inform the codesign and co-creation of tailored messages and outreach efforts that community participants can disseminate downstream to their social networks. Surveys will be administered to all participants before and after each Conversation Circle to understand participants’ experiences and to evaluate changes in knowledge and attitudes about clinical trials, including protections for research participants and the advantages of having diverse representation.
Changes in motivation and readiness to share accurate clinical trial information downstream will also be assessed. Lastly, we will measure participants’ dissemination of codesigned science messages through their social networks by tracking participant-specific resource URLs for materials and videos posted on a BRAID website. This protocol will assess the effectiveness and adoptability of an innovative community-based participatory research (CBPR) model that can be applied to a wide range of public health issues and has the potential to navigate the ever-changing needs of the communities that surround health systems.
Loss of power and the clear description of treatment differences are key issues in designing and analyzing a clinical trial where nonproportional hazards (NPH) are a possibility. A log-rank test may be inefficient, and interpretation of the hazard ratio estimated using Cox regression is potentially problematic. In this case, the current ICH E9 (R1) addendum would suggest designing a trial with a clinically relevant estimand, for example, expected life gain. This approach considers appropriate analysis methods for supporting the chosen estimand. However, such an approach is case-specific and may suffer from a lack of power for important choices of the underlying alternative hypothesis distribution. On the other hand, there may be a desire to have robust power under different deviations from proportional hazards. We would contend that no single number adequately describes the treatment effect under NPH scenarios. The cross-pharma working group has proposed a combination test to provide robust power under a variety of alternative hypotheses. The component tests can be specified for the primary analysis at the design stage, and methods that appropriately account for the correlations among the combination-test components are efficient across a variety of scenarios. We have provided design and analysis considerations based on a combination test under different NPH types and present a straw-man proposal for practitioners. The proposals are illustrated with a real-life example and simulations.
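One common instantiation of such a combination test is a "max-combo" over Fleming-Harrington weighted log-rank statistics. The sketch below is a minimal illustration of that idea only: the choice of weighting schemes is illustrative, and the actual proposal would also compute the p-value from the joint (correlated) distribution of the component statistics, which is omitted here.

```python
import math

def fh_logrank_z(times1, events1, times2, events2, rho=0.0, gamma=0.0):
    """Fleming-Harrington FH(rho, gamma) weighted log-rank z-statistic.

    Weights use the left-continuous pooled Kaplan-Meier estimate S(t-):
    w(t) = S(t-)**rho * (1 - S(t-))**gamma, so FH(0,0) is the standard
    log-rank, FH(0,1) emphasizes late differences, FH(1,0) early ones.
    Event indicator 1 = event, 0 = censored.
    """
    data = [(t, e, 1) for t, e in zip(times1, events1)] + \
           [(t, e, 2) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})

    num, var, surv = 0.0, 0.0, 1.0  # surv tracks pooled KM just before t
    for t in event_times:
        n1 = sum(1 for tt, _, g in data if tt >= t and g == 1)
        n2 = sum(1 for tt, _, g in data if tt >= t and g == 2)
        n = n1 + n2
        d1 = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 1)
        d = sum(1 for tt, e, _ in data if tt == t and e == 1)
        w = (surv ** rho) * ((1.0 - surv) ** gamma)
        num += w * (d1 - d * n1 / n)          # observed minus expected
        if n > 1:
            var += w * w * d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
        surv *= 1.0 - d / n                    # update pooled KM past t
    return num / math.sqrt(var)

def max_combo(times1, events1, times2, events2,
              weights=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Maximum absolute z over a set of FH weighting schemes."""
    return max(abs(fh_logrank_z(times1, events1, times2, events2, r, g))
               for r, g in weights)

# Tiny worked example: two patients per arm, all events observed
z00 = fh_logrank_z([1, 3], [1, 1], [2, 4], [1, 1])
print(f"standard log-rank z = {z00:.3f}")
```

Taking the maximum over several weightings is what buys robust power across early-separation, late-separation, and crossing-hazards alternatives; the price is the multiplicity adjustment that the full method handles via the correlation structure.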
A common problem in clinical trials is the missing data that occur when patients do not complete the study and drop out without further measurements. Missing data render the usual statistical analyses of complete or all available data subject to bias. There are no universally applicable methods for handling missing data. We recommend the following: (1) Report the reasons for dropout and the dropout proportions for each treatment group; (2) Conduct sensitivity analyses to encompass different scenarios of assumptions, and discuss the consistency or discrepancy among them; (3) Pay attention to minimizing the chance of dropout at the design stage and during trial monitoring; (4) Collect post-dropout data on the primary endpoints, if at all possible; and (5) Consider the dropout event itself an important endpoint in studies with many dropouts.
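A minimal illustration of recommendation (2) is a delta-adjustment ("tipping-point") sensitivity analysis: impute the dropouts' missing outcomes under progressively more pessimistic assumptions about the treatment arm and find the shift at which the treatment benefit disappears. All data and the imputation rule below are hypothetical — a sketch of the idea, not a prescribed method.

```python
import statistics

# Hypothetical endpoint data (higher = better); None marks a dropout.
treatment = [5.1, 4.8, None, 5.6, 4.9, None, 5.3, 5.0, 4.7, 5.4]
control   = [4.6, 4.4, 4.9, None, 4.5, 4.8, 4.3, None, 4.6, 4.7]

def delta_adjusted_diff(tx, ctrl, delta):
    """Mean treatment-control difference after delta-adjusted imputation:
    treatment dropouts are imputed at the observed treatment mean shifted
    down by `delta` (a pessimistic assumption about why they left), while
    control dropouts are imputed at the observed control mean."""
    tx_mean = statistics.mean(x for x in tx if x is not None)
    ctrl_mean = statistics.mean(x for x in ctrl if x is not None)
    tx_full = [x if x is not None else tx_mean - delta for x in tx]
    ctrl_full = [x if x is not None else ctrl_mean for x in ctrl]
    return statistics.mean(tx_full) - statistics.mean(ctrl_full)

# Tipping-point scan: how pessimistic must the assumption be to erase the effect?
for delta in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]:
    diff = delta_adjusted_diff(treatment, control, delta)
    print(f"delta = {delta:.1f} -> adjusted mean difference = {diff:.3f}")
```

With these made-up numbers the adjusted difference follows 0.5 − 0.2·delta (two of ten treatment values are imputed), so the benefit tips to zero at delta = 2.5; judging whether such a shift is clinically plausible is exactly the discussion recommendation (2) calls for.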