License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Dataset provided by: Björn Holzhauer
Dataset Description: Meta-analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time-to-event models are unavailable. Assuming identical drop-out time distributions across arms, random censorship and low proportions of patients with an event, a binomial approach results in a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared to time-to-event methods. To deal with differences in follow-up - at the cost of assuming specific distributions for event and drop-out times - we propose a hierarchical multivariate meta-analysis model using the aggregate data likelihood based on the number of cases, fatal cases and discontinuations in each group, as well as the planned trial duration and group sizes. Such a model also enables exchangeability assumptions about the parameters of survival distributions, for which such assumptions are more appropriate than for the expected proportion of patients with an event across trials of substantially different length. Borrowing information from other trials within a meta-analysis or from historical data is particularly useful for rare events data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using more flexible event and drop-out time distributions than the exponential one. We discuss the derivation of robust historical priors and illustrate the discussed methods using an example. We also compare the proposed approach against other aggregate data meta-analysis methods in a simulation study.
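For intuition, consider the simplest case the abstract mentions: exponential event times (rate lambda) and exponential drop-out times (rate mu) with planned follow-up T. Each patient then ends the trial in exactly one of three states (event, discontinuation, completion without event) with closed-form probabilities from competing exponential risks, and the aggregate counts in a group follow a multinomial distribution. The Python sketch below shows this likelihood building block only; it is a simplified reading of the abstract, not the authors' full hierarchical model, which additionally distinguishes fatal cases and places priors on the rates.

import numpy as np
from scipy.stats import multinomial

def cell_probs(event_rate, dropout_rate, duration):
    # Competing exponential risks: probability that a patient ends the
    # trial with an event, with a drop-out, or as an event-free completer.
    total = event_rate + dropout_rate
    p_any = 1.0 - np.exp(-total * duration)
    p_event = event_rate / total * p_any
    p_dropout = dropout_rate / total * p_any
    return np.array([p_event, p_dropout, 1.0 - p_event - p_dropout])

def group_loglik(counts, event_rate, dropout_rate, duration):
    # counts = (n_events, n_dropouts, n_completers) for one treatment group
    return multinomial.logpmf(counts, n=sum(counts),
                              p=cell_probs(event_rate, dropout_rate, duration))

# Example: 4 events and 10 discontinuations among 100 patients, 2-year trial
print(group_loglik([4, 10, 86], event_rate=0.02, dropout_rate=0.05, duration=2.0))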
Aggregate data for the PLOS ONE article "Beyond funding: Acknowledgement patterns in biomedical, natural and social sciences." DOI: 10.1371/journal.pone.0185578
Table 1. Explained and cumulative variance for each axis
Table 2. Relative contributions of the factor to the element for disciplines (expressed as a percentage)
Table 3. Number of papers indexed in WoS (all and with funding acknowledgements) and percentage of papers with funding acknowledgements, by discipline (2015)
For the purposes of the analysis presented in Figs 1 and 2, the dataset was partitioned by discipline and a Correspondence Analysis was applied to these subsets using a MATLAB program.
Fig 1. Bidimensional Correspondence Analysis for acknowledgement patterns by discipline (plane 1-2)
Fig 2. Bidimensional Correspondence Analysis for acknowledgement patterns by discipline (plane 3-4)
Supporting Information:
S1 Fig. Frequency distribution of noun phrases found in acknowledgements
S1 Table. Frequency of the 214 most frequent noun phrases, by discipline
S2 Table. Quality of representation of the rows (cumulative contribution for each NP)
S3 Table. Quality of representation of the columns (cumulative contribution for each discipline)
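For readers unfamiliar with the method, Correspondence Analysis can be computed from the singular value decomposition of the standardized residuals of a contingency table. The NumPy sketch below (an illustration, not the MATLAB program used in the paper) produces the per-axis explained variance of Table 1 and the row/column coordinates plotted in Figs 1 and 2:

import numpy as np

def correspondence_analysis(table):
    # table: contingency matrix, e.g. noun-phrase frequencies x disciplines
    table = np.asarray(table, dtype=float)
    P = table / table.sum()                              # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    explained = s**2 / (s**2).sum()                      # variance explained per axis
    row_coords = (U * s) / np.sqrt(r)[:, None]           # principal coordinates
    col_coords = (Vt.T * s) / np.sqrt(c)[:, None]
    return explained, row_coords, col_coords

Plotting the first two columns of row_coords and col_coords gives a plane 1-2 map in the spirit of Fig 1; columns 3 and 4 give plane 3-4.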
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Aggregated data from the SOM project. Here I have aggregated several of the indicators to country-years, and added contextual data from many other sources. For the original SOM data, refer to Joost Berkhout; Sudulich, Laura; Ruedin, Didier; Peintinger, Teresa; Meyer, Sarah; Vangoidsenhoven, Guido; Cunningham, Kevin; Ros, Virgina; Wunderlich, Daniel, 2013, "Political Claims Analysis: Support and Opposition to Migration", https://hdl.handle.net/1902.1/17967, Harvard Dataverse, V1, UNF:5:8Gnxt4ColWPEe52HFrHoeg== For an enhanced version: https://doi.org/10.7910/DVN/4FGJTH
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set consists of 3D scanned aggregate surface mesh files (.stl) arranged according to a particle size distribution curve. Aggregates were scanned with a 3D structured light scanner by placing them on a wooden turntable and capturing a sufficient number of scans while slowly rotating the turntable.
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
This artifact accompanies the SEET@ICSE article "Assessing the impact of hints in learning formal specification", which reports on a user study investigating the impact of different types of automated hints while learning a formal specification language, both in terms of immediate performance and learning retention, and in terms of the students' emotional response. This research artifact provides all the material required to replicate the study (except for the proprietary questionnaires used to assess the emotional response and user experience), as well as the collected data and the data analysis scripts used for the discussion in the paper.
The artifact contains the resources described below.
Experiment resources
The resources needed for replicating the experiment, namely in directory experiment:
alloy_sheet_pt.pdf: the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment. The sheet was provided in Portuguese due to the population of the experiment.
alloy_sheet_en.pdf: an English translation of the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment.
docker-compose.yml: a Docker Compose configuration file to launch Alloy4Fun populated with the tasks in directory data/experiment for the 2 sessions of the experiment.
api and meteor: directories with source files for building and launching the Alloy4Fun platform for the study.
Experiment data
The task database used in our application of the experiment, namely in directory data/experiment:
Model.json, Instance.json, and Link.json: JSON files used to populate Alloy4Fun with the tasks for the 2 sessions of the experiment.
identifiers.txt: the list of all 104 available identifiers for participants in the experiment.
Collected data
Data collected in the application of the experiment as a simple one-factor randomised experiment in 2 sessions involving 85 undergraduate students majoring in CSE. The experiment was validated by the Ethics Committee for Research in Social and Human Sciences of the Ethics Council of the University of Minho, where the experiment took place. Data is shared in the form of JSON and CSV files (the latter with a header row), namely in directory data/results:
data_sessions.json: data collected from task-solving in the 2 sessions of the experiment, used to calculate variables productivity (PROD1 and PROD2, between 0 and 12 solved tasks) and efficiency (EFF1 and EFF2, between 0 and 1).
data_socio.csv: data collected from the socio-demographic questionnaire in the 1st session of the experiment, namely:
participant identification: participant's unique identifier (ID);
socio-demographic information: participant's age (AGE), sex (SEX, 1 through 4 for female, male, prefer not to disclose, and other, respectively), and average academic grade (GRADE, from 0 to 20; NA denotes a preference not to disclose).
data_emo.csv: detailed data collected from the emotional questionnaire in the 2 sessions of the experiment, namely:
participant identification: participant's unique identifier (ID) and the assigned treatment (column HINT, either N, L, E or D);
detailed emotional response data: the differential in the 5-point Likert scale for each of the 14 measured emotions in the 2 sessions, ranging from -5 to -1 if decreased, 0 if maintained, from 1 to 5 if increased, or NA denoting failure to submit the questionnaire. Half of the emotions are positive (Admiration1 and Admiration2, Desire1 and Desire2, Hope1 and Hope2, Fascination1 and Fascination2, Joy1 and Joy2, Satisfaction1 and Satisfaction2, and Pride1 and Pride2), and half are negative (Anger1 and Anger2, Boredom1 and Boredom2, Contempt1 and Contempt2, Disgust1 and Disgust2, Fear1 and Fear2, Sadness1 and Sadness2, and Shame1 and Shame2). This detailed data was used to compute the aggregate data in data_emo_aggregate.csv and in the detailed discussion in Section 6 of the paper.
data_umux.csv: data collected from the user experience questionnaires in the 2 sessions of the experiment, namely:
participant identification: participant's unique identifier (ID);
user experience data: summarised user experience data from the UMUX surveys (UMUX1 and UMUX2, as a usability metric ranging from 0 to 100).
participants.txt: the list of participant identifiers that have registered for the experiment.
Analysis scripts
The analysis scripts required to replicate the analysis of the results of the experiment as reported in the paper, namely in directory analysis:
analysis.r: An R script to analyse the data in the provided CSV files; each performed analysis is documented within the file itself.
requirements.r: An R script to install the required libraries for the analysis script.
normalize_task.r: A Python script to normalize the task JSON data from file data_sessions.json into the CSV format required by the analysis script.
normalize_emo.r: A Python script to compute the aggregate emotional response in the CSV format required by the analysis script from the detailed emotional response data in the CSV format of data_emo.csv.
Dockerfile: a Docker script to automate running the analysis scripts on the collected data.
Setup
To replicate the experiment and the analysis of the results, only Docker is required.
If you wish to manually replicate the experiment and collect your own data, you'll need to install:
A modified version of the Alloy4Fun platform, which is built in the Meteor web framework. This version of Alloy4Fun is publicly available in branch study of its repository at https://github.com/haslab/Alloy4Fun/tree/study.
If you wish to manually replicate the analysis of the data collected in our experiment, you'll need to install:
Python to manipulate the JSON data collected in the experiment. Python is freely available for download at https://www.python.org/downloads/, with distributions for most platforms.
R software for the analysis scripts. R is freely available for download at https://cran.r-project.org/mirrors.html, with binary distributions available for Windows, Linux and Mac.
Usage
Experiment replication
This section describes how to replicate our user study experiment, and collect data about how different hints impact the performance of participants.
To launch the Alloy4Fun platform populated with tasks for each session, just run the following commands from the root directory of the artifact. The Meteor server may take a few minutes to launch, wait for the "Started your app" message to show.
cd experiment
docker-compose up
This will launch Alloy4Fun at http://localhost:3000. The tasks are accessed through permalinks assigned to each participant. The experiment allows for up to 104 participants, and the list of available identifiers is given in file identifiers.txt. The group of each participant is determined by the last character of the identifier, either N, L, E or D. The task database can be consulted in directory data/experiment, in Alloy4Fun JSON files.
In the 1st session, each participant was given one permalink that gives access to 12 sequential tasks. The permalink is simply the participant's identifier, so participant 0CAN would just access http://localhost:3000/0CAN. The next task is available after a correct submission to the current task or when a time-out occurs (5 minutes). Each participant was assigned to a different treatment group, so depending on the permalink different kinds of hints are provided. Below are 4 permalinks, one for each hint group:
Group N (no hints): http://localhost:3000/0CAN
Group L (error locations): http://localhost:3000/CA0L
Group E (counter-example): http://localhost:3000/350E
Group D (error description): http://localhost:3000/27AD
In the 2nd session, as in the 1st session, each permalink gave access to 12 sequential tasks, and the next task is available after a correct submission or a time-out (5 minutes). The permalink is constructed by prepending the participant's identifier with P-. So participant 0CAN would just access http://localhost:3000/P-0CAN. In the 2nd session all participants were expected to solve the tasks without any hints provided, so the permalinks from different groups are undifferentiated.
Before the 1st session the participants should answer the socio-demographic questionnaire, which should ask for the following information: unique identifier, age, sex, familiarity with the Alloy language, and average academic grade.
Before and after both sessions the participants should answer the standard PrEmo 2 questionnaire. PrEmo 2 is published under an Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons licence (CC BY-NC-ND 4.0). This means that you are free to use the tool for non-commercial purposes as long as you give appropriate credit, provide a link to the license, and do not modify the original material. The original material, namely the depictions of the different emotions, can be downloaded from https://diopd.org/premo/. The questionnaire should ask for the unique user identifier, and for the attachment with each of the 14 depicted emotions, expressed on a 5-point Likert scale.
After both sessions the participants should also answer the standard UMUX questionnaire. This questionnaire can be used freely, and should ask for the user's unique identifier and answers to the standard 4 questions on a 7-point Likert scale. For information about the questions, how to implement the questionnaire, and how to compute the usability metric (a score ranging from 0 to 100) from the answers, please see the original paper:
Kraig Finstad. 2010. The usability metric for user experience. Interacting with computers 22, 5 (2010), 323–327.
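For convenience, a minimal sketch of the UMUX scoring rule as described by Finstad (2010), assuming items 1 and 3 are the positively worded ones and items 2 and 4 the negatively worded ones:

def umux_score(q1, q2, q3, q4):
    # Responses on a 7-point Likert scale (1-7). Positive items are scored
    # as (response - 1), negative items as (7 - response); the raw sum out
    # of 24 is rescaled to a 0-100 usability metric.
    raw = (q1 - 1) + (7 - q2) + (q3 - 1) + (7 - q4)
    return raw / 24 * 100

print(umux_score(7, 1, 7, 1))  # best possible responses -> 100.0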
Analysis of other applications of the experiment
This section describes how to replicate the analysis of the data collected in an application of the experiment described in Experiment replication.
The analysis script expects data in 4 CSV files.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Advantages of using an individual patient data (IPD) approach rather than an aggregate data approach to systematic review and meta-analysis of RCTs.
Unlock insights with Echo's Activity data, offering views of locations based on visitor behavior. Enhance site selection, urban planning, and real estate with metrics like unique visitors and visits. Our high-quality, global data reveals movement patterns, updated daily and normalized monthly.
Two workbooks were constructed to log observation data records and perform preliminary analysis for the 2012-2013 and 2013-2014 seasons of the Elk and Bison Density study on the NER. The two workbooks were: A. Density Data Records for the 12-13 and 13-14 Seasons; B. Aggregate of All Observation Data.
Both of these workbooks are included as separate digital holdings, along with another digital holding that describes the contents and use of the workbooks.
License: CC0 1.0, https://spdx.org/licenses/CC0-1.0.html
Sharing research data provides benefit to the general scientific community, but the benefit is less obvious for the investigator who makes his or her data available. We examined the citation history of 85 cancer microarray clinical trial publications with respect to the availability of their data. The 48% of trials with publicly available microarray data received 85% of the aggregate citations. In a linear regression, publicly available data was significantly (p = 0.006) associated with a 69% increase in citations, independently of journal impact factor, date of publication, and author country of origin. This correlation between publicly available data and increased literature impact may further motivate investigators to share their detailed research data.
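The analysis described is a regression of (log-scaled) citation counts on a data-availability indicator plus covariates. The sketch below reproduces the shape of such a model on synthetic data with hypothetical column names (the authors' exact specification may differ); note that a coefficient b on the log scale corresponds to a 100*(exp(b)-1) percent change in citations:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 85
df = pd.DataFrame({
    "shared": rng.integers(0, 2, n),          # 1 if microarray data is public
    "impact_factor": rng.uniform(1, 30, n),
    "year": rng.integers(1999, 2004, n),
})
df["citations"] = rng.poisson(np.exp(1.0 + 0.5 * df["shared"]
                                     + 0.05 * df["impact_factor"]))

model = smf.ols("np.log(citations + 1) ~ shared + impact_factor + year",
                data=df).fit()
b = model.params["shared"]
print(f"{100 * (np.exp(b) - 1):.0f}% more citations when data are shared")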
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a summary of the aggregated data underlying the research study "Perceptions and attitudes of patients and healthcare workers towards the use of telemedicine in Botswana: A qualitative study."
License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Rank aggregation methods collect individual lists of ranked items from various sources that may represent users, preferences, products, suggestions, events, etc. Then, they merge all the input lists into a single aggregate list by applying a data fusion technique, and they rearrange the elements of the aggregate list to generate an improved consensus ranking.
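As a concrete example of such a data fusion step, here is a minimal Borda count aggregator, one classic technique; the papers listed below study more sophisticated weighted, unsupervised variants:

from collections import defaultdict

def borda_aggregate(rankings):
    # rankings: list of ranked item lists, best item first
    scores = defaultdict(float)
    for ranking in rankings:
        for position, item in enumerate(ranking):
            scores[item] += len(ranking) - position  # more points for top ranks
    return sorted(scores, key=scores.get, reverse=True)

print(borda_aggregate([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))
# -> ['a', 'b', 'c']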
This repository includes 6 synthetic datasets for rank aggregation with different numbers of rankers and different list sizes. Each dataset comprises two files.
These datasets have been used in the following research papers:
1. L. Akritidis, A. Fevgas, P. Bozanis, Y. Manolopoulos, "An Unsupervised Distance-Based Model for Weighted Rank Aggregation with List Pruning", Expert Systems with Applications, vol. 202, pp. 117435, 2022.
2. L. Akritidis, M. Alamaniotis, P. Bozanis, "FLAGR: A flexible high-performance library for rank aggregation", Elsevier SoftwareX, vol. 21, pp. 101319, 2023.
Authors who use these datasets in their research are requested to cite the above papers.
TwitterData from 14 of the 18 sites on the fractionation of soil into three aggregate categories and the amount of protected carbon within each category. Also included are earthworm numbers, both total and by ecotype. A separate dataset will have earthworm counts by species. Each site had six subplots that were sampled in three increments: the Oi and Oe horizons, 0-10 cm below the Oe, and 10-20 cm below the Oe. Soil aggregate analysis was done on the mineral soil below the Oe, i.e. the Oa was not included.
The size of the Cement and Aggregate market was valued at USD 204,170 million in 2023 and is projected to reach USD 244,356.25 million by 2032, with an expected CAGR of 2.6% during the forecast period.
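As a quick arithmetic check, CAGR is computed as (end/start)^(1/n) - 1. With these endpoints the quoted 2.6% corresponds to roughly a 7-year compounding horizon, so the forecast period is presumably shorter than the full 2023-2032 span (its exact definition is not stated here):

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"{100 * cagr(204170, 244356.25, 7):.2f}%")  # ~2.60%
print(f"{100 * cagr(204170, 244356.25, 9):.2f}%")  # ~2.02% over the full 2023-2032 span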
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Environmental Impact Statement: Risk analysis study of a marine aggregate proposal
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spreadsheet for calculation of a recurrence equation and analysis of fixed points in the illness-death model.
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
List of Subdatasets:
Long-term data: 2000-2021
5th percentile (p05) monthly time-series: one subdataset per year, 2000-2021
50th percentile (p50) monthly time-series: one subdataset per year, 2000-2021
95th percentile (p95) monthly time-series: one subdataset per year, 2000-2021
General Description
The monthly aggregated Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) dataset is derived from the 250m 8d GLASS V6 FAPAR product, which is in turn derived from Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance and LAI data, using several other FAPAR products (MODIS Collection 6, GLASS FAPAR V5, and PROBA-V1 FAPAR) to generate a bidirectional long short-term memory (Bi-LSTM) model that estimates FAPAR. The dataset spans March 2000 to December 2021 and covers the entire globe. It can be used in many applications, such as land degradation modeling, land productivity mapping, and land potential mapping.
The dataset includes:
Long-term: derived from the monthly time-series. This subdataset provides a linear trend model for the p95 variable: slope beta mean (p95.beta_m), p-value for beta (p95.beta_pv), intercept alpha mean (p95.alpha_m), p-value for alpha (p95.alpha_pv), and coefficient of determination R2 (p95.r2_m).
Monthly time-series: monthly aggregation with three standard statistics: 5th percentile (p05), median (p50), and 95th percentile (p95). For each month, we aggregate all composites within that month plus one composite each before and after, ending up with 5 to 6 composites for a single month depending on the number of images within that month.
Data Details
Time period: March 2000 to December 2021
Type of data: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)
How the data was collected or derived: derived from 250m 8d GLASS V6 FAPAR using Python running on a local HPC. The time-series analyses were computed using the scikit-map Python package.
Statistical methods used: for the long-term subdataset, Ordinary Least Squares (OLS) regression on the p95 monthly variable; for the monthly time-series, percentiles 05, 50, and 95.
Limitations or exclusions in the data: the dataset does not include data for Antarctica.
Coordinate reference system: EPSG:4326
Bounding box (Xmin, Ymin, Xmax, Ymax): (-180.00000, -62.0008094, 179.9999424, 87.37000)
Spatial resolution: 1/480 d.d. = 0.00208333 (250m)
Image size: 172,800 x 71,698
File format: Cloud Optimized GeoTIFF (COG)
Support
If you discover a bug, artifact, or inconsistency, or if you have a question, please raise a GitHub issue: https://github.com/Open-Earth-Monitor/Global_FAPAR_250m/issues
Reference
Hackländer, J., Parente, L., Ho, Y.-F., Hengl, T., Simoes, R., Consoli, D., Şahin, M., Tian, X., Herold, M., Jung, M., Duveiller, G., Weynants, M., Wheeler, I., (2023?) "Land potential assessment and trend-analysis using 2000–2021 FAPAR monthly time-series at 250 m spatial resolution", submitted to PeerJ, preprint available at: https://doi.org/10.21203/rs.3.rs-3415685/v1
Name convention
To ensure consistency and ease of use across and within projects, we follow the standard Open-Earth-Monitor file-naming convention. The convention uses 10 fields that describe important properties of the data.
In this way users can search files and prepare data analyses without needing to open the files. The fields are:
generic variable name: fapar = Fraction of Absorbed Photosynthetically Active Radiation
variable procedure combination: essd.lstm = Earth System Science Data with bidirectional long short-term memory (Bi-LSTM)
position in the probability distribution / variable type: p05/p50/p95 = 5th/50th/95th percentile
spatial support: 250m
depth reference: s = surface
time reference begin time: 20000301 = 2000-03-01
time reference end time: 20211231 = 2021-12-31
bounding box: go = global (without Antarctica)
EPSG code: epsg.4326 = EPSG:4326
version code: v20230628 = 2023-06-28 (creation date)
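The monthly aggregation rule described above (all 8-day composites within a month, plus one composite on each side, reduced to p05/p50/p95) can be sketched as follows; this is an illustration only, not the scikit-map production pipeline:

import numpy as np

def monthly_stats(composites):
    # composites: array of shape (k, height, width) holding the 8-day FAPAR
    # composites falling within a month plus one before and one after
    # (k = 5 or 6, per the dataset description)
    p05, p50, p95 = np.nanpercentile(composites, [5, 50, 95], axis=0)
    return p05, p50, p95

# Example on random values standing in for 6 composites of a 10x10 tile
p05, p50, p95 = monthly_stats(np.random.rand(6, 10, 10))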
Overview: Public health surveillance data are collected and reported voluntarily to CDC by U.S. states and territories through the National Notifiable Diseases Surveillance System (NNDSS) (https://www.cdc.gov/nndss/index.html). Data include demographic, clinical, and geographic information; data do not include direct identifiers. Two types of datasets of human Lyme disease case data collected through public health surveillance are available: one includes annual case counts aggregated by county of residence according to specific demographic variables, and one is line-listed with patient demographic factors, month of illness onset, and clinical presentation information but without corresponding geographic information. These privacy-protected datasets were implemented in accordance with methodology described in Lee et al. Protecting Privacy and Transforming COVID-19 Case Surveillance Datasets for Public Use. Public Health Rep. 2021 Sep-Oct;136(5):554-561. doi: 10.1177/00333549211026817.
Lyme disease became nationally notifiable in 1991. Different surveillance case definitions have been in effect over time; details are available here: https://ndc.services.cdc.gov/conditions/lyme-disease/. In 2008, a probable case definition was included in public health surveillance for the first time. In 2022, states with a high incidence of Lyme disease started reporting cases based on laboratory evidence alone without requirement for a clinical investigation, precluding comparison with historical data (for more information: https://www.cdc.gov/mmwr/volumes/73/wr/mm7306a1.htm?s_cid=mm7306a1_w). As such, Lyme disease surveillance data are grouped into separate datasets based on when these major changes occurred; data are provided for download separately for 1992–2007, 2008–2021, and 2022 to current. Data will be updated annually upon final verification of Lyme disease surveillance data by health departments.
Data Limitations: Surveillance data have significant limitations that must be considered in the analysis, interpretation, and reporting of results.
1. Under-reporting and misclassification are features common to all surveillance systems. Not every case of Lyme disease is reported to CDC, and some cases that are reported may reflect illness due to another cause.
2. Please note that before the 2022 surveillance case definition went into effect, several states with high Lyme disease incidence had initiated alternative methods of surveillance, and those data were not reportable to CDC.
3. Final case data are subject to each state's ability to capture and classify cases, which is dependent upon budget and personnel. This can vary not only between states, but also from year to year within a given state. Consequently, a sudden or marked change in reported cases does not necessarily represent a true change in disease incidence. Every effort should be made to construct analyses that limit overinterpretation of this variation (see the following reference for more context: Kugeler KJ, Eisen RJ. Challenges in Predicting Lyme Disease Risk. JAMA Netw Open. 2020 Mar 2;3(3):e200328. doi: 10.1001/jamanetworkopen.2020.0328).
License: custom license, https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.18419/DARUS-3706
This dataset contains the supplemental materials for our publication "De-emphasise, Aggregate, and Hide: A Study on Interactive Visual Transformations for Group Structures in Network Visualisations". The publication reports on an experiment that we conducted to explore the effects of different interactive visual transformations in network drawings on user performance. We evaluated five specific visual transformations and one control condition in five different tasks and collected data on user performance (time, accuracy), usefulness, mental effort, subjective preference, as well as some metrics of user interaction, such as usage of zoom and pan operations and the application of the visual transformations. Within these supplemental materials, we share the following:
network and task data
results data
analysis code
example images of the study
demonstration videos of the interface
participants demographic overview
The experiment was preregistered on OSF before it was conducted. The preregistration can be found at https://doi.org/10.17605/OSF.IO/TRBWD.
Background: Spatial data are often aggregated by area to protect the confidentiality of individuals and aid the calculation of pertinent risks and rates. However, the analysis of spatially aggregated data is susceptible to the modifiable areal unit problem (MAUP), which arises when inference varies with boundary or aggregation changes. While the impact of the MAUP has been examined previously, typically these studies have focused on well-populated areas. Understanding how the MAUP behaves when data are sparse is particularly important for countries with less populated areas, such as Australia. This study aims to assess different geographical regions' vulnerability to the MAUP when data are relatively sparse, to inform researchers' choice of aggregation level for fitting spatial models.
Methods: To understand the impact of the MAUP in Queensland, Australia, the present study investigates inference from simulated lung cancer incidence data using the five levels of spatial aggregation defined by the Australian Statistical Geography Standard. To this end, Bayesian spatial BYM models with and without covariates were fitted.
Results and conclusion: The MAUP impacted inference in the analysis of cancer counts for data aggregated to the coarsest areal structures. However, areal structures with moderate resolution were not greatly impacted by the MAUP, and offer advantages in terms of data sparsity, computational intensity and availability of data sets.
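For reference, a standard formulation of the BYM (Besag, York and Mollié) model used for such small-area counts is given below, with observed counts y_i, expected counts E_i, a spatially structured effect u and an unstructured effect v; the study's exact priors and covariates are not stated in this abstract:

\begin{aligned}
y_i &\sim \mathrm{Poisson}(E_i \rho_i), \qquad \log \rho_i = \beta_0 + \mathbf{x}_i^\top \boldsymbol{\beta} + u_i + v_i, \\
u_i \mid u_{j \neq i} &\sim \mathcal{N}\!\Big(\frac{1}{n_i} \sum_{j \sim i} u_j,\ \frac{\sigma_u^2}{n_i}\Big), \qquad v_i \sim \mathcal{N}(0, \sigma_v^2),
\end{aligned}

where j ~ i ranges over the areas adjacent to area i and n_i is the number of neighbours of area i. The MAUP enters through the adjacency structure: changing the aggregation level changes both the counts y_i and the neighbourhood graph.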
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the results of permeability tests on recycled materials, specifically fine recycled concrete aggregate mixed with rubber granulate from waste car tires (3 mixtures marked as M1_R, M3_R, and M3) and shredded tires (3 mixtures marked as RTWS1, RTWS2, and RTWS3). For permeability testing, two different pieces of equipment were used. The permeability coefficient of the three tested mixtures M3, M1_R, and M3_R was investigated using the Humboldt Flex Panels model “HM-4150”. The permeability coefficient of the shredded tires was investigated using a homemade apparatus. The dataset includes the results from variable-gradient tests under various consolidation pressures.