We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses, we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos. JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
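As a minimal illustration of the quantile regression step (this is not the authors' code; the dataset and variable names below are hypothetical placeholders), the 50th and 90th percentile models of a benthos metric against depth could be fit in SAS with PROC QUANTREG:

/* Hypothetical input: one row per sample with a benthos metric and depth */
proc quantreg data=benthos;
   model taxa_richness = depth / quantile=0.5 0.9;
   /* predictions at the 0.5 and 0.9 quantiles give the depth-specific
      poor/fair and reference cutoffs described above */
   output out=qr_cutoffs predicted=cutoff;
run;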
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute for Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017) affiliated with the University of Washington. We are volunteer collaborators with IHME and are not employed by IHME or the University of Washington.
The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.
These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis.
The data include the following:
1. Analysis database of population weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable, which includes over 100 types of noncommunicable diseases), and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer).
2. A text file to import the analysis database into SAS
3. The SAS code to format the analysis database to be used for analytics
4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
5. SAS code for deriving the multiple regression formula in Table 4.
6. SAS code for deriving the multiple regression formula in Table 5
7. SAS code for deriving the multiple regression formula in Supplementary Table 7
8. SAS code for deriving the multiple regression formula in Supplementary Table 8
9. The Excel files that accompanied the above SAS code to produce the tables
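As a rough sketch of the kind of population-weighted multiple regression described above (the outcome and risk factor variable names below are hypothetical placeholders, not the actual columns of AnalysisDatabaseGBD):

proc reg data=analysisdatabasegbd;
   /* hypothetical outcome and risk factor variables */
   model ncd_deaths_per_100k = mean_bmi smoking_prev sodium_g fruit_g / vif;
   weight cohort_population;   /* population weighting of cohorts */
run;
quit;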
For questions, please email davidkcundiff@gmail.com. Thanks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These SAS files are sample code used for the Monte Carlo studies in a manuscript on statistical properties of four effect size measures for the mediated effect. Citation: Miočević, M., O’Rourke, H. P., MacKinnon, D. P., & Brown, H. C. (2016). The bias and efficiency of five effect size measures for mediation models. Under review at Behavior Research Methods.
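A minimal single-replicate sketch of the kind of simulation involved (not the authors' code; the path coefficients and sample size are arbitrary): generate data from a single-mediator model, estimate the a and b paths, and form the mediated effect ab. The full study repeats this over many replicates and conditions and computes several effect size measures.

data sim;
   call streaminit(2016);
   a = 0.39; b = 0.59; cp = 0;              /* population path coefficients */
   do i = 1 to 200;
      x = rand('normal');
      m = a*x + rand('normal');
      y = cp*x + b*m + rand('normal');
      output;
   end;
run;

proc reg data=sim outest=path_a noprint;
   model m = x;                             /* a path */
run;
proc reg data=sim outest=path_b noprint;
   model y = x m;                           /* b and c' paths */
run;

data effect;
   merge path_a(keep=x rename=(x=a_hat))
         path_b(keep=m rename=(m=b_hat));
   ab = a_hat * b_hat;                      /* mediated effect estimate */
run;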
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
File List: NBvsPoi_FINAL.sas (SAS code); SSEAK98_FINAL.txt (harbor seal data used by the SAS code). Description: The NBvsPoi_FINAL SAS program uses a SAS macro to analyze the data in SSEAK98_FINAL.txt. The SAS program and macro are commented for further explanation.
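A minimal sketch of the comparison the program automates (not the macro itself; variable names are hypothetical placeholders for the harbor seal counts): fit the same count model under Poisson and negative binomial error structures with PROC GENMOD and compare fit statistics such as AIC and deviance/df.

proc genmod data=sseak98;
   model count = year / dist=poisson link=log;   /* Poisson fit */
run;

proc genmod data=sseak98;
   model count = year / dist=negbin link=log;    /* negative binomial fit */
run;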
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
File List: ECO101_sample_data.xls, ECO101_sample_data.txt, SAS_Code.rtf
Please note that ESA cannot guarantee the availability of Excel files in perpetuity, as Excel is proprietary software. Thus, the data file here is also supplied as a tab-delimited ASCII file, and the other Excel workbook sheets are provided below in the description section. Description: TABLE: please see the attached file.
This SAS program calculates a claims-based frailty index (CFI) for each patient from analytic data files containing information on patient identifiers, ICD-9-CM diagnosis codes (version 32), ICD-10-CM diagnosis codes (version 2020), CPT codes, and HCPCS codes. NOTE: When downloading, store "CFI_ICD9CM_V32.tab", "CFI_ICD10CM_V2020.tab", and "PX_CODES.tab" as csv files (these files are originally stored as csv files, but Dataverse automatically converts them to tab files). Please read "Frailty-Index-SAS-code-Guide" before proceeding. Interpretation, validation data, and annotated references are provided in "Research Background - Claims-Based Frailty Index".
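A hedged sketch of the general idea only (not the distributed program, whose variable and column names may differ): join each patient's diagnosis codes to the CFI lookup table and sum the associated weights into one index per patient.

proc sql;
   create table cfi as
   select d.patient_id,
          sum(w.weight) as cfi            /* hypothetical weight column */
   from dx_records as d
        inner join cfi_icd10cm_v2020 as w
        on d.icd10cm_code = w.code        /* hypothetical column names */
   group by d.patient_id;
quit;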
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Matching is frequently used in observational studies, especially in medical research. However, only a small number of articles with matching programs for the SAS software (SAS Institute Inc., Cary, NC, USA) are available, and even fewer are usable for inexperienced users of SAS software. This article presents a matching program for the SAS software and links to an online repository for examples and test data. The program enables matching on several variables and includes an in-depth explanation of the expressions used and how to customize the program. The selection of controls is randomized and automated, minimizing the risk of selection bias. Also, the program provides means for the researcher to test for incomplete matching.
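A simplified sketch of one way such matching can be done in SAS (this is not the published program; it matches with replacement, exactly on sex and age group, and all dataset and variable names are hypothetical):

proc sql;                               /* all eligible case-control pairs */
   create table pairs as
   select c.id as case_id, k.id as control_id
   from cases as c inner join controls as k
     on c.sex = k.sex and c.agegrp = k.agegrp;
quit;

data pairs;                             /* attach a random sort key */
   set pairs;
   if _n_ = 1 then call streaminit(12345);
   u = rand('uniform');
run;

proc sort data=pairs;
   by case_id u;
run;

data matched;                           /* keep one random control per case */
   set pairs;
   by case_id;
   if first.case_id;
run;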
File List: Code_and_Data_Supplement.zip (md5: dea8636b921f39c9d3fd269e44b6228c). Description: The supplementary material provided includes all code and data files necessary to replicate the simulation models and other demographic analyses presented in the paper. MATLAB code is provided for the simulations, and SAS code is provided to show how model parameters (vital rates) were estimated. The principal programs are Figure_3_4_5_Elasticity_Contours.m and Figure_6_Contours_Stochastic_Lambda.m, which perform the elasticity analyses and run the stochastic simulation, respectively. The files are presented in a zipped folder called Code_and_Data_Supplement. When uncompressed, users may run the MATLAB programs by opening them from within this directory. Subdirectories contain the data files and supporting MATLAB functions necessary to complete execution. The programs are written to find the necessary supporting functions in the Code_and_Data_Supplement directory. If users copy these MATLAB files to a different directory, they must add the Code_and_Data_Supplement directory and its subdirectories to their search path to make the supporting files available. More details are provided in the README.txt file included in the supplement. The file and directory structure of the entire zipped supplement is shown below.

Folder PATH listing:
Code_and_Data_Supplement
    Figure_3_4_5_Elasticity_Contours.m
    Figure_6_Contours_Stochastic_Lambda.m
    Figure_A1_RefitG2.m
    Figure_A2_PlotFecundityRegression.m
    README.txt
    FinalDataFiles\
    Make Tables\
        README.txt
        Table_lamANNUAL.csv
        Table_mgtProbPredicted.csv
    ParameterEstimation\
        Categorical Model output.xls
        Fecundity\
            Appendix_A3_Fecundity_Breakpoint.sas
            fec_Cat_Indiv.sas
            Mean_Fec_Previous_Study.m
        G1\
            G1_Cat.sas
        G2\
            G2_Cat.sas
        Model Ranking\
            Categorical Model Ranking.xls
        Seedlings\
            sdl_Cat.sas
        SS\
            SS_Cat.sas
        SumSrv\
            sum_Cat.sas
        WinSrv\
            modavg.m
            winCatModAvgfitted.m
            winCatModAvgLinP.m
            winCatModAvgMu.m
            win_Cat.sas
    ProcessedDatafiles\
        fecdat_gm_param_est_paper.mat
        hierarchical_parameters.mat
        refitG2_param_estimation.mat
    Required_Functions\
        hline.m
        hmstoc.m
        Jeffs_Figure_Settings.m
        Jeffs_startup.m
        newbootci.m
        sem.m
        senstuff.m
        vline.m
        export_fig\
            change_value.m
            eps2pdf.m
            export_fig.m
            fix_lines.m
            ghostscript.m
            license.txt
            pdf2eps.m
            pdftops.m
            print2array.m
            print2eps.m
        lowess\
            license.txt
            lowess.m
        Multiprod_2009\
            Appendix A - Algorithm.pdf
            Appendix B - Testing speed and memory usage.pdf
            Appendix C - Syntaxes.pdf
            license.txt
            loc2loc.m
            MULTIPROD Toolbox Manual.pdf
            multiprod.m
            multitransp.m
            Testing\
                arraylab13.m
                arraylab131.m
                arraylab132.m
                arraylab133.m
                genop.m
                multiprod13.m
                readme.txt
                sysrequirements_for_testing.m
                testing_memory_usage.m
                testMULTIPROD.m
                timing_arraylab_engines.m
                timing_matlab_commands.m
                timing_MX.m
                Data\
                    Memory used by MATLAB statements.xls
                    Timing results.xlsx
                    timing_MX.txt
        province\
            PROVINCE.DBF
            province.prj
            PROVINCE.SHP
            PROVINCE.SHX
            README.txt
        SubAxis\
            parseArgs.m
            subaxis.m
        suplabel\
            license.txt
            suplabel.m
            suplabel_test.m
        tight_subplot\
            license.txt
            tight_subplot.m
SAS code for generating heatmaps (“IBT Heat Maps”).
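A minimal sketch of producing a heatmap in SAS (not the distributed “IBT Heat Maps” program; dataset and variable names are hypothetical placeholders for pre-summarized cell values):

proc sgplot data=heat;
   heatmapparm x=timepoint y=subject_id colorresponse=response;   /* one cell per row */
run;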
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
SAS code that replicates the numbers and tables in the research article “Using a Deliberative Poll on breast cancer screening to assess and improve the decision quality of laypeople” by Manja D. Jensen, Kasper M. Hansen, Volkert Siersma, and John Brodersen.
The raw data for each of the analyses are presented: baseline severity difference (probands only) (Figure A in S1 Dataset); repeated measures analysis of change in lesion severity (Figure B in S1 Dataset); logistic regression of survivorship (Figure C in S1 Dataset); and time to cure (Figure D in S1 Dataset). Each data set is given as SAS code for the data itself, together with the equivalent analysis to that performed in JMP (and reported in the text). Data are presented in SAS format as this is a simple text format. The data and code were generated as direct exports from JMP, and additional SAS code was added as needed (for instance, JMP does not export code for post-hoc tests). Note, however, that SAS rounds to less precision than JMP and can give slightly different results, especially for REML methods. (DOCX)
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This code replicates the numbers for the tables and figures in the article "How video information on mammography screening affects the recommendations of laypeople: a randomised controlled trial" by Manja D. Jensen, Kasper M. Hansen, Volkert Siersma and John Brodersen.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT. Genotype-by-environment interaction refers to the differential response of different genotypes across different environments. It is a general phenomenon in all living organisms and has always been one of the main challenges for biologists and plant breeders. Nonparametric methods based on the ranks of the original data have been suggested as alternatives to parametric methods for analyzing data without the prerequisite assumptions needed for a common analysis of variance. However, the lack of statistical software or packages, especially for the analysis of two-way data, is one of the main reasons that plant breeders have not made greater use of nonparametric methods. Here, we explain the nonparametric methods and present a comprehensive two-part SAS program for calculation of four nonparametric statistical tests (Bredenkamp, Hildebrand, Kubinger and van der Laan-de Kroon) and all of the valid stability statistics, including Hühn's parameters (Si(1), Si(2), Si(3), Si(6)), Thennarasu's parameters (NPi(1), NPi(2), NPi(3), NPi(4)), Fox's ranking technique and Kang's rank-sum.
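A minimal sketch of the ranking step that underlies these statistics (not the two-part program itself; dataset and variable names are hypothetical): rank genotypes within each environment, then summarize the ranks per genotype, for example their variance, which is the basis of Hühn's Si(2).

proc sort data=yield_data;
   by env;
run;

proc rank data=yield_data out=ranked;
   by env;                 /* rank genotypes separately in each environment */
   var yield;
   ranks r;
run;

proc means data=ranked noprint;
   class genotype;
   var r;
   output out=rank_summary mean=mean_rank var=var_rank;
run;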
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The SAS code (Supplementary File 1) and R program code (Supplementary File 2). For the analysis to proceed, this code requires an input data file (Supplementary Files 3-5) prepared in CSV format; the underlying data can be stored in any format, such as xlsx, txt, or xls. Economic values are entered manually in the SAS code, whereas in the R code they are stored in an Excel file (Supplementary File 6).
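A minimal sketch of reading such a CSV input file into SAS (the file name is a placeholder for Supplementary Files 3-5):

proc import datafile="supplementary_file_3.csv"
     out=work.input_data
     dbms=csv
     replace;
     guessingrows=max;   /* scan all rows before assigning column types */
run;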
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
Sas is a dataset for object detection tasks - it contains Sasas annotations for 2,737 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [Public Domain license](https://creativecommons.org/licenses/Public Domain).
The SAS code used to fit a Poisson regression to detailed and aggregate data.
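A minimal sketch of a Poisson fit to aggregated counts with an exposure offset (dataset and variable names are hypothetical; the distributed code may be structured differently):

data agg;
   set agg_counts;
   log_exposure = log(exposure);   /* offset for aggregated exposure or person-time */
run;

proc genmod data=agg;
   class group;
   model events = group / dist=poisson link=log offset=log_exposure;
run;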
This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user. The other ones are just executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables which are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and by default also in csv format.
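A hedged illustration of the merging step only (not the distributed macros): combine the personal register (R) and personal data (P) files on the personal ID and flag deaths from RB110. The value coding is an assumption to be checked against the UDB documentation.

proc sort data=r_file; by rb030; run;
proc sort data=p_file; by pb030; run;

data silc;
   merge r_file(in=in_r)
         p_file(rename=(pb030=rb030));
   by rb030;
   if in_r;
   died = (rb110 = 6);   /* assumed code for "died"; verify in the UDB codebook */
run;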
This formatted dataset originates from raw data files from the Institute for Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017). It contains population-weighted worldwide data on male and female cohorts ages 15-69 years, including cardiovascular disease (CVD) early death and associated dietary, metabolic, and other risk factors. The purpose of creating this formatted database is to explore the univariate and multiple regression correlations of cardiovascular early deaths and other health outcomes with risk factors. Our research hypothesis is that we can successfully apply artificial intelligence to model cardiovascular disease outcomes with risk factors. We found that fat-soluble vitamin containing foods (animal products) and added fats are negatively correlated with CVD early deaths worldwide but positively correlated with CVD early deaths in high fat-soluble vitamin cohorts. We interpret this as showing that optimal cardiovascular outcomes come with moderate (not low and not high) intakes of animal foods and added fats. You are invited to download the dataset, the associated SAS code to access the dataset, and the tables that have resulted from the analysis. Please comment on the article by indicating what you found by exploring the dataset with the provided SAS code. Please say whether or not you found that the outputs from the SAS code accurately reflected the tables provided and the tables in the published article. If you use our data to reproduce our findings and comment on your findings on the medRxiv website (https://www.medrxiv.org/content/10.1101/2021.04.17.21255675v4) and would like to be recognized, we will be happy to list you as a contributor when the article is submitted to JAMA. For questions, please email davidkcundiff@gmail.com. Thanks.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The datasets and their legends, for an observational cohort study described by Fréchette et al., 2021. The SAS code used to conduct the analyses described in the article is also reported.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The sample SAS and Stata code provided here is intended for use with certain datasets in the National Neighborhood Data Archive (NaNDA). NaNDA (https://www.openicpsr.org/openicpsr/nanda) contains some datasets that measure neighborhood context at the ZIP Code Tabulation Area (ZCTA) level. They are intended for use with survey or other individual-level data containing ZIP codes. Because ZIP codes do not exactly match ZIP code tabulation areas, a crosswalk is required to use ZIP-code-level geocoded datasets with ZCTA-level datasets from NaNDA. A ZIP-code-to-ZCTA crosswalk was previously available on the UDS Mapper website, which is no longer active. An archived copy of the ZIP-code-to-ZCTA crosswalk file has been included here. Sample SAS and Stata code are provided for merging the UDS mapper crosswalk with NaNDA datasets.