87 datasets found
  1. SAS code used to analyze data and a datafile with metadata glossary

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Nov 12, 2020
    + more versions
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). SAS code used to analyze data and a datafile with metadata glossary [Dataset]. https://catalog.data.gov/dataset/sas-code-used-to-analyze-data-and-a-datafile-with-metadata-glossary
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos. 
JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
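    The classification step this dataset supports can be sketched in a few lines. The intercepts and slopes below are made-up illustrations, not the paper's fitted quantile-regression coefficients; the real per-zone tabulated predictions ship with the dataset.

```python
# Sketch of applying depth-adjusted condition-class cutoffs.
# The q50/q90 lines are HYPOTHETICAL linear quantile-regression predictions
# (metric = intercept + slope * depth); the dataset provides the real tables.
def classify_benthos(metric_value: float, depth_m: float) -> str:
    q50 = 30.0 - 1.5 * depth_m  # assumed 50th-percentile prediction (poor/fair cutoff)
    q90 = 45.0 - 1.5 * depth_m  # assumed 90th-percentile prediction (reference cutoff)
    if metric_value >= q90:
        return "reference"
    if metric_value >= q50:
        return "fair"
    return "poor"

print(classify_benthos(50.0, 2.0))  # above the 90th-percentile line → reference
```

    A sample at 2 m depth with a metric value between the 50th- and 90th-percentile predictions would come back "fair".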


    Global Burden of Disease analysis dataset of noncommunicable disease...

    • data.mendeley.com
    Updated Apr 6, 2023
    + more versions
    Cite
    David Cundiff (2023). Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.10
    Dataset updated
    Apr 6, 2023
    Authors
    David Cundiff
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute of Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017) affiliated with the University of Washington. We are volunteer collaborators with IHME and not employed by IHME or the University of Washington.

    The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.

    These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis. The data include the following:

    1. Analysis database of population-weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable, which includes over 100 types of noncommunicable diseases), and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc.)
    2. A text file to import the analysis database into SAS
    3. The SAS code to format the analysis database to be used for analytics
    4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
    5. SAS code for deriving the multiple regression formula in Table 4
    6. SAS code for deriving the multiple regression formula in Table 5
    7. SAS code for deriving the multiple regression formula in Supplementary Table 7
    8. SAS code for deriving the multiple regression formula in Supplementary Table 8
    9. The Excel files that accompanied the above SAS code to produce the tables

    For questions, please email davidkcundiff@gmail.com. Thanks.
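    The multiple-regression workflow this database supports can be sketched with ordinary least squares on toy data. The risk-factor names and coefficients below are assumptions for illustration, not GBD2017 values.

```python
import numpy as np

# Toy sketch of regressing a health outcome (deaths/100k/year) on risk
# factors. Variable names and true coefficients are invented.
rng = np.random.default_rng(0)
n = 200
bmi = rng.normal(27, 3, n)       # hypothetical risk factor 1
smoking = rng.normal(20, 5, n)   # hypothetical risk factor 2
deaths = 50 + 4.0 * bmi + 2.0 * smoking + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), bmi, smoking])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, deaths, rcond=None)
print(beta)  # slope estimates should land close to the true 4.0 and 2.0
```

    With cohort-level GBD data the same pattern applies, with deaths/100k as the outcome and the attributable risk factors as regressors.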

  3. Sample SAS code for the Monte Carlo Study

    • figshare.com
    Updated May 12, 2016
    Cite
    Milica Miocevic (2016). Sample SAS code for the Monte Carlo Study [Dataset]. http://doi.org/10.6084/m9.figshare.3376093.v1
    Dataset updated
    May 12, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Milica Miocevic
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These SAS files are sample code used for the Monte Carlo studies in a manuscript on statistical properties of four effect size measures for the mediated effect.

    Citation: Miočević, M., O’Rourke, H. P., MacKinnon, D. P., & Brown, H. C. (2016). The bias and efficiency of five effect size measures for mediation models. Under review at Behavior Research Methods.
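    A minimal sketch of the kind of Monte Carlo study such files implement, assuming a single-mediator model X → M → Y with made-up path values and sample size (the manuscript's actual designs live in the SAS files):

```python
import random
import statistics

# Monte Carlo for the mediated effect a*b in a single-mediator model.
# Paths a, b, sample size n, and replication count are assumptions.
random.seed(1)
a, b, n, reps = 0.5, 0.4, 100, 200

def ols_slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

ab_hats = []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    m = [a * xi + random.gauss(0, 1) for xi in x]   # mediator model
    y = [b * mi + random.gauss(0, 1) for mi in m]   # outcome model
    ab_hats.append(ols_slope(x, m) * ols_slope(m, y))

print(statistics.fmean(ab_hats))  # should be near the true a*b = 0.20
```

    Bias and efficiency of an effect size measure are then summarized from the distribution of the replicated estimates.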


    Supplement 1. SAS code and data set for obtaining the results described in...

    • wiley.figshare.com
    Updated May 31, 2023
    Cite
    Jay M. Ver Hoef; Peter L. Boveng (2023). Supplement 1. SAS code and data set for obtaining the results described in this paper. [Dataset]. http://doi.org/10.6084/m9.figshare.3528452.v1
    Available download formats: html
    Dataset updated
    May 31, 2023
    Dataset provided by
    Wiley
    Authors
    Jay M. Ver Hoef; Peter L. Boveng
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File List
    NBvsPoi_FINAL.sas -- SAS code
    SSEAK98_FINAL.txt -- Harbor seal data used by the SAS code

    Description
    The NBvsPoi_FINAL SAS program uses a SAS macro to analyze the data in SSEAK98_FINAL.txt. The SAS program and macro are commented for further explanation.


    Supplement 1. Sample data, metadata, and SAS code.

    • wiley.figshare.com
    Updated May 31, 2023
    Cite
    Everett Weber (2023). Supplement 1. Sample data, metadata, and SAS code. [Dataset]. http://doi.org/10.6084/m9.figshare.3521543.v1
    Available download formats: html
    Dataset updated
    May 31, 2023
    Dataset provided by
    Wiley
    Authors
    Everett Weber
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File List
    ECO101_sample_data.xls
    ECO101_sample_data.txt
    SAS_Code.rtf

    Please note that ESA cannot guarantee the availability of Excel files in perpetuity, since Excel is proprietary software. The data file is therefore also supplied as a tab-delimited ASCII file, and the other Excel workbook sheets are provided below in the description section. Description -- TABLE: please see the attached file.


    SAS Programs - Claims-Based Frailty Index

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Sep 25, 2024
    Cite
    Kim, Dae Hyun; Gautam, Nileesa (2024). SAS Programs - Claims-Based Frailty Index [Dataset]. http://doi.org/10.7910/DVN/HM8DOI
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Kim, Dae Hyun; Gautam, Nileesa
    Description

    This SAS program calculates CFI for each patient from analytic data files containing information on patient identifiers, ICD-9-CM diagnosis codes (version 32), ICD-10-CM Diagnosis Codes (version 2020), CPT codes, and HCPCS codes. NOTE: When downloading, store "CFI_ICD9CM_V32.tab", "CFI_ICD10CM_V2020.tab", and "PX_CODES.tab" as csv files (these files are originally stored as csv files, but Dataverse automatically converts them to tab files). Please read "Frailty-Index-SAS-code-Guide" before proceeding. Interpretation, validation data, and annotated references are provided in "Research Background - Claims-Based Frailty Index".
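    The general shape of a claims-based index calculation can be sketched as a weighted sum over a patient's diagnosis and procedure codes. The codes, weights, and intercept below are invented for illustration; the real CFI lookup tables are the .tab files described above.

```python
# Hypothetical deficit weights keyed by diagnosis code (NOT the real CFI
# weights, which live in CFI_ICD9CM_V32 / CFI_ICD10CM_V2020 / PX_CODES).
WEIGHTS = {"I50.9": 0.05, "E11.9": 0.03, "Z74.1": 0.08}

def frailty_index(patient_codes, weights=WEIGHTS, intercept=0.10):
    # Each distinct qualifying code contributes its weight once per patient.
    return intercept + sum(weights.get(code, 0.0) for code in set(patient_codes))

# Duplicate claims for the same code do not double-count:
print(round(frailty_index(["I50.9", "E11.9", "I50.9"]), 2))  # → 0.18
```

    A production version would join the claims file against the weight tables by code system and version, which is what the distributed SAS program does.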


    fdata-02-00004_Matching Cases and Controls Using SAS® Software.pdf

    • frontiersin.figshare.com
    Updated Jun 4, 2023
    + more versions
    Cite
    Laura Quitzau Mortensen; Kristoffer Andresen; Jakob Burcharth; Hans-Christian Pommergaard; Jacob Rosenberg (2023). fdata-02-00004_Matching Cases and Controls Using SAS® Software.pdf [Dataset]. http://doi.org/10.3389/fdata.2019.00004.s001
    Available download formats: pdf
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Frontiers
    Authors
    Laura Quitzau Mortensen; Kristoffer Andresen; Jakob Burcharth; Hans-Christian Pommergaard; Jacob Rosenberg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Matching is frequently used in observational studies, especially in medical research. However, only a small number of articles with matching programs for the SAS software (SAS Institute Inc., Cary, NC, USA) are available, and even fewer are usable by inexperienced users of SAS software. This article presents a matching program for the SAS software and links to an online repository with examples and test data. The program enables matching on several variables and includes an in-depth explanation of the expressions used and of how to customize the program. The selection of controls is randomized and automated, minimizing the risk of selection bias. The program also provides means for the researcher to test for incomplete matching.
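    The randomized, automated selection of exactly matched controls can be sketched in Python (an illustrative analogue, not the article's SAS program; field names are made up):

```python
import random
from collections import defaultdict

def match_controls(cases, controls, keys, ratio=1, seed=42):
    """Exact matching on `keys`, controls sampled without replacement."""
    rng = random.Random(seed)  # fixed seed -> reproducible selection
    pool = defaultdict(list)
    for c in controls:
        pool[tuple(c[k] for k in keys)].append(c)
    matched, incomplete = {}, []
    for case in cases:
        stratum = pool[tuple(case[k] for k in keys)]
        rng.shuffle(stratum)            # randomized, automated selection
        take = stratum[:ratio]
        del stratum[:ratio]             # consume chosen controls
        matched[case["id"]] = take
        if len(take) < ratio:
            incomplete.append(case["id"])  # supports a test for incomplete matching
    return matched, incomplete

cases = [{"id": 1, "sex": "F", "ageband": 50}]
controls = [{"id": 10, "sex": "F", "ageband": 50},
            {"id": 11, "sex": "M", "ageband": 50}]
matched, incomplete = match_controls(cases, controls, keys=["sex", "ageband"])
print(matched, incomplete)
```

    Only the sex-and-ageband-concordant control is eligible here, so case 1 is matched to control 10 and the incomplete-matching list stays empty.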


    Supplement 1. MATLAB and SAS code necessary to replicate the simulation...

    • datasetcatalog.nlm.nih.gov
    • wiley.figshare.com
    Updated Aug 4, 2016
    Cite
    Davis, Adam S.; Landis, Douglas A.; Schemske, Douglas W.; Raghu, S.; Evans, Jeffrey A.; Ragavendran, Ashok (2016). Supplement 1. MATLAB and SAS code necessary to replicate the simulation models and other demographic analyses presented in the paper. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001528932
    Dataset updated
    Aug 4, 2016
    Authors
    Davis, Adam S.; Landis, Douglas A.; Schemske, Douglas W.; Raghu, S.; Evans, Jeffrey A.; Ragavendran, Ashok
    Description

    File List
    Code_and_Data_Supplement.zip (md5: dea8636b921f39c9d3fd269e44b6228c)

    Description
    The supplementary material provided includes all code and data files necessary to replicate the simulation models and other demographic analyses presented in the paper. MATLAB code is provided for the simulations, and SAS code is provided to show how model parameters (vital rates) were estimated. The principal programs are Figure_3_4_5_Elasticity_Contours.m and Figure_6_Contours_Stochastic_Lambda.m, which perform the elasticity analyses and run the stochastic simulation, respectively. The files are presented in a zipped folder called Code_and_Data_Supplement. When uncompressed, users may run the MATLAB programs by opening them from within this directory. Subdirectories contain the data files and supporting MATLAB functions necessary to complete execution. The programs are written to find the necessary supporting functions in the Code_and_Data_Supplement directory. If users copy these MATLAB files to a different directory, they must add the Code_and_Data_Supplement directory and its subdirectories to their search path to make the supporting files available. More details are provided in the README.txt file included in the supplement. The file and directory structure of the entire zipped supplement is shown below.
    Folder PATH listing:

    Code_and_Data_Supplement
    |   Figure_3_4_5_Elasticity_Contours.m
    |   Figure_6_Contours_Stochastic_Lambda.m
    |   Figure_A1_RefitG2.m
    |   Figure_A2_PlotFecundityRegression.m
    |   README.txt
    +---FinalDataFiles
    +---Make Tables
    |       README.txt
    |       Table_lamANNUAL.csv
    |       Table_mgtProbPredicted.csv
    +---ParameterEstimation
    |   |   Categorical Model output.xls
    |   +---Fecundity
    |   |       Appendix_A3_Fecundity_Breakpoint.sas
    |   |       fec_Cat_Indiv.sas
    |   |       Mean_Fec_Previous_Study.m
    |   +---G1
    |   |       G1_Cat.sas
    |   +---G2
    |   |       G2_Cat.sas
    |   +---Model Ranking
    |   |       Categorical Model Ranking.xls
    |   +---Seedlings
    |   |       sdl_Cat.sas
    |   +---SS
    |   |       SS_Cat.sas
    |   +---SumSrv
    |   |       sum_Cat.sas
    |   \---WinSrv
    |           modavg.m
    |           winCatModAvgfitted.m
    |           winCatModAvgLinP.m
    |           winCatModAvgMu.m
    |           win_Cat.sas
    +---ProcessedDatafiles
    |       fecdat_gm_param_est_paper.mat
    |       hierarchical_parameters.mat
    |       refitG2_param_estimation.mat
    \---Required_Functions
        |   hline.m
        |   hmstoc.m
        |   Jeffs_Figure_Settings.m
        |   Jeffs_startup.m
        |   newbootci.m
        |   sem.m
        |   senstuff.m
        |   vline.m
        +---export_fig
        |       change_value.m
        |       eps2pdf.m
        |       export_fig.m
        |       fix_lines.m
        |       ghostscript.m
        |       license.txt
        |       pdf2eps.m
        |       pdftops.m
        |       print2array.m
        |       print2eps.m
        +---lowess
        |       license.txt
        |       lowess.m
        +---Multiprod_2009
        |   |   Appendix A - Algorithm.pdf
        |   |   Appendix B - Testing speed and memory usage.pdf
        |   |   Appendix C - Syntaxes.pdf
        |   |   license.txt
        |   |   loc2loc.m
        |   |   MULTIPROD Toolbox Manual.pdf
        |   |   multiprod.m
        |   |   multitransp.m
        |   \---Testing
        |       |   arraylab13.m
        |       |   arraylab131.m
        |       |   arraylab132.m
        |       |   arraylab133.m
        |       |   genop.m
        |       |   multiprod13.m
        |       |   readme.txt
        |       |   sysrequirements_for_testing.m
        |       |   testing_memory_usage.m
        |       |   testMULTIPROD.m
        |       |   timing_arraylab_engines.m
        |       |   timing_matlab_commands.m
        |       |   timing_MX.m
        |       \---Data
        |               Memory used by MATLAB statements.xls
        |               Timing results.xlsx
        |               timing_MX.txt
        +---province
        |       PROVINCE.DBF
        |       province.prj
        |       PROVINCE.SHP
        |       PROVINCE.SHX
        |       README.txt
        +---SubAxis
        |       parseArgs.m
        |       subaxis.m
        +---suplabel
        |       license.txt
        |       suplabel.m
        |       suplabel_test.m
        \---tight_subplot
                license.txt
                tight_subplot.m


    SAS codes for generating heatmaps “IBT Heat Maps”.

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Mar 24, 2023
    Cite
    Ozoor, Mounira; Holtrop, Jodi Summers; Gritz, Mark; Dolor, Rowena J.; Luo, Zhehui (2023). SAS codes for generating heatmaps “IBT Heat Maps”. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000987950
    Dataset updated
    Mar 24, 2023
    Authors
    Ozoor, Mounira; Holtrop, Jodi Summers; Gritz, Mark; Dolor, Rowena J.; Luo, Zhehui
    Description

    SAS codes for generating heatmaps “IBT Heat Maps”.


    SAS code

    • dataverse.harvard.edu
    Updated Oct 1, 2021
    + more versions
    Cite
    Manja Jensen (2021). SAS code [Dataset]. http://doi.org/10.7910/DVN/ZQHO5W
    Dataset updated
    Oct 1, 2021
    Dataset provided by
    Harvard Dataverse
    Authors
    Manja Jensen
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This SAS code replicates the numbers and tables in the research article “Using a Deliberative Poll on breast cancer screening to assess and improve the decision quality of laypeople” by Manja D. Jensen, Kasper M. Hansen, Volkert Siersma, and John Brodersen.


    SAS scripts for supplementary data.

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Jul 13, 2015
    Cite
    Geronimo, Jerome T.; Fletcher, Craig A.; Bellinger, Dwight A.; Whitaker, Julia; Vieira, Giovana; Garner, Joseph P.; George, Nneka M. (2015). SAS scripts for supplementary data. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001869731
    Dataset updated
    Jul 13, 2015
    Authors
    Geronimo, Jerome T.; Fletcher, Craig A.; Bellinger, Dwight A.; Whitaker, Julia; Vieira, Giovana; Garner, Joseph P.; George, Nneka M.
    Description

    The raw data for each of the analyses are presented: baseline severity difference (probands only; Figure A in S1 Dataset), repeated-measures analysis of change in lesion severity (Figure B in S1 Dataset), logistic regression of survivorship (Figure C in S1 Dataset), and time to cure (Figure D in S1 Dataset). Each dataset is given as SAS code for the data itself, together with the equivalent analysis to that performed in JMP (and reported in the text). Data are presented in SAS format because it is a simple text format. The data and code were generated as direct exports from JMP, with additional SAS code added as needed (for instance, JMP does not export code for post-hoc tests). Note, however, that SAS rounds to less precision than JMP and can give slightly different results, especially for REML methods. (DOCX)


    SAS code

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Feb 24, 2022
    Cite
    Manja Jensen (2022). SAS code [Dataset]. http://doi.org/10.7910/DVN/NWXK9G
    Dataset updated
    Feb 24, 2022
    Dataset provided by
    Harvard Dataverse
    Authors
    Manja Jensen
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This code replicates the numbers for the tables and figures in the article "How video information on mammography screening affects the recommendations of laypeople: a randomised controlled trial" by Manja D. Jensen, Kasper M. Hansen, Volkert Siersma and John Brodersen.

  13. Data from: A SAS macro for computing statistical tests for two-way table and...

    • scielo.figshare.com
    Updated Jun 2, 2023
    Cite
    Omid Ali Akbarpour; Hamid Dehghani; Bezad Sorkhi-Lalelo; Manjit Singh Kang (2023). A SAS macro for computing statistical tests for two-way table and stability indices of nonparametric method from genotype-by-environment interaction [Dataset]. http://doi.org/10.6084/m9.figshare.20012572.v1
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Omid Ali Akbarpour; Hamid Dehghani; Bezad Sorkhi-Lalelo; Manjit Singh Kang
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT. Genotype-by-environment interaction refers to the differential response of different genotypes across different environments. This is a general phenomenon in all living organisms and has always been one of the main challenges for biologists and plant breeders. Nonparametric methods based on the ranks of the original data have been suggested as alternatives to parametric methods for analyzing data without the prerequisite assumptions needed for a common analysis of variance. However, the lack of statistical software or packages, especially for the analysis of two-way data, is one of the main reasons that plant breeders have not made greater use of nonparametric methods. Here, we explain the nonparametric methods and present a comprehensive two-part SAS program for calculating four nonparametric statistical tests (Bredenkamp, Hildebrand, Kubinger and van der Laan-de Kroon) and all of the valid stability statistics, including Hühn's parameters (Si(1), Si(2), Si(3), Si(6)), Thennarasu's parameters (NPi(1), NPi(2), NPi(3), NPi(4)), Fox's ranking technique and Kang's rank-sum.
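    As one concrete example of the statistics listed above, Hühn's Si(1) for a genotype is the mean absolute difference between its ranks over all pairs of environments. A sketch, assuming the genotype ranks per environment are already computed:

```python
from itertools import combinations

def huhn_s1(ranks):
    """Hühn's S(1): mean |rank difference| over all environment pairs."""
    pairs = list(combinations(ranks, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# A genotype ranked 1st, 2nd, and 3rd across three environments:
print(huhn_s1([1, 2, 3]))  # (|1-2| + |1-3| + |2-3|) / 3 = 4/3
```

    A perfectly stable genotype holds the same rank everywhere and scores 0; larger values mean less rank stability.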


    The simple and new SAS and R codes to estimate optimum and base selection...

    • ebi.ac.uk
    Updated Jun 10, 2022
    Cite
    mehdi rahimi (2022). The simple and new SAS and R codes to estimate optimum and base selection indices to choice superior genotypes in plants and animals breeding program [Dataset]. https://www.ebi.ac.uk/biostudies/studies/S-BSST853
    Dataset updated
    Jun 10, 2022
    Authors
    mehdi rahimi
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The SAS code (Supplementary File 1) and the R program code (Supplementary File 2). For the analysis to proceed, this code requires an input data file (Supplementary Files 3-5) prepared in Excel CSV format; the data can otherwise be stored in formats such as xlsx, txt, or xls. Economic values are entered manually in the SAS code, whereas in the R code they are read from an Excel file (Supplementary File 6).
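    An optimum (Smith-Hazel) selection index solves b = P⁻¹Ga for the index coefficients; a sketch with toy phenotypic (P) and genotypic (G) covariance matrices and economic values a, where every number is an assumption for illustration:

```python
import numpy as np

# Toy two-trait example of an optimum selection index (all values assumed).
P = np.array([[2.0, 0.5],
              [0.5, 1.5]])   # phenotypic (co)variances
G = np.array([[1.0, 0.3],
              [0.3, 0.8]])   # genotypic (co)variances
a = np.array([1.0, 2.0])     # economic values per trait

b = np.linalg.solve(P, G @ a)  # index coefficients, i.e. solves P b = G a

# Rank candidate genotypes by their index score (rows = phenotype records):
phenotypes = np.array([[3.0, 2.0],
                       [1.0, 4.0]])
index_scores = phenotypes @ b
print(b, index_scores)
```

    Candidates are then selected in descending order of index score; a base index would instead use the economic values directly as weights.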


    Sas Dataset

    • universe.roboflow.com
    Updated Mar 26, 2025
    + more versions
    Cite
    new-workspace-g5k1w (2025). Sas Dataset [Dataset]. https://universe.roboflow.com/new-workspace-g5k1w/sas-03lmz/dataset/1
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    new-workspace-g5k1w
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Sasas Bounding Boxes
    Description

    Sas

    ## Overview
    
    Sas is a dataset for object detection tasks - it contains Sasas annotations for 2,737 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [Public Domain (CC0 1.0) license](https://creativecommons.org/publicdomain/zero/1.0/).
    

    Appendix B. The SAS code used to fit a Poisson regression to detailed and...

    • datasetcatalog.nlm.nih.gov
    • wiley.figshare.com
    Updated Aug 5, 2016
    Cite
    Dixon, Philip M.; Ellison, Aaron M.; Gotelli, Nicholas J. (2016). Appendix B. The SAS code used to fit a Poisson regression to detailed and aggregate data. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001526867
    Dataset updated
    Aug 5, 2016
    Authors
    Dixon, Philip M.; Ellison, Aaron M.; Gotelli, Nicholas J.
    Description

    The SAS code used to fit a Poisson regression to detailed and aggregate data.


    Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses....

    • demo-b2find.dkrz.de
    Updated Sep 22, 2025
    + more versions
    Cite
    (2025). Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation. - Dataset - B2FIND [Dataset]. http://demo-b2find.dkrz.de/dataset/da423f51-0a3c-540f-8ee8-830d0c9e9ef0
    Dataset updated
    Sep 22, 2025
    Description

    This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user. The other ones are just executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables which are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and by default also in csv format.
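    The time-at-risk step described above can be sketched by placing each quarterly date at its quarter midpoint and differencing; the midpoint rule and the field names here are illustrative assumptions, not the UDB target variables or the macro's exact estimator.

```python
# With only (year, quarter) known, approximate each date by the quarter's
# midpoint and measure years from first interview to death or censoring.
def quarter_midpoint_in_years(year: int, quarter: int) -> float:
    return year + (quarter - 0.5) / 4.0

def time_at_risk(first_interview, end):
    """Both arguments are (year, quarter) tuples; returns years at risk."""
    return quarter_midpoint_in_years(*end) - quarter_midpoint_in_years(*first_interview)

# First interview in 2010 Q1, death/censoring in 2014 Q3:
print(time_at_risk((2010, 1), (2014, 3)))  # → 4.5
```

    The merged D/R/H/P person records would each carry such a duration alongside vital status and the analytical variables measured at baseline or last interview.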


    Global Burden of Disease analysis dataset of cardiovascular disease...

    • narcis.nl
    • data.mendeley.com
    Updated Jun 23, 2021
    + more versions
    Cite
    Cundiff, D (via Mendeley Data) (2021). Global Burden of Disease analysis dataset of cardiovascular disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.4
    Dataset updated
    Jun 23, 2021
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Cundiff, D (via Mendeley Data)
    Description

    This formatted dataset originates from raw data files from the Institute of Health Metrics and Evaluation Global Burden of Disease study (GBD2017). It is population-weighted worldwide data on male and female cohorts ages 15-69 years, including cardiovascular disease early death and associated dietary, metabolic, and other risk factors. The purpose of creating this formatted database is to explore the univariate and multiple regression correlations of cardiovascular early deaths and other health outcomes with risk factors. Our research hypothesis is that we can successfully apply artificial intelligence to model cardiovascular disease outcomes with risk factors. We found that fat-soluble vitamin containing foods (animal products) and added fats are negatively correlated with CVD early deaths worldwide but positively correlated with CVD early deaths in high fat-soluble vitamin cohorts. We interpret this as showing that optimal cardiovascular outcomes come with moderate (not low and not high) intakes of animal foods and added fats.

    You are invited to download the dataset, the associated SAS code to access the dataset, and the tables that have resulted from the analysis. Please comment on the article by indicating what you found by exploring the dataset with the provided SAS codes, and say whether or not the outputs from the SAS codes accurately reflected the tables provided and the tables in the published article. If you use our data to reproduce our findings, comment on your findings on the medRxiv website (https://www.medrxiv.org/content/10.1101/2021.04.17.21255675v4), and would like to be recognized, we will be happy to list you as a contributor when the article is submitted to JAMA. For questions, please email davidkcundiff@gmail.com. Thanks.


    Clinical mastitis datasets and SAS code - Fréchette et al., 2021

    • borealisdata.ca
    Updated Jul 31, 2025
    Cite
    Simon Dufour (2025). Clinical mastitis datasets and SAS code - Fréchette et al., 2021 [Dataset]. http://doi.org/10.5683/SP2/KIEMHY
    Dataset updated
    Jul 31, 2025
    Dataset provided by
    Borealis
    Authors
    Simon Dufour
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The datasets and their legends, for an observational cohort study described by Fréchette et al., 2021. The SAS code used to conduct the analyses described in the article is also reported.

  20. Code for merging National Neighborhood Data Archive ZCTA level datasets with...

    • linkagelibrary.icpsr.umich.edu
    Updated Oct 15, 2020
    + more versions
    Cite
    Megan Chenoweth; Anam Khan (2020). Code for merging National Neighborhood Data Archive ZCTA level datasets with the UDS Mapper ZIP code to ZCTA crosswalk [Dataset]. http://doi.org/10.3886/E124461V4
    Dataset updated
    Oct 15, 2020
    Dataset provided by
    University of Michigan. Institute for Social Research
    Authors
    Megan Chenoweth; Anam Khan
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The sample SAS and Stata code provided here is intended for use with certain datasets in the National Neighborhood Data Archive (NaNDA). NaNDA (https://www.openicpsr.org/openicpsr/nanda) contains some datasets that measure neighborhood context at the ZIP Code Tabulation Area (ZCTA) level. They are intended for use with survey or other individual-level data containing ZIP codes. Because ZIP codes do not exactly match ZIP code tabulation areas, a crosswalk is required to use ZIP-code-level geocoded datasets with ZCTA-level datasets from NaNDA. A ZIP-code-to-ZCTA crosswalk was previously available on the UDS Mapper website, which is no longer active. An archived copy of the ZIP-code-to-ZCTA crosswalk file has been included here. Sample SAS and Stata code are provided for merging the UDS mapper crosswalk with NaNDA datasets.
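    The merge the sample code performs can be sketched in plain Python, with toy ZIP/ZCTA codes and a made-up NaNDA variable name:

```python
# Attach ZCTA-level neighborhood measures to individual records via a
# ZIP -> ZCTA crosswalk. Codes and the "disadvantage" column are invented.
crosswalk = {"48104": "48104", "48113": "48105"}  # ZIP code -> ZCTA
nanda = {"48104": {"disadvantage": 0.2},          # ZCTA-level measures
         "48105": {"disadvantage": 0.7}}

people = [{"id": 1, "zip": "48113"}, {"id": 2, "zip": "48104"}]
for person in people:
    zcta = crosswalk.get(person["zip"])  # ZIPs != ZCTAs, hence the crosswalk
    person.update(nanda.get(zcta, {}))

print(people)
```

    In SAS or Stata this is the same two-step join: individual records to the crosswalk on ZIP code, then to the NaNDA dataset on ZCTA.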
