12 datasets found
  1. Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example.

    • wiley.figshare.com
    html
    Updated Jun 1, 2023
    Cite
    Thomas Philippi (2023). Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example. [Dataset]. http://doi.org/10.6084/m9.figshare.3524501.v1
    Dataset provided by
    Wiley: https://www.wiley.com/
    Authors
    Thomas Philippi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    File List: ACS.zip -- a .zip file containing the SAS macro, example code, and the example Aletris bracteata data sets: acs.sas, chekika_ACS_estimation.sas, chekika_1.csv, chekika_2.csv, philippi.3.1.zip

    Description "acs.sas" is a SAS macro for computing Horvitz-Thompson and Hansen-Horwitz estimates of population size for adaptive cluster sampling with random initial sampling. This version uses ugly base SAS code and does not require SQL or SAS products other than Base SAS, and should work with versions 8.2 onward (tested with versions 9.0 and 9.1). "chekika_ACS_estimation.sas" is example SAS code calling the acs macro to analyze the Chekika Aletris bracteata example data sets. "chekika_1.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 1-m2 quadrats. "chekika_2.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 4-m2 quadrats. "philippi.3.1.zip" metadata file generated by morpho, including both xml and css.

  2. fdata-02-00004-g0001_Matching Cases and Controls Using SAS® Software.tif

    • frontiersin.figshare.com
    tiff
    Updated Jun 5, 2023
    Cite
    Laura Quitzau Mortensen; Kristoffer Andresen; Jakob Burcharth; Hans-Christian Pommergaard; Jacob Rosenberg (2023). fdata-02-00004-g0001_Matching Cases and Controls Using SAS® Software.tif [Dataset]. http://doi.org/10.3389/fdata.2019.00004.s003
    Dataset provided by
    Frontiers
    Authors
    Laura Quitzau Mortensen; Kristoffer Andresen; Jakob Burcharth; Hans-Christian Pommergaard; Jacob Rosenberg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Matching is frequently used in observational studies, especially in medical research. However, only a small number of articles with matching programs for the SAS software (SAS Institute Inc., Cary, NC, USA) are available, and even fewer are usable for inexperienced users of SAS software. This article presents a matching program for the SAS software and links to an online repository for examples and test data. The program enables matching on several variables and includes an in-depth explanation of the expressions used and of how to customize the program. The selection of controls is randomized and automated, minimizing the risk of selection bias. The program also provides means for the researcher to test for incomplete matching.
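    The authors' full program lives in the linked repository; what follows is only a minimal Base SAS sketch of the core idea (randomized, automated selection of controls within matching strata). The data set and variable names (cases, controls, sex, age_group) are hypothetical placeholders, not the authors' code:

        /* Count cases per matching stratum; each count becomes the
           number of controls to draw for 1:1 frequency matching. */
        proc freq data=cases noprint;
          tables sex*age_group / out=stratum_n(drop=percent rename=(count=_NSIZE_));
        run;

        /* Randomly draw that many controls from each stratum.
           SEED= makes the randomized selection reproducible;
           SELECTALL keeps every control in strata that are too small. */
        proc sort data=controls;
          by sex age_group;
        run;

        proc surveyselect data=controls sampsize=stratum_n
                          method=srs seed=20190204 selectall
                          out=matched_controls;
          strata sex age_group;
        run;

    Randomizing the draw, as the article emphasizes, removes the analyst's discretion over which controls are chosen and thereby limits selection bias.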

  3. DHS_U5M: A flexible SAS macro to calculate childhood mortality estimates and standard errors from birth histories

    • data.niaid.nih.gov
    • dataverse.harvard.edu
    pdf +1
    Updated May 30, 2012
    Cite
    Sidney Atwood (2012). DHS_U5M: A flexible SAS macro to calculate childhood mortality estimates and standard errors from birth histories [Dataset]. http://doi.org/10.7910/DVN/OLI0ID
    Dataset provided by
    Research Core, Division of Global Health Equity, Brigham & Women's Hospital
    Authors
    Sidney Atwood
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    global
    Description

    This SAS macro generates childhood mortality estimates (neonatal, post-neonatal, infant (1q0), child (4q1), and under-five (5q0) mortality) and standard errors based on birth histories reported by women during a household survey. We have made the SAS macro flexible enough to accommodate a range of calculation specifications, including multi-stage sampling frames, simple random samples, or censuses. Childhood mortality rates are probabilities of dying before a specific age, computed from component death probabilities. This SAS macro is based on a macro built by Keith Purvis at MeasureDHS. His method is described in Estimating Sampling Errors of Means, Total Fertility, and Childhood Mortality Rates Using SAS (www.measuredhs.com/pubs/pdf/OD17/OD17.pdf, section 4). More information about childhood mortality estimation can also be found in the Guide to DHS Statistics (www.measuredhs.com/pubs/pdf/DHSG1/Guide_DHS_Statistics.pdf, page 93). We allow the user to specify whether childhood mortality calculations should be based on 5 or 10 years of birth histories, when the birth history window ends, and how to handle age at death when it is reported in whole months (rather than days). The user can also calculate mortality rates within sub-populations and take account of a complex survey design (unequal probability and cluster samples). Finally, this SAS program is designed to read data in a number of different formats.
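    The synthetic-cohort calculation behind these rates (see the Guide to DHS Statistics) combines component death probabilities across age segments of the birth-history window. With q_j the probability of dying within age segment j (segment deaths divided by children exposed in that segment), an aggregate rate such as under-five mortality is, in LaTeX notation:

        {}_{5}q_{0} = 1 - \prod_{j} (1 - q_j)

    taken over the segments spanning ages 0-59 months; infant mortality (1q0) multiplies only the segments below 12 months. The macro's standard errors then come from the replication approach described in the Purvis paper cited above.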

  4. The input macro parameters in the SAS macro %n_gssur.

    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Zhiwei Jiang; Ling Wang; Chanjuan Li; Jielai Xia; Hongxia Jia (2023). The input macro parameters in the SAS macro %n_gssur. [Dataset]. http://doi.org/10.1371/journal.pone.0044013.t001
    Dataset provided by
    PLOS ONE
    Authors
    Zhiwei Jiang; Ling Wang; Chanjuan Li; Jielai Xia; Hongxia Jia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    The input macro parameters in the SAS macro %n_gssur.

  5. Survey of Consumer Finances (SCF)

    • dataverse.harvard.edu
    Updated May 30, 2013
    Cite
    Anthony Damico (2013). Survey of Consumer Finances (SCF) [Dataset]. http://doi.org/10.7910/DVN/FRMKMF
    Dataset provided by
    Harvard Dataverse
    Authors
    Anthony Damico
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    analyze the survey of consumer finances (scf) with r

    the survey of consumer finances (scf) tracks the wealth of american families. every three years, more than five thousand households answer a battery of questions about income, net worth, credit card debt, pensions, mortgages, even the lease on their cars. plenty of surveys collect annual income, only the survey of consumer finances captures such detailed asset data. responses are at the primary economic unit-level (peu) - the economically dominant, financially interdependent family members within a sampled household. norc at the university of chicago administers the data collection, but the board of governors of the federal reserve pay the bills and therefore call the shots.

    if you were so brazen as to open up the microdata and run a simple weighted median, you'd get the wrong answer. the five to six thousand respondents actually gobble up twenty-five to thirty thousand records in the final public use files. why oh why? well, those tables contain not one, not two, but five records for each peu. wherever missing, these data are multiply-imputed, meaning answers to the same question for the same household might vary across implicates. each analysis must account for all that, lest your confidence intervals be too tight. to calculate the correct statistics, you'll need to break the single file into five, necessarily complicating your life. this can be accomplished with the meanit sas macro buried in the 2004 scf codebook (search for meanit - you'll need the sas iml add-on). or you might blow the dust off this website referred to in the 2010 codebook as the home of an alternative multiple imputation technique, but all i found were broken links. perhaps it's time for plan c, and by c, i mean free. read the imputation section of the latest codebook (search for imputation), then give these scripts a whirl. they've got that new r smell.

    the lion's share of the respondents in the survey of consumer finances get drawn from a pretty standard sample of american dwellings - no nursing homes, no active-duty military. then there's this secondary sample of richer households to even out the statistical noise at the higher end of the income and assets spectrum. you can read more if you like, but at the end of the day the weights just generalize to civilian, non-institutional american households. one last thing before you start your engine: read everything you always wanted to know about the scf. my favorite part of that title is the word always.

    this new github repository contains three scripts:

    1989-2010 download all microdata.R
    • initiate a function to download and import any survey of consumer finances zipped stata file (.dta)
    • loop through each year specified by the user (starting at the 1989 re-vamp) to download the main, extract, and replicate weight files, then import each into r
    • break the main file into five implicates (each containing one record per peu) and merge the appropriate extract data onto each implicate
    • save the five implicates and replicate weights to an r data file (.rda) for rapid future loading

    2010 analysis examples.R
    • prepare two survey of consumer finances-flavored multiply-imputed survey analysis functions
    • load the r data files (.rda) necessary to create a multiply-imputed, replicate-weighted survey design
    • demonstrate how to access the properties of a multiply-imputed survey design object
    • cook up some descriptive statistics and export examples, calculated with scf-centric variance quirks
    • run a quick t-test and regression, but only because you asked nicely

    replicate FRB SAS output.R
    • reproduce each and every statistic provided by the friendly folks at the federal reserve
    • create a multiply-imputed, replicate-weighted survey design object
    • re-reproduce (and yes, i said/meant what i meant/said) each of those statistics, now using the multiply-imputed survey design object to highlight the statistically-theoretically-irrelevant differences

    click here to view these three scripts

    for more detail about the survey of consumer finances (scf), visit:
    • the federal reserve board of governors' survey of consumer finances homepage
    • the latest scf chartbook, to browse what's possible. (spoiler alert: everything.)
    • the survey of consumer finances wikipedia entry
    • the official frequently asked questions

    notes: nationally-representative statistics on the financial health, wealth, and assets of american households might not be monopolized by the survey of consumer finances, but there isn't much competition aside from the assets topical module of the survey of income and program participation (sipp). on one hand, the scf interview questions contain more detail than sipp. on the other hand, scf's smaller sample precludes analyses of acute subpopulations. and for any three-handed martians in the audience, there's also a few biases between these two data sources that you ought to consider. the survey methodologists at the federal reserve take their job...
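    For readers wondering what "account for all that" means mechanically: the multiply-imputed survey design machinery in these scripts combines the five implicate estimates with (a survey-weighted version of) Rubin's rules. With m = 5 estimates \hat{Q}_i, in LaTeX notation:

        \bar{Q} = \frac{1}{m} \sum_{i=1}^{m} \hat{Q}_i,
        \qquad
        T = \bar{U} + \left(1 + \frac{1}{m}\right) B,
        \qquad
        B = \frac{1}{m-1} \sum_{i=1}^{m} (\hat{Q}_i - \bar{Q})^2

    where \bar{U} is the average within-implicate (replicate-weight) variance and B the between-implicate variance. Treating all twenty-five to thirty thousand records as independent observations ignores B entirely, which is exactly why the naive confidence intervals come out too tight.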

  6. Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation.

    • demo-b2find.dkrz.de
    Updated Sep 22, 2025
    Cite
    (2025). Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation [Dataset]. http://demo-b2find.dkrz.de/dataset/da423f51-0a3c-540f-8ee8-830d0c9e9ef0
    Description

    This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user. The other ones are just executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables which are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and by default also in csv format.
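    The quarterly-date logic can be illustrated with a minimal Base SAS sketch. This is not the distributed code, and the data set and variable names here are hypothetical:

        /* Convert year + quarter of the first interview and of death/censoring
           into approximate SAS dates, then compute time at risk in years.
           YYQ() returns the first day of a quarter; adding 45 days
           approximates the quarter's midpoint. */
        data risk;
          set merged_persons;
          start_date    = yyq(first_int_year, first_int_quarter) + 45;
          end_date      = yyq(exit_year, exit_quarter) + 45;
          years_at_risk = (end_date - start_date) / 365.25;
        run;

    The distributed macros additionally reconcile vital status across the DB110 and RB110 target variables, and across pooled releases, before any such dates are computed.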

  7. File S1 - Effect Sizes for 2×2 Contingency Tables

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    • +1 more
    Updated Mar 8, 2013
    Cite
    Olivier, Jake; Bell, Melanie L. (2013). File S1 - Effect Sizes for 2×2 Contingency Tables [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001733121
    Authors
    Olivier, Jake; Bell, Melanie L.
    Description

    SAS Macro to compute sample sizes from marginal probabilities for small, medium and large odds ratios. (SAS)
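    For reference, with marginal exposure probabilities p_1 (cases) and p_2 (controls), the odds ratio is, in LaTeX notation:

        \mathrm{OR} = \frac{p_1 / (1 - p_1)}{p_2 / (1 - p_2)}

    so fixing p_2 and a target OR determines p_1 = \mathrm{OR} \, p_2 / (1 - p_2 + \mathrm{OR} \, p_2), from which a standard two-proportion sample-size formula follows. Which ORs count as small, medium, and large is defined in the accompanying article, not here.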

  8. Example of a dataset for analyzing the ADR (adr) for the concomitant use of two drugs (d1 and d2) for the listds data.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Masahiko Gosho; Tomohiro Ohigashi; Kazushi Maruo (2023). Example of a dataset for analyzing the ADR (adr) for the concomitant use of two drugs (d1 and d2) for the listds data. [Dataset]. http://doi.org/10.1371/journal.pone.0207487.t005
    Dataset provided by
    PLOS: http://plos.org/
    Authors
    Masahiko Gosho; Tomohiro Ohigashi; Kazushi Maruo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Example of a dataset for analyzing the ADR (adr) for the concomitant use of two drugs (d1 and d2) for the listds data.

  9. Example of a dataset of three patients for the drugds data.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Masahiko Gosho; Tomohiro Ohigashi; Kazushi Maruo (2023). Example of a dataset of three patients for the drugds data. [Dataset]. http://doi.org/10.1371/journal.pone.0207487.t004
    Dataset provided by
    PLOS ONE
    Authors
    Masahiko Gosho; Tomohiro Ohigashi; Kazushi Maruo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Example of a dataset of three patients for the drugds data.

  10. Example of a dataset of three patients for the aeds data.

    • plos.figshare.com
    xls
    Updated Jun 4, 2023
    Cite
    Masahiko Gosho; Tomohiro Ohigashi; Kazushi Maruo (2023). Example of a dataset of three patients for the aeds data. [Dataset]. http://doi.org/10.1371/journal.pone.0207487.t003
    Dataset provided by
    PLOS ONE
    Authors
    Masahiko Gosho; Tomohiro Ohigashi; Kazushi Maruo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Example of a dataset of three patients for the aeds data.

  11. Data from: Mean cost and cost-effectiveness ratios with censored data: a tutorial and SAS® macros

    • tandf.figshare.com
    txt
    Updated Nov 17, 2025
    Cite
    Eduard Poltavskiy; Dingning Liu; Shuai Chen; Heejung Bang; Hongwei Zhao (2025). Mean cost and cost-effectiveness ratios with censored data: a tutorial and SAS® macros [Dataset]. http://doi.org/10.6084/m9.figshare.30287400.v1
    Dataset provided by
    Taylor & Francis
    Authors
    Eduard Poltavskiy; Dingning Liu; Shuai Chen; Heejung Bang; Hongwei Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Censoring is an unignorable issue when analyzing survival data and/or medical cost data. Medical costs may be viewed as a type of survival data (in that they accrue over time until an endpoint such as death) or as a 'mark' variable. Since Lin et al. (1997) and Mushlin et al. (1998) published landmark papers on this topic, censored cost data have been extensively studied. In this tutorial, we explain how to estimate mean cost and cost-effectiveness ratios, along with three examples under two different data scenarios: when only total cost data (one observation per person) are available, and when longitudinal data (cost history) are available. We also provide an updated literature review. The SAS code in the supplement could be useful to practitioners and data analysts.
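    In outline, the quantities the tutorial targets are the mean cost under censoring and the incremental cost-effectiveness ratio, in LaTeX notation:

        \mathrm{ICER} = \frac{\bar{C}_1 - \bar{C}_0}{\bar{E}_1 - \bar{E}_0}

    (the difference in mean costs between two treatments over the difference in mean effectiveness). With cost history, the partitioned approach of Lin et al. (1997) divides the time horizon into intervals and, roughly, estimates mean cost as

        \hat{\mu} = \sum_{j} \hat{S}(t_{j-1}) \, \bar{C}_j

    where \hat{S}(t_{j-1}) is the Kaplan-Meier survival estimate at the start of interval j and \bar{C}_j is the mean cost accrued in that interval among subjects still under observation. The supplementary macros implement estimators of this general type; consult the tutorial for the exact variants.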

  12. Additional file 2 of Computing and graphing probability values of pearson distributions: a SAS/IML macro

    • springernature.figshare.com
    bin
    Updated Feb 23, 2024
    Cite
    Qing Yang; Xinming An; Wei Pan (2024). Additional file 2 of Computing and graphing probability values of pearson distributions: a SAS/IML macro [Dataset]. http://doi.org/10.6084/m9.figshare.11423319.v1
    Dataset provided by
    Figshare: http://figshare.com/
    Authors
    Qing Yang; Xinming An; Wei Pan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Sample dataset 1. The dataset dataI.sas7bdat was taken from [1].

