63 datasets found
  1. Data file in SAS format

    • figshare.com
    txt
    Updated Jan 19, 2016
    Cite
    Guillaume Béraud (2016). Data file in SAS format [Dataset]. http://doi.org/10.6084/m9.figshare.1466915.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    figshare
    Figshare (http://figshare.com/)
    Authors
    Guillaume Béraud
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    data file in SAS format

  2. Provider Specific Data for Public Use in SAS Format

    • data.amerigeoss.org
    html
    Updated Jul 29, 2019
    + more versions
    Cite
    United States[old] (2019). Provider Specific Data for Public Use in SAS Format [Dataset]. https://data.amerigeoss.org/da_DK/dataset/provider-specific-data-for-public-use-in-sas-format-0d063
    Explore at:
    Available download formats: html
    Dataset updated
    Jul 29, 2019
    Dataset provided by
    United States[old]
    Description

    The Fiscal Intermediary maintains the Provider Specific File (PSF). The file contains provider-specific information that affects computations for the Prospective Payment System. The Provider Specific Files in SAS format are located in the Download section below for the following provider types: Inpatient, Skilled Nursing Facility, Home Health Agency, Hospice, Inpatient Rehab, Long Term Care, and Inpatient Psychiatric Facility.

  3. Global Burden of Disease analysis dataset of noncommunicable disease...

    • data.mendeley.com
    Updated Apr 6, 2023
    + more versions
    Cite
    David Cundiff (2023). Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.10
    Explore at:
    Dataset updated
    Apr 6, 2023
    Authors
    David Cundiff
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute of Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017) affiliated with the University of Washington. We are volunteer collaborators with IHME and not employed by IHME or the University of Washington.

    The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.

    These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis. The data include the following:

    1. An analysis database of population-weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable, covering over 100 types of noncommunicable diseases), and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc.)
    2. A text file to import the analysis database into SAS
    3. The SAS code to format the analysis database to be used for analytics
    4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
    5. SAS code for deriving the multiple regression formula in Table 4
    6. SAS code for deriving the multiple regression formula in Table 5
    7. SAS code for deriving the multiple regression formula in Supplementary Table 7
    8. SAS code for deriving the multiple regression formula in Supplementary Table 8
    9. The Excel files that accompanied the above SAS code to produce the tables
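
    The univariate and multiple regression modelling described above can be sketched generically. A minimal ordinary-least-squares illustration with made-up data (the actual analysis uses the GBD2017 database and the SAS code listed above; all variable names here are hypothetical):

```python
import numpy as np

# Made-up illustration: regress an outcome (e.g., NCD deaths/100k/year) on two
# risk factors via ordinary least squares. Real analyses use the GBD2017 data.
rng = np.random.default_rng(0)
n = 195  # one row per country, as in the dataset description

risk1 = rng.normal(25, 5, n)   # hypothetical risk factor, e.g. mean BMI
risk2 = rng.normal(10, 2, n)   # hypothetical dietary risk factor
outcome = 3.0 * risk1 + 1.5 * risk2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), risk1, risk2])  # intercept + predictors
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(coef)  # close to [0, 3.0, 1.5] by construction
```

    The recovered coefficients approximate the true generating values because the noise is small relative to the predictors' variation.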

    For questions, please email davidkcundiff@gmail.com. Thanks.

  4. SAS scripts for supplementary data.

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Jul 13, 2015
    Cite
    Geronimo, Jerome T.; Fletcher, Craig A.; Bellinger, Dwight A.; Whitaker, Julia; Vieira, Giovana; Garner, Joseph P.; George, Nneka M. (2015). SAS scripts for supplementary data. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001869731
    Explore at:
    Dataset updated
    Jul 13, 2015
    Authors
    Geronimo, Jerome T.; Fletcher, Craig A.; Bellinger, Dwight A.; Whitaker, Julia; Vieira, Giovana; Garner, Joseph P.; George, Nneka M.
    Description

    The raw data for each of the analyses are presented: baseline severity difference (probands only) (Figure A in S1 Dataset), repeated measures analysis of change in lesion severity (Figure B in S1 Dataset), logistic regression of survivorship (Figure C in S1 Dataset), and time to cure (Figure D in S1 Dataset). Each data set is given as SAS code for the data itself, together with the equivalent analysis to that performed in JMP (and reported in the text). Data are presented in SAS format as this is a simple text format. The data and code were generated as direct exports from JMP, with additional SAS code added as needed (for instance, JMP does not export code for post-hoc tests). Note, however, that SAS rounds to less precision than JMP and can give slightly different results, especially for REML methods.

  5. Model-derived synthetic aperture sonar (SAS) data in Generic Data Format...

    • marine-geo.org
    Updated Sep 24, 2024
    + more versions
    Cite
    (2024). Model-derived synthetic aperture sonar (SAS) data in Generic Data Format (GDF) [Dataset]. https://www.marine-geo.org/tools/files/31898
    Explore at:
    Dataset updated
    Sep 24, 2024
    Description

    The simulated synthetic aperture sonar (SAS) data presented here was generated using PoSSM [Johnson and Brown 2018]. The data is suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package is simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf". A description of the data and the model is found in the associated document called "Description_of_Simulated_SAS_Data_Package.pdf" and a description of the format in which the raw binary data is stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files, i.e. the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma separated value spreadsheets describing the locations of scatterers and objects of interest within each scene. In addition to the binary GDF data, a beamformed GeoTIFF image and a single-look complex (SLC, science file) data of each scene is provided. 
The SLC data (science) is stored in the Hierarchical Data Format 5 (https://www.hdfgroup.org/), and appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format. Supporting documentation that outlines positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphic (PNG) images that plot the location of objects of all the objects of interest in each scene in Along-Track and Cross-Track notation are provided.

  6. Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses...

    • demo-b2find.dkrz.de
    Updated Sep 22, 2025
    + more versions
    Cite
    (2025). Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation. - Dataset - B2FIND [Dataset]. http://demo-b2find.dkrz.de/dataset/da423f51-0a3c-540f-8ee8-830d0c9e9ef0
    Explore at:
    Dataset updated
    Sep 22, 2025
    Description

    This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it so that the resulting file can be used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification by the user; the others are simply executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables that are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and, by default, also in csv format.
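
    The time-at-risk step described above can be illustrated with a small sketch: approximate each quarterly date by the quarter's midpoint, then difference the two. This is a hypothetical re-implementation for illustration, not the EU-SILC SAS macro itself:

```python
def quarter_midpoint(year: int, quarter: int) -> float:
    """Approximate a (year, quarter) date by the quarter's midpoint, in decimal years."""
    if quarter not in (1, 2, 3, 4):
        raise ValueError("quarter must be 1-4")
    return year + (quarter - 1) * 0.25 + 0.125

def years_at_risk(first_interview, death_or_censoring):
    """Estimated time at risk between first interview and death/censoring,
    each given as a (year, quarter) tuple."""
    return quarter_midpoint(*death_or_censoring) - quarter_midpoint(*first_interview)

# First interview in Q1 2010, death or censoring in Q3 2013:
print(years_at_risk((2010, 1), (2013, 3)))  # 3.5 years
```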

  7. SAS-2 Photon Events Catalog - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). SAS-2 Photon Events Catalog - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sas-2-photon-events-catalog
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from two sources: the photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was converted to a relative pixel in the image using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.
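
    The Aitoff conversion mentioned above can be sketched with the standard Aitoff projection formula; the pixel scale and center handling below are illustrative assumptions, not the exact HEASARC procedure:

```python
import math

def aitoff(lon_deg: float, lat_deg: float):
    """Standard Aitoff projection: (longitude, latitude) offsets in degrees
    to planar (x, y) in degrees, with (0, 0) at the projection center."""
    lam = math.radians(lon_deg) / 2.0
    phi = math.radians(lat_deg)
    alpha = math.acos(math.cos(phi) * math.cos(lam))
    sinc = math.sin(alpha) / alpha if alpha != 0.0 else 1.0  # unnormalized sinc
    x = 2.0 * math.cos(phi) * math.sin(lam) / sinc
    y = math.sin(phi) / sinc
    return math.degrees(x), math.degrees(y)

def to_pixel(lon_deg, lat_deg, center=(256.0, 256.0), pixels_per_degree=1.0):
    """Map a photon's offset from the image center into a 512 x 512 image
    (the 1 pixel/degree scale is an assumption for illustration)."""
    x, y = aitoff(lon_deg, lat_deg)
    return (center[0] + x * pixels_per_degree, center[1] + y * pixels_per_degree)

print(to_pixel(0.0, 0.0))  # (256.0, 256.0): the center pixel
```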

  8. Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets...

    • wiley.figshare.com
    html
    Updated Jun 1, 2023
    Cite
    Thomas Philippi (2023). Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example. [Dataset]. http://doi.org/10.6084/m9.figshare.3524501.v1
    Explore at:
    Available download formats: html
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Wiley (https://www.wiley.com/)
    Authors
    Thomas Philippi
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File List: ACS.zip -- .zip file containing the SAS macro and example code, and example Aletris bracteata data sets: acs.sas, chekika_ACS_estimation.sas, chekika_1.csv, chekika_2.csv, philippi.3.1.zip

    Description: "acs.sas" is a SAS macro for computing Horvitz-Thompson and Hansen-Hurwitz estimates of population size for adaptive cluster sampling with random initial sampling. This version uses ugly base SAS code and does not require SQL or SAS products other than Base SAS, and should work with versions 8.2 onward (tested with versions 9.0 and 9.1). "chekika_ACS_estimation.sas" is example SAS code calling the acs macro to analyze the Chekika Aletris bracteata example data sets. "chekika_1.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 1-m2 quadrats. "chekika_2.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 4-m2 quadrats. "philippi.3.1.zip" is a metadata file generated by Morpho, including both xml and css.
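
    The Horvitz-Thompson estimate that the macro computes has the general form Y-hat = sum(y_i / pi_i) over sampled units. A generic sketch with made-up numbers (not the acs.sas macro itself):

```python
def horvitz_thompson_total(values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total: sum y_i / pi_i over
    sampled units, where pi_i is unit i's probability of inclusion."""
    if len(values) != len(inclusion_probs):
        raise ValueError("values and inclusion_probs must have equal length")
    return sum(y / p for y, p in zip(values, inclusion_probs))

# Made-up example: three sampled networks with plant counts y_i and known
# inclusion probabilities pi_i (chosen here to be exact binary fractions).
counts = [12, 4, 7]
probs = [0.25, 0.125, 0.5]
print(horvitz_thompson_total(counts, probs))  # 48 + 32 + 14 = 94.0
```

    Units with small inclusion probabilities are weighted up, which is what makes the estimator unbiased under unequal-probability sampling.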

  9. NC_birth_outcomes_and_greenery_metrics

    • catalog.data.gov
    • gimi9.com
    Updated Sep 7, 2025
    Cite
    U.S. EPA Office of Research and Development (ORD) (2025). NC_birth_outcomes_and_greenery_metrics [Dataset]. https://catalog.data.gov/dataset/nc-birth-outcomes-and-greenery-metrics
    Explore at:
    Dataset updated
    Sep 7, 2025
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    This data contains linked birth registry information with greenery metrics in North Carolina. This dataset is not publicly accessible because EPA cannot release personally identifiable information regarding living individuals, according to the Privacy Act and the Freedom of Information Act (FOIA). This dataset contains information about human research subjects. Because there is potential to identify individual participants and disclose personal information, either alone or in combination with other datasets, individual-level data are not appropriate to post for public access. Restricted access may be granted to authorized persons by contacting the party listed. It can be accessed through the following means: birth records can be requested through the NC State Health Department, and greenery metrics can be downloaded through EPA's EnviroAtlas. Format: datasets are in csv, R, and SAS formats. This dataset is associated with the following publication: Tsai, W., T. Luben, and K. Rappazzo. Associations between neighborhood greenery and birth outcomes in a North Carolina cohort. Journal of Exposure Science and Environmental Epidemiology. Nature Publishing Group, London, UK, 35(5): 821-830, (2025).

  10. Safe Schools Research Initiative, Texas, 2015-2017

    • icpsr.umich.edu
    • catalog.data.gov
    Updated Nov 29, 2018
    Cite
    Noyola, Orlando (2018). Safe Schools Research Initiative, Texas, 2015-2017 [Dataset]. http://doi.org/10.3886/ICPSR36988.v1
    Explore at:
    Dataset updated
    Nov 29, 2018
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Noyola, Orlando
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/36988/terms

    Time period covered
    2015 - 2017
    Area covered
    Texas, United States
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study sought to examine any major changes in schools in the past two years as an evaluation of the Safe and Civil Schools Initiative. Students, faculty, and administrators were asked questions on topics including school safety, climate, and the discipline process. This collection includes 6 SAS data files: "psja_schools.sas7bdat" with 66 variables and 15 cases, "psja_schools_v01.sas7bdat" with 104 variables and 15 cases, "psja_staff.sas7bdat" with 39 variables and 2,921 cases, "psja_staff_v01.sas7bdat" with 202 variables and 2,398 cases, "psja_students.sas7bdat" with 97 variables and 4,382 cases, and "psja_students_v01.sas7bdat" with 332 variables and 4,267 cases. Additionally, the collection includes 1 SAS formats catalog, "formats.sas7bcat", and 10 SAS syntax files.

  11. Infogroup US Historical Business Data

    • dataverse.harvard.edu
    application/gzip +4
    Updated Apr 17, 2020
    Cite
    Harvard Dataverse (2020). Infogroup US Historical Business Data [Dataset]. http://doi.org/10.7910/DVN/PNOFKI
    Explore at:
    Available download formats: application/x-gzip (981236468), csv (8714), application/gzip (621979196), pdf (41531), tsv (13094)
    Dataset updated
    Apr 17, 2020
    Dataset provided by
    Harvard Dataverse
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/10.0/customlicense?persistentId=doi:10.7910/DVN/PNOFKI

    Time period covered
    1997 - 2019
    Area covered
    United States
    Description

    InfoGroup’s Historical Business Backfile consists of geo-coded records of millions of US businesses and other organizations that contain basic information on each entity, such as: contact information, industry description, annual revenues, number of employees, year established, and other data. Each annual file consists of a “snapshot” of InfoGroup’s data as of the last day of each year, creating a time series of data 1997-2019. Access is restricted to current Harvard University community members. Use of Infogroup US Historical Business Data is subject to the terms and conditions of a license agreement (effective March 16, 2016) between Harvard and Infogroup Inc. and subject to applicable laws. Most data files are available in either .csv or .sas format. All data files are compressed into an archive in .gz, or GZIP, format. Extraction software such as 7-Zip is required to unzip these archives.
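
    The .gz archives can also be read directly in most languages without a separate extraction tool. A minimal Python sketch that round-trips a small csv.gz in memory (the file contents and column names here are made up):

```python
import csv
import gzip
import io

# Build a tiny gzip-compressed CSV in memory to stand in for one annual
# "snapshot" file; the columns below are hypothetical.
buf = io.BytesIO()
with gzip.open(buf, "wt", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["company", "employees", "year_established"])
    writer.writerow(["Acme Corp", "250", "1987"])

# Read it back the same way a downloaded .csv.gz file would be read.
buf.seek(0)
with gzip.open(buf, "rt", newline="") as f:
    rows = list(csv.reader(f))

print(rows[1])  # ['Acme Corp', '250', '1987']
```

    For files on disk, passing the path to gzip.open works the same way, so a tool like 7-Zip is only needed when working outside a programming environment.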

  12. National Community Based Survey of Supports for Healthy Eating and Active...

    • gimi9.com
    Updated Sep 21, 2022
    + more versions
    Cite
    (2022). National Community Based Survey of Supports for Healthy Eating and Active Living (CBS HEAL) | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_national-community-based-survey-of-supports-for-healthy-eating-and-active-living-cbs-heal
    Explore at:
    Dataset updated
    Sep 21, 2022
    Description

    Community-Based Survey of Supports for Healthy Eating and Active Living (CBS HEAL) is a CDC survey of a nationally representative sample of U.S. municipalities to better understand existing community-level policies and practices that support healthy eating and active living. The survey collects information about policies such as nutrition standards, incentives for healthy food retail, bike/pedestrian-friendly design, and Complete Streets. About 2,000 municipalities respond to the survey. Participating municipalities receive a report that allows them to compare their policies and practices with other municipalities of similar geography, population size, and urban status. The CBS HEAL survey was first administered in 2014 and was administered again in 2021. Data is provided in multiple formats for download including as a SAS file. A methods report and a SAS program for formatting the data are also provided.

  13. SAS-2 Photon Events Catalog | gimi9.com

    • gimi9.com
    Updated Feb 1, 2001
    Cite
    (2001). SAS-2 Photon Events Catalog | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_sas-2-photon-events-catalog/
    Explore at:
    Dataset updated
    Feb 1, 2001
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from two sources: the photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was converted to a relative pixel in the image using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.

  14. SAS-2 Map Product Catalog

    • s.cnmilf.com
    • catalog.data.gov
    Updated Sep 19, 2025
    + more versions
    Cite
    High Energy Astrophysics Science Archive Research Center (2025). SAS-2 Map Product Catalog [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/sas-2-map-product-catalog
    Explore at:
    Dataset updated
    Sep 19, 2025
    Dataset provided by
    High Energy Astrophysics Science Archive Research Center
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1 degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded as keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database they were set as 'wide open' as possible; that is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or use the 'fhelp' facility from the command line. This is a service provided by NASA HEASARC.

  15. Global Burden of Disease analysis dataset of cardiovascular disease...

    • narcis.nl
    • data.mendeley.com
    Updated Jun 23, 2021
    + more versions
    Cite
    Cundiff, D (via Mendeley Data) (2021). Global Burden of Disease analysis dataset of cardiovascular disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.4
    Explore at:
    Dataset updated
    Jun 23, 2021
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Cundiff, D (via Mendeley Data)
    Description

    This formatted dataset originates from raw data files from the Institute of Health Metrics and Evaluation Global Burden of Disease study (GBD2017). It is population-weighted worldwide data on male and female cohorts ages 15-69 years, including cardiovascular disease early death and associated dietary, metabolic, and other risk factors. The purpose of creating this formatted database is to explore the univariate and multiple regression correlations of cardiovascular early deaths and other health outcomes with risk factors. Our research hypothesis is that we can successfully apply artificial intelligence to model cardiovascular disease outcomes with risk factors. We found that fat-soluble vitamin containing foods (animal products) and added fats are negatively correlated with CVD early deaths worldwide but positively correlated with CVD early deaths in high fat-soluble vitamin cohorts. We interpret this as showing that optimal cardiovascular outcomes come with moderate (not low and not high) intakes of animal foods and added fats. You are invited to download the dataset, the associated SAS code to access the dataset, and the tables that have resulted from the analysis. Please comment on the article by indicating what you found by exploring the dataset with the provided SAS codes. Please say whether or not you found that the outputs from the SAS codes accurately reflected the tables provided and the tables in the published article. If you use our data to reproduce our findings, comment on your findings on the medRxiv website (https://www.medrxiv.org/content/10.1101/2021.04.17.21255675v4), and would like to be recognized, we will be happy to list you as a contributor when the article is submitted to JAMA. For questions, please email davidkcundiff@gmail.com. Thanks.

  16. SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sas-2-map-product-catalog
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1 degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded as keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database they were set as 'wide open' as possible; that is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or use the 'fhelp' facility from the command line. This is a service provided by NASA HEASARC.

  17. The simple and new SAS and R codes to estimate optimum and base selection...

    • ebi.ac.uk
    Updated Jun 10, 2022
    Cite
    mehdi rahimi (2022). The simple and new SAS and R codes to estimate optimum and base selection indices to choice superior genotypes in plants and animals breeding program [Dataset]. https://www.ebi.ac.uk/biostudies/studies/S-BSST853
    Explore at:
    Dataset updated
    Jun 10, 2022
    Authors
    mehdi rahimi
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The SAS code (Supplementary File 1) and R program code (Supplementary File 2) are provided. For the analysis to proceed, this code requires an input data file (Supplementary Files 3-5) prepared in comma-separated (CSV) format; the data can also be stored in other formats such as xlsx, txt, or xls. Economic values in the SAS code are entered manually in the code, while in the R code they are stored in an Excel file (Supplementary File 6).
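
    Optimum (Smith-Hazel) selection index weights are commonly obtained by solving P b = G a, where P is the phenotypic covariance matrix, G the genotypic covariance matrix, and a the vector of economic values. A generic numeric sketch with made-up matrices (an assumption-laden illustration, not the supplementary SAS/R code itself):

```python
import numpy as np

# Made-up (co)variance matrices for two traits and economic values a.
P = np.array([[4.0, 1.0],   # phenotypic covariance matrix
              [1.0, 3.0]])
G = np.array([[2.0, 0.5],   # genotypic covariance matrix
              [0.5, 1.5]])
a = np.array([1.0, 2.0])    # economic values per trait

# Smith-Hazel optimum index weights: solve P b = G a.
b = np.linalg.solve(P, G @ a)
print(b)  # approximately [0.5, 1.0] for these made-up inputs
```

    Genotypes are then ranked by the index score b @ x computed from their phenotype vector x.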

  18. SAS: Semantic Artist Similarity Dataset

    • live.european-language-grid.eu
    txt
    Updated Oct 28, 2023
    Cite
    (2023). SAS: Semantic Artist Similarity Dataset [Dataset]. https://live.european-language-grid.eu/catalogue/corpus/7418
    Explore at:
    Available download formats: txt
    Dataset updated
    Oct 28, 2023
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Semantic Artist Similarity dataset consists of two datasets of artist entities with their corresponding biography texts, and the lists of the top-10 most similar artists within each dataset used as ground truth. The dataset is composed of a corpus of 268 artists and a slightly larger one of 2,336 artists, both gathered from Last.fm in March 2015. The former is mapped to the MIREX Audio and Music Similarity evaluation dataset, so that its similarity judgments can be used as ground truth. For the latter corpus we use the similarity between artists as provided by the Last.fm API. For every artist there is a list of the top-10 most related artists. In the MIREX dataset, 188 artists have at least 10 similar artists; the other 80 artists have fewer than 10. In the Last.fm API dataset, all artists have a list of 10 similar artists.

    There are 4 files in the dataset. mirex_gold_top10.txt and lastfmapi_gold_top10.txt contain the top-10 lists of artists for every artist of both datasets. Artists are identified by MusicBrainz ID. The format of the file is one line per artist: the artist mbid, separated by a tab from the list of the top-10 related artists, identified by their mbids and separated by spaces:

    artist_mbid \t artist_mbid_top10_list_separated_by_spaces

    mb2uri_mirex and mb2uri_lastfmapi.txt contain the lists of artists. Each line has three fields separated by tabs: the first field is the MusicBrainz ID, the second field is the Last.fm name of the artist, and the third field is the DBpedia URI:

    artist_mbid \t lastfm_name \t dbpedia_uri

    There are also 2 folders in the dataset with the biography texts of each dataset. Each .txt file in the biography folders is named with the MusicBrainz ID of the biographied artist. Biographies were gathered from the Last.fm wiki page of every artist.

    Using this dataset: We would highly appreciate it if scientific publications of works partly based on the Semantic Artist Similarity dataset cited the following publication: Oramas, S., Sordo, M., Espinosa-Anke, L., & Serra, X. (In Press). A Semantic-based Approach for Artist Similarity. 16th International Society for Music Information Retrieval Conference.

    We are interested in knowing if you find our datasets useful! If you use our dataset, please email us at mtg-info@upf.edu and tell us about your research. https://www.upf.edu/web/mtg/semantic-similarity
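    The two line formats described above are simple to consume programmatically. As an illustrative sketch (the function names are my own, not part of the dataset), both file types can be parsed with plain Python:

```python
def parse_top10(lines):
    """Parse a *_gold_top10.txt file: each line is an artist mbid,
    a tab, then the related mbids separated by spaces."""
    top10 = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        artist_mbid, related = line.split("\t")
        top10[artist_mbid] = related.split()
    return top10


def parse_mb2uri(lines):
    """Parse a mb2uri_* file: three tab-separated fields per line
    (MusicBrainz ID, Last.fm artist name, DBpedia URI)."""
    rows = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        mbid, lastfm_name, dbpedia_uri = line.split("\t")
        rows.append({"mbid": mbid,
                     "lastfm_name": lastfm_name,
                     "dbpedia_uri": dbpedia_uri})
    return rows
```

    Both helpers accept any iterable of lines, so they work equally well on an open file handle or a list of strings.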

  19. ODM Data Analysis—A tool for the automatic validation, monitoring and...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    mp4
    Updated May 31, 2023
    Tobias Johannes Brix; Philipp Bruland; Saad Sarfraz; Jan Ernsting; Philipp Neuhaus; Michael Storck; Justin Doods; Sonja Ständer; Martin Dugas (2023). ODM Data Analysis—A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data [Dataset]. http://doi.org/10.1371/journal.pone.0199242
    Explore at:
    mp4Available download formats
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Tobias Johannes Brix; Philipp Bruland; Saad Sarfraz; Jan Ernsting; Philipp Neuhaus; Michael Storck; Justin Doods; Sonja Ständer; Martin Dugas
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: A required step in presenting the results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow for this task is to export the clinical data from the electronic data capture system used and import it into statistical software such as SAS or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data.

    Methods: The system requires clinical data in the CDISC Operational Data Model format. After the file is uploaded, its syntax and the data type conformity of the collected data are validated. The completeness of the study data is determined, and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality.

    Results: The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and is also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is stored in the application only as long as the calculations are performed, which is compliant with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6,000 subjects.

    Discussion: Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.

  20. Data from: Thrifty Food Plan Cost Estimates for Alaska and Hawaii

    • agdatacommons.nal.usda.gov
    pdf
    Updated Nov 22, 2025
    Kevin Meyers Mathieu (2025). Data from: Thrifty Food Plan Cost Estimates for Alaska and Hawaii [Dataset]. http://doi.org/10.15482/USDA.ADC/1529439
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Nov 22, 2025
    Dataset provided by
    Ag Data Commons
    Authors
    Kevin Meyers Mathieu
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Alaska, Hawaii
    Description

    This online supplement contains data files and computer code enabling the public to reproduce the results of the analysis described in the report "Thrifty Food Plan Cost Estimates for Alaska and Hawaii", published by USDA FNS in July 2023. The report is available at: https://www.fns.usda.gov/cnpp/tfp-akhi. The online supplement contains: a user guide, which describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis; a data file in CSV format, which contains as much detail on food price differentials between the mainland U.S. and Alaska and Hawaii, derived from Circana (formerly Information Resources Inc) retail scanner data, as could be released without disclosing proprietary information; SAS and R code, which use the provided data file to reproduce the results of the report; and an Excel spreadsheet containing the reproduced results from the SAS or R code. For technical inquiries, contact: FNS.FoodPlans@usda.gov. Resources in this dataset:

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement User Guide File name: TFPCostEstimatesForAlaskaAndHawaii-UserGuide.pdf Resource description: The online supplement user guide describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Data File File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementDataFile.csv Resource description: The online supplement data file contains food price differentials between the mainland United States and Anchorage and Honolulu derived from Circana (formerly Information Resources Inc) retail scanner data. The data was aggregated to prevent disclosing proprietary information.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement R Code File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementRCode.R Resource description: The online supplement R code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the R programming language.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement SAS Code (zipped) File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementSASCode.zip Resource description: The online supplement SAS code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the SAS programming language. This SAS file is provided in zip format for compatibility with Ag Data Commons; users will need to unzip the file prior to its use.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Reproduced Results File name: TFPCostEstimatesforAlaskaandHawaii-ReproducedResults.xlsx Resource description: The online supplement reproduced results are output from either the online supplement R or SAS code and contain the results of the analysis described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report.
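    Because the supplement data file is plain CSV, it can also be inspected without SAS or R. A minimal sketch in Python's standard library (the column names used in any real analysis should be taken from the data dictionary in the user guide; none are assumed here):

```python
import csv


def load_supplement(path):
    """Read the online-supplement CSV into a list of dicts,
    one per data row, keyed by the header row."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```

    From there the rows can be filtered, aggregated, or fed into pandas for a fuller reproduction of the report's tables.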
