43 datasets found
  1. Data file in SAS format

    • figshare.com
    txt
    Updated Jan 19, 2016
    Cite
    Guillaume Béraud (2016). Data file in SAS format [Dataset]. http://doi.org/10.6084/m9.figshare.1466915.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Guillaume Béraud
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    data file in SAS format

  2. Provider Specific Data for Public Use in SAS Format

    • data.amerigeoss.org
    html
    Updated Jul 29, 2019
    + more versions
    Cite
    United States[old] (2019). Provider Specific Data for Public Use in SAS Format [Dataset]. https://data.amerigeoss.org/da_DK/dataset/provider-specific-data-for-public-use-in-sas-format-0d063
    Explore at:
    Available download formats: html
    Dataset updated
    Jul 29, 2019
    Dataset provided by
    United States[old]
    Description

    The Fiscal Intermediary maintains the Provider Specific File (PSF). The file contains information about the facts specific to the provider that affect computations for the Prospective Payment System. The Provider Specific files in SAS format are located in the Download section below for the following provider types: Inpatient, Skilled Nursing Facility, Home Health Agency, Hospice, Inpatient Rehab, Long Term Care, and Inpatient Psychiatric Facility.

  3. SAS scripts for supplementary data.

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    docx
    Updated Jun 1, 2023
    Cite
    Nneka M. George; Julia Whitaker; Giovana Vieira; Jerome T. Geronimo; Dwight A. Bellinger; Craig A. Fletcher; Joseph P. Garner (2023). SAS scripts for supplementary data. [Dataset]. http://doi.org/10.1371/journal.pone.0132092.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Nneka M. George; Julia Whitaker; Giovana Vieira; Jerome T. Geronimo; Dwight A. Bellinger; Craig A. Fletcher; Joseph P. Garner
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The raw data for each of the analyses are presented: baseline severity difference (probands only) (Figure A in S1 Dataset), repeated measures analysis of change in lesion severity (Figure B in S1 Dataset), logistic regression of survivorship (Figure C in S1 Dataset), and time to cure (Figure D in S1 Dataset). Each data set is given as SAS code for the data itself, together with the equivalent analysis to that performed in JMP (and reported in the text). Data are presented in SAS format as this is a simple text format. The data and code were generated as direct exports from JMP, with additional SAS code added as needed (for instance, JMP does not export code for post-hoc tests). Note, however, that SAS rounds to lower precision than JMP and can give slightly different results, especially for REML methods. (DOCX)

  4. Model-derived synthetic aperture sonar (SAS) data in Generic Data Format (GDF)

    • marine-geo.org
    Updated Sep 24, 2024
    + more versions
    Cite
    (2024). Model-derived synthetic aperture sonar (SAS) data in Generic Data Format (GDF) [Dataset]. https://www.marine-geo.org/tools/files/31898
    Explore at:
    Dataset updated
    Sep 24, 2024
    Description

    The simulated synthetic aperture sonar (SAS) data presented here was generated using PoSSM [Johnson and Brown 2018]. The data is suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package is simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf". A description of the data and the model is found in the associated document called "Description_of_Simulated_SAS_Data_Package.pdf" and a description of the format in which the raw binary data is stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files, i.e. the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma separated value spreadsheets describing the locations of scatterers and objects of interest within each scene. In addition to the binary GDF data, a beamformed GeoTIFF image and a single-look complex (SLC, science file) data of each scene is provided. The SLC data (science) is stored in the Hierarchical Data Format 5 (https://www.hdfgroup.org/), and appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format. Supporting documentation that outlines positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphic (PNG) images that plot the location of objects of all the objects of interest in each scene in Along-Track and Cross-Track notation are provided.
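
    As an aside for readers working with the single-look complex (science) files, a minimal Python sketch for inspecting one scene's HDF5 contents is shown below; it assumes the h5py package is installed, and the file name is a hypothetical placeholder rather than the actual name used in the data package.

        # Minimal sketch: list every dataset (name, shape, dtype) in one SLC .hdf5 file.
        # The file name below is a hypothetical placeholder.
        import h5py

        def show(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(name, obj.shape, obj.dtype)

        with h5py.File("scene01_slc.hdf5", "r") as f:
            f.visititems(show)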

  5. Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes

    • data.mendeley.com
    Updated Apr 6, 2023
    + more versions
    Cite
    David Cundiff (2023). Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.10
    Explore at:
    Dataset updated
    Apr 6, 2023
    Authors
    David Cundiff
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute of Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017) affiliated with the University of Washington. We are volunteer collaborators with IHME and not employed by IHME or the University of Washington.

    The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.

    These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis. The data include the following:
    1. Analysis database of population weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable that includes over 100 types of noncommunicable diseases), and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc.)
    2. A text file to import the analysis database into SAS
    3. The SAS code to format the analysis database to be used for analytics
    4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
    5. SAS code for deriving the multiple regression formula in Table 4
    6. SAS code for deriving the multiple regression formula in Table 5
    7. SAS code for deriving the multiple regression formula in Supplementary Table 7
    8. SAS code for deriving the multiple regression formula in Supplementary Table 8
    9. The Excel files that accompanied the above SAS code to produce the tables

    For questions, please email davidkcundiff@gmail.com. Thanks.
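
    As an illustration of the kind of exploration described above (univariate and multiple regression of health outcomes on risk factors), a minimal Python sketch follows; the CSV export and the column names are hypothetical stand-ins rather than the authors' actual variable names, and the real workflow uses the provided SAS code.

        # Minimal sketch: multiple regression of an outcome on a few risk factors.
        # File name and column names are hypothetical placeholders.
        import pandas as pd
        import statsmodels.formula.api as smf

        gbd = pd.read_csv("AnalysisDatabaseGBD.csv")
        model = smf.ols("ncd_deaths_per_100k ~ bmi + sodium_intake + smoking", data=gbd).fit()
        print(model.summary())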

  6. Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation.

    • demo-b2find.dkrz.de
    Updated Sep 22, 2025
    + more versions
    Cite
    (2025). Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation. - Dataset - B2FIND [Dataset]. http://demo-b2find.dkrz.de/dataset/da423f51-0a3c-540f-8ee8-830d0c9e9ef0
    Explore at:
    Dataset updated
    Sep 22, 2025
    Description

    This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user. The other ones are just executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables which are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and by default also in csv format.
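
    For orientation, a minimal pandas sketch of the per-person merge idea described above is given below; the SAS macros operate on the original D, R, H and P files, and the file names, merge keys, and the RB110 code used here are assumptions that should be checked against the EU-SILC codebook.

        # Minimal sketch: merge personal (P) and register (R) records per person and
        # flag deaths from RB110. Paths, keys, and codes are assumptions.
        import pandas as pd

        p = pd.read_csv("udb_p_file.csv")   # personal interview file
        r = pd.read_csv("udb_r_file.csv")   # personal register file, holds RB110
        merged = p.merge(r, left_on="PB030", right_on="RB030", how="inner")

        DEATH_CODE = 6  # assumed RB110 value for "died"; verify in the codebook
        merged["died"] = merged["RB110"].eq(DEATH_CODE)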

  7. SAS-2 Photon Events Catalog - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). SAS-2 Photon Events Catalog - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sas-2-photon-events-catalog
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from 2 sources. The photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was converted to a relative pixel in the image. This was done by using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.
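
    To make the projection step concrete, a simplified Python sketch of an Aitoff projection from (RA, Dec) offsets to pixel coordinates in a 512 x 512 image is shown below; it ignores the full spherical rotation to the field centre and uses a made-up plate scale, so it illustrates the idea rather than reproducing the HEASARC processing.

        # Simplified sketch: Aitoff-project coordinate offsets and map them onto a
        # 512 x 512 pixel grid. The plate scale (deg/pixel) is a hypothetical value.
        import numpy as np

        def aitoff_xy(lon_deg, lat_deg):
            lon, lat = np.radians(lon_deg), np.radians(lat_deg)
            a = np.arccos(np.cos(lat) * np.cos(lon / 2.0))
            sinc_a = np.sinc(a / np.pi)  # sin(a)/a, with the a = 0 case handled
            return 2.0 * np.cos(lat) * np.sin(lon / 2.0) / sinc_a, np.sin(lat) / sinc_a

        def to_pixel(ra, dec, ra0, dec0, deg_per_pix=0.5, size=512):
            x, y = aitoff_xy(ra - ra0, dec - dec0)
            return size // 2 + np.degrees(x) / deg_per_pix, size // 2 + np.degrees(y) / deg_per_pix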

  8. SAS-2 Map Product Catalog

    • catalog.data.gov
    • s.cnmilf.com
    Updated Sep 19, 2025
    + more versions
    Cite
    High Energy Astrophysics Science Archive Research Center (2025). SAS-2 Map Product Catalog [Dataset]. https://catalog.data.gov/dataset/sas-2-map-product-catalog
    Explore at:
    Dataset updated
    Sep 19, 2025
    Dataset provided by
    High Energy Astrophysics Science Archive Research Center
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1-degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database are set as 'wide open' as possible. That is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or can use the 'fhelp' facility from the command line to gain information about FADMAP. This is a service provided by NASA HEASARC.
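
    For readers who want to inspect one of these maps outside of BROWSE and FADMAP, a minimal Python sketch using astropy is shown below; the file name is a hypothetical placeholder for a downloaded map.

        # Minimal sketch: open a 60 x 60 FITS map and print the header keywords,
        # which embed the FADMAP selection parameters described above.
        from astropy.io import fits

        with fits.open("sas2_map_example.fits") as hdul:
            hdul.info()                  # list the HDUs in the file
            print(repr(hdul[0].header))  # embedded selection keywords
            image = hdul[0].data         # 60 x 60 image, 1-degree pixels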

  9. SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sas-2-map-product-catalog
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1-degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database are set as 'wide open' as possible. That is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or can use the 'fhelp' facility from the command line to gain information about FADMAP. This is a service provided by NASA HEASARC.

  10. Infogroup US Historical Business Data

    • dataverse.harvard.edu
    application/gzip +4
    Updated Apr 17, 2020
    Cite
    Harvard Dataverse (2020). Infogroup US Historical Business Data [Dataset]. http://doi.org/10.7910/DVN/PNOFKI
    Explore at:
    Available download formats: application/x-gzip (981236468), csv (8714), application/gzip (621979196), pdf (41531), tsv (13094)
    Dataset updated
    Apr 17, 2020
    Dataset provided by
    Harvard Dataverse
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/10.0/customlicense?persistentId=doi:10.7910/DVN/PNOFKI

    Time period covered
    1997 - 2019
    Area covered
    United States
    Description

    InfoGroup’s Historical Business Backfile consists of geo-coded records of millions of US businesses and other organizations that contain basic information on each entity, such as: contact information, industry description, annual revenues, number of employees, year established, and other data. Each annual file consists of a “snapshot” of InfoGroup’s data as of the last day of each year, creating a time series of data 1997-2019. Access is restricted to current Harvard University community members. Use of Infogroup US Historical Business Data is subject to the terms and conditions of a license agreement (effective March 16, 2016) between Harvard and Infogroup Inc. and subject to applicable laws. Most data files are available in either .csv or .sas format. All data files are compressed into an archive in .gz, or GZIP, format. Extraction software such as 7-Zip is required to unzip these archives.
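
    Since the annual snapshots are distributed as gzip archives, a minimal Python sketch for reading a compressed .csv snapshot directly (without a separate extraction step) is shown below; the file name is a hypothetical placeholder, and access to the files themselves is restricted as described above.

        # Minimal sketch: read a gzip-compressed CSV snapshot straight into pandas.
        # File name is a hypothetical placeholder.
        import pandas as pd

        snapshot = pd.read_csv("infogroup_snapshot_1997.csv.gz", compression="gzip", low_memory=False)
        print(snapshot.shape)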

  11. SAS-2 Photon Events Catalog | gimi9.com

    • gimi9.com
    Updated Feb 1, 2001
    Cite
    (2001). SAS-2 Photon Events Catalog | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_sas-2-photon-events-catalog/
    Explore at:
    Dataset updated
    Feb 1, 2001
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from 2 sources. The photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was converted to a relative pixel in the image. This was done by using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.

  12. Pfizer documents

    • kaggle.com
    zip
    Updated Nov 2, 2022
    Cite
    Konrad Banachewicz (2022). Pfizer documents [Dataset]. https://www.kaggle.com/datasets/konradb/pfizer-documents/code
    Explore at:
    Available download formats: zip (1453383851 bytes)
    Dataset updated
    Nov 2, 2022
    Authors
    Konrad Banachewicz
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    All the documents released so far by Pfizer about the Covid-19 vaccine:
    - the folders Rel_ddmmyyyy correspond to batches prepared by Public Health and Medical Professionals for Transparency (https://phmpt.org/)
    - each contains multiple .xpt files (SAS format) along with corresponding .pdf
    - Court_Documents gathers the available docs on court proceedings
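
    A minimal Python sketch for loading one of the released .xpt (SAS transport) files is shown below; the file name is a hypothetical placeholder for any of the .xpt files in a Rel_ddmmyyyy folder.

        # Minimal sketch: read a SAS transport (.xpt) file with pandas.
        # File name is a hypothetical placeholder; strings come back as bytes
        # unless an encoding is passed.
        import pandas as pd

        df = pd.read_sas("example_release_file.xpt", format="xport")
        print(df.head())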

  13. Safe Schools Research Initiative, Texas, 2015-2017

    • icpsr.umich.edu
    • catalog.data.gov
    Updated Nov 29, 2018
    Cite
    Noyola, Orlando (2018). Safe Schools Research Initiative, Texas, 2015-2017 [Dataset]. http://doi.org/10.3886/ICPSR36988.v1
    Explore at:
    Dataset updated
    Nov 29, 2018
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Noyola, Orlando
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/36988/terms

    Time period covered
    2015 - 2017
    Area covered
    Texas, United States
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study sought to examine any major changes in schools in the past two years as an evaluation of the Safe and Civil Schools Initiative. Students, faculty, and administrators were asked questions on topics including school safety, climate, and the discipline process. This collection includes 6 SAS data files: "psja_schools.sas7bdat" with 66 variables and 15 cases, "psja_schools_v01.sas7bdat" with 104 variables and 15 cases, "psja_staff.sas7bdat" with 39 variables and 2,921 cases, "psja_staff_v01.sas7bdat" with 202 variables and 2,398 cases, "psja_students.sas7bdat" with 97 variables and 4,382 cases, and "psja_students_v01.sas7bdat" with 332 variables and 4,267 cases. Additionally, the collection includes 1 SAS formats catalog "formats.sas7bcat", and 10 SAS syntax files.
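
    A minimal Python sketch for opening one of the SAS data files and checking its dimensions against the counts quoted above is shown below; it assumes the zipped collection has been downloaded and extracted locally.

        # Minimal sketch: read a .sas7bdat file and confirm cases x variables.
        import pandas as pd

        students = pd.read_sas("psja_students.sas7bdat")
        print(students.shape)  # expected (4382, 97): 4,382 cases, 97 variables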

  14. SAS: Semantic Artist Similarity Dataset

    • live.european-language-grid.eu
    txt
    Updated Oct 28, 2023
    Cite
    (2023). SAS: Semantic Artist Similarity Dataset [Dataset]. https://live.european-language-grid.eu/catalogue/corpus/7418
    Explore at:
    Available download formats: txt
    Dataset updated
    Oct 28, 2023
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Semantic Artist Similarity dataset consists of two datasets of artist entities with their corresponding biography texts, and the list of the top-10 most similar artists within each dataset used as ground truth. The dataset is composed of a corpus of 268 artists and a slightly larger one of 2,336 artists, both gathered from Last.fm in March 2015. The former is mapped to the MIREX Audio and Music Similarity evaluation dataset, so that its similarity judgments can be used as ground truth. For the latter corpus we use the similarity between artists as provided by the Last.fm API. For every artist there is a list with the top-10 most related artists. In the MIREX dataset there are 188 artists with at least 10 similar artists; the other 80 artists have fewer than 10 similar artists. In the Last.fm API dataset all artists have a list of 10 similar artists.

    There are 4 files in the dataset. mirex_gold_top10.txt and lastfmapi_gold_top10.txt have the top-10 lists of artists for every artist of both datasets. Artists are identified by MusicBrainz ID. The format of each file is one line per artist, with the artist mbid separated by a tab from the list of top-10 related artists, identified by their mbid and separated by spaces:

    artist_mbid \t artist_mbid_top10_list_separated_by_spaces

    mb2uri_mirex and mb2uri_lastfmapi.txt have the list of artists. In each line there are three fields separated by tabs: the first field is the MusicBrainz ID, the second field is the Last.fm name of the artist, and the third field is the DBpedia URI:

    artist_mbid \t lastfm_name \t dbpedia_uri

    There are also 2 folders in the dataset with the biography texts of each dataset. Each .txt file in the biography folders is named with the MusicBrainz ID of the biographied artist. Biographies were gathered from the Last.fm wiki page of every artist.

    Using this dataset: we would highly appreciate it if scientific publications of works partly based on the Semantic Artist Similarity dataset quote the following publication: Oramas, S., Sordo M., Espinosa-Anke L., & Serra X. (In Press). A Semantic-based Approach for Artist Similarity. 16th International Society for Music Information Retrieval Conference.

    We are interested in knowing if you find our datasets useful! If you use our dataset please email us at mtg-info@upf.edu and tell us about your research. https://www.upf.edu/web/mtg/semantic-similarity
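
    A minimal Python parsing sketch for the two file layouts described above (tab-separated fields, with the top-10 list separated by spaces) follows; it assumes the files have been downloaded into the working directory, and the exact file names should be checked against the download.

        # Minimal sketch: parse the top-10 lists and the artist mapping files.
        top10 = {}
        with open("mirex_gold_top10.txt", encoding="utf-8") as f:
            for line in f:
                artist_mbid, rest = line.rstrip("\n").split("\t")
                top10[artist_mbid] = rest.split()  # ten related mbids

        artists = {}
        with open("mb2uri_mirex.txt", encoding="utf-8") as f:
            for line in f:
                mbid, lastfm_name, dbpedia_uri = line.rstrip("\n").split("\t")
                artists[mbid] = (lastfm_name, dbpedia_uri)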

  15. NC birth outcomes and greenery metrics

    • gimi9.com
    • catalog.data.gov
    Updated Sep 7, 2025
    Cite
    (2025). NC birth outcomes and greenery metrics [Dataset]. https://gimi9.com/dataset/data-gov_nc-birth-outcomes-and-greenery-metrics/
    Explore at:
    Dataset updated
    Sep 7, 2025
    Description

    This data contains linked birth registry information with greenery metrics in North Carolina. This dataset is not publicly accessible because: EPA cannot release personally identifiable information regarding living individuals, according to the Privacy Act and the Freedom of Information Act (FOIA). This dataset contains information about human research subjects. Because there is potential to identify individual participants and disclose personal information, either alone or in combination with other datasets, individual level data are not appropriate to post for public access. Restricted access may be granted to authorized persons by contacting the party listed. It can be accessed through the following means: Birth records can be requested through the NC State Health Department. Greenery metrics can be downloaded through EPA's EnviroAtlas. Format: Datasets are in csv, R, and SAS formats. This dataset is associated with the following publication: Tsai, W., T. Luben, and K. Rappazzo. Associations between neighborhood greenery and birth outcomes in a North Carolina cohort. Journal of Exposure Science and Environmental Epidemiology. Nature Publishing Group, London, UK, 35(5): 821-830, (2025).

  16. Bynum 1-Year Standard Method for identifying Alzheimer’s Disease and Related Dementias (ADRD) in Medicare Claims data

    • openicpsr.org
    Updated Dec 13, 2022
    + more versions
    Cite
    Julie Bynum (2022). Bynum 1-Year Standard Method for identifying Alzheimer’s Disease and Related Dementias (ADRD) in Medicare Claims data [Dataset]. http://doi.org/10.3886/E183523V1
    Explore at:
    Dataset updated
    Dec 13, 2022
    Dataset provided by
    Institute for Healthcare Policy and Innovation, University of Michigan
    Authors
    Julie Bynum
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    USA
    Description

    Here, you will find resources to use the Bynum-Standard 1-Year Algorithm, including a README file that accompanies SAS and Stata scripts for the 1-Year Standard Method for identifying Alzheimer’s Disease and Related Dementias (ADRD) in Medicare Claims data. There are seven script files (plus a parameters file for SAS [parm.sas]) for both SAS and Stata. The files are numbered in the order in which they should be run; the five “1” files may be run in any order.

    The full algorithm requires access to a single year of Medicare Claims data for (1) MedPAR, (2) Home Health Agency (HHA) Claims File, (3) Hospice Claims File, (4) Carrier Claims and Line Files, and (5) Hospital Outpatient File (HOF) Claims and Revenue Files. All Medicare Claims files are expected to be in SAS format (.sas7bdat).

    For each data source, the script will output three files*:
    Diagnosis-level file: Lists individual ADRD diagnoses for each beneficiary for a given visit. This file allows researchers to identify which ICD-9-CM or ICD-10-CM codes are used in the claims data.
    Service Date-level file: Aggregated from the Diagnosis-level file, this file includes all beneficiaries with an ADRD diagnosis by Service Date (date of a claim with at least one ADRD diagnosis).
    Beneficiary-level file: Aggregated from the Service Date-level file, this file includes all beneficiaries with at least one* ADRD diagnosis at any point in the year within a specific file.

    * The algorithm combines the Carrier and HOF files at the Service Date level. The final combined Carrier and HOF Beneficiary-level file includes those with at least two (2) claims that are seven (7) or more days apart.

    A final combined file is created by merging all Beneficiary-level files. This file is used to identify beneficiaries with ADRD and can be merged onto other files by the Beneficiary ID (BENE_ID).

    With appreciation & acknowledgement to colleagues at the NIA IMPACT Collaboratory for their involvement in development & validation of the Bynum-Standard 1-Year Algorithm:
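
    A minimal pandas sketch of the combined Carrier/HOF inclusion rule described above (at least two ADRD claim service dates seven or more days apart) is shown below; the input file and column names are hypothetical stand-ins, not the variable names used in the SAS and Stata scripts.

        # Minimal sketch: keep beneficiaries whose ADRD service dates span >= 7 days
        # (which implies at least two claims that are 7 or more days apart).
        import pandas as pd

        svc = pd.read_csv("carrier_hof_service_dates.csv", parse_dates=["service_date"])

        def qualifies(dates):
            d = dates.sort_values()
            return len(d) >= 2 and (d.iloc[-1] - d.iloc[0]).days >= 7

        adrd_flags = svc.groupby("BENE_ID")["service_date"].apply(qualifies)
        adrd_ids = adrd_flags[adrd_flags].index  # BENE_IDs meeting the rule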

  17. Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example.

    • wiley.figshare.com
    html
    Updated Jun 1, 2023
    Cite
    Thomas Philippi (2023). Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example. [Dataset]. http://doi.org/10.6084/m9.figshare.3524501.v1
    Explore at:
    Available download formats: html
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Wiley (https://www.wiley.com/)
    Authors
    Thomas Philippi
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File List
    ACS.zip -- .zip file containing SAS macro and example code, and example Aletris bracteata data sets:
    acs.sas
    chekika_ACS_estimation.sas
    chekika_1.csv
    chekika_2.csv
    philippi.3.1.zip

    Description "acs.sas" is a SAS macro for computing Horvitz-Thompson and Hansen-Horwitz estimates of population size for adaptive cluster sampling with random initial sampling. This version uses ugly base SAS code and does not require SQL or SAS products other than Base SAS, and should work with versions 8.2 onward (tested with versions 9.0 and 9.1). "chekika_ACS_estimation.sas" is example SAS code calling the acs macro to analyze the Chekika Aletris bracteata example data sets. "chekika_1.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 1-m2 quadrats. "chekika_2.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 4-m2 quadrats. "philippi.3.1.zip" metadata file generated by morpho, including both xml and css.

  18. ViC Dataset: IQ signal visualization for CBRS, SAS

    • kaggle.com
    zip
    Updated Oct 15, 2021
    Cite
    Hyelin Nam (2021). ViC Dataset: IQ signal visualization for CBRS, SAS [Dataset]. https://www.kaggle.com/hyelinnam/vic-dataset-iq-signal-visualization-for-cbrs
    Explore at:
    Available download formats: zip (3014659649 bytes)
    Dataset updated
    Oct 15, 2021
    Authors
    Hyelin Nam
    Description

    Context

    The ViC dataset is a collection for implementing a Dynamic Spectrum Access (DSA) system testbed in the CBRS band in the USA. The data come from a DSA system with a two-tier user structure: an incumbent user generating a chirp signal with a radar system, and a primary user transmitting an LTE-TDD signal with a CBSD base station system; these correspond to signal waveforms in the bands 3.55-3.56 GHz (Ch1) and 3.56-3.57 GHz (Ch2), respectively. Depending on the presence or absence of the two users in the two channels there are 16 possible cases; excluding those ruled out by the assumption about how the CBSD base stations are used (marked (X) below), there are a total of 12 classes. The labels of each data item have the following meanings:

    0000 (0): All off
    0001 (1): Ch2 - Radar on
    0010 (2): Ch2 - LTE on
    0011 (3): Ch2 - LTE, Radar on
    0100 (4): Ch1 - Radar on
    0101 (5): Ch1 - Radar on / Ch2 - Radar on
    0110 (6): Ch1 - Radar on / Ch2 - LTE on
    0111 (7): Ch1 - Radar on / Ch2 - LTE, Radar on
    1000 (8): Ch1 - LTE on
    1001 (9): Ch1 - LTE on / Ch2 - Radar on (X)
    1010 (10): Ch1 - LTE on / Ch2 - LTE on (X)
    1011 (11): Ch1 - LTE on / Ch2 - LTE, Radar on
    1100 (12): Ch1 - LTE, Radar on
    1101 (13): Ch1 - LTE, Radar on / Ch2 - Radar on (X)
    1110 (14): Ch1 - LTE, Radar on / Ch2 - LTE on (X)
    1111 (15): Ch1 - LTE, Radar on / Ch2 - LTE, Radar on

    Content

    This dataset has a total of 7 components: one raw dataset expressed in two file extensions, 4 processed datasets processed in different ways, and a label. Except for one of the files, all are Python numpy files; the remaining one is a csv file.

    (Raw) The raw data are IQ data generated from testbeds created by imitating the SAS system of CBRS in the United States. In the testbeds, the primary user was built using the LabVIEW communication tool and a USRP antenna (radar), and the secondary user was built by manufacturing the CBSD base station. The raw data exist in both csv and numpy formats.

    (Processed) All of these datasets except one are normalized to values between 0 and 255 and consist of spectrogram, scalogram, and IQ data. The other one is a spectrogram dataset that is not normalized. Each is measured over 250 us. In the case of the spectrograms and scalograms, the figure formed at 3.56 GHz to 3.57 GHz corresponds to channel 1, and at 3.55 GHz to 3.56 GHz corresponds to channel 2. Among them, signals transmitted from the CBSD base station appear in the form of LTE-TDD signals, and signals transmitted from the radar system appear in the form of chirp signals.

    (Label) All five of the above datasets share one label file, which is in numpy format.
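
    A minimal Python sketch decoding the 4-bit label scheme listed above into per-channel flags follows; the bit order (Ch1 LTE, Ch1 radar, Ch2 LTE, Ch2 radar, from most to least significant bit) is inferred from the label listing.

        # Minimal sketch: decode a label value (0-15) into channel/user flags.
        def decode_label(label):
            return {
                "ch1_lte":   bool(label & 0b1000),
                "ch1_radar": bool(label & 0b0100),
                "ch2_lte":   bool(label & 0b0010),
                "ch2_radar": bool(label & 0b0001),
            }

        print(decode_label(6))  # 0110: Ch1 radar on, Ch2 LTE on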

  19. CDC Dataset

    • kaggle.com
    zip
    Updated Jun 22, 2024
    Cite
    Mayur Bhorkar (2024). CDC Dataset [Dataset]. https://www.kaggle.com/datasets/mayurbhorkar/cdc-dataset/data
    Explore at:
    Available download formats: zip (1218211538 bytes)
    Dataset updated
    Jun 22, 2024
    Authors
    Mayur Bhorkar
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The data source for this project work is a large collection of raw data publicly available on CDC website. “CDC is the nation’s leading science-based, data-driven, service organisation that protects the public’s health. In 1984, the CDC established the Behavioral Risk Factor Surveillance System (BRFSS). The BRFSS is the nation’s premier system of health-related telephone surveys that collect state data about U.S. residents regarding their health-related risk behaviours, chronic health conditions, and use of preventive services.” (CDC - BRFSS Annual Survey Data, 2020).

    I have referred to a set of data collected between the years 2005 and 2021, and it contains more than 7 million records in total (7,143,987 to be exact). For each year there are around 300 to 400 features available in the dataset, but not all of them are needed for this project, as some of them are irrelevant to my work. I have shortlisted a total of 22 features which are relevant for designing and developing my ML models, and I have explained them in detail in the table below.

    The codebook (for the year 2021) explains the columns below in more detail: https://www.cdc.gov/brfss/annual_data/2021/pdf/codebook21_llcp-v2-508.pdf

    All datasets are obtained from the CDC website, where they are available as zip archives containing a SAS-format file with a .xpt extension. So, I downloaded all the zip files, extracted them, and then converted each one into .csv format so I could easily fetch the records in my project code. I used the command below in Anaconda Prompt to convert a .xpt file into a .csv file:

    C:\users\mayur\Downloads> python -m xport LLCP2020.xpt > LLCP2020.csv
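
    The same conversion can also be done in a short Python script with pandas, as an alternative to the xport command above; the file name mirrors the example in the text.

        # Alternative sketch: convert a BRFSS .xpt file to .csv with pandas.
        import pandas as pd

        df = pd.read_sas("LLCP2020.xpt", format="xport")
        df.to_csv("LLCP2020.csv", index=False)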

  20. Sister Study - Neighborhood greenery, depressive symptoms, and redlining | gimi9.com

    • gimi9.com
    + more versions
    Cite
    Sister Study - Neighborhood greenery, depressive symptoms, and redlining | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_sister-study-neighborhood-greenery-depressive-symptoms-and-redlining/
    Explore at:
    Description

    This dataset contains health outcome (depressive symptoms defined by CES-D 10), neighborhood greenery (percent tree cover within 500m and 2000m from residences), historical HOLC grades, and sociodemographic factors (age, race/ethnicity, marital status, education, employment status, income, use of depression medication) for 3555 Sister Study participants. This dataset is not publicly accessible because: EPA cannot release personally identifiable information regarding living individuals, according to the Privacy Act and the Freedom of Information Act (FOIA). This dataset contains information about human research subjects. Because there is potential to identify individual participants and disclose personal information, either alone or in combination with other datasets, individual level data are not appropriate to post for public access. Restricted access may be granted to authorized persons by contacting the party listed. It can be accessed through the following means: Please submit data request through https://sisterstudy.niehs.nih.gov/English/coll-data.htm. Format: The Sister Study data are released in SAS format. This dataset is associated with the following publication: Tsai, W., M. Nash, D. Rosenbaum, S. Prince, A. D'Aloisio, M. Mehaffey, D. Sandler, T. Buckley, and A. Neale. Association of Redlining and Natural Environment with Depressive Symptoms in Women in the Sister Study. ENVIRONMENTAL HEALTH PERSPECTIVES. National Institute of Environmental Health Sciences (NIEHS), Research Triangle Park, NC, USA, 131(10): 107009, (2022).
