52 datasets found
  1. Data file in SAS format

    • figshare.com
    txt
    Updated Jan 19, 2016
    Cite
    Guillaume Béraud (2016). Data file in SAS format [Dataset]. http://doi.org/10.6084/m9.figshare.1466915.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Guillaume Béraud
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    data file in SAS format

  2. Provider Specific Data for Public Use in SAS Format

    • data.amerigeoss.org
    html
    Updated Jul 29, 2019
    + more versions
    Cite
    United States[old] (2019). Provider Specific Data for Public Use in SAS Format [Dataset]. https://data.amerigeoss.org/da_DK/dataset/provider-specific-data-for-public-use-in-sas-format-0d063
    Explore at:
    Available download formats: html
    Dataset updated
    Jul 29, 2019
    Dataset provided by
    United States[old]
    Description

    The Fiscal Intermediary maintains the Provider Specific File (PSF). The file contains information about the facts specific to the provider that affect computations for the Prospective Payment System. The Provider Specific files in SAS format are located in the Download section below for the following provider types: Inpatient, Skilled Nursing Facility, Home Health Agency, Hospice, Inpatient Rehab, Long Term Care, and Inpatient Psychiatric Facility.

  3. Model-derived synthetic aperture sonar (SAS) data in Generic Data Format (GDF)

    • marine-geo.org
    Updated Sep 24, 2024
    + more versions
    Cite
    (2024). Model-derived synthetic aperture sonar (SAS) data in Generic Data Format (GDF) [Dataset]. https://www.marine-geo.org/tools/files/31898
    Explore at:
    Dataset updated
    Sep 24, 2024
    Description

    The simulated synthetic aperture sonar (SAS) data presented here were generated using PoSSM [Johnson and Brown 2018]. The data are suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package are simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes.

    In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf".

    A description of the data and the model is found in the associated document called "Description_of_Simulated_SAS_Data_Package.pdf", and a description of the format in which the raw binary data is stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing.

    Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files, i.e. the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma separated value spreadsheets describing the locations of scatterers and objects of interest within each scene.

    In addition to the binary GDF data, a beamformed GeoTIFF image and single-look complex (SLC, science file) data for each scene are provided. The SLC (science) data are stored in the Hierarchical Data Format 5 (https://www.hdfgroup.org/), and appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format.

    Supporting documentation that outlines positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphic (PNG) images that plot the locations of all the objects of interest in each scene in Along-Track and Cross-Track notation are provided.
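    Because the SLC data are stored as standard HDF5, a quick way to inspect a scene without the MATLAB parser is a generic HDF5 reader. This is a minimal sketch assuming the h5py library; the file name and dataset path are placeholders, and the actual group names should be taken from the package documentation.

    import h5py
    import numpy as np

    # List the contents of one SLC (science) file to locate the complex image array.
    # "scene01.hdf5" is a placeholder name, not a file name from the package.
    with h5py.File("scene01.hdf5", "r") as f:
        f.visit(print)  # print every group/dataset path in the file
        # Once the SLC dataset path is known, load it and form a magnitude image:
        # slc = np.asarray(f["path/to/slc"])
        # magnitude = np.abs(slc)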

  4. Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes

    • data.mendeley.com
    Updated Apr 6, 2023
    + more versions
    Cite
    David Cundiff (2023). Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.10
    Explore at:
    Dataset updated
    Apr 6, 2023
    Authors
    David Cundiff
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute of Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017) affiliated with the University of Washington. We are volunteer collaborators with IHME and not employed by IHME or the University of Washington.

    The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.

    These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis. The data include the following:
    1. Analysis database of population weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable that includes over 100 types of noncommunicable diseases) and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc.)
    2. A text file to import the analysis database into SAS
    3. The SAS code to format the analysis database to be used for analytics
    4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
    5. SAS code for deriving the multiple regression formula in Table 4
    6. SAS code for deriving the multiple regression formula in Table 5
    7. SAS code for deriving the multiple regression formula in Supplementary Table 7
    8. SAS code for deriving the multiple regression formula in Supplementary Table 8
    9. The Excel files that accompanied the above SAS code to produce the tables

    For questions, please email davidkcundiff@gmail.com. Thanks.

  5. SAS scripts for supplementary data.

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Jul 13, 2015
    Cite
    Geronimo, Jerome T.; Fletcher, Craig A.; Bellinger, Dwight A.; Whitaker, Julia; Vieira, Giovana; Garner, Joseph P.; George, Nneka M. (2015). SAS scripts for supplementary data. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001869731
    Explore at:
    Dataset updated
    Jul 13, 2015
    Authors
    Geronimo, Jerome T.; Fletcher, Craig A.; Bellinger, Dwight A.; Whitaker, Julia; Vieira, Giovana; Garner, Joseph P.; George, Nneka M.
    Description

    The raw data for each of the analyses are presented: baseline severity difference (probands only) (Figure A in S1 Dataset), repeated measures analysis of change in lesion severity (Figure B in S1 Dataset), logistic regression of survivorship (Figure C in S1 Dataset), and time to cure (Figure D in S1 Dataset). Each data set is given as SAS code for the data itself, together with the equivalent analysis to that performed in JMP (and reported in the text). Data are presented in SAS format as this is a simple text format. The data and code were generated as direct exports from JMP, with additional SAS code added as needed (for instance, JMP does not export code for post-hoc tests). Note, however, that SAS rounds to less precision than JMP and can give slightly different results, especially for REML methods. (DOCX)

  6. Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation.

    • demo-b2find.dkrz.de
    Updated Sep 22, 2025
    + more versions
    Cite
    (2025). Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation. - Dataset - B2FIND [Dataset]. http://demo-b2find.dkrz.de/dataset/da423f51-0a3c-540f-8ee8-830d0c9e9ef0
    Explore at:
    Dataset updated
    Sep 22, 2025
    Description

    This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user. The other ones are just executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables which are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and by default also in csv format.
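    As an illustration only (the actual editing is done by the SAS macros described above), the following Python sketch shows one way quarterly date information can be turned into an approximate time at risk, assuming each event is placed at the midpoint of its quarter.

    def quarter_midpoint(year, quarter):
        # Quarter midpoints fall 1.5, 4.5, 7.5, and 10.5 months into the year.
        return year + (3 * (quarter - 1) + 1.5) / 12.0

    def time_at_risk(first_interview, end):
        """Years between the first interview and death or censoring.
        Both arguments are (year, quarter) tuples."""
        return quarter_midpoint(*end) - quarter_midpoint(*first_interview)

    print(time_at_risk((2012, 1), (2015, 3)))  # 3.5 years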

  7. SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sas-2-map-product-catalog
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1 degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database are set as 'wide open' as possible. That is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or can use the 'fhelp' facility from the command line to gain information about FADMAP. This is a service provided by NASA HEASARC.
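    Outside of BROWSE and FADMAP, the product maps can be read with any FITS library, with the selection parameters carried as header keywords. A minimal sketch assuming astropy; the file name is a placeholder.

    from astropy.io import fits

    with fits.open("sas2_map.fits") as hdul:   # placeholder file name
        hdul.info()                            # list the HDUs in the file
        image = hdul[0].data                   # 60 x 60 map, 1-degree pixels
        header = hdul[0].header                # FADMAP selection parameters as keywords
        print(image.shape, header.get("OBJECT"))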

  8. SAS-2 Photon Events Catalog

    • catalog.data.gov
    Updated Sep 19, 2025
    + more versions
    Cite
    High Energy Astrophysics Science Archive Research Center (2025). SAS-2 Photon Events Catalog [Dataset]. https://catalog.data.gov/dataset/sas-2-photon-events-catalog
    Explore at:
    Dataset updated
    Sep 19, 2025
    Dataset provided by
    High Energy Astrophysics Science Archive Research Center
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13,056 photons detected by SAS-2. The original data came from 2 sources. The photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was converted to a relative pixel in the image. This was done by using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.
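    For a quick look at photon sky positions, an Aitoff projection can also be drawn directly with matplotlib; this is for orientation only and does not reproduce the 512 x 512 pixel mapping of the FITS images. The coordinates below are placeholders, and longitudes must be wrapped to [-180, 180] degrees and converted to radians.

    import numpy as np
    import matplotlib.pyplot as plt

    ra_deg = np.array([10.0, 150.0, 250.0])   # placeholder photon RA values
    dec_deg = np.array([-20.0, 5.0, 40.0])    # placeholder photon DEC values

    lon = np.deg2rad(np.where(ra_deg > 180, ra_deg - 360, ra_deg))
    lat = np.deg2rad(dec_deg)

    ax = plt.figure().add_subplot(111, projection="aitoff")
    ax.scatter(lon, lat, s=5)
    ax.grid(True)
    plt.show()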

  9. SAS-2 Map Product Catalog

    • s.cnmilf.com
    • catalog.data.gov
    Updated Sep 19, 2025
    + more versions
    Cite
    High Energy Astrophysics Science Archive Research Center (2025). SAS-2 Map Product Catalog [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/sas-2-map-product-catalog
    Explore at:
    Dataset updated
    Sep 19, 2025
    Dataset provided by
    High Energy Astrophysics Science Archive Research Center
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1 degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database are set as 'wide open' as possible. That is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or can use the 'fhelp' facility from the command line to gain information about FADMAP. This is a service provided by NASA HEASARC.

  10. ViC Dataset: IQ signal visualization for CBRS, SAS

    • kaggle.com
    zip
    Updated Oct 15, 2021
    Cite
    Hyelin Nam (2021). ViC Dataset: IQ signal visualization for CBRS, SAS [Dataset]. https://www.kaggle.com/hyelinnam/vic-dataset-iq-signal-visualization-for-cbrs
    Explore at:
    Available download formats: zip (3014659649 bytes)
    Dataset updated
    Oct 15, 2021
    Authors
    Hyelin Nam
    Description

    Context

    The ViC dataset is a collection for implementing a Dynamic Spectrum Access (DSA) system testbed in the CBRS band in the USA. The data come from a DSA system with two tiers of users: an incident user generating a chirp signal with a Radar system, and a primary user generating an LTE-TDD signal with a CBSD base station system, corresponding to signal waveforms in the bands 3.55-3.56 GHz (Ch1) and 3.56-3.57 GHz (Ch2), respectively. There are a total of 12 classes, after excluding the combinations assumed to be in use by CBSD base stations (marked (X) below), out of the 16 possible on/off combinations of the two users in the two channels. The labels of each data item have the following meanings:

    0000 (0): All off
    0001 (1): Ch2 – Radar on
    0010 (2): Ch2 – LTE on
    0011 (3): Ch2 – LTE, Radar on
    0100 (4): Ch1 – Radar on
    0101 (5): Ch1 – Radar on / Ch2 – Radar on
    0110 (6): Ch1 – Radar on / Ch2 – LTE on
    0111 (7): Ch1 – Radar on / Ch2 – LTE, Radar on
    1000 (8): Ch1 – LTE on
    1001 (9): Ch1 – LTE on / Ch2 – Radar on (X)
    1010 (10): Ch1 – LTE on / Ch2 – LTE on (X)
    1011 (11): Ch1 – LTE on / Ch2 – LTE, Radar on
    1100 (12): Ch1 – LTE, Radar on
    1101 (13): Ch1 – LTE, Radar on / Ch2 – Radar on (X)
    1110 (14): Ch1 – LTE, Radar on / Ch2 – LTE on (X)
    1111 (15): Ch1 – LTE, Radar on / Ch2 – LTE, Radar on

    Content

    This dataset has a total of 7 files: one raw dataset expressed in two file extensions, 4 processed datasets produced in different ways, and a label. All but one of the files are Python numpy files; the remaining one is a csv file.

    (Raw) The raw data are IQ data generated from testbeds built to imitate the SAS system of CBRS in the United States. In the testbeds, the primary user was implemented with the LabVIEW communication tool and a USRP antenna (Radar), and the secondary user was implemented with a manufactured CBSD base station. The raw data exist in both csv and numpy formats.

    (Processed) All of these data except one are normalized to values between 0 and 255 and consist of spectrogram, scalogram, and IQ data. The remaining one is a spectrogram dataset that is not normalized. They are measured over 250 µs. In the case of the spectrograms and scalograms, the figure formed at 3.56 GHz to 3.57 GHz corresponds to channel 1, and the figure at 3.55 GHz to 3.56 GHz corresponds to channel 2. Among them, signals transmitted from the CBSD base station are output in the form of LTE-TDD signals, and signals transmitted from the Radar system are output in the form of chirp signals.

    (Label) All five of the above datasets share one label file, which is in numpy format.
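    Since the processed datasets and the label are plain numpy arrays, loading them for a quick check is straightforward. The file names below are placeholders; the actual names should be taken from the Kaggle archive.

    import numpy as np

    spectrograms = np.load("spectrogram.npy")  # processed data, values scaled to 0-255
    labels = np.load("label.npy")              # class labels shared by the five datasets
    print(spectrograms.shape, labels.shape)
    print(np.unique(labels))                   # expect the 12 classes described above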

  11. The simple and new SAS and R codes to estimate optimum and base selection indices to choice superior genotypes in plants and animals breeding program

    • ebi.ac.uk
    Updated Jun 10, 2022
    Cite
    mehdi rahimi (2022). The simple and new SAS and R codes to estimate optimum and base selection indices to choice superior genotypes in plants and animals breeding program [Dataset]. https://www.ebi.ac.uk/biostudies/studies/S-BSST853
    Explore at:
    Dataset updated
    Jun 10, 2022
    Authors
    mehdi rahimi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The SAS code is provided as Supplementary File 1 and the R program code as Supplementary File 2. For the analysis to proceed, the code requires an input data file (Supplementary Files 3-5) prepared in CSV format; the data can also be stored in other formats such as xlsx, txt, and xls. Economic values are entered manually in the SAS code, whereas in the R code they are stored in an Excel file (Supplementary File 6).

  12. Safe Schools Research Initiative, Texas, 2015-2017

    • icpsr.umich.edu
    • catalog.data.gov
    Updated Nov 29, 2018
    Cite
    Noyola, Orlando (2018). Safe Schools Research Initiative, Texas, 2015-2017 [Dataset]. http://doi.org/10.3886/ICPSR36988.v1
    Explore at:
    Dataset updated
    Nov 29, 2018
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Noyola, Orlando
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/36988/terms

    Time period covered
    2015 - 2017
    Area covered
    Texas, United States
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study sought to examine any major changes in schools in the past two years as an evaluation of the Safe and Civil Schools Initiative. Students, faculty, and administrators were asked questions on topics including school safety, climate, and the discipline process. This collection includes 6 SAS data files: "psja_schools.sas7bdat" with 66 variables and 15 cases, "psja_schools_v01.sas7bdat" with 104 variables and 15 cases, "psja_staff.sas7bdat" with 39 variables and 2,921 cases, "psja_staff_v01.sas7bdat" with 202 variables and 2,398 cases, "psja_students.sas7bdat" with 97 variables and 4,382 cases, and "psja_students_v01.sas7bdat" with 332 variables and 4,267 cases. Additionally, the collection includes 1 SAS formats catalog "formats.sas7bcat", and 10 SAS syntax files.
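    The sas7bdat files can be read without SAS, for example with pandas' SAS reader. A minimal sketch; the file name matches one listed above, but the path is an assumption.

    import pandas as pd

    students = pd.read_sas("psja_students.sas7bdat", format="sas7bdat")
    print(students.shape)  # expected: 4,382 cases and 97 variables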

  13. SAS: Semantic Artist Similarity Dataset

    • live.european-language-grid.eu
    txt
    Updated Oct 28, 2023
    Cite
    (2023). SAS: Semantic Artist Similarity Dataset [Dataset]. https://live.european-language-grid.eu/catalogue/corpus/7418
    Explore at:
    Available download formats: txt
    Dataset updated
    Oct 28, 2023
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Semantic Artist Similarity dataset consists of two datasets of artist entities with their corresponding biography texts, and the list of top-10 most similar artists within the datasets used as ground truth. The dataset is composed of a corpus of 268 artists and a slightly larger one of 2,336 artists, both gathered from Last.fm in March 2015. The former is mapped to the MIREX Audio and Music Similarity evaluation dataset, so that its similarity judgments can be used as ground truth. For the latter corpus we use the similarity between artists as provided by the Last.fm API. For every artist there is a list with the top-10 most related artists. In the MIREX dataset there are 188 artists with at least 10 similar artists; the other 80 artists have fewer than 10 similar artists. In the Last.fm API dataset all artists have a list of 10 similar artists.

    There are 4 files in the dataset. mirex_gold_top10.txt and lastfmapi_gold_top10.txt have the top-10 lists of artists for every artist of both datasets. Artists are identified by MusicBrainz ID. The format of the file is one line per artist, with the artist mbid separated by a tab from the list of top-10 related artists identified by their mbid and separated by spaces:

    artist_mbid \t artist_mbid_top10_list_separated_by_spaces

    mb2uri_mirex and mb2uri_lastfmapi.txt have the list of artists. In each line there are three fields separated by tabs. The first field is the MusicBrainz ID, the second field is the last.fm name of the artist, and the third field is the DBpedia uri:

    artist_mbid \t lastfm_name \t dbpedia_uri

    There are also 2 folders in the dataset with the biography texts of each dataset. Each .txt file in the biography folders is named with the MusicBrainz ID of the biographied artist. Biographies were gathered from the Last.fm wiki page of every artist.

    Using this dataset: We would highly appreciate it if scientific publications of works partly based on the Semantic Artist Similarity dataset quote the following publication: Oramas, S., Sordo, M., Espinosa-Anke, L., & Serra, X. (In Press). A Semantic-based Approach for Artist Similarity. 16th International Society for Music Information Retrieval Conference. We are interested in knowing if you find our datasets useful! If you use our dataset please email us at mtg-info@upf.edu and tell us about your research. https://www.upf.edu/web/mtg/semantic-similarity
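    Given the one-line-per-artist layout described above, the gold-standard files can be parsed with a few lines of Python. The file name matches the dataset description; the helper name is arbitrary.

    def load_top10(path):
        """Map each artist MusicBrainz ID to its list of top-10 similar artist IDs."""
        top10 = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                artist, _, similar = line.rstrip("\n").partition("\t")
                top10[artist] = similar.split()
        return top10

    gold = load_top10("mirex_gold_top10.txt")
    print(len(gold), "artists loaded")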

  14. Data from: Thrifty Food Plan Cost Estimates for Alaska and Hawaii

    • agdatacommons.nal.usda.gov
    pdf
    Updated Nov 22, 2025
    Cite
    Kevin Meyers Mathieu (2025). Data from: Thrifty Food Plan Cost Estimates for Alaska and Hawaii [Dataset]. http://doi.org/10.15482/USDA.ADC/1529439
    Explore at:
    Available download formats: pdf
    Dataset updated
    Nov 22, 2025
    Dataset provided by
    Ag Data Commons
    Authors
    Kevin Meyers Mathieu
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Alaska, Hawaii
    Description

    This online supplement contains data files and computer code, enabling the public to reproduce the results of the analysis described in the report titled “Thrifty Food Plan Cost Estimates for Alaska and Hawaii” published by USDA FNS in July 2023. The report is available at: https://www.fns.usda.gov/cnpp/tfp-akhi. The online supplement contains a user guide, which describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis; a data file in CSV format, which contains the most detailed information on food price differentials between the mainland U.S. and Alaska and Hawaii derived from Circana (formerly Information Resources Inc) retail scanner data as could be released without disclosing proprietary information; SAS and R code, which use the provided data file to reproduce the results of the report; and an excel spreadsheet containing the reproduced results from the SAS or R code. For technical inquiries, contact: FNS.FoodPlans@usda.gov. Resources in this dataset:

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement User Guide File name: TFPCostEstimatesForAlaskaAndHawaii-UserGuide.pdf Resource description: The online supplement user guide describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Data File File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementDataFile.csv Resource description: The online supplement data file contains food price differentials between the mainland United States and Anchorage and Honolulu derived from Circana (formerly Information Resources Inc) retail scanner data. The data was aggregated to prevent disclosing proprietary information.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement R Code File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementRCode.R Resource description: The online supplement R code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the R programming language.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement SAS Code (zipped) File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementSASCode.zip Resource description: The online supplement SAS code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the SAS programming language. This SAS file is provided in zip format for compatibility with Ag Data Commons; users will need to unzip the file prior to its use.

    Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Reproduced Results File name: TFPCostEstimatesforAlaskaandHawaii-ReproducedResults.xlsx Resource description: The online supplement reproduced results are output from either the online supplement R or SAS code and contain the results of the analysis described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report.

  15. Emerging Pathogens Initiative (EPI)

    • data.va.gov
    • datahub.va.gov
    • +1more
    csv, xlsx, xml
    Updated Sep 12, 2019
    + more versions
    Cite
    (2019). Emerging Pathogens Initiative (EPI) [Dataset]. https://www.data.va.gov/w/39pc-24dr/default?cur=52JNa-etkA2
    Explore at:
    Available download formats: xlsx, xml, csv
    Dataset updated
    Sep 12, 2019
    Description

    The Emerging Pathogens Initiative (EPI) database contains emerging pathogens information from the local Veterans Affairs Medical Centers (VAMCs). The EPI software package allows the VA to track emerging pathogens on the national level without additional data entry at the local level. The results from aggregation of data can be shared with the appropriate public health authorities including non-VA and the private health care sector allowing national planning, formulation of intervention strategies, and resource allocations. EPI is designed to automatically collect data on emerging diseases for Veterans Affairs Central Office (VACO) to analyze. The data is sent to the Austin Information Technology Center (AITC) from all Veterans Health Information Systems and Technology Architecture (VistA) systems for initial processing and combination with related workload data. VACO data retrieval and analysis is then carried out. The AITC creates two file structures both in Statistical Analysis Software (SAS) file format, which are used as a source of data for the Veterans Affairs Headquarters (VAHQ) Infectious Diseases Program Office. These files are manipulated and used for analysis and reporting by the National Infectious Diseases Service. Emerging Pathogens (as characterized by VACO) act as triggers for data acquisition activities in the automated program. The system retrieves relevant, predetermined, patient-specific information in the form of a Health Level Seven (HL7) message that is transmitted to the central data repository at the AITC. Once at that location, the data is converted to a SAS dataset for analysis by the VACO National Infectious Diseases Service. Before data transmission an Emerging Pathogens Verification Report is produced for the local sites to review, verify, and make corrections as needed. After data transmission to the AITC it is added to the EPI database.

  16. Data from: Late instar monarch caterpillars sabotage milkweed to acquire toxins, not to disarm plant defence

    • search.dataone.org
    • datasetcatalog.nlm.nih.gov
    • +1more
    Updated Jul 27, 2025
    + more versions
    Cite
    Georg Petschenka; Anja Betz; Robert Bischoff (2025). Late instar monarch caterpillars sabotage milkweed to acquire toxins, not to disarm plant defence [Dataset]. http://doi.org/10.5061/dryad.qnk98sfns
    Explore at:
    Dataset updated
    Jul 27, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Georg Petschenka; Anja Betz; Robert Bischoff
    Time period covered
    Jul 24, 2023
    Description

    Sabotaging milkweed by monarch caterpillars (Danaus plexippus) is a famous textbook example of disarming plant defence. By severing leaf veins, monarchs are thought to prevent the flow of toxic latex to their feeding site. Here, we show that sabotaging by monarch caterpillars is not only an avoidance strategy. While young caterpillars appear to avoid latex, late-instar caterpillars actively ingest exuding latex, presumably to increase sequestration of cardenolides used for defence against predators. Comparisons with caterpillars of the related but non-sequestering common crow butterfly (Euploea core) revealed three lines of evidence supporting our hypothesis. First, monarch caterpillars sabotage inconsistently and therefore the behaviour is not obligatory to feed on milkweed, whereas sabotaging precedes each feeding event in Euploea caterpillars. Second, monarch caterpillars shift their behaviour from latex avoidance in younger to eager drinking in later stages, whereas Euploea caterpil...

    Readme for the statistical documentation for the publication: Monarchs sabotage milkweed to acquire toxins, not to disarm plant defense. Authors: Anja Betz, Robert Bischoff, Georg Petschenka

    For the statistical documentation, we provide the following files:

    • This readme, which gives a brief outline of the different files and data provided in the statistical documentation
    • Subfolders for each experiment, containing:
      • Excel files with just the data
      • SAS code files for analysis of each dataset, with comments
      • SAS dataset files (sas7bdat)
      • a data dictionary.txt that defines all variables of all datasets

    Disclaimer: Excel automatically formats numbers. We do not take any responsibility for automatic formatting of the numbers by Excel. This might lead to different results if the Excel files are used for analysis. The sas7bdat files, or the data at the start of the individual SAS analysis files, should be resistant to automatic formatting, so we suggest using them for analysis.

    The datasets co...

  17. Infogroup US Historical Business Data

    • dataverse.harvard.edu
    application/gzip +4
    Updated Apr 17, 2020
    Cite
    Harvard Dataverse (2020). Infogroup US Historical Business Data [Dataset]. http://doi.org/10.7910/DVN/PNOFKI
    Explore at:
    Available download formats: application/x-gzip (981236468), csv (8714), application/gzip (621979196), pdf (41531), tsv (13094)
    Dataset updated
    Apr 17, 2020
    Dataset provided by
    Harvard Dataverse
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/10.0/customlicense?persistentId=doi:10.7910/DVN/PNOFKI

    Time period covered
    1997 - 2019
    Area covered
    United States
    Description

    InfoGroup’s Historical Business Backfile consists of geo-coded records of millions of US businesses and other organizations that contain basic information on each entity, such as: contact information, industry description, annual revenues, number of employees, year established, and other data. Each annual file consists of a “snapshot” of InfoGroup’s data as of the last day of each year, creating a time series of data 1997-2019. Access is restricted to current Harvard University community members. Use of Infogroup US Historical Business Data is subject to the terms and conditions of a license agreement (effective March 16, 2016) between Harvard and Infogroup Inc. and subject to applicable laws. Most data files are available in either .csv or .sas format. All data files are compressed into an archive in .gz, or GZIP, format. Extraction software such as 7-Zip is required to unzip these archives.
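    For the .csv variants, the .gz archives do not need to be unpacked first if they are read with a tool that handles gzip transparently. A minimal pandas sketch; the file name is a placeholder.

    import pandas as pd

    # pandas decompresses .gz on the fly; 7-Zip extraction is only needed for other tools.
    snapshot = pd.read_csv("infogroup_snapshot_1997.csv.gz", compression="gzip", low_memory=False)
    print(snapshot.shape)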

  18. Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example.

    • wiley.figshare.com
    html
    Updated Jun 1, 2023
    Cite
    Thomas Philippi (2023). Supplement 1. SAS macro for adaptive cluster sampling and Aletris data sets from the example. [Dataset]. http://doi.org/10.6084/m9.figshare.3524501.v1
    Explore at:
    Available download formats: html
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Wiley (https://www.wiley.com/)
    Authors
    Thomas Philippi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File List

    ACS.zip -- .zip file containing SAS macro and example code, and example Aletris bracteata data sets:
    acs.sas
    chekika_ACS_estimation.sas
    chekika_1.csv
    chekika_2.csv
    philippi.3.1.zip

    Description

    "acs.sas" is a SAS macro for computing Horvitz-Thompson and Hansen-Hurwitz estimates of population size for adaptive cluster sampling with random initial sampling. This version uses ugly base SAS code and does not require SQL or SAS products other than Base SAS, and should work with versions 8.2 onward (tested with versions 9.0 and 9.1). "chekika_ACS_estimation.sas" is example SAS code calling the acs macro to analyze the Chekika Aletris bracteata example data sets. "chekika_1.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 1-m2 quadrats. "chekika_2.csv" is an example data set in ASCII comma-delimited format from adaptive cluster sampling of A. bracteata at Chekika, Everglades National Park, with 4-m2 quadrats. "philippi.3.1.zip" is a metadata file generated by morpho, including both xml and css.

  19. ANES 1964 Time Series Study - Archival Version

    • search.gesis.org
    Updated Nov 10, 2015
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    University of Michigan. Survey Research Center. Political Behavior Program (2015). ANES 1964 Time Series Study - Archival Version [Dataset]. http://doi.org/10.3886/ICPSR07235
    Explore at:
    Dataset updated
    Nov 10, 2015
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    GESIS search
    Authors
    University of Michigan. Survey Research Center. Political Behavior Program
    License

    https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de441277

    Description

    Abstract (en): This study is part of a time-series collection of national surveys fielded continuously since 1952. The election studies are designed to present data on Americans' social backgrounds, enduring political predispositions, social and political values, perceptions and evaluations of groups and candidates, opinions on questions of public policy, and participation in political life. A Black supplement of 263 respondents, who were asked the same questions that were administered to the national cross-section sample, is included with the national cross-section of 1,571 respondents. In addition to the usual content, the study contains data on opinions about the Supreme Court, political knowledge, and further information concerning racial issues. Voter validation data have been included as an integral part of the election study, providing objective information from registration and voting records or from respondents' past voting behavior.

    ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: performed consistency checks; standardized missing values; performed recodes and/or calculated derived variables; checked for undocumented or out-of-range codes.

    United States citizens of voting age living in private households in the continental United States. A representative cross-section sample, consisting of 1,571 respondents, plus a Black supplement sample of 263 respondents.

    2015-11-10: The study metadata was updated. 1999-12-14: The data for this study are now available in SAS transport and SPSS export formats, in addition to the ASCII data file. Variables in the dataset have been renumbered to the following format: 2-digit (or 2-character) year prefix + 4 digits + [optional] 1-character suffix. Dataset ID and version variables have also been added. In addition, SAS and SPSS data definition statements have been created for this collection, and the data collection instruments are now available as a PDF file.

    Face-to-face interview, telephone interview. The SAS transport file was created using the SAS CPORT procedure.

  20. CDC Dataset

    • kaggle.com
    zip
    Updated Jun 22, 2024
    Cite
    Mayur Bhorkar (2024). CDC Dataset [Dataset]. https://www.kaggle.com/datasets/mayurbhorkar/cdc-dataset/data
    Explore at:
    Available download formats: zip (1218211538 bytes)
    Dataset updated
    Jun 22, 2024
    Authors
    Mayur Bhorkar
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The data source for this project work is a large collection of raw data publicly available on CDC website. “CDC is the nation’s leading science-based, data-driven, service organisation that protects the public’s health. In 1984, the CDC established the Behavioral Risk Factor Surveillance System (BRFSS). The BRFSS is the nation’s premier system of health-related telephone surveys that collect state data about U.S. residents regarding their health-related risk behaviours, chronic health conditions, and use of preventive services.” (CDC - BRFSS Annual Survey Data, 2020).

    I have referred to a set of data collected between the years 2005 and 2021; it contains more than 7 million records in total (7,143,987 to be exact). For each year there are around 300 to 400 features available in the dataset, but not all of them are needed for this project, as some of them are irrelevant to my work. I have shortlisted a total of 22 features which are relevant for designing and developing my ML models, and I have explained them in detail in the table below.

    The codebook (for the year 2021) explains these columns in more detail: https://www.cdc.gov/brfss/annual_data/2021/pdf/codebook21_llcp-v2-508.pdf

    All datasets were obtained from the CDC website, where they are available as Zip archives, each containing a SAS transport file with a .xpt extension. I downloaded all the zip files, extracted them, and then converted each one into .csv format so I could easily fetch the records in my project code. I used the command below in Anaconda Prompt to convert a .xpt file into a .csv file:

    C:\users\mayur\Downloads> python -m xport LLCP2020.xpt > LLCP2020.csv
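    An alternative to the xport command above is pandas' built-in SAS transport reader, which avoids the separate conversion step. A minimal sketch, assuming one of the downloaded .xpt files.

    import pandas as pd

    df = pd.read_sas("LLCP2020.xpt", format="xport")  # read the SAS transport file directly
    df.to_csv("LLCP2020.csv", index=False)            # optionally still write a CSV copy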
