44 datasets found
  1. SAS code used to analyze data and a datafile with metadata glossary

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Nov 12, 2020
    + more versions
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). SAS code used to analyze data and a datafile with metadata glossary [Dataset]. https://catalog.data.gov/dataset/sas-code-used-to-analyze-data-and-a-datafile-with-metadata-glossary
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency: http://www.epa.gov/
    Description

    We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos. JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
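    For orientation, a minimal SAS sketch of this kind of depth-adjusted cutoff follows. It is not the authors' code: the dataset name BENTHOS and the variables TAXA_RICH and DEPTH_M are invented, and the published analysis is more involved (multiple metrics, separate models per geographic zone).

    ```sas
    /* Hedged sketch: quantile regression of a benthos metric on depth.
       The 50th-percentile fit marks the poor/fair cutoff and the
       90th-percentile fit marks reference condition for that depth. */
    proc quantreg data=benthos;
       model taxa_rich = depth_m / quantile=0.5 0.9;
       output out=cutoffs predicted=pred_cutoff;
    run;
    ```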

  2. SAS code

    • dataverse.harvard.edu
    Updated Oct 25, 2021
    + more versions
    Cite
    Manja Jensen (2021). SAS code [Dataset]. http://doi.org/10.7910/DVN/ZIMHAT
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Oct 25, 2021
    Dataset provided by
    Harvard Dataverse
    Authors
    Manja Jensen
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    SAS code replicating the numbers and tables in the research article “Using a Deliberative Poll on breast cancer screening to assess and improve the decision quality of laypeople” by Manja D. Jensen, Kasper M. Hansen, Volkert Siersma, and John Brodersen.

  3. Global Burden of Disease analysis dataset of noncommunicable disease...

    • data.mendeley.com
    Updated Apr 6, 2023
    + more versions
    Cite
    David Cundiff (2023). Global Burden of Disease analysis dataset of noncommunicable disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.10
    Explore at:
    Dataset updated
    Apr 6, 2023
    Authors
    David Cundiff
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This formatted dataset (AnalysisDatabaseGBD) originates from raw data files from the Institute of Health Metrics and Evaluation (IHME) Global Burden of Disease Study (GBD2017) affiliated with the University of Washington. We are volunteer collaborators with IHME and not employed by IHME or the University of Washington.

    The population weighted GBD2017 data are on male and female cohorts ages 15-69 years including noncommunicable diseases (NCDs), body mass index (BMI), cardiovascular disease (CVD), and other health outcomes and associated dietary, metabolic, and other risk factors. The purpose of creating this population-weighted, formatted database is to explore the univariate and multiple regression correlations of health outcomes with risk factors. Our research hypothesis is that we can successfully model NCDs, BMI, CVD, and other health outcomes with their attributable risks.

    These Global Burden of Disease data relate to the preprint: The EAT-Lancet Commission Planetary Health Diet compared with Institute of Health Metrics and Evaluation Global Burden of Disease Ecological Data Analysis. The data include the following:
    1. Analysis database of population weighted GBD2017 data that includes over 40 health risk factors, noncommunicable disease deaths/100k/year of male and female cohorts ages 15-69 years from 195 countries (the primary outcome variable that includes over 100 types of noncommunicable diseases) and over 20 individual noncommunicable diseases (e.g., ischemic heart disease, colon cancer, etc.)
    2. A text file to import the analysis database into SAS
    3. The SAS code to format the analysis database to be used for analytics
    4. SAS code for deriving Tables 1, 2, 3 and Supplementary Tables 5 and 6
    5. SAS code for deriving the multiple regression formula in Table 4
    6. SAS code for deriving the multiple regression formula in Table 5
    7. SAS code for deriving the multiple regression formula in Supplementary Table 7
    8. SAS code for deriving the multiple regression formula in Supplementary Table 8
    9. The Excel files that accompanied the above SAS code to produce the tables
    A hedged sketch of the import-and-model workflow these files support appears below.
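    The sketch below is illustrative only: the file name AnalysisDatabaseGBD comes from the description above, while the variable names NCD_DEATHS, BMI_RISK, and SMOKING are invented placeholders.

    ```sas
    /* Hypothetical import-and-regression sketch, not the posted code. */
    proc import datafile="AnalysisDatabaseGBD.csv" out=gbd
                dbms=csv replace;
       getnames=yes;
    run;

    proc reg data=gbd;
       model ncd_deaths = bmi_risk smoking;  /* outcome vs. risk factors */
    run;
    ```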

    For questions, please email davidkcundiff@gmail.com. Thanks.

  4. Dataset for: Sequential trials in the context of competing risks: concepts...

    • wiley.figshare.com
    txt
    Updated Jun 4, 2023
    Cite
    Corine Baayen; Christelle Volteau; Cyril Flamant; Paul Blanche (2023). Dataset for: Sequential trials in the context of competing risks: concepts and case study, with R and SAS code [Dataset]. http://doi.org/10.6084/m9.figshare.7991189.v1
    Explore at:
    txt (available download formats)
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Wiley
    Authors
    Corine Baayen; Christelle Volteau; Cyril Flamant; Paul Blanche
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Sequential designs and competing risks methodology are both well established. Their combined use has recently received some attention from a theoretical perspective, but their joint application in practice has been discussed less. The aim of this paper is to provide the applied statistician with a basic understanding of both sequential design theory and competing risks methodology and how to combine them in practice. Relevant references to more detailed theoretical discussions are provided and all discussions are illustrated using a real case study. Extensive R and SAS code is provided in the online supplementary material.
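    The full R and SAS code lives in the paper's supplementary material. As a flavor of the competing-risks side only, here is a minimal SAS sketch (not the paper's code) of a Fine-Gray subdistribution hazard model; the dataset TRIAL and variables TIME, STATUS (0 = censored, 1 = event of interest, 2 = competing event), and TREATMENT are invented names.

    ```sas
    /* Hedged sketch of one competing-risks step (SAS 9.4+); the paper's
       sequential-design machinery is not shown. */
    proc phreg data=trial;
       model time*status(0) = treatment / eventcode=1;  /* Fine-Gray */
    run;
    ```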

  5. WIC Participant and Program Characteristics 2016

    • agdatacommons.nal.usda.gov
    txt
    Updated Jan 22, 2025
    Cite
    USDA Food and Nutrition Service, Office of Policy Support (2025). WIC Participant and Program Characteristics 2016 [Dataset]. http://doi.org/10.15482/USDA.ADC/1518495
    Explore at:
    txt (available download formats)
    Dataset updated
    Jan 22, 2025
    Dataset provided by
    Ag Data Commons
    Authors
    USDA Food and Nutrition Service, Office of Policy Support
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Description of the experiment setting (location, influential climatic conditions, controlled conditions, e.g. temperature, light cycle): In 1986, the Congress enacted Public Laws 99-500 and 99-591, requiring a biennial report on the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). In response to these requirements, FNS developed a prototype system that allowed for the routine acquisition of information on WIC participants from WIC State Agencies. Since 1992, State Agencies have provided electronic copies of these data to FNS on a biennial basis. FNS and the National WIC Association (formerly National Association of WIC Directors) agreed on a set of data elements for the transfer of information. In addition, FNS established a minimum standard dataset for reporting participation data. For each biennial reporting cycle, each State Agency is required to submit a participant-level dataset containing standardized information on persons enrolled at local agencies for the reference month of April. The 2016 Participant and Program Characteristics (PC2016) is the thirteenth data submission to be completed using the WIC PC reporting system. In April 2016, there were 90 State agencies: the 50 States, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, the American Virgin Islands, and 34 Indian tribal organizations.
    Processing methods and equipment used: Specifications on formats (“Guidance for States Providing Participant Data”) were provided to all State agencies in January 2016. This guide specified 20 minimum dataset (MDS) elements and 11 supplemental dataset (SDS) elements to be reported on each WIC participant. Each State Agency was required to submit all 20 MDS items and any SDS items collected by the State agency.
    Study date(s) and duration: The information for each participant was from the participant's most current WIC certification as of April 2016. Due to management information constraints, Connecticut provided data for a month other than April 2016, specifically August 16 – September 15, 2016.
    Study spatial scale (size of replicates and spatial scale of study area): In April 2016, there were 90 State agencies: the 50 States, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, the American Virgin Islands, and 34 Indian tribal organizations.
    Level of true replication: Unknown
    Sampling precision (within-replicate sampling or pseudoreplication): State Agency Data Submissions. PC2016 is a participant dataset consisting of 8,815,472 active records. The records, submitted to USDA by the State Agencies, comprise a census of all WIC enrollees, so there is no sampling involved in the collection of this data. PII Analytic Datasets. State agency files were combined to create a national census participant file of approximately 8.8 million records. The census dataset contains potentially personally identifiable information (PII) and is therefore not made available to the public. National Sample Dataset. The public-use SAS analytic dataset made available to the public has been constructed from a nationally representative sample drawn from the census of WIC participants, selected by participant category. The nationally representative sample is composed of 60,003 records. The distribution by category is 5,449 pregnant women, 4,661 breastfeeding women, 3,904 postpartum women, 13,999 infants, and 31,990 children.
    Level of subsampling (number and repeat or within-replicate sampling): The proportionate (or self-weighting) sample was drawn by WIC participant category: pregnant women, breastfeeding women, postpartum women, infants, and children. In this type of sample design, each WIC participant has the same probability of selection across all strata. Sampling weights are not needed when the data are analyzed. In a proportionate stratified sample, the largest stratum accounts for the highest percentage of the analytic sample. (A sketch of this sampling step appears below.)
    Study design (before–after, control–impacts, time series, before–after-control–impacts): None – non-experimental
    Description of any data manipulation, modeling, or statistical analysis undertaken: Each entry in the dataset contains all MDS and SDS information submitted by the State agency on the sampled WIC participant. In addition, the file contains constructed variables used for analytic purposes. To protect individual privacy, the public use file does not include State agency, local agency, or case identification numbers.
    Description of any gaps in the data or other limiting factors: Due to management information constraints, Connecticut provided data for a month other than April 2016, specifically August 16 – September 15, 2016.
    Outcome measurement methods and equipment used: None
    Resources in this dataset:
    Resource Title: WIC Participant and Program Characteristics 2016. File Name: wicpc_2016_public.csv. Resource Description: The 2016 Participant and Program Characteristics (PC2016) is the thirteenth data submission to be completed using the WIC PC reporting system. In April 2016, there were 90 State agencies: the 50 States, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, the American Virgin Islands, and 34 Indian tribal organizations. Recommended software: SAS, version 9.4 (https://www.sas.com/en_us/software/sas9.html)
    Resource Title: WIC Participant and Program Characteristics 2016 Codebook. File Name: WICPC2016_PUBLIC_CODEBOOK.xlsx. Recommended software: SAS, version 9.4 (https://www.sas.com/en_us/software/sas9.html)
    Resource Title: WIC Participant and Program Characteristics 2016 - Zip File with SAS, SPSS and STATA data. File Name: WIC_PC_2016_SAS_SPSS_STATA_Files.zip. Resource Description: WIC Participant and Program Characteristics 2016 - Zip File with SAS, SPSS and STATA data
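    A minimal sketch of the proportionate stratified draw described above, assuming a census file CENSUS with a stratum variable WIC_CATEGORY (both names invented); only the total sample size of 60,003 comes from the description. PROC SURVEYSELECT allocates the draw across strata in proportion to their size, which yields the self-weighting property.

    ```sas
    /* Hedged sketch of the self-weighting sample. */
    proc sort data=census;
       by wic_category;
    run;

    proc surveyselect data=census out=pubsample
                      method=srs sampsize=60003 seed=2016;
       strata wic_category / alloc=prop;  /* proportionate allocation */
    run;
    ```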

  6. SAS Programs - Claims-Based Frailty Index

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Sep 25, 2024
    Cite
    Kim, Dae Hyun; Gautam, Nileesa (2024). SAS Programs - Claims-Based Frailty Index [Dataset]. http://doi.org/10.7910/DVN/HM8DOI
    Explore at:
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Kim, Dae Hyun; Gautam, Nileesa
    Description

    This SAS program calculates CFI for each patient from analytic data files containing information on patient identifiers, ICD-9-CM diagnosis codes (version 32), ICD-10-CM Diagnosis Codes (version 2020), CPT codes, and HCPCS codes. NOTE: When downloading, store "CFI_ICD9CM_V32.tab", "CFI_ICD10CM_V2020.tab", and "PX_CODES.tab" as csv files (these files are originally stored as csv files, but Dataverse automatically converts them to tab files). Please read "Frailty-Index-SAS-code-Guide" before proceeding. Interpretation, validation data, and annotated references are provided in "Research Background - Claims-Based Frailty Index".
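    For orientation only, a hedged sketch of the kind of code-matching step such a program performs: joining claims diagnosis codes against a lookup of frailty-related codes. Every name here (CLAIMS, FRAIL_CODES, PATIENT_ID, DX_CODE, DEFICIT) is invented; the actual code lists and weights are in the posted files, and the distributed program should be used as documented in its guide.

    ```sas
    /* Illustrative only - not the distributed CFI program. Counts
       distinct frailty deficits per patient via a code lookup. */
    proc sql;
       create table deficits as
       select c.patient_id,
              count(distinct f.deficit) as n_deficits
       from claims c
            inner join frail_codes f
              on c.dx_code = f.dx_code
       group by c.patient_id;
    quit;
    ```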

  7. fdata-02-00004-g0001_Matching Cases and Controls Using SAS® Software.tif

    • frontiersin.figshare.com
    tiff
    Updated Jun 5, 2023
    + more versions
    Cite
    Laura Quitzau Mortensen; Kristoffer Andresen; Jakob Burcharth; Hans-Christian Pommergaard; Jacob Rosenberg (2023). fdata-02-00004-g0001_Matching Cases and Controls Using SAS® Software.tif [Dataset]. http://doi.org/10.3389/fdata.2019.00004.s003
    Explore at:
    tiff (available download formats)
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    Frontiers
    Authors
    Laura Quitzau Mortensen; Kristoffer Andresen; Jakob Burcharth; Hans-Christian Pommergaard; Jacob Rosenberg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Matching is frequently used in observational studies, especially in medical research. However, only a small number of articles with matching programs for the SAS software (SAS Institute Inc., Cary, NC, USA) are available, and even fewer are usable for inexperienced users of SAS software. This article presents a matching program for the SAS software and links to an online repository for examples and test data. The program enables matching on several variables and includes an in-depth explanation of the expressions used and how to customize the program. The selection of controls is randomized and automated, minimizing the risk of selection bias. The program also provides a means for the researcher to test for incomplete matching.
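    A stripped-down sketch in the same spirit, not the published program (which handles several variables and tests for incomplete matching): randomized 1:1 exact matching on two invented variables, AGE_GROUP and SEX, from invented tables CASES and CONTROLS. This simple version does not prevent a control from matching more than one case.

    ```sas
    /* Hedged sketch: randomized 1:1 exact matching. */
    proc sql;
       create table candidates as
       select ca.id as case_id, co.id as control_id,
              ranuni(12345) as shuffle     /* randomize candidate order */
       from cases ca inner join controls co
            on ca.age_group = co.age_group and ca.sex = co.sex
       order by case_id, shuffle;
    quit;

    data matched;            /* keep one random control per case */
       set candidates;
       by case_id;
       if first.case_id;
    run;
    ```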

  8. SAS code

    • search.dataone.org
    Updated Nov 12, 2023
    Cite
    Jensen, Manja (2023). SAS code [Dataset]. http://doi.org/10.7910/DVN/NWXK9G
    Explore at:
    Dataset updated
    Nov 12, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Jensen, Manja
    Description

    This code replicates the numbers for the tables and figures in the article "How video information on mammography screening affects the recommendations of laypeople: a randomised controlled trial" by Manja D. Jensen, Kasper M. Hansen, Volkert Siersma and John Brodersen.

  9. Sas Project Dataset

    • universe.roboflow.com
    zip
    Updated Oct 2, 2024
    Cite
    sas (2024). Sas Project Dataset [Dataset]. https://universe.roboflow.com/sas-fjl7j/sas-project
    Explore at:
    zip (available download formats)
    Dataset updated
    Oct 2, 2024
    Dataset authored and provided by
    sas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Person, Gun, Knife, Fire, Bat (bounding boxes)
    Description

    SAS Project

    ## Overview
    
    SAS Project is a dataset for object detection tasks - it contains Person, Gun, Knife, Fire, and Bat annotations for 7,868 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. Sas Dataset

    • universe.roboflow.com
    zip
    Updated May 27, 2023
    Cite
    testeeeee (2023). Sas Dataset [Dataset]. https://universe.roboflow.com/testeeeee/sas-yypjc/model/2
    Explore at:
    zip (available download formats)
    Dataset updated
    May 27, 2023
    Dataset authored and provided by
    testeeeee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fdff (bounding boxes)
    Description

    SaS

    ## Overview
    
    SaS is a dataset for object detection tasks - it contains Fdff annotations for 1,040 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  11. Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses....

    • b2find.eudat.eu
    Updated Jul 24, 2025
    + more versions
    Cite
    (2025). Editing EU-SILC UDB Longitudinal Data for Differential Mortality Analyses. SAS code and documentation [Dataset]. https://b2find.eudat.eu/dataset/fe664224-25bf-592b-b964-9a478b722923
    Explore at:
    Dataset updated
    Jul 24, 2025
    Description

    This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user. The other ones are just executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables which are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and by default also in csv format.
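    A hedged sketch of the per-person merge and vital-status extraction described above; the member names R_FILE and P_FILE and the variable PERSON_ID are placeholders for the UDB naming, and the RB110 value taken to mean "died" is an assumption to verify against the UDB documentation. The distributed macros are far more complete (pooling releases, quarterly time-at-risk estimation, user parameters).

    ```sas
    /* Illustrative only - not the distributed macro code. */
    proc sort data=r_file; by person_id; run;
    proc sort data=p_file; by person_id; run;

    data silc_mort;
       merge r_file(in=in_r) p_file;
       by person_id;
       if in_r;              /* keep register-file records */
       died = (rb110 = 6);   /* assumed 'died' code; verify in UDB docs */
    run;
    ```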

  12. Current Population Survey (CPS)

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Damico, Anthony (2023). Current Population Survey (CPS) [Dataset]. http://doi.org/10.7910/DVN/AK4FDD
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Damico, Anthony
    Description

    analyze the current population survey (cps) annual social and economic supplement (asec) with r

    the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population.

    the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

    this new github repository contains three scripts:
    2005-2012 asec - download all microdata.R: download the fixed-width file containing household, family, and person records; import by separating this file into three tables, then merge 'em together at the person-level; download the fixed-width file containing the person-level replicate weights; merge the rectangular person-level file with the replicate weights, then store it in a sql database; create a new variable - one - in the data table
    2012 asec - analysis examples.R: connect to the sql database created by the 'download all microdata' program; create the complex sample survey object, using the replicate weights; perform a boatload of analysis examples
    replicate census estimates - 2011.R: connect to the sql database created by the 'download all microdata' program; create the complex sample survey object, using the replicate weights; match the sas output shown in the png file 2011 asec replicate weight sas output.png (statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document). click here to view these three scripts

    for more detail about the current population survey - annual social and economic supplement (cps-asec), visit: the census bureau's current population survey page, the bureau of labor statistics' current population survey page, the current population survey's wikipedia article

    notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

    confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
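    The NBER importation scripts mentioned above are themselves SAS programs that read the fixed-width ASEC file; a toy sketch of that step follows. The column positions here are invented placeholders, not the real record layout, so consult the NBER code for the actual positions and variable list.

    ```sas
    /* Hypothetical fixed-width read of an ASEC extract. */
    data asec_person;
       infile 'asec2012.dat' lrecl=1024 pad;
       input @1   prectype $1.    /* record type, placeholder position */
             @15  a_age    2.     /* age, placeholder position */
             @40  wsal_val 8.;    /* wage/salary income, placeholder */
    run;
    ```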

  13. Sample SAS code for the Monte Carlo Study

    • figshare.com
    Updated May 12, 2016
    Cite
    Milica Miocevic (2016). Sample SAS code for the Monte Carlo Study [Dataset]. http://doi.org/10.6084/m9.figshare.3376093.v1
    Explore at:
    Dataset updated
    May 12, 2016
    Dataset provided by
    figshare
    Authors
    Milica Miocevic
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These SAS files are sample code used for the Monte Carlo studies in a manuscript on statistical properties of four effect size measures for the mediated effect. Citation: Miočević, M., O’Rourke, H. P., MacKinnon, D. P., & Brown, H. C. (2016). The bias and efficiency of five effect size measures for mediation models. Under review at Behavior Research Methods.
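    A generic sketch of this kind of Monte Carlo setup, not the manuscript's code: simulate single-mediator data (X -> M -> Y) over many replicates, then estimate the a and b paths per replicate. The path values, sample sizes, and replicate count are invented.

    ```sas
    /* Hedged Monte Carlo sketch: 1000 replicates of n = 200. */
    data sim;
       call streaminit(3376093);
       do rep = 1 to 1000;
          do i = 1 to 200;
             x = rand('normal');
             m = 0.39*x + rand('normal');   /* path a */
             y = 0.39*m + rand('normal');   /* path b, no direct effect */
             output;
          end;
       end;
    run;

    proc reg data=sim outest=paths noprint;
       by rep;
       a: model m = x;      /* a-path per replicate */
       b: model y = m x;    /* b-path per replicate */
    run;
    /* the mediated effect a*b is then formed by merging the two
       rows per replicate in PATHS */
    ```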

  14. Abnormal Sas Respiratory Dataset

    • universe.roboflow.com
    zip
    Updated Mar 14, 2024
    Cite
    university of chester (2024). Abnormal Sas Respiratory Dataset [Dataset]. https://universe.roboflow.com/university-of-chester-v3cwk/abnormal-sas-respiratory
    Explore at:
    zip (available download formats)
    Dataset updated
    Mar 14, 2024
    Dataset authored and provided by
    university of chester
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Respiratory (bounding boxes)
    Description

    Abnormal Sas Respiratory

    ## Overview
    
    Abnormal Sas Respiratory is a dataset for object detection tasks - it contains Respiratory annotations for 202 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  15. SAS Code Spatial Optimization of Supply Chain Network for Nitrogen Based...

    • data.mendeley.com
    Updated Jan 23, 2023
    Cite
    Sumadhur Shakya (2023). SAS Code Spatial Optimization of Supply Chain Network for Nitrogen Based Fertilizer in North America, by type, by mode of transportation, per county, for all major crops, Proc OptModel [Dataset]. http://doi.org/10.17632/ft8c9x894n.1
    Explore at:
    Dataset updated
    Jan 23, 2023
    Authors
    Sumadhur Shakya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SAS code for Spatial Optimization of Supply Chain Network for Nitrogen Based Fertilizer in North America, by type, by mode of transportation, per county, for all major crops, using Proc OptModel. The code specifies a set of random values to run the mixed-integer stochastic spatial optimization model repeatedly and collects results for each simulation, which are then compiled and exported to be projected in GIS (geographic information systems). Certain supply nodes (fertilizer plants) are specified to work at either 70 percent of their capacities or more. Capacities for nodes of supply (fertilizer plants), demand (county centroids), and transshipment nodes (transfer points where the mode may change), along with the actual distance travelled, are specified over arcs.
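    For readers new to PROC OPTMODEL, a toy deterministic transportation model follows. All numbers and names are invented; the actual code adds integer variables, stochastic draws, transshipment nodes, and the 70-percent capacity rule.

    ```sas
    /* Minimal PROC OPTMODEL sketch (toy data, linear, deterministic). */
    proc optmodel;
       set <str> PLANTS = {'p1','p2'};
       set <str> COUNTIES = {'c1','c2','c3'};
       num cap {PLANTS} = [100 150];
       num dem {COUNTIES} = [60 80 70];
       num cost {PLANTS, COUNTIES} = [4 6 9
                                      5 3 7];
       var Ship {PLANTS, COUNTIES} >= 0;
       min TotalCost = sum {p in PLANTS, c in COUNTIES} cost[p,c]*Ship[p,c];
       con Supply {p in PLANTS}: sum {c in COUNTIES} Ship[p,c] <= cap[p];
       con Demand {c in COUNTIES}: sum {p in PLANTS} Ship[p,c] >= dem[c];
       solve;
       print Ship;
    quit;
    ```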

  16. Clinical mastitis datasets and SAS code - Fréchette et al., 2021

    • borealisdata.ca
    Updated Jul 31, 2025
    Cite
    Simon Dufour (2025). Clinical mastitis datasets and SAS code - Fréchette et al., 2021 [Dataset]. http://doi.org/10.5683/SP2/KIEMHY
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 31, 2025
    Dataset provided by
    Borealis
    Authors
    Simon Dufour
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The datasets and their legends, for an observational cohort study described by Fréchette et al., 2021. The SAS code used to conduct the analyses described in the article is also reported.

  17. Consumer Expenditure Survey (CE)

    • dataverse.harvard.edu
    Updated May 30, 2013
    Cite
    Anthony Damico (2013). Consumer Expenditure Survey (CE) [Dataset]. http://doi.org/10.7910/DVN/UTNJAH
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 30, 2013
    Dataset provided by
    Harvard Dataverse
    Authors
    Anthony Damico
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    analyze the consumer expenditure survey (ce) with r

    the consumer expenditure survey (ce) is the primo data source to understand how americans spend money. participating households keep a running diary about every little purchase over the year. those diaries are then summed up into precise expenditure categories. how else are you gonna know that the average american household spent $34 (±2) on bacon, $826 (±17) on cellular phones, and $13 (±2) on digital e-readers in 2011? an integral component of the market basket calculation in the consumer price index, this survey recently became available as public-use microdata and they're slowly releasing historical files back to 1996. hooray!

    for a taste of what's possible with ce data, look at the quick tables listed on their main page - these tables contain approximately a bazillion different expenditure categories broken down by demographic groups. guess what? i just learned that americans living in households with $5,000 to $9,999 of annual income spent an average of $283 (±90) on pets, toys, hobbies, and playground equipment (pdf page 3). you can often get close to your statistic of interest from these web tables. but say you wanted to look at domestic pet expenditure among only households with children between 12 and 17 years old. another one of the thirteen web tables - the consumer unit composition table - shows a few different breakouts of households with kids, but none matching that exact population of interest. the bureau of labor statistics (bls) (the survey's designers) and the census bureau (the survey's administrators) have provided plenty of the major statistics and breakouts for you, but they're not psychic. if you want to comb through this data for specific expenditure categories broken out by a you-defined segment of the united states' population, then let a little r into your life. fun starts now.

    fair warning: only analyze the consumer expenditure survey if you are nerd to the core. the microdata ship with two different survey types (interview and diary), each containing five or six quarterly table formats that need to be stacked, merged, and manipulated prior to a methodologically-correct analysis. the scripts in this repository contain examples to prepare 'em all, just be advised that magnificent data like this will never be no-assembly-required. the folks at bls have posted an excellent summary of what's available - read it before anything else. after that, read the getting started guide. don't skim. a few of the descriptions below refer to sas programs provided by the bureau of labor statistics. you'll find these in the C:\My Directory\CES\2011\docs directory after you run the download program.

    this new github repository contains three scripts:
    2010-2011 - download all microdata.R: loop through every year and download every file hosted on the bls's ce ftp site; import each of the comma-separated value files into r with read.csv; depending on user-settings, save each table as an r data file (.rda) or stata-readable file (.dta)
    2011 fmly intrvw - analysis examples.R: load the r data files (.rda) necessary to create the 'fmly' table shown in the ce macros program documentation.doc file; construct that 'fmly' table, using five quarters of interviews (q1 2011 thru q1 2012); initiate a replicate-weighted survey design object; perform some lovely li'l analysis examples; replicate the %mean_variance() macro found in "ce macros.sas" and provide some examples of calculating descriptive statistics using unimputed variables; replicate the %compare_groups() macro found in "ce macros.sas" and provide some examples of performing t-tests using unimputed variables; create an rsqlite database (to minimize ram usage) containing the five imputed variable files, after identifying which variables were imputed based on pdf page 3 of the user's guide to income imputation; initiate a replicate-weighted, database-backed, multiply-imputed survey design object; perform a few additional analyses that highlight the modified syntax required for multiply-imputed survey designs; replicate the %mean_variance() macro and provide some examples of calculating descriptive statistics using imputed variables; replicate the %compare_groups() macro and provide some examples of performing t-tests using imputed variables; replicate the %proc_reg() and %proc_logistic() macros found in "ce macros.sas" and provide some examples of regressions and logistic regressions using both unimputed and imputed variables
    replicate integrated mean and se.R: match each step in the bls-provided sas program "integrated mean and se.sas" but with r instead of sas; create an rsqlite database when the expenditure table gets too large for older computers to handle in ram; export a table "2011 integrated mean and se.csv" that exactly matches the contents of the sas-produced "2011 integrated mean and se.lst" text file

    click here to view these three scripts for...
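    as a tiny concrete anchor for the quarterly stacking step described above, the five interview-survey fmly quarters (q1 2011 through q1 2012) combine with a single DATA step; the fmli* member names below follow the ce's usual naming convention but should be checked against the actual release.

    ```sas
    /* hedged sketch: stack five quarterly FMLY files into one table. */
    data fmly;
       set fmli111x fmli112 fmli113 fmli114 fmli121;
    run;
    ```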

  18. Raw data in SPSS and syntax codes for SAS for THC in oral fluid: A rapid...

    • data.mendeley.com
    Updated May 17, 2023
    Cite
    Scott Macdonald (2023). Raw data in SPSS and syntax codes for SAS for THC in oral fluid: A rapid review and meta-analysis [Dataset]. http://doi.org/10.17632/bw4xyw6zzj.1
    Explore at:
    Dataset updated
    May 17, 2023
    Authors
    Scott Macdonald
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data in SPSS and syntax codes for SAS for the article: "THC in oral fluid: A rapid review and meta-analysis."

  19. Global Burden of Disease analysis dataset of cardiovascular disease...

    • narcis.nl
    • data.mendeley.com
    Updated Jun 23, 2021
    + more versions
    Cite
    Cundiff, D (via Mendeley Data) (2021). Global Burden of Disease analysis dataset of cardiovascular disease outcomes, risk factors, and SAS codes [Dataset]. http://doi.org/10.17632/g6b39zxck4.4
    Explore at:
    Dataset updated
    Jun 23, 2021
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Cundiff, D (via Mendeley Data)
    Description

    This formatted dataset originates from raw data files from the Institute of Health Metrics and Evaluation Global Burden of Disease (GBD2017). It is population-weighted worldwide data on male and female cohorts ages 15-69 years including cardiovascular disease early death and associated dietary, metabolic and other risk factors. The purpose of creating this formatted database is to explore the univariate and multiple regression correlations of cardiovascular early deaths and other health outcomes with risk factors. Our research hypothesis is that we can successfully apply artificial intelligence to model cardiovascular disease outcomes with risk factors. We found that fat-soluble vitamin containing foods (animal products) and added fats are negatively correlated with CVD early deaths worldwide but positively correlated with CVD early deaths in high fat-soluble vitamin cohorts. We interpret this as showing that optimal cardiovascular outcomes come with moderate (not low and not high) intakes of animal foods and added fats. You are invited to download the dataset, the associated SAS code to access the dataset, and the tables that have resulted from the analysis. Please comment on the article by indicating what you found by exploring the dataset with the provided SAS codes. Please say whether or not you found the outputs from the SAS codes accurately reflected the tables provided and the tables in the published article. If you use our data to reproduce our findings and comment on your findings on the medRxiv website (https://www.medrxiv.org/content/10.1101/2021.04.17.21255675v4) and would like to be recognized, we will be happy to list you as a contributor when the article is submitted to JAMA. For questions, please email davidkcundiff@gmail.com. Thanks.

  20. Global export data of Sas Hard Disk Drives And HSN Code 84717090

    • volza.com
    csv
    Updated Sep 7, 2025
    + more versions
    Cite
    Volza FZ LLC (2025). Global export data of Sas Hard Disk Drives And HSN Code 84717090 [Dataset]. https://www.volza.com/exports-global/global-export-data-of-sas+hard+disk+drives-and-hscode-84717090
    Explore at:
    csv (available download formats)
    Dataset updated
    Sep 7, 2025
    Authors
    Volza FZ LLC
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Count of exporters, Sum of export value, 2014-01-01/2021-09-30, Count of export shipments
    Description

    77 global export shipment records of SAS hard disk drives under HSN code 84717090, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
