51 datasets found
  1. Data from: Longitudinal Post-Coital DNA Recovery 2010-2014 [UNITED STATES]

    • catalog.data.gov
    • icpsr.umich.edu
    Updated Mar 12, 2025
    Cite
    National Institute of Justice (2025). Longitudinal Post-Coital DNA Recovery 2010-2014 [UNITED STATES] [Dataset]. https://catalog.data.gov/dataset/longitudinal-post-coital-dna-recovery-2010-2014-united-states-a14dd
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    National Institute of Justice (http://nij.ojp.gov/)
    Area covered
    United States
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study sought to apply current and advanced Y-STR DNA technology in forensic laboratories to a large in vivo population of proxy-couples, to provide groundwork for future inquiry about the conditions affecting DNA recovery in the living patient, to determine timing for evidence collection, and to attempt to identify variables influencing DNA recovery. The objective of this research was to create the evidence base supporting or limiting the expansion of the 72-hour period for evidence collection. Another objective was to identify conditions that might influence the recovery of DNA, and therefore influence policies related to sample collection from the complex post-coital environment. The collection includes 6 SPSS data files: AlleleRecovery Jun 2014 Allrec.sav (n=70; 34 variables) AlleleRecovery Jun 2014 Used for descriptve analysis.sav (n=66; 58 variables) Condom_collections-baseline-d9-Jun2014 Allrec without open-ended-ICPSR.sav (n=70; 66 variables) DNADemogFemalesJun2014- without open-ended AllRec-ICPSR.sav (n=73; 67 variables) DNADemogFemalesJun2014- without open-ended -For analysis with group variables-ICPSR.sav (n=66; 73 variables) DNADemogMalesJun2014- without open-ended AllRec-ICPSR.sav (n=73; 46 variables) and 1 SAS data file (dnalong.sas7bdat (n=264; 7 variables)). Data from a focus group of subject matter experts which convened to identify themes from their practice are not included with this collection.
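
    As a minimal sketch of how one of the listed SPSS files could be inspected, assuming the files have been extracted from the NACJD zip and that pandas with the pyreadstat backend is installed (the filename below is taken from the listing above):

    import pandas as pd

    # Load one of the SPSS files named in the collection listing; adjust the path
    # to wherever the NACJD archive was unzipped.
    df = pd.read_spss("AlleleRecovery Jun 2014 Allrec.sav")

    print(df.shape)          # the listing reports roughly 70 cases and 34 variables
    print(df.dtypes.head())  # quick look at the variable types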

  2. SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal

    • data.nasa.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). SAS-2 Map Product Catalog - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sas-2-map-product-catalog
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    This database is a collection of maps created from the 28 SAS-2 observation files. The original observation files can be accessed within BROWSE by changing to the SAS2RAW database. For each of the SAS-2 observation files, the analysis package FADMAP was run, and the resulting maps, plus GIF images created from these maps, were collected into this database. Each map is a 60 x 60 pixel FITS format image with 1 degree pixels. The user may reconstruct any of these maps within the captive account by running FADMAP from the command line after extracting a file from within the SAS2RAW database. The parameters used for selecting data for these product map files are embedded as keywords in the FITS maps themselves. These parameters are set in FADMAP, and for the maps in this database they were set as 'wide open' as possible: that is, except for selecting on each of 3 energy ranges, all other FADMAP parameters were set using broad criteria. To find more information about how to run FADMAP on the raw events file, the user can access help files within the SAS2RAW database or can use the 'fhelp' facility from the command line to gain information about FADMAP. This is a service provided by NASA HEASARC.
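
    Since each product map is a small FITS image with the FADMAP selection parameters embedded as header keywords, a minimal Python sketch for inspecting one of them might look as follows (the filename is a hypothetical placeholder; astropy is assumed to be installed):

    from astropy.io import fits

    # Open one SAS-2 product map and look at the image plus its header keywords.
    with fits.open("sas2_map_example.fits") as hdul:   # hypothetical local file
        image = hdul[0].data      # 60 x 60 array of 1-degree pixels, per the description
        header = hdul[0].header   # FADMAP selection parameters are embedded here
        print(image.shape)
        print(repr(header))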

  3. Sub-state Autonomy Scale (SAS)

    • sodha.be
    pdf, tsv
    Updated Apr 28, 2022
    Cite
    Social Sciences and Digital Humanities Archive – SODHA (2022). Sub-state Autonomy Scale (SAS) [Dataset]. http://doi.org/10.34934/DVN/LSXXZV
    Available download formats: pdf (205511), tsv (2715336)
    Dataset updated
    Apr 28, 2022
    Dataset provided by
    Social Sciences and Digital Humanities Archive – SODHA
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset comprises the data collected for the Sub-state Autonomy Scale (SAS). The SAS is an indicator measuring the autonomy demands and statutes of sub-state communities in kind (whether competences are administrative or legislative), in degree (how much each dimension is present) and by competences (as a function of the extent of comprised policy domains).
    Definitions:
    - By 'sub-state community', I refer to sub-state entities within countries for which autonomous institutions have been demanded by a significant regionalist or traditional (centrist, liberal or socialist mainstream) political party (>5%) or to which autonomous institutions have been conferred.
    - By 'autonomy statutes', I refer to the legal autonomy prerogatives obtained by sub-state communities.
    - For 'autonomy demands', I distinguish between the legal autonomy prerogatives demanded by the regionalist party with the highest vote share and those demanded by the traditional party with the largest autonomy demand.
    Detailed conceptual presentation: see the Regional Studies article cited below (the open access author version can be found in the files section).
    Specifications:
    - Unit of analysis: sub-state communities by yearly intervals.
    - Country coverage: Belgium, Spain, United Kingdom (31 sub-state communities).
    - Time coverage: 1707-2020 (starting dates vary across sub-state communities). For the full list of sub-state communities and their respective time coverage, see the codebook.
    Citation and acknowledgement: when using the data, please cite the Regional Studies article listed below. Latest version: 1.0 [01.02.2022].

  4. Eximpedia Export Import Trade

    • eximpedia.app
    Updated Jan 15, 2025
    Cite
    Seair Exim (2025). Eximpedia Export Import Trade [Dataset]. https://www.eximpedia.app/
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    Eximpedia Export Import Trade Data
    Eximpedia PTE LTD
    Authors
    Seair Exim
    Area covered
    Colombia
    Description

    Open Mind Colombia S A S Company Export Import Records. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  5. The Pedestrian Crash Data Study (PCDS)

    • odgavaprod.ogopendata.com
    • data.virginia.gov
    zip
    Updated May 1, 2024
    Cite
    U.S Department of Transportation (2024). The Pedestrian Crash Data Study (PCDS) [Dataset]. https://odgavaprod.ogopendata.com/dataset/the-pedestrian-crash-data-study-pcds
    Available download formats: zip
    Dataset updated
    May 1, 2024
    Authors
    U.S Department of Transportation
    Description

    The Pedestrian Crash Data Study (PCDS) collected detailed data on motor vehicle vs pedestrian crashes.

  6. Eximpedia Export Import Trade

    • eximpedia.app
    Updated Feb 24, 2025
    Cite
    Seair Exim (2025). Eximpedia Export Import Trade [Dataset]. https://www.eximpedia.app/
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Feb 24, 2025
    Dataset provided by
    Eximpedia Export Import Trade Data
    Eximpedia PTE LTD
    Authors
    Seair Exim
    Area covered
    Barbados, Montserrat, Cayman Islands, Tuvalu, Saint Helena, Faroe Islands, Vanuatu, Zambia, Rwanda, Tajikistan, Gran Colombia
    Description

    Great Colombia Opening Sas Company Export Import Records. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  7. 500 Cities: Local Data for Better Health, 2016 release

    • catalog.data.gov
    • healthdata.gov
    Updated Feb 3, 2025
    Cite
    Centers for Disease Control and Prevention (2025). 500 Cities: Local Data for Better Health, 2016 release [Dataset]. https://catalog.data.gov/dataset/500-cities-local-data-for-better-health-2016-release
    Dataset updated
    Feb 3, 2025
    Dataset provided by
    Centers for Disease Control and Prevention (http://www.cdc.gov/)
    Description

    This is the complete dataset for the 500 Cities project 2016 release. This dataset includes 2013 and 2014 model-based small area estimates for 27 measures of chronic disease related to unhealthy behaviors (5), health outcomes (13), and use of preventive services (9). Data were provided by the Centers for Disease Control and Prevention (CDC), Division of Population Health, Epidemiology and Surveillance Branch. The project was funded by the Robert Wood Johnson Foundation (RWJF) in conjunction with the CDC Foundation. It represents a first-of-its-kind effort to release information on a large scale for cities and for small areas within those cities. It includes estimates for the 500 largest US cities and approximately 28,000 census tracts within these cities. These estimates can be used to identify emerging health problems and to inform the development and implementation of effective, targeted public health prevention activities. Because the small area model cannot detect effects due to local interventions, users are cautioned against using these estimates for program or policy evaluations. Data sources used to generate these measures include Behavioral Risk Factor Surveillance System (BRFSS) data (2013, 2014), Census Bureau 2010 census population data, and American Community Survey (ACS) 2009-2013 and 2010-2014 estimates. More information about the methodology can be found at www.cdc.gov/500cities. Note: During the process of uploading the 2015 estimates, CDC found a data discrepancy in the published 500 Cities data for the 2014 city-level obesity crude prevalence estimates, caused when reformatting the SAS data file to the open data format. The small area estimation model and code were correct. This data discrepancy only affected the 2014 city-level obesity crude prevalence estimates in the Socrata open data file, the GIS-friendly data file, and the 500 Cities online application. The other obesity estimates (city-level age-adjusted and tract-level) and the Mapbooks were not affected. No other measures were affected. The correct estimates were updated in this dataset on October 25, 2017.

  8. SAS-2 Photon Events Catalog

    • s.cnmilf.com
    • catalog.data.gov
    Updated Aug 22, 2025
    Cite
    High Energy Astrophysics Science Archive Research Center (2025). SAS-2 Photon Events Catalog [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/sas-2-photon-events-catalog
    Dataset updated
    Aug 22, 2025
    Dataset provided by
    High Energy Astrophysics Science Archive Research Center
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from two sources: the photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was then converted to a relative pixel in the image using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.
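
    The description outlines the image-construction step: each photon's RA and DEC, taken relative to the field centre, is projected with an Aitoff projection and binned into a 512 x 512 image. A rough Python illustration of that idea is sketched below; the pixel scale, sign conventions, and example coordinates are assumptions for illustration only, not the values HEASARC actually used:

    import numpy as np

    def aitoff_xy(dlon_deg, dlat_deg):
        # Aitoff projection of an offset from the field centre (inputs in degrees).
        lam = np.radians(dlon_deg) / 2.0
        phi = np.radians(dlat_deg)
        alpha = np.arccos(np.cos(phi) * np.cos(lam))
        sinc_a = np.sinc(alpha / np.pi)          # unnormalised sinc: sin(alpha)/alpha
        x = 2.0 * np.cos(phi) * np.sin(lam) / sinc_a
        y = np.sin(phi) / sinc_a
        return np.degrees(x), np.degrees(y)      # approximately degrees near the centre

    def photon_to_pixel(ra, dec, ra0, dec0, npix=512, deg_per_pix=0.25):
        # Map one photon to (column, row) in an npix x npix image centred on (ra0, dec0);
        # deg_per_pix is an assumed scale, not the HEASARC value.
        x, y = aitoff_xy(ra - ra0, dec - dec0)
        col = int(round(npix / 2 - x / deg_per_pix))   # RA increases to the left on sky images
        row = int(round(npix / 2 + y / deg_per_pix))
        return col, row

    print(photon_to_pixel(ra=185.2, dec=11.8, ra0=184.0, dec0=12.0))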

  9. Cotton Open End Yarn Import Data | Grupo Saka Sas Calle

    • seair.co.in
    Updated Mar 1, 2024
    Cite
    Seair Exim (2024). Cotton Open End Yarn Import Data | Grupo Saka Sas Calle [Dataset]. https://www.seair.co.in
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Mar 1, 2024
    Dataset provided by
    Seair Info Solutions
    Authors
    Seair Exim
    Area covered
    United States
    Description

    Subscribers can find out export and import data of 23 countries by HS code or product’s name. This demo is helpful for market analysis.

  10. Current Population Survey (CPS)

    • dataverse.harvard.edu
    • search.dataone.org
    Updated May 30, 2013
    Cite
    Anthony Damico (2013). Current Population Survey (CPS) [Dataset]. http://doi.org/10.7910/DVN/AK4FDD
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 30, 2013
    Dataset provided by
    Harvard Dataverse
    Authors
    Anthony Damico
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    analyze the current population survey (cps) annual social and economic supplement (asec) with r. the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show. this new github repository contains three scripts:
    - 2005-2012 asec - download all microdata.R: download the fixed-width file containing household, family, and person records; import by separating this file into three tables, then merge 'em together at the person-level; download the fixed-width file containing the person-level replicate weights; merge the rectangular person-level file with the replicate weights, then store it in a sql database; create a new variable - one - in the data table.
    - 2012 asec - analysis examples.R: connect to the sql database created by the 'download all microdata' program; create the complex sample survey object, using the replicate weights; perform a boatload of analysis examples.
    - replicate census estimates - 2011.R: connect to the sql database created by the 'download all microdata' program; create the complex sample survey object, using the replicate weights; match the sas output shown in the png file 2011 asec replicate weight sas output.png (statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document).
    click here to view these three scripts. for more detail about the current population survey - annual social and economic supplement (cps-asec), visit: the census bureau's current population survey page; the bureau of labor statistics' current population survey page; the current population survey's wikipedia article. notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research. confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
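
    The workflow above is written for R (parse.SAScii plus RSQLite). As a rough sketch of the same idea in Python - read the fixed-width person file using column positions taken from the NBER SAS importation script, then park it in a SQLite database - the column specs, variable names, and filenames below are hypothetical placeholders, not the real ASEC layout:

    import sqlite3
    import pandas as pd

    # Placeholder byte positions and names; the real ones come from the NBER SAS script.
    colspecs = [(0, 15), (15, 17), (17, 19)]
    names = ["record_id", "state_fips", "age"]

    asec = pd.read_fwf("asec2012_person.dat", colspecs=colspecs, names=names)

    con = sqlite3.connect("cps_asec.db")
    asec.to_sql("asec12", con, if_exists="replace", index=False)  # mirrors the 'store it in a sql database' step
    con.close()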

  11. Share of low-emission vehicles in fleet renewal (OPEN SAS organisation) |...

    • gimi9.com
    Cite
    Share of low-emission vehicles in fleet renewal (OPEN SAS organisation) | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_6515740df9a3c9b208e510ae/
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset meets the specifications of the schema “Share of low-emission vehicles in fleet renewal” available on schema.data.gouv.fr.

  12. Fresh Lime Import Data of Ci Open Markets Colombia Sas Exporter to USA

    • seair.co.in
    Updated Apr 19, 2025
    Cite
    Seair Exim (2025). Fresh Lime Import Data of Ci Open Markets Colombia Sas Exporter to USA [Dataset]. https://www.seair.co.in
    Available download formats: .bin, .xml, .csv, .xls
    Dataset updated
    Apr 19, 2025
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    United States, Colombia
    Description

    Subscribers can find out export and import data of 23 countries by HS code or product’s name. This demo is helpful for market analysis.

  13. Cadastral Information for Sas Mighe 32

    • datasets.ai
    • open.canada.ca
    Updated Sep 11, 2024
    Cite
    Natural Resources Canada | Ressources naturelles Canada (2024). Cadastral Information for Sas Mighe 32 [Dataset]. https://datasets.ai/datasets/c367a220-8d29-4e95-ae04-7033982be1c4
    Dataset updated
    Sep 11, 2024
    Dataset authored and provided by
    Natural Resources Canada | Ressources naturelles Canada
    Description

    This data provides the integrated cadastral framework for the specified Canada Land. The cadastral framework consists of active and superseded cadastral parcels, roads, easements, administrative areas, active lines, points and annotations. The cadastral lines form the boundaries of the parcels. COGO attributes are associated with the lines and depict the adjusted framework of the cadastral fabric. The cadastral annotations consist of lot numbers, block numbers, township numbers, etc. The cadastral framework is compiled from Canada Lands Survey Records (CLSR), Registration Plans (RS) and Location Sketches (LS) archived in the Canada Lands Survey Records.

  14. Health and Retirement Study (HRS)

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Damico, Anthony (2023). Health and Retirement Study (HRS) [Dataset]. http://doi.org/10.7910/DVN/ELEKOY
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Damico, Anthony
    Description

    analyze the health and retirement study (hrs) with r. the hrs is the one and only longitudinal survey of american seniors. with a panel starting its third decade, the current pool of respondents includes older folks who have been interviewed every two years as far back as 1992. unlike cross-sectional or shorter panel surveys, respondents keep responding until, well, death do us part. paid for by the national institute on aging and administered by the university of michigan's institute for social research, if you apply for an interviewer job with them, i hope you like werther's original. figuring out how to analyze this data set might trigger your fight-or-flight synapses if you just start clicking around on michigan's website. instead, read pages numbered 10-17 (pdf pages 12-19) of this introduction pdf and don't touch the data until you understand figure a-3 on that last page. if you start enjoying yourself, here's the whole book. after that, it's time to register for access to the (free) data. keep your username and password handy, you'll need it for the top of the download automation r script. next, look at this data flowchart to get an idea of why the data download page is such a righteous jungle. but wait, good news: umich recently farmed out its data management to the rand corporation, who promptly constructed a giant consolidated file with one record per respondent across the whole panel. oh so beautiful. the rand hrs files make much of the older data and syntax examples obsolete, so when you come across stuff like instructions on how to merge years, you can happily ignore them - rand has done it for you. the health and retirement study only includes noninstitutionalized adults when new respondents get added to the panel (as they were in 1992, 1993, 1998, 2004, and 2010) but once they're in, they're in - respondents have a weight of zero for interview waves when they were nursing home residents; but they're still responding and will continue to contribute to your statistics so long as you're generalizing about a population from a previous wave (for example: it's possible to compute "among all americans who were 50+ years old in 1998, x% lived in nursing homes by 2010"). my source for that 411? page 13 of the design doc. wicked.
    this new github repository contains five scripts:
    - 1992 - 2010 download HRS microdata.R: loop through every year and every file, download, then unzip everything in one big party.
    - import longitudinal RAND contributed files.R: create a SQLite database (.db) on the local disk; load the rand, rand-cams, and both rand-family files into the database (.db) in chunks (to prevent overloading ram).
    - longitudinal RAND - analysis examples.R: connect to the sql database created by the 'import longitudinal RAND contributed files' program; create two database-backed complex sample survey objects, using a taylor-series linearization design; perform a mountain of analysis examples with wave weights from two different points in the panel.
    - import example HRS file.R: load a fixed-width file using only the sas importation script directly into ram with SAScii (http://blog.revolutionanalytics.com/2012/07/importing-public-data-with-sas-instructions-into-r.html); parse through the IF block at the bottom of the sas importation script, blank out a number of variables; save the file as an R data file (.rda) for fast loading later.
    - replicate 2002 regression.R: connect to the sql database created by the 'import longitudinal RAND contributed files' program; create a database-backed complex sample survey object, using a taylor-series linearization design; exactly match the final regression shown in this document provided by analysts at RAND as an update of the regression on pdf page B76 of this document.
    click here to view these five scripts. for more detail about the health and retirement study (hrs), visit: michigan's hrs homepage; rand's hrs homepage; the hrs wikipedia page; a running list of publications using hrs. notes: exemplary work making it this far. as a reward, here's the detailed codebook for the main rand hrs file. note that rand also creates 'flat files' for every survey wave, but really, most every analysis you can think of is possible using just the four files imported with the rand importation script above. if you must work with the non-rand files, there's an example of how to import a single hrs (umich-created) file, but if you wish to import more than one, you'll have to write some for loops yourself. confidential to sas, spss, stata, and sudaan users: a tidal wave is coming. you can get water up your nose and be dragged out to sea, or you can grab a surf board. time to transition to r. :D
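
    The five scripts above are R programs. As a rough Python sketch of the chunked-load step they describe - streaming a large RAND HRS file into a local SQLite database so it never has to sit in RAM all at once - with a hypothetical filename standing in for the real RAND release:

    import sqlite3
    import pandas as pd

    con = sqlite3.connect("hrs_rand.db")
    # Read the (hypothetically named) RAND HRS SAS dataset in chunks and append each
    # chunk to a table, mirroring the 'load ... into the database in chunks' step.
    reader = pd.read_sas("randhrs_longitudinal.sas7bdat", format="sas7bdat", chunksize=10_000)
    for chunk in reader:
        chunk.to_sql("rand_hrs", con, if_exists="append", index=False)
    con.close()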

  15. Supplement 1. MATLAB and SAS code necessary to replicate the simulation...

    • wiley.figshare.com
    • datasetcatalog.nlm.nih.gov
    html
    Updated Jun 4, 2023
    Cite
    Jeffrey A. Evans; Adam S. Davis; S. Raghu; Ashok Ragavendran; Douglas A. Landis; Douglas W. Schemske (2023). Supplement 1. MATLAB and SAS code necessary to replicate the simulation models and other demographic analyses presented in the paper. [Dataset]. http://doi.org/10.6084/m9.figshare.3517478.v1
    Available download formats: html
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Wiley
    Authors
    Jeffrey A. Evans; Adam S. Davis; S. Raghu; Ashok Ragavendran; Douglas A. Landis; Douglas W. Schemske
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    File list: Code_and_Data_Supplement.zip (md5: dea8636b921f39c9d3fd269e44b6228c). Description: The supplementary material provided includes all code and data files necessary to replicate the simulation models and other demographic analyses presented in the paper. MATLAB code is provided for the simulations, and SAS code is provided to show how model parameters (vital rates) were estimated.

      The principal programs are Figure_3_4_5_Elasticity_Contours.m and Figure_6_Contours_Stochastic_Lambda.m, which perform the elasticity analyses and run the stochastic simulation, respectively.
    
    
      The files are presented in a zipped folder called Code_and_Data_Supplement. When uncompressed, users may run the MATLAB programs by opening them from within this directory. Subdirectories contain the data files and supporting MATLAB functions necessary to complete execution. The programs are written to find the necessary supporting functions in the Code_and_Data_Supplement directory. If users copy these MATLAB files to a different directory, they must add the Code_and_Data_Supplement directory and its subdirectories to their search path to make the supporting files available.
    
    
      More details are provided in the README.txt file included in the supplement.
    
    
      The file and directory structure of the entire zipped supplement is shown below.
    
      Folder PATH listing
    Code_and_Data_Supplement
    |  Figure_3_4_5_Elasticity_Contours.m
    |  Figure_6_Contours_Stochastic_Lambda.m
    |  Figure_A1_RefitG2.m
    |  Figure_A2_PlotFecundityRegression.m
    |  README.txt
    |  
    +---FinalDataFiles
    +---Make Tables
    |    README.txt
    |    Table_lamANNUAL.csv
    |    Table_mgtProbPredicted.csv
    |    
    +---ParameterEstimation
    |  |  Categorical Model output.xls
    |  |  
    |  +---Fecundity
    |  |    Appendix_A3_Fecundity_Breakpoint.sas
    |  |    fec_Cat_Indiv.sas
    |  |    Mean_Fec_Previous_Study.m
    |  |    
    |  +---G1
    |  |    G1_Cat.sas
    |  |    
    |  +---G2
    |  |    G2_Cat.sas
    |  |    
    |  +---Model Ranking
    |  |    Categorical Model Ranking.xls
    |  |    
    |  +---Seedlings
    |  |    sdl_Cat.sas
    |  |    
    |  +---SS
    |  |    SS_Cat.sas
    |  |    
    |  +---SumSrv
    |  |    sum_Cat.sas
    |  |    
    |  \---WinSrv
    |      modavg.m
    |      winCatModAvgfitted.m
    |      winCatModAvgLinP.m
    |      winCatModAvgMu.m
    |      win_Cat.sas
    |      
    +---ProcessedDatafiles
    |    fecdat_gm_param_est_paper.mat
    |    hierarchical_parameters.mat
    |    refitG2_param_estimation.mat
    |    
    \---Required_Functions
      |  hline.m
      |  hmstoc.m
      |  Jeffs_Figure_Settings.m
      |  Jeffs_startup.m
      |  newbootci.m
      |  sem.m
      |  senstuff.m
      |  vline.m
      |  
      +---export_fig
      |    change_value.m
      |    eps2pdf.m
      |    export_fig.m
      |    fix_lines.m
      |    ghostscript.m
      |    license.txt
      |    pdf2eps.m
      |    pdftops.m
      |    print2array.m
      |    print2eps.m
      |    
      +---lowess
      |    license.txt
      |    lowess.m
      |    
      +---Multiprod_2009
      |  |  Appendix A - Algorithm.pdf
      |  |  Appendix B - Testing speed and memory usage.pdf
      |  |  Appendix C - Syntaxes.pdf
      |  |  license.txt
      |  |  loc2loc.m
      |  |  MULTIPROD Toolbox Manual.pdf
      |  |  multiprod.m
      |  |  multitransp.m
      |  |  
      |  \---Testing
      |    |  arraylab13.m
      |    |  arraylab131.m
      |    |  arraylab132.m
      |    |  arraylab133.m
      |    |  genop.m
      |    |  multiprod13.m
      |    |  readme.txt
      |    |  sysrequirements_for_testing.m
      |    |  testing_memory_usage.m
      |    |  testMULTIPROD.m
      |    |  timing_arraylab_engines.m
      |    |  timing_matlab_commands.m
      |    |  timing_MX.m
      |    |  
      |    \---Data
      |        Memory used by MATLAB statements.xls
      |        Timing results.xlsx
      |        timing_MX.txt
      |        
      +---province
      |    PROVINCE.DBF
      |    province.prj
      |    PROVINCE.SHP
      |    PROVINCE.SHX
      |    README.txt
      |    
      +---SubAxis
      |    parseArgs.m
      |    subaxis.m
      |    
      +---suplabel
      |    license.txt
      |    suplabel.m
      |    suplabel_test.m
      |    
      \---tight_subplot
          license.txt
          tight_subplot.m
    
  16. Documentation of Minnesota farmland value calculations for 2021

    • dataone.org
    • dataverse.harvard.edu
    Updated Nov 12, 2023
    Cite
    William F Lazarus (2023). Documentation of Minnesota farmland value calculations for 2021 [Dataset]. http://doi.org/10.7910/DVN/GBQINF
    Dataset updated
    Nov 12, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    William F Lazarus
    Area covered
    Minnesota
    Description

    Documentation (Word file), SAS 9.4 program files, Excel spreadsheets, HTML, GIF, and PDFs used in generating a staff paper and a web-based database of Minnesota farmland sales prices and acreages by township for 2021. If you don't have SAS and would like to view the .sas program files, one approach is to make a copy of the file, rename it with a .txt extension, and open it in Notepad. The SAS database files can also be exported using R if you don't have SAS.
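
    The description notes that the .sas program files can be viewed as plain text and that the SAS data files can be exported with R if SAS is unavailable; the same export can be done in Python with pandas, as in this minimal sketch (the filename is a hypothetical placeholder for one of the dataset's SAS data files):

    import pandas as pd

    # Read a SAS dataset and write it out to an open format.
    sales = pd.read_sas("farmland_sales_2021.sas7bdat", format="sas7bdat")  # hypothetical filename
    sales.to_csv("farmland_sales_2021.csv", index=False)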

  17. ODM Data Analysis—A tool for the automatic validation, monitoring and...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    mp4
    Updated May 31, 2023
    Cite
    Tobias Johannes Brix; Philipp Bruland; Saad Sarfraz; Jan Ernsting; Philipp Neuhaus; Michael Storck; Justin Doods; Sonja Ständer; Martin Dugas (2023). ODM Data Analysis—A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data [Dataset]. http://doi.org/10.1371/journal.pone.0199242
    Available download formats: mp4
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tobias Johannes Brix; Philipp Bruland; Saad Sarfraz; Jan Ernsting; Philipp Neuhaus; Michael Storck; Justin Doods; Sonja Ständer; Martin Dugas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: A required step for presenting results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow to accomplish this task is to export the clinical data from the electronic data capture system used and import it into statistical software such as SAS software or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data.
    Methods: The system requires clinical data in the CDISC Operational Data Model format. After uploading the file, its syntax and the data-type conformity of the collected data are validated. The completeness of the study data is determined and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality.
    Results: The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is only stored in the application as long as the calculations are performed, which is compliant with data protection endeavors. Analysis times are below half an hour, even for larger studies with over 6000 subjects.
    Discussion: Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.

  18. HT - SAS - Base-line household survey - Vietnam

    • cloud.csiss.gmu.edu
    getdata
    Updated Jul 15, 2021
    Cite
    Open Africa (2021). HT - SAS - Base-line household survey - Vietnam [Dataset]. https://cloud.csiss.gmu.edu/uddi/hu/dataset/groups/htsas-hs-vietnam
    Available download formats: getdata
    Dataset updated
    Jul 15, 2021
    Dataset provided by
    Open Africa
    License

    http://www.opendefinition.org/licenses/cc-by-sa

    Area covered
    Vietnam
    Description

    Humidtropics - Systems Analysis and Synthesis - Base-line household survey - Vietnam

    This dataset contains household base-line data in Vietnam action sites that can be used to develop typologies and farm simulation models and to prepare impact assessments.

    WARNING: Data is still being reviewed and may be changed at any time.

    We remind users that data downloadable from the portal is for analysis ONLY. Any cleaning happening to these files WILL NOT affect the database. Errors and inconsistencies MUST be reported to the project staff in charge of data collection and cleaning.

  19. Archive of Census Related Products (ACRP): 1980 SAS Transport Files -...

    • data.nasa.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    Updated Apr 23, 2025
    Cite
    nasa.gov (2025). Archive of Census Related Products (ACRP): 1980 SAS Transport Files - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/archive-of-census-related-products-acrp-1980-sas-transport-files
    Dataset updated
    Apr 23, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The 1980 SAS Transport Files portion of the Archive of Census Related Products (ACRP) contains housing and population demographics from the 1980 Summary Tape File (STF3A) database, organized by state. The population data include education levels, ethnicity, income distribution, nativity, labor force status, means of transportation, and family structure, while the housing data cover the size, state, and structure of the housing unit, the value of the unit, tenure and occupancy status, source of water, sewage disposal, availability of telephone, heating and air conditioning, kitchen facilities, rent, mortgage status, and monthly owner costs. This portion of the ACRP is produced by the Columbia University Center for International Earth Science Information Network (CIESIN).
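
    A minimal sketch for reading one of these SAS transport (.xpt) files in Python, assuming pandas is installed (the filename is a hypothetical placeholder for one of the ACRP files):

    import pandas as pd

    # pandas can read SAS transport files directly via the XPORT format.
    stf3a = pd.read_sas("acrp_1980_stf3a_state.xpt", format="xport")  # hypothetical filename
    print(stf3a.columns[:10])   # peek at the first few housing/population variables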

  20. SAS-2 Photon Events Catalog | gimi9.com

    • gimi9.com
    Updated Feb 1, 2001
    Cite
    (2001). SAS-2 Photon Events Catalog | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_sas-2-photon-events-catalog/
    Dataset updated
    Feb 1, 2001
    Description

    The SAS2RAW database is a log of the 28 SAS-2 observation intervals and contains target names, sky coordinates, start times, and other information for all 13056 photons detected by SAS-2. The original data came from two sources: the photon information was obtained from the Event Encyclopedia, and the exposures were derived from the original "Orbit Attitude Live Time" (OALT) tapes stored at NASA/GSFC. These data sets were combined into FITS format images at HEASARC. The images were formed by making the center pixel of a 512 x 512 pixel image correspond to the RA and DEC given in the event file. Each photon's RA and DEC was then converted to a relative pixel in the image using Aitoff projections. All the raw data from the original SAS-2 binary data files are now stored in 28 FITS files. These images can be accessed and plotted using XIMAGE, and other columns of the FITS file extensions can be plotted with the FTOOL FPLOT. This is a service provided by NASA HEASARC.
