Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The sample SAS and Stata code provided here is intended for use with certain datasets in the National Neighborhood Data Archive (NaNDA). NaNDA (https://www.openicpsr.org/openicpsr/nanda) contains some datasets that measure neighborhood context at the ZIP Code Tabulation Area (ZCTA) level. They are intended for use with survey or other individual-level data containing ZIP codes. Because ZIP codes do not exactly match ZIP code tabulation areas, a crosswalk is required to use ZIP-code-level geocoded datasets with ZCTA-level datasets from NaNDA. A ZIP-code-to-ZCTA crosswalk was previously available on the UDS Mapper website, which is no longer active. An archived copy of the ZIP-code-to-ZCTA crosswalk file has been included here. Sample SAS and Stata code are provided for merging the UDS mapper crosswalk with NaNDA datasets.
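The distributed code is in SAS and Stata; the same two-step merge can be sketched in pandas. The column names and values below are illustrative stand-ins, not the actual UDS Mapper or NaNDA variable names:

```python
import pandas as pd

# Hypothetical stand-ins: column names and values are illustrative,
# not the actual UDS Mapper crosswalk / NaNDA variable names.
survey = pd.DataFrame({"respondent_id": [1, 2, 3],
                       "zip_code": ["48104", "48105", "10001"]})
crosswalk = pd.DataFrame({"zip_code": ["48104", "48105", "10001"],
                          "zcta": ["48104", "48104", "10001"]})
nanda = pd.DataFrame({"zcta": ["48104", "10001"],
                      "density": [0.7, 2.1]})

# Step 1: attach a ZCTA to each respondent's ZIP code via the crosswalk
merged = survey.merge(crosswalk, on="zip_code", how="left")
# Step 2: attach the ZCTA-level neighborhood measures
merged = merged.merge(nanda, on="zcta", how="left")
print(merged)
```

A left merge keeps every survey respondent even when a ZIP code has no crosswalk entry, which makes unmatched records easy to audit afterward.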
This dataset consists of DOHGS merged data from the 19 research flights of the C-130 over the Southeast U.S. between June 1 and July 15, 2013, as part of the Southeast Atmosphere Study (SAS). Merged data files have been created, combining all observations on the C-130 to a common time base for each flight. Version R5 of the merges (created Jan 21, 2015) includes all data available as of Jan 12. Start and stop times are taken from the DOHGS file, and the midtime is calculated from them. Averaging and missing-value treatment are handled as in the 1-min merge.
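The midtime convention described above is simply the midpoint of each start/stop averaging interval. A one-line sketch, assuming times are expressed as seconds (the actual units and variable names come from the DOHGS file itself):

```python
# Midpoint of a start/stop averaging interval, as described for the
# DOHGS merge. Units (seconds) are an assumption for this sketch.
def midtime(start, stop):
    return start + (stop - start) / 2.0

print(midtime(60.0, 120.0))
```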
A novel feature set that uniquely characterizes the object's shape and takes into account the object's highlight-shadow geometric relations.
https://www.icpsr.umich.edu/web/ICPSR/studies/31421/terms
The New York City Department of Health and Mental Hygiene, with support from the National Center for Health Statistics, conducted the New York City Health and Nutrition Examination Survey (NYC HANES) to improve disease surveillance and establish citywide estimates for several previously unmeasured health conditions from which reduction targets could be set and incorporated into health policy planning initiatives. NYC HANES also provides important new information about the prevalence and control of chronic disease precursors, such as undiagnosed hypertension, hypercholesterolemia, and impaired fasting glucose, which allow chronic disease programs to monitor more proximate health events and rapidly evaluate primary intervention efforts. Study findings are used by the public health community in New York City, as well as by researchers and clinicians, to better target resources to the health needs of the population. The NYC HANES data consist of the following six datasets: (1) Study Participant File (SPfile), (2) Computer-Assisted Personal Interview (CAPI), (3) Audio Computer-Assisted Self-Interview (ACASI), (4) Composite International Diagnostic Interview (CIDI), (5) Examination Component, and (6) Laboratory Component. The Study Participant File contains variables necessary for all analyses; therefore, the other datasets should be merged to this file when they are used. Variable SP_ID is the unique identifier used to merge all datasets. Merging information from multiple NYC HANES datasets using SP_ID ensures that the appropriate information for each SP is linked correctly. (SAS datasets must be sorted by SP_ID prior to merging.) Please note that NYC HANES datasets may not have the same number of records for each component because some participants did not complete each component. Demographic variables include race/ethnicity, Hispanic origin, age, body weight, gender, education level, marital status, and country of birth.
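The sort-then-merge pattern the documentation describes for SAS can be sketched in pandas. The datasets and values here are invented for illustration; only the SP_ID identifier comes from the study documentation:

```python
import pandas as pd

# Illustrative toy data, not the distributed NYC HANES files.
# Only SP_ID is taken from the study documentation.
sp_file = pd.DataFrame({"SP_ID": [103, 101, 102],
                        "age": [44, 31, 58]})
lab = pd.DataFrame({"SP_ID": [101, 103],
                    "glucose": [92.0, 110.0]})

# Mirror the SAS requirement: sort both datasets by SP_ID first.
# A left merge keeps every participant even without a lab record,
# reflecting that components have differing record counts.
sp_file = sp_file.sort_values("SP_ID")
lab = lab.sort_values("SP_ID")
analysis = sp_file.merge(lab, on="SP_ID", how="left")
print(analysis)
```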
The dataset was collected via a combination of the following:
1. Manual extraction of EHR-based data, followed by entry into REDCap and then analysis and further processing in SAS 9.4;
2. A data pull of Epic EHR-based data from the Clarity database using standard programming techniques, followed by processing in SAS 9.4 and merging with data from REDCap;
3. Collection of data directly from participants via telephone, with entry into REDCap and further processing in SAS 9.4;
4. Collection of process measures from study team tracking records, followed by entry into REDCap and further processing in SAS 9.4.
One file in the dataset contains aggregate data generated by merging the Clarity data-pull-origin dataset with a REDCap dataset, with further manual processing. Recruitment for the randomized trial began at an epilepsy clinic visit, with EHR-embedded validated anxiety and depression instruments, followed by automated EHR-based research screening consent and eligibility assessment. Full...
analyze the health and retirement study (hrs) with r. the hrs is the one and only longitudinal survey of american seniors. with a panel starting its third decade, the current pool of respondents includes older folks who have been interviewed every two years as far back as 1992. unlike cross-sectional or shorter panel surveys, respondents keep responding until, well, death do us part. paid for by the national institute on aging and administered by the university of michigan's institute for social research - if you apply for an interviewer job with them, i hope you like werther's original. figuring out how to analyze this data set might trigger your fight-or-flight synapses if you just start clicking around on michigan's website. instead, read pages numbered 10-17 (pdf pages 12-19) of this introduction pdf and don't touch the data until you understand figure a-3 on that last page. if you start enjoying yourself, here's the whole book. after that, it's time to register for access to the (free) data. keep your username and password handy - you'll need them for the top of the download automation r script. next, look at this data flowchart to get an idea of why the data download page is such a righteous jungle. but wait, good news: umich recently farmed out its data management to the rand corporation, who promptly constructed a giant consolidated file with one record per respondent across the whole panel. oh so beautiful. the rand hrs files make much of the older data and syntax examples obsolete, so when you come across stuff like instructions on how to merge years, you can happily ignore them - rand has done it for you.
the health and retirement study only includes noninstitutionalized adults when new respondents get added to the panel (as they were in 1992, 1993, 1998, 2004, and 2010) but once they're in, they're in - respondents have a weight of zero for interview waves when they were nursing home residents, but they're still responding and will continue to contribute to your statistics so long as you're generalizing about a population from a previous wave (for example: it's possible to compute "among all americans who were 50+ years old in 1998, x% lived in nursing homes by 2010"). my source for that 411? page 13 of the design doc. wicked. this new github repository contains five scripts:

1992 - 2010 download HRS microdata.R
- loop through every year and every file, download, then unzip everything in one big party

import longitudinal RAND contributed files.R
- create a SQLite database (.db) on the local disk
- load the rand, rand-cams, and both rand-family files into the database (.db) in chunks (to prevent overloading ram)

longitudinal RAND - analysis examples.R
- connect to the sql database created by the 'import longitudinal RAND contributed files' program
- create two database-backed complex sample survey objects, using a taylor-series linearization design
- perform a mountain of analysis examples with wave weights from two different points in the panel

import example HRS file.R
- load a fixed-width file using only the sas importation script directly into ram with SAScii (http://blog.revolutionanalytics.com/2012/07/importing-public-data-with-sas-instructions-into-r.html)
- parse through the IF block at the bottom of the sas importation script, blank out a number of variables
- save the file as an R data file (.rda) for fast loading later

replicate 2002 regression.R
- connect to the sql database created by the 'import longitudinal RAND contributed files' program
- create a database-backed complex sample survey object, using a taylor-series linearization design
- exactly match the final regression shown in this document provided by analysts at RAND as an update of the regression on pdf page B76 of this document

click here to view these five scripts. for more detail about the health and retirement study (hrs), visit: michigan's hrs homepage, rand's hrs homepage, the hrs wikipedia page, and a running list of publications using hrs. notes: exemplary work making it this far. as a reward, here's the detailed codebook for the main rand hrs file. note that rand also creates 'flat files' for every survey wave, but really, most every analysis you can think of is possible using just the four files imported with the rand importation script above. if you must work with the non-rand files, there's an example of how to import a single hrs (umich-created) file, but if you wish to import more than one, you'll have to write some for loops yourself. confidential to sas, spss, stata, and sudaan users: a tidal wave is coming. you can get water up your nose and be dragged out to sea, or you can grab a surf board. time to transition to r. :D
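the repo itself is all R, but the chunked-load trick it uses (so ram never holds the whole file) is language-agnostic. here's a rough python sketch of the same idea - every filename, column name, and table name below is invented for the demo, not taken from the repo:

```python
import csv
import sqlite3
import pandas as pd

# Build a tiny demo csv so the sketch is self-contained; in the real
# workflow this would be a huge downloaded RAND HRS file.
with open("rand_demo.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["hhidpn", "r1agey"])
    for i in range(10):
        w.writerow([i, 50 + i])

con = sqlite3.connect("hrs_demo.db")
con.execute("drop table if exists rand_hrs")

# Read the file a chunk at a time and append each chunk to the table,
# so memory holds only one chunk (tiny here; ~50k rows in practice).
for chunk in pd.read_csv("rand_demo.csv", chunksize=4):
    chunk.to_sql("rand_hrs", con, if_exists="append", index=False)

n = con.execute("select count(*) from rand_hrs").fetchone()[0]
con.close()
print(n)
```

once everything is in the .db file, analyses query the table instead of loading it, which is exactly what the database-backed survey objects in the repo's R scripts do.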