Summary data for the studies used in the meta-analysis of local adaptation (Table 1 from the publication). This table contains the data used in this published meta-analysis. The data were originally extracted from the publications listed in the table. The file corresponds to Table 1 in the original publication. (tb1.xls)
SAS script used to perform meta-analyses. This file contains the essential elements of the SAS script used to perform the meta-analyses published in Hoeksema & Forde 2008. Multi-factor models were fit to the data using weighted maximum likelihood estimation of parameters in a mixed-model framework, using SAS PROC MIXED, in which the species traits and experimental design factors were considered fixed effects and a random between-studies variance component was estimated. Significance (at alpha = 0.05) of individual factors in these models was determined using randomization procedures with 10,000 iterations (performed with a combination of macros in SAS), in which effect sizes a...
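For orientation, a minimal sketch of that model structure in PROC MIXED appears below; the dataset and variable names (meta, effect_size, w, trait, design, study_id) are invented stand-ins, not the authors' actual script, and the inverse-variance weighting shown is one common convention rather than the published macro code.

/* Hedged sketch: weighted mixed-model meta-analysis in PROC MIXED.
   All dataset and variable names are hypothetical. */
proc mixed data=meta method=ml;
  class study_id trait design;
  model effect_size = trait design / solution;
  random intercept / subject=study_id;  /* between-studies variance component */
  weight w;                             /* e.g., inverse-variance weights */
run;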
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Pregnancy is a condition of broad interest across many medical and health services research domains, but one not easily identified in healthcare claims data. Our objective was to establish an algorithm to identify pregnant women and their pregnancies in claims data. We identified pregnancy-related diagnosis, procedure, and diagnosis-related group codes, accounting for the transition to International Statistical Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) diagnosis and procedure codes, in health encounter reporting on 10/1/2015. We selected women in Merative MarketScan commercial databases aged 15–49 years with pregnancy-related claims, and their infants, during 2008–2019. Pregnancies, pregnancy outcomes, and gestational ages were assigned using the constellation of service dates, code types, pregnancy outcomes, and linkage to infant records. We describe pregnancy outcomes and gestational ages, as well as maternal age, census region, and health plan type. In a sensitivity analysis, we compared our algorithm-assigned date of last menstrual period (LMP) to fertility procedure-based LMP (date of procedure + 14 days) among women with embryo transfer or insemination procedures. Among 5,812,699 identified pregnancies, most (77.9%) were livebirths, followed by spontaneous abortions (16.2%); 3,274,353 (72.2%) livebirths could be linked to infants. Most pregnancies were among women 25–34 years (59.1%), living in the South (39.1%) and Midwest (22.4%), with large employer-sponsored insurance (52.0%). Outcome distributions were similar across ICD-9 and ICD-10 eras, with some variation in gestational age distribution observed. Sensitivity analyses supported our algorithm’s framework; algorithm- and fertility procedure-derived LMP estimates were within a week of each other (mean difference: -4 days [IQR: -13 to 6 days]; n = 107,870). We have developed an algorithm to identify pregnancies, their gestational age, and outcomes, across ICD-9 and ICD-10 eras using administrative data. This algorithm may be useful to reproductive health researchers investigating a broad range of pregnancy and infant outcomes.
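To make the sensitivity comparison concrete, the back-calculation described above (LMP = fertility procedure date + 14 days) amounts to a one-line SAS data step; the dataset and variable names below (fertility_claims, svc_date, lmp_alg) are hypothetical, not the study's actual code.

/* Hedged sketch: fertility procedure-based LMP and its difference from the
   algorithm-assigned LMP. All names are hypothetical. */
data lmp_compare;
  set fertility_claims;            /* one row per embryo transfer or insemination */
  lmp_proc = svc_date + 14;        /* SAS dates are day counts, so +14 adds 14 days */
  diff_days = lmp_alg - lmp_proc;  /* negative values: algorithm LMP is earlier */
  format lmp_proc date9.;
run;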
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
It is commonly believed that if a two-way analysis of variance (ANOVA) is carried out in R, then the reported p-values are correct. This article shows that this is not always the case: results can vary from non-significant to highly significant depending on the choice of options. The user must know exactly which options yield correct p-values and which do not. Furthermore, it is commonly supposed that analyses of simple balanced experiments using mixed-effects models in SAS and R produce correct p-values. However, the simulation study in the present article indicates that the Type I error rate can deviate from the nominal level. The objective of this article is to compare SAS and R with respect to the correctness of results when analyzing small experiments. It is concluded that modern functions and procedures for the analysis of mixed-effects models are sometimes not as reliable as traditional ANOVA based on simple computations of sums of squares.
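The options at issue largely come down to which sums of squares are computed: with unbalanced data, sequential (Type I) and partial (Type III) tests for the same factor can differ sharply. A minimal SAS sketch that prints both side by side (dataset and variable names are hypothetical):

/* Hedged sketch: Type I vs. Type III tests in a two-way layout.
   Dataset and variable names (expt, a, b, y) are hypothetical. In R,
   anova() on an lm fit gives sequential tests, while car::Anova(type = 3)
   gives partial tests - one source of the discrepancies discussed above. */
proc glm data=expt;
  class a b;
  model y = a b a*b / ss1 ss3;  /* request both Type I and Type III sums of squares */
run;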
Procedures Services In Colombia Sas Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
analyze the current population survey (cps) annual social and economic supplement (asec) with r
the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show. this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
download the fixed-width file containing household, family, and person records
import by separating this file into three tables, then merge 'em together at the person-level
download the fixed-width file containing the person-level replicate weights
merge the rectangular person-level file with the replicate weights, then store it in a sql database
create a new variable - one - in the data table

2012 asec - analysis examples.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
perform a boatload of analysis examples

replicate census estimates - 2011.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
match the sas output shown in the png file below

2011 asec replicate weight sas output.png
statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
the census bureau's current population survey page
the bureau of labor statistics' current population survey page
the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
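curious what those nber importation scripts look like? here's a hedged, made-up miniature of the nber-style fixed-width input syntax that parse.SAScii reads - the column positions and variable names below are invented, and the real march asec input statements run for hundreds of lines.

/* hedged sketch of nber-style fixed-width sas import syntax.
   column positions and variable names are invented for illustration. */
data person;
  infile 'asec2012.dat' lrecl=1000 missover;
  input
    @1   precord  1.   /* record type flag */
    @2   h_seq    5.   /* household sequence number */
    @7   a_age    2.   /* person age */
    @9   ptotval  8.   /* total person income */
  ;
run;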
Multienvironment trials (METs) enable the evaluation of the same genotypes under a variety of environments and management conditions. We present META (Multi Environment Trial Analysis), a suite of 31 SAS programs that analyze METs with complete or incomplete block designs, with or without adjustment by a covariate. The entire program is run through a graphical user interface. The program can produce boxplots or histograms for all traits, as well as univariate statistics. It also calculates best linear unbiased estimators (BLUEs) and best linear unbiased predictors for the main response variable and BLUEs for all other traits. For all traits, it calculates variance components by restricted maximum likelihood, least significant difference, coefficient of variation, and broad-sense heritability using PROC MIXED. The program can analyze each location separately, combine the analysis by management conditions, or combine all locations. The flexibility and simplicity of use of this program make it a valuable tool for analyzing METs in breeding and agronomy. The META program can be used by any researcher who knows only a few fundamental principles of SAS.
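As a rough illustration of the kind of PROC MIXED call META automates, the sketch below analyzes a single location with genotypes fixed (so the least-squares means are BLUEs) and blocks random; the dataset and variable names (met, geno, rep, yield) are hypothetical, not the program's actual code.

/* Hedged sketch: single-location analysis with REML variance components.
   All names are hypothetical. */
proc mixed data=met method=reml;
  class geno rep;
  model yield = geno;   /* genotypes fixed -> lsmeans are BLUEs */
  random rep;           /* blocks random; variance components by REML */
  lsmeans geno;
run;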
List of 56 characters used for cluster analysis and their significance levels from univariate test statistics using CANDISC procedure (SAS software).
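For context, univariate test statistics of this kind are produced by canonical discriminant analysis; a minimal sketch of the call (the dataset, class, and character variable names are hypothetical):

/* Hedged sketch: canonical discriminant analysis with univariate tests.
   All names are hypothetical. The ANOVA option prints per-character
   univariate statistics (R-square, F, and p-values). */
proc candisc data=morpho anova;
  class cluster;
  var char1-char56;
run;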
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS Code for Spatial Optimization of Supply Chain Network for Nitrogen Based Fertilizer in North America, by type, by mode of transportation, per county, for all major crops, using Proc OptModel. The code specifies a set of random values to run the mixed-integer stochastic spatial optimization model repeatedly and collects the results of each simulation, which are then compiled and exported for projection in GIS (geographic information systems). Certain supply nodes (fertilizer plants) are constrained to operate at 70 percent of their capacities or more. Capacities for supply nodes (fertilizer plants), demand nodes (county centroids), and transhipment nodes (transfer points where the mode may change), along with the actual distances travelled, are specified over arcs.
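A heavily simplified, deterministic sketch of that network structure in PROC OPTMODEL follows; the tiny data, set names, and the 70-percent capacity floor are illustrative stand-ins for the published mixed-integer stochastic model, not the actual code.

/* Hedged sketch: minimum-cost shipment with a capacity floor at plants.
   All sets, data, and names are invented; the real model is far larger. */
proc optmodel;
  set <str> PLANTS = {'p1','p2'};
  set <str> COUNTIES = {'c1','c2','c3'};
  num cap {PLANTS} = [100 80];
  num dem {COUNTIES} = [40 50 60];
  num dist {PLANTS, COUNTIES} = [10 20 30
                                 25 15  5];
  var Ship {PLANTS, COUNTIES} >= 0;
  min TotalCost = sum {i in PLANTS, j in COUNTIES} dist[i,j]*Ship[i,j];
  /* plants must operate at 70 percent of capacity or more */
  con SupplyLo {i in PLANTS}: sum {j in COUNTIES} Ship[i,j] >= 0.70*cap[i];
  con SupplyHi {i in PLANTS}: sum {j in COUNTIES} Ship[i,j] <= cap[i];
  con Demand {j in COUNTIES}: sum {i in PLANTS} Ship[i,j] >= dem[j];
  solve;
  print Ship;
quit;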
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS PROC used to evaluate SSMT data
Results from PROC MIXED (SAS) analysis of effects of inoculum origin on plant biomass production of mid-successional plant species relative to the sterilized control treatment.
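Comparisons of treatments against a single control of this kind are often coded in PROC MIXED with Dunnett-adjusted least-squares mean differences; a minimal sketch (all dataset and variable names are hypothetical, not the authors' analysis):

/* Hedged sketch: each inoculum origin vs. a sterilized control.
   All names are hypothetical. */
proc mixed data=soil;
  class origin block;
  model biomass = origin;
  random block;
  lsmeans origin / diff=control('sterilized') adjust=dunnett;
run;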
File List: Code_and_Data_Supplement.zip (md5: dea8636b921f39c9d3fd269e44b6228c)
Description: The supplementary material provided includes all code and data files necessary to replicate the simulation models and other demographic analyses presented in the paper. MATLAB code is provided for the simulations, and SAS code is provided to show how model parameters (vital rates) were estimated. The principal programs are Figure_3_4_5_Elasticity_Contours.m and Figure_6_Contours_Stochastic_Lambda.m, which perform the elasticity analyses and run the stochastic simulation, respectively. The files are presented in a zipped folder called Code_and_Data_Supplement. When it is uncompressed, users may run the MATLAB programs by opening them from within this directory. Subdirectories contain the data files and supporting MATLAB functions necessary to complete execution. The programs are written to find the necessary supporting functions in the Code_and_Data_Supplement directory. If users copy these MATLAB files to a different directory, they must add the Code_and_Data_Supplement directory and its subdirectories to their search path to make the supporting files available. More details are provided in the README.txt file included in the supplement. The file and directory structure of the entire zipped supplement is shown below.

Folder PATH listing
Code_and_Data_Supplement
|   Figure_3_4_5_Elasticity_Contours.m
|   Figure_6_Contours_Stochastic_Lambda.m
|   Figure_A1_RefitG2.m
|   Figure_A2_PlotFecundityRegression.m
|   README.txt
|
+---FinalDataFiles
+---Make Tables
|       README.txt
|       Table_lamANNUAL.csv
|       Table_mgtProbPredicted.csv
|
+---ParameterEstimation
|   |   Categorical Model output.xls
|   |
|   +---Fecundity
|   |       Appendix_A3_Fecundity_Breakpoint.sas
|   |       fec_Cat_Indiv.sas
|   |       Mean_Fec_Previous_Study.m
|   |
|   +---G1
|   |       G1_Cat.sas
|   |
|   +---G2
|   |       G2_Cat.sas
|   |
|   +---Model Ranking
|   |       Categorical Model Ranking.xls
|   |
|   +---Seedlings
|   |       sdl_Cat.sas
|   |
|   +---SS
|   |       SS_Cat.sas
|   |
|   +---SumSrv
|   |       sum_Cat.sas
|   |
|   \---WinSrv
|           modavg.m
|           winCatModAvgfitted.m
|           winCatModAvgLinP.m
|           winCatModAvgMu.m
|           win_Cat.sas
|
+---ProcessedDatafiles
|       fecdat_gm_param_est_paper.mat
|       hierarchical_parameters.mat
|       refitG2_param_estimation.mat
|
\---Required_Functions
    |   hline.m
    |   hmstoc.m
    |   Jeffs_Figure_Settings.m
    |   Jeffs_startup.m
    |   newbootci.m
    |   sem.m
    |   senstuff.m
    |   vline.m
    |
    +---export_fig
    |       change_value.m
    |       eps2pdf.m
    |       export_fig.m
    |       fix_lines.m
    |       ghostscript.m
    |       license.txt
    |       pdf2eps.m
    |       pdftops.m
    |       print2array.m
    |       print2eps.m
    |
    +---lowess
    |       license.txt
    |       lowess.m
    |
    +---Multiprod_2009
    |   |   Appendix A - Algorithm.pdf
    |   |   Appendix B - Testing speed and memory usage.pdf
    |   |   Appendix C - Syntaxes.pdf
    |   |   license.txt
    |   |   loc2loc.m
    |   |   MULTIPROD Toolbox Manual.pdf
    |   |   multiprod.m
    |   |   multitransp.m
    |   |
    |   \---Testing
    |       |   arraylab13.m
    |       |   arraylab131.m
    |       |   arraylab132.m
    |       |   arraylab133.m
    |       |   genop.m
    |       |   multiprod13.m
    |       |   readme.txt
    |       |   sysrequirements_for_testing.m
    |       |   testing_memory_usage.m
    |       |   testMULTIPROD.m
    |       |   timing_arraylab_engines.m
    |       |   timing_matlab_commands.m
    |       |   timing_MX.m
    |       |
    |       \---Data
    |               Memory used by MATLAB statements.xls
    |               Timing results.xlsx
    |               timing_MX.txt
    |
    +---province
    |       PROVINCE.DBF
    |       province.prj
    |       PROVINCE.SHP
    |       PROVINCE.SHX
    |       README.txt
    |
    +---SubAxis
    |       parseArgs.m
    |       subaxis.m
    |
    +---suplabel
    |       license.txt
    |       suplabel.m
    |       suplabel_test.m
    |
    \---tight_subplot
            license.txt
            tight_subplot.m
The focus of this report is to describe the statistical inference procedures used to produce design-based estimates as presented in the 2013 detailed tables, the 2013 mental health detailed tables, the 2013 national findings report, and the 2013 mental health findings report. The statistical procedures and information found in this report can also be generally applied to analyses based on the public use file as well as the restricted-use file available through the data portal. This report is organized as follows: Section 2 provides background information concerning the 2013 NSDUH; Section 3 discusses the prevalence rates and how they were calculated, including specifics on topics such as mental illness, major depressive episode, and serious psychological distress; Section 4 briefly discusses how missing item responses of variables that are not imputed may lead to biased estimates; Section 5 discusses sampling errors and how they were calculated; Section 6 describes the degrees of freedom that were used when comparing estimates; and Section 7 discusses how the statistical significance of differences between estimates was determined. Section 8 discusses confidence interval estimation, and Section 9 describes how past year incidence of drug use was computed. Finally, Section 10 discusses the conditions under which estimates with low precision were suppressed. Appendix A contains examples that demonstrate how to conduct various statistical procedures documented within this report using SAS® and SUDAAN® Software for Statistical Analysis of Correlated Data (RTI International, 2012), along with separate examples using Stata® software.
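As a generic illustration of design-based estimation of the sort this report documents, the sketch below computes a weighted prevalence and its Taylor-series standard error with PROC SURVEYMEANS; the dataset, design, and analysis variable names are hypothetical stand-ins, not the NSDUH file's actual layout.

/* Hedged sketch: design-based prevalence with Taylor-series standard errors.
   All names are hypothetical. */
proc surveymeans data=nsduh mean stderr;
  strata vestr;    /* variance estimation strata */
  cluster verep;   /* variance estimation PSUs */
  weight analwt;   /* person-level analysis weight */
  var mj_use;      /* 0/1 indicator, so the mean is a prevalence rate */
run;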
The simulated synthetic aperture sonar (SAS) data presented here were generated using PoSSM [Johnson and Brown 2018]. The data are suitable for bistatic, coherent signal processing and will form acoustic seafloor imagery. Included in this data package are simulated sonar data in Generic Data Format (GDF) files, a description of the GDF file contents, example SAS imagery, and supporting information about the simulated scenes. In total, there are eleven 60 m x 90 m scenes, labeled scene00 through scene10, with scene00 provided with the scatterers in isolation, i.e. no seafloor texture. This scene is provided for beamformer testing purposes and should result in an image similar to the one labeled "PoSSM-scene00-scene00-starboard-0.tif" in the Related Data Sets tab. The ten other scenes have varying degrees of model variation as described in "Description_of_Simulated_SAS_Data_Package.pdf". A description of the data and the model is found in the associated document "Description_of_Simulated_SAS_Data_Package.pdf", and a description of the format in which the raw binary data are stored is found in the related document "PSU_GDF_Format_20240612.pdf". The format description also includes MATLAB code that will effectively parse the data to aid in signal processing and image reconstruction. It is left to the researcher to develop a beamforming algorithm suitable for coherent signal and image processing. Each 60 m x 90 m scene is represented by 4 raw (not beamformed) GDF files, labeled sceneXX-STARBOARD-000000 through 000003. It is possible to beamform smaller scenes from any one of these 4 files; the four files are combined sequentially to form a 60 m x 90 m image. Also included are comma-separated value spreadsheets describing the locations of scatterers and objects of interest within each scene. In addition to the binary GDF data, a beamformed GeoTIFF image and single-look complex (SLC, "science") data for each scene are provided. The SLC data are stored in Hierarchical Data Format 5 (https://www.hdfgroup.org/), with file names appended with ".hdf5" to indicate the HDF5 format. The data are stored as 32-bit real and 32-bit complex values. A viewer is available that provides basic graphing, image display, and directory navigation functions (https://www.hdfgroup.org/downloads/hdfview/). The HDF file contains all the information necessary to reconstruct a synthetic aperture sonar image. All major and contemporary programming languages have library support for encoding/decoding the HDF5 format. Supporting documentation that outlines the positions of the seafloor scatterers is included in "Scatterer_Locations_Scene00.csv", while the locations of the objects of interest for scene01-scene10 are included in "Object_Locations_All_Scenes.csv". Portable Network Graphics (PNG) images that plot the locations of all the objects of interest in each scene in Along-Track and Cross-Track notation are also provided.
https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de456864
Abstract (en): The purpose of this data collection is to provide an official public record of the business of the federal courts. The data originate from 94 district and 12 appellate court offices throughout the United States. Information was obtained at two points in the life of a case: filing and termination. The termination data contain information on both filings and terminations, while the pending data contain only filing information. For the appellate and civil data, the unit of analysis is a single case. The unit of analysis for the criminal data is a single defendant. ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: performed consistency checks; standardized missing values; checked for undocumented or out-of-range codes. All federal court cases, 1970-2000.

2012-05-22: All parts are being moved to restricted access and will be available only using the restricted access procedures.
2005-04-29: The codebook files in Parts 57, 94, and 95 have undergone minor edits and been incorporated with their respective datasets. The SAS files in Parts 90, 91, 227, and 229-231 have undergone minor edits and been incorporated with their respective datasets. The SPSS files in Parts 92, 93, 226, and 228 have undergone minor edits and been incorporated with their respective datasets. Parts 15-28, 34-56, 61-66, 70-75, 82-89, 96-105, 107, 108, and 115-121 have had identifying information removed from the public use file, and restricted data files that still include that information have been created. These parts have had their SPSS, SAS, and PDF codebook files updated to reflect the change. The data, SPSS, and SAS files for Parts 34-37 have been updated from OSIRIS to LRECL format. The codebook files for Parts 109-113 have been updated. The case counts for Parts 61-66 and 71-75 have been corrected in the study description. The LRECL for Parts 82, 100-102, and 105 has been corrected in the study description.
2003-04-03: A codebook was created for Part 105, Civil Pending, 1997. Parts 232-233, SAS and SPSS setup files for Civil Data, 1996-1997, were removed from the collection since the civil data files for those years have corresponding SAS and SPSS setup files.
2002-04-25: Criminal data files for Parts 109-113 have all been replaced with updated files. The updated files contain Criminal Terminations and Criminal Pending data in one file for the years 1996-2000. Part 114, originally Criminal Pending 2000, has been removed from the study, and the 2000 pending data are now included in Part 113.
2001-08-13: The following data files were revised to include plaintiff and defendant information: Appellate Terminations, 2000 (Part 107), Appellate Pending, 2000 (Part 108), Civil Terminations, 1996-2000 (Parts 103, 104, 115-117), and Civil Pending, 2000 (Part 118). The corresponding SAS and SPSS setup files and PDF codebooks have also been edited.
2001-04-12: Criminal Terminations (Parts 109-113) data for 1996-2000 and Criminal Pending (Part 114) data for 2000 have been added to the data collection, along with corresponding SAS and SPSS setup files and PDF codebooks.
2001-03-26: Appellate Terminations (Part 107) and Appellate Pending (Part 108) data for 2000 have been added to the data collection, along with corresponding SAS and SPSS setup files and PDF codebooks.
1997-07-16: The data for 18 of the Criminal Data files were matched to the wrong part numbers and names, and have now been corrected.

Funding institution(s): United States Department of Justice. Office of Justice Programs. Bureau of Justice Statistics.

(1) Several, but not all, of these record counts include a final blank record. Researchers may want to detect this occurrence and eliminate this record before analysis. (2) In July 1984, a major change in the recording and disposition of an appeal occurred, and several data fields dealing with disposition were restructured or replaced. The new structure more clearly delineates mutually exclusive dispositions. Researchers must exercise care in using these fields for comparisons. (3) In 1992, the Administrative Office of the United States Courts changed the reporting period for statistical data. Up to 1992, the reporting period...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS code to reproduce the simulation study and the analysis of the urine osmolarity example. (ZIP)
https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de441277
Abstract (en): This study is part of a time-series collection of national surveys fielded continuously since 1952. The election studies are designed to present data on Americans' social backgrounds, enduring political predispositions, social and political values, perceptions and evaluations of groups and candidates, opinions on questions of public policy, and participation in political life. A Black supplement of 263 respondents, who were asked the same questions that were administered to the national cross-section sample, is included with the national cross-section of 1,571 respondents. In addition to the usual content, the study contains data on opinions about the Supreme Court, political knowledge, and further information concerning racial issues. Voter validation data have been included as an integral part of the election study, providing objective information from registration and voting records or from respondents' past voting behavior. ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: performed consistency checks; standardized missing values; performed recodes and/or calculated derived variables; checked for undocumented or out-of-range codes. United States citizens of voting age living in private households in the continental United States. A representative cross-section sample, consisting of 1,571 respondents, plus a Black supplement sample of 263 respondents.

2015-11-10: The study metadata was updated.
1999-12-14: The data for this study are now available in SAS transport and SPSS export formats, in addition to the ASCII data file. Variables in the dataset have been renumbered to the following format: 2-digit (or 2-character) year prefix + 4 digits + [optional] 1-character suffix. Dataset ID and version variables have also been added. In addition, SAS and SPSS data definition statements have been created for this collection, and the data collection instruments are now available as a PDF file.

Mode of data collection: face-to-face interview, telephone interview. The SAS transport file was created using the SAS CPORT procedure.
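For readers restoring the transport file mentioned above: a file written by PROC CPORT is read back with PROC CIMPORT. A minimal sketch (the file name below is hypothetical):

/* Hedged sketch: restoring a SAS transport file created with PROC CPORT.
   The path is hypothetical. */
filename tranfile 'anes_1964.stc';
proc cimport infile=tranfile library=work;
run;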
https://archive.data.jhu.edu/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.7281/T1/PXEROL
This is the limited access database for the Study to Understand Fall Reduction and Vitamin D in You (STURDY), a randomized response-adaptive clinical trial. The database includes baseline, treatment, and post-randomization data. This database includes a set of files pertaining to the full study population (688 randomized participants plus screenees who were not randomized) and a set of files pertaining to the burn-in cohort (the 406 participants randomized prior to the first adjustment of the randomization probabilities). The database also includes files that support the analyses included in the primary outcome paper published by the Annals of Internal Medicine (2021;174(2):145-156). Each data file in the database corresponds to a specific data collection form or type of data. This documentation notebook includes a SAS PROC CONTENTS listing for each SAS file and a copy of the relevant form if applicable. Each variable on each SAS data file has an associated SAS label. Several STURDY documents, including the final versions of the screening and trial consent statements, the Protocol, and the Manual of Procedures, are included with this documentation notebook to assist with understanding and navigating the STURDY data. Notes on analysis questions and issues are also included, as is a list of STURDY publications.
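PROC CONTENTS listings of the kind referred to above are the standard way to document a SAS file's variables and labels; a minimal sketch (the libref, path, and dataset name are hypothetical):

/* Hedged sketch: listing variables, types, and labels of a SAS data file.
   Libref, path, and dataset name are hypothetical. */
libname sturdy 'path/to/sturdy/data';
proc contents data=sturdy.baseline varnum;  /* varnum lists variables in creation order */
run;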
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Protein-Protein, Genetic, and Chemical Interactions for SAS-6 (Caenorhabditis elegans) curated by BioGRID (https://thebiogrid.org); DEFINITION: sas-6 is a member of an evolutionarily conserved family of proteins containing a coiled-coil region and a novel PISA (present in SAS-6) motif conserved in similar proteins in at least eight other species that contain basal bodies or centrioles; SAS-6 activity, along with that of ZYG-1, SAS-4, SAS-5, and SPD-2, is essential for centriole duplication; SAS-6 functions in a dose-dependent manner and localizes to centriolar cylinders; SAS-6 co-localizes to centrioles with SAS-4 and SAS-5, and is mutually dependent upon SAS-5, with which it interacts physically, for centriolar localization; SAS-6 localization also requires the activity of the ZYG-1 kinase; in addition to the early embryo, SAS-6 is also detected in sperm; phosphorylation of SAS-6 is critical for centriole formation and thus for faithful cell division; the kinase ZYG-1 phosphorylates SAS-6 at serine 123 and this phosphorylation event is crucial for robust centriole formation and to ensure the maintenance of SAS-6 at the emerging centriole.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Users are able to access data related discharge information on all emergency department visits. Data is focused on but not limited to emergency room diagnoses, procedures, demographics, and payment source. Background The State Emergency Department Databases (SEDD) is focused on capturing discharge information on all emergency department visits that do not result in an admission, (Information on patients initially seen in the emergency room and then admitted to the hospital is included in the State Inpatient Databases (SID)). The SEDD contains emergency department information from 27 states. The SEDD contain more than 100 clinical and non-clinical variables included in a hospital dis charge abstract, such as: diagnoses, procedures, patient demographics, expected payment source and total charges. User functionality Users must pay to access the SEDD database. SEDD files from 1999-2009 are available through the HCUP Central Distributor. The SEDD data set can be run on desktop computers with a CD-ROM reader, and comes in ASCII format. The data on the CD set require a statistical software package such as SAS or SPSS to use for analytic purposes. The data set comes with full documentation. SAS and SPSS users are provided programs for converting ASCII files. Data Notes Data is available from 1999-2009. The website does not indicate when new data will be updated. Twenty-seven States now currently participate in the SEDD including Arizona, California, Connecticut, Florida, Georgia, Hawaii, Indiana, Iowa, Kansas, Maine, Maryland, Massachusetts, Minnesota, Missouri, Nebraska, New Hampshire, New Jersey, New York, North Carolina, Ohio, Rhode Island, South Carolina, South Dakota, Tennessee, Utah, Vermont, and Wisconsin.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example of the code used to assess statistical significance for phenotype and other variables.