Summary data for the studies used in the meta-analysis of local adaptation (Table 1 from the publication): this table contains the data used in this published meta-analysis. The data were originally extracted from the publications listed in the table. The file corresponds to Table 1 in the original publication. (tb1.xls)
SAS script used to perform meta-analyses: this file contains the essential elements of the SAS script used to perform the meta-analyses published in Hoeksema & Forde 2008. Multi-factor models were fit to the data using weighted maximum likelihood estimation of parameters in a mixed-model framework, using SAS PROC MIXED, in which the species traits and experimental design factors were treated as fixed effects and a random between-studies variance component was estimated. Significance (at alpha = 0.05) of individual factors in these models was determined using randomization procedures with 10,000 iterations (performed with a combination of macros in SAS), in which effect sizes a...
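The description above maps onto a PROC MIXED call of roughly the following shape. This is a hypothetical sketch, not the authors' script: the dataset and variable names (meta, tb1, effect_size, es_var, trait1, design1, study_id) are assumptions for illustration.

```sas
/* Hypothetical sketch of a weighted mixed-model meta-analysis in
   PROC MIXED; dataset and variable names are illustrative only. */
data meta;
    set tb1;
    wt = 1 / es_var;   /* weight = inverse sampling variance of the effect size */
run;

proc mixed data=meta method=ml;
    class trait1 design1 study_id;
    model effect_size = trait1 design1 / solution;   /* moderators as fixed effects */
    random intercept / subject=study_id;             /* between-studies variance */
    weight wt;
run;
```

The randomization test described in the text would wrap a call like this in a SAS macro loop, permuting effect sizes across studies on each of the 10,000 iterations.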
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The first block of code calls PROC MIXED with the QTL effect treated as a random effect. The second block calls PROC MIXED with the QTL effect treated as a fixed effect. (SAS)
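The two blocks differ only in where the QTL term appears. A minimal sketch of the contrast, with assumed dataset and variable names (pheno, y, qtl, line), might look like this:

```sas
/* Block 1 (sketch): QTL effect treated as RANDOM;
   its variance component is estimated. */
proc mixed data=pheno;
    class qtl line;
    model y = / solution;
    random qtl;
run;

/* Block 2 (sketch): QTL effect treated as FIXED;
   a mean effect is estimated for each QTL genotype class. */
proc mixed data=pheno;
    class qtl line;
    model y = qtl / solution;
run;
```

In PROC MIXED the RANDOM statement places the term in the G-side covariance structure, while listing it in the MODEL statement estimates fixed genotype-class means.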
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS code for spatial optimization of the supply chain network for nitrogen-based fertilizer in North America, by type, by mode of transportation, per county, for all major crops, using PROC OPTMODEL. The code specifies a set of random values to run the mixed-integer stochastic spatial optimization model repeatedly and collects results for each simulation, which are then compiled and exported to be projected in GIS (geographic information systems). Certain supply nodes (fertilizer plants) are specified to operate at 70 percent of their capacities or more. Capacities for nodes of supply (fertilizer plants), demand (county centroids), and transshipment (transfer points where the mode may change), and the actual distance travelled, are specified over arcs.
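A heavily simplified PROC OPTMODEL skeleton for a transshipment model of this kind is sketched below. All set, parameter, and variable names are placeholders, and the stochastic simulation loop, transport modes, and integer variables of the full model are omitted; only the flow balance and the 70-percent minimum-utilization bound described above are illustrated.

```sas
proc optmodel;
   set <str> PLANTS;        /* supply nodes (fertilizer plants)    */
   set <str> COUNTIES;      /* demand nodes (county centroids)     */
   set <str,str> ARCS;      /* arcs with actual distance travelled */
   num capacity {PLANTS};
   num demand {COUNTIES};
   num dist {ARCS};
   var Ship {ARCS} >= 0;    /* quantity shipped along each arc     */

   min TotalDist = sum {<i,j> in ARCS} dist[i,j] * Ship[i,j];

   con MinUse {i in PLANTS}:                     /* run at >= 70% capacity */
      sum {<(i),j> in ARCS} Ship[i,j] >= 0.7 * capacity[i];
   con MaxCap {i in PLANTS}:
      sum {<(i),j> in ARCS} Ship[i,j] <= capacity[i];
   con MeetDemand {j in COUNTIES}:
      sum {<i,(j)> in ARCS} Ship[i,j] >= demand[j];

   solve;
quit;
```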
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS PROC used to evaluate SSMT data
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
analyze the current population survey (cps) annual social and economic supplement (asec) with r the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.
this new github repository contains three scripts:
2005-2012 asec - download all microdata.R
    download the fixed-width file containing household, family, and person records
    import by separating this file into three tables, then merge 'em together at the person-level
    download the fixed-width file containing the person-level replicate weights
    merge the rectangular person-level file with the replicate weights, then store it in a sql database
    create a new variable - one - in the data table
2012 asec - analysis examples.R
    connect to the sql database created by the 'download all microdata' program
    create the complex sample survey object, using the replicate weights
    perform a boatload of analysis examples
replicate census estimates - 2011.R
    connect to the sql database created by the 'download all microdata' program
    create the complex sample survey object, using the replicate weights
    match the sas output shown in the png file below
2011 asec replicate weight sas output.png: statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.
click here to view these three scripts. for more detail about the current population survey - annual social and economic supplement (cps-asec), visit: the census bureau's current population survey page, the bureau of labor statistics' current population survey page, the current population survey's wikipedia article. notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.
confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
Results from PROC MIXED (SAS) analysis of effects of inoculum origin on plant biomass production of mid-successional plant species relative to the sterilized control treatment.
Multienvironment trials (METs) enable the evaluation of the same genotypes under a variety of environments and management conditions. We present META (Multi Environment Trial Analysis), a suite of 31 SAS programs that analyze METs with complete or incomplete block designs, with or without adjustment by a covariate. The entire program is run through a graphical user interface. The program can produce boxplots or histograms for all traits, as well as univariate statistics. It also calculates best linear unbiased estimators (BLUEs) and best linear unbiased predictors for the main response variable and BLUEs for all other traits. For all traits, it calculates variance components by restricted maximum likelihood, least significant difference, coefficient of variation, and broad-sense heritability using PROC MIXED. The program can analyze each location separately, combine the analysis by management conditions, or combine all locations. The flexibility and simplicity of use of this program makes it a valuable tool for analyzing METs in breeding and agronomy. The META program can be used by any researcher who knows only a few fundamental principles of SAS.
Pregnancy is a condition of broad interest across many medical and health services research domains, but one not easily identified in healthcare claims data. Our objective was to establish an algorithm to identify pregnant women and their pregnancies in claims data. We identified pregnancy-related diagnosis, procedure, and diagnosis-related group codes, accounting for the transition to International Statistical Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) diagnosis and procedure codes, in health encounter reporting on 10/1/2015. We selected women in Merative MarketScan commercial databases aged 15–49 years with pregnancy-related claims, and their infants, during 2008–2019. Pregnancies, pregnancy outcomes, and gestational ages were assigned using the constellation of service dates, code types, pregnancy outcomes, and linkage to infant records. We describe pregnancy outcomes and gestational ages, as well as maternal age, census region, and health plan type. In a sensitivity analysis, we compared our algorithm-assigned date of last menstrual period (LMP) to fertility procedure-based LMP (date of procedure + 14 days) among women with embryo transfer or insemination procedures. Among 5,812,699 identified pregnancies, most (77.9%) were livebirths, followed by spontaneous abortions (16.2%); 3,274,353 (72.2%) livebirths could be linked to infants. Most pregnancies were among women 25–34 years (59.1%), living in the South (39.1%) and Midwest (22.4%), with large employer-sponsored insurance (52.0%). Outcome distributions were similar across ICD-9 and ICD-10 eras, with some variation in gestational age distribution observed. Sensitivity analyses supported our algorithm's framework; algorithm- and fertility procedure-derived LMP estimates were within a week of each other (mean difference: -4 days [IQR: -13 to 6 days]; n = 107,870).
We have developed an algorithm to identify pregnancies, their gestational age, and outcomes, across ICD-9 and ICD-10 eras using administrative data. This algorithm may be useful to reproductive health researchers investigating a broad range of pregnancy and infant outcomes.
Procedures Services In Colombia Sas Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example of the code used to assess statistical significance for phenotype and other variables.
Public Domain Mark: https://creativecommons.org/share-your-work/public-domain/pdm
This data collection contains Supplemental Nutrition Assistance Program (SNAP) SAS proc contents (metadata only) files for Arizona (AZ), Hawaii (HI), Illinois (IL), Kentucky (KY), New Jersey (NJ), New York (NY), Oregon (OR), Tennessee (TN), and Virginia (VA).
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The Longitudinal Study of Ocular Complications of AIDS was a 15-year multi-center observational study which collected demographic, medical history, treatment, and vision-related data at quarterly visits from 2,392 patients with AIDS. Each SAS dataset in this collection relates to the cumulative patient-visits from a particular LSOCA form. For example, va.sas7bdat is the SAS dataset for the visual acuity data. Use the appropriate LSOCA form and SAS labels from the SAS PROC CONTENTS to decode each data item.
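In practice, decoding items amounts to running PROC CONTENTS on each dataset and reading the variable names and labels against the matching LSOCA form. A minimal example, with a placeholder library path, might look like this:

```sas
/* List the variables and labels in the visual acuity dataset
   (va.sas7bdat); the libname path is a placeholder. */
libname lsoca '/path/to/lsoca/data';

proc contents data=lsoca.va;
run;
```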
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Parameter estimates for the generalized H2 model (SAS output).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The period of early ontogeny constitutes a time when the physical immaturity of an organism is highly susceptible to external stimuli. Thus, early development plays a major role in shaping later adult behavior. The aim of the study was to check whether stimulating puppies at this early stage in life with sound would improve their responsiveness towards unfamiliar noises during the selection process of the police behavioral test for puppies. The cohort comprised 37 puppies. At the commencement of the experiment the dogs were aged 16 days, rising to the age of 32 days at its close. The mothers and litters of the treatment group were exposed to radio broadcasts (see below; three litters totaling 19 puppies), while the control group was not exposed to any radio programs (eight litters totaling 18 puppies). All mothers had previously experienced both auditory circumstances, as described herein. Ordinary radio broadcasts were played to the puppies in the treatment group three times a day for 20-minute periods, always during feeding time. The cohort was subjected to the so-called Puppy Test, i.e. analysis of the potential of each animal, once the dogs had reached the age of 7 weeks. Such tests included exposure to a sudden noise caused by a shovel (100 dB), noise when alone in a room, and response to loud distracting stimuli (the latter two at 70 dB). Said tasks were rated by the same analyst on a scale of 0–5 points; the better the response of the dog, the higher the score given. The differences between the treatment and control groups were analyzed via Mixed Models (PROC MIXED) in SAS. The animals comprising the treatment group responded with a higher score to the sudden noise caused by the shovel than the control dogs (P
In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale Bayesian networks by composition. This compositional approach reflects how (often redundant) subsystems are architected to form systems such as electrical power systems. We develop high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems. The largest among these 24 Bayesian networks contains over 1,000 random variables. Another BN represents the real-world electrical power system ADAPT, which is representative of electrical power systems deployed in aerospace vehicles. In addition to demonstrating the scalability of the compositional approach, we briefly report on experimental results from the diagnostic competition DXC, where the ProADAPT team, using techniques discussed here, obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the industrial track. While we consider diagnosis of power systems specifically, we believe this work is relevant to other system health management problems, in particular in dependable systems such as aircraft and spacecraft. Reference: O. J. Mengshoel, S. Poll, and T. Kurtoglu. "Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft." Proc. of the IJCAI-09 Workshop on Self-* and Autonomous Systems (SAS): Reasoning and Integration Challenges, 2009. BibTeX Reference: @inproceedings{mengshoel09developing, title = {Developing Large-Scale {Bayesian} Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft}, author = {Mengshoel, O. J. and Poll, S. and Kurtoglu, T.}, booktitle = {Proc. of the IJCAI-09 Workshop on Self-$\star$ and Autonomous Systems (SAS): Reasoning and Integration Challenges}, year={2009} }
We tested whether the probability of a visit was a function of treatment (dietary N content as a continuous variable) using logistic regression in SAS (PROC GLIMMIX with a binomial distribution and logit link function, SAS 9.4). Day (fixed effect), site (random effect) and feeding station nested within site (random effect) were also included in the model. We then analysed the effect of treatment (dietary N content as a continuous variable) on visit length (min), each behaviour (% of total time) and GUD (count) separately using the generalized linear mixed model (GLMM) procedure in SAS (PROC GLIMMIX with lognormal distribution and identity link function, SAS 9.4). Day (1-4) was included in the models as a fixed effect, and site and feeding station (nested within site) were random effects. To analyse our VOCs data we looked at the odours of the diets using a canonical analysis of principal coordinates (CAP) in the PERMANOVA+ add-on of PRIMER v6 to determine whether the multivariate VOC data could differentiate the diets along a continuous (dietary nitrogen content) gradient, similar to analyses of VOCs from other plant/food material. We applied a dispersion weighting followed by square root transformation to the VOC peak area values, then performed CAP analysis on the Bray-Curtis resemblance matrix of the transformed data. To tease apart the contributing VOCs we then applied the CAP analysis using diet as a class variable. We also isolated the specific volatile signature of the highest quality diet using Random Forests (RF). We analysed the data with RF, using a one treatment-versus-the-rest approach with the VSURF package (version 1.0.3) in R (version 3.1.2; R Core Team, 2015). Before analysis, TQPA data were transformed using the centred log ratio method using CoDaPack v. 2.01.15.
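The visit-probability model described above could be sketched in PROC GLIMMIX as follows. This is a hypothetical reconstruction, not the authors' code: the dataset and variable names (visits, visit, diet_n, day, site, station) are assumptions.

```sas
/* Sketch of the logistic GLMM for visit probability:
   continuous dietary N as the treatment, day as a fixed effect,
   site and station(site) as random intercepts. */
proc glimmix data=visits;
    class day site station;
    model visit(event='1') = diet_n day
          / dist=binomial link=logit solution;
    random intercept / subject=site;            /* site */
    random intercept / subject=station(site);   /* station within site */
run;
```

The follow-up models for visit length, behaviour, and GUD would keep the same CLASS and RANDOM structure but switch the MODEL statement to DIST=LOGNORMAL LINK=IDENTITY, as described in the text.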
https://search.gesis.org/research_data/datasearch-httpwww-da-ra-deoaip--oaioai-da-ra-de456864
Abstract (en): The purpose of this data collection is to provide an official public record of the business of the federal courts. The data originate from 94 district and 12 appellate court offices throughout the United States. Information was obtained at two points in the life of a case: filing and termination. The termination data contain information on both filings and terminations, while the pending data contain only filing information. For the appellate and civil data, the unit of analysis is a single case. The unit of analysis for the criminal data is a single defendant. ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection: performed consistency checks; standardized missing values; checked for undocumented or out-of-range codes. All federal court cases, 1970-2000.
2012-05-22: All parts are being moved to restricted access and will be available only using the restricted access procedures.
2005-04-29: The codebook files in Parts 57, 94, and 95 have undergone minor edits and been incorporated with their respective datasets. The SAS files in Parts 90, 91, 227, and 229-231 have undergone minor edits and been incorporated with their respective datasets. The SPSS files in Parts 92, 93, 226, and 228 have undergone minor edits and been incorporated with their respective datasets. Parts 15-28, 34-56, 61-66, 70-75, 82-89, 96-105, 107, 108, and 115-121 have had identifying information removed from the public use file, and restricted data files that still include that information have been created. These parts have had their SPSS, SAS, and PDF codebook files updated to reflect the change. The data, SPSS, and SAS files for Parts 34-37 have been updated from OSIRIS to LRECL format. The codebook files for Parts 109-113 have been updated. The case counts for Parts 61-66 and 71-75 have been corrected in the study description. The LRECL for Parts 82, 100-102, and 105 have been corrected in the study description.
2003-04-03: A codebook was created for Part 105, Civil Pending, 1997. Parts 232-233, SAS and SPSS setup files for Civil Data, 1996-1997, were removed from the collection since the civil data files for those years have corresponding SAS and SPSS setup files.
2002-04-25: Criminal data files for Parts 109-113 have all been replaced with updated files. The updated files contain Criminal Terminations and Criminal Pending data in one file for the years 1996-2000. Part 114, originally Criminal Pending 2000, has been removed from the study and the 2000 pending data are now included in Part 113.
2001-08-13: The following data files were revised to include plaintiff and defendant information: Appellate Terminations, 2000 (Part 107), Appellate Pending, 2000 (Part 108), Civil Terminations, 1996-2000 (Parts 103, 104, 115-117), and Civil Pending, 2000 (Part 118). The corresponding SAS and SPSS setup files and PDF codebooks have also been edited.
2001-04-12: Criminal Terminations (Parts 109-113) data for 1996-2000 and Criminal Pending (Part 114) data for 2000 have been added to the data collection, along with corresponding SAS and SPSS setup files and PDF codebooks.
2001-03-26: Appellate Terminations (Part 107) and Appellate Pending (Part 108) data for 2000 have been added to the data collection, along with corresponding SAS and SPSS setup files and PDF codebooks.
1997-07-16: The data for 18 of the Criminal Data files were matched to the wrong part numbers and names, and have now been corrected.
Funding institution(s): United States Department of Justice. Office of Justice Programs. Bureau of Justice Statistics.
(1) Several, but not all, of these record counts include a final blank record. Researchers may want to detect this occurrence and eliminate this record before analysis. (2) In July 1984, a major change in the recording and disposition of an appeal occurred, and several data fields dealing with disposition were restructured or replaced. The new structure more clearly delineates mutually exclusive dispositions. Researchers must exercise care in using these fields for comparisons. (3) In 1992, the Administrative Office of the United States Courts changed the reporting period for statistical data. Up to 1992, the reporting period...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mortality rates were calculated as defined in the text. Summary statistics: Black cervical cancer mortality by year in thirteen U.S. states.
The data set is a crosswalk file for working with 2020 Census block group boundaries and Philadelphia Police Department districts and police service areas (PSAs). Census block group population centroids were situated in police geographies using SAS PROC GINSIDE. The data facilitate demographic approximations of the residential population within Philadelphia police districts and police service areas (PSAs).
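The point-in-polygon step described above reduces to a single PROC GINSIDE call. The sketch below uses placeholder dataset names (bg_centroids, police_map, bg_with_geo) and placeholder polygon ID variables (district, psa):

```sas
/* Assign each block group population centroid to the police polygon
   it falls inside; names here are illustrative placeholders. */
proc ginside data=bg_centroids map=police_map out=bg_with_geo;
    id district psa;   /* polygon identifiers attached to each centroid */
run;
```

The output dataset then carries one police district/PSA assignment per centroid, which is what makes the crosswalk to Census demographics possible.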
A water impoundment facility was used to control the duration of soil flooding (0, 45, or 90 days) and shade houses were used to control light availability (high = 72%, intermediate = 33%, or low = 2% of ambient light) received by L. melissifolia established on native soil of the MAV. A completely randomized, split-plot design was used to evaluate the effects of soil flooding and light availability on L. melissifolia reproductive intensity and mode. Analyses were conducted on plot means using PROC GLIMMIX with an adjustment in the error term for the whole-plot factor (SAS 9.4, SAS Institute, Inc., Cary, North Carolina, USA). PROC UNIVARIATE was used to test data normality for each response variable, and residual errors were normalized with Box-Cox, natural log, or square root transformations where appropriate prior to the PROC GLIMMIX analyses. Significance was accepted at α = 0.05, and we used the least significant difference (LSD) test to separate significant treatment effect means...