Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS PROC used to evaluate SSMT data
Summary data for the studies used in the meta-analysis of local adaptation (Table 1 from the publication)
This table contains the data used in this published meta-analysis. The data were originally extracted from the publications listed in the table. The file corresponds to Table 1 in the original publication. (tb1.xls)

SAS script used to perform meta-analyses
This file contains the essential elements of the SAS script used to perform the meta-analyses published in Hoeksema & Forde 2008. Multi-factor models were fit to the data using weighted maximum likelihood estimation of parameters in a mixed-model framework, using SAS PROC MIXED, in which the species traits and experimental design factors were considered fixed effects, and a random between-studies variance component was estimated. Significance (at alpha = 0.05) of individual factors in these models was determined using randomization procedures with 10,000 iterations (performed with a combination of macros in SAS), in which effect sizes a...
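The weighted random-effects estimation described above can be illustrated with a minimal sketch. The code below uses the DerSimonian-Laird method-of-moments estimator for the between-studies variance, a standard textbook approach chosen here for illustration, not necessarily the exact likelihood-based machinery of the published SAS script:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis: estimate the between-studies
    variance (tau^2) by the DerSimonian-Laird method, then return the
    inverse-variance weighted mean effect size and its standard error."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                    # between-studies variance
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    mean_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return mean_re, se_re, tau2

# three invented studies: effect sizes with their sampling variances
mean, se, tau2 = dersimonian_laird([0.2, 0.5, 0.3], [0.04, 0.02, 0.05])
```

For the randomization tests described above, one would re-run such an estimator on many permuted datasets and compare the observed statistic against that null distribution.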
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Parameter estimates for the generalized H2 model (SAS output).
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
analyze the current population survey (cps) annual social and economic supplement (asec) with r

the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.
this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
- download the fixed-width file containing household, family, and person records
- import by separating this file into three tables, then merge 'em together at the person-level
- download the fixed-width file containing the person-level replicate weights
- merge the rectangular person-level file with the replicate weights, then store it in a sql database
- create a new variable - one - in the data table

2012 asec - analysis examples.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- perform a boatload of analysis examples

replicate census estimates - 2011.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- match the sas output shown in the png file below

2011 asec replicate weight sas output.png
- statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
- the census bureau's current population survey page
- the bureau of labor statistics' current population survey page
- the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.
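the parse.SAScii trick above comes down to slicing fixed-width columns out of each record. a minimal python sketch of the same idea (the three-column layout here is made up - it is not the real cps-asec record spec):

```python
def parse_fixed_width(lines, layout):
    """Split each fixed-width record into named fields.
    `layout` is a list of (name, start, width) tuples with 1-based
    start positions, the way SAS input statements describe columns."""
    records = []
    for line in lines:
        records.append({name: line[start - 1:start - 1 + width].strip()
                        for name, start, width in layout})
    return records

# hypothetical layout: record type in column 1, age in 2-3, income in 4-9
layout = [("rectype", 1, 1), ("age", 2, 2), ("income", 4, 6)]
rows = parse_fixed_width(["345012345 "], layout)
```

a real importer would additionally apply the decimal scaling and missing-value codes declared in the nber sas scripts.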
confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
Results from PROC MIXED (SAS) analysis of effects of inoculum origin on plant biomass production of mid-successional plant species relative to the sterilized control treatment.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
SAS code for spatial optimization of the supply chain network for nitrogen-based fertilizer in North America, by type, by mode of transportation, per county, for all major crops, using PROC OPTMODEL. The code specifies a set of random values to run the mixed-integer stochastic spatial optimization model repeatedly and collects the results of each simulation, which are then compiled and exported for projection in GIS (geographic information systems). Certain supply nodes (fertilizer plants) are required to operate at 70 percent of their capacity or more. Capacities for supply nodes (fertilizer plants), demand nodes (county centroids), and transshipment nodes (transfer points where the mode may change), as well as the actual distance traveled, are specified over arcs.
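As a toy illustration of the kind of model PROC OPTMODEL solves here, the sketch below brute-forces a minimal transportation problem: integer shipments from plants to counties at minimum cost. It is a stand-in for, not a translation of, the stochastic mixed-integer model described above (it omits transshipment nodes and the 70-percent utilization constraint):

```python
from itertools import product

def cheapest_shipment(supply, demand, cost):
    """Exhaustively search integer shipments on a tiny plant->county
    network: meet every county's demand exactly without exceeding any
    plant's capacity, at minimum total transport cost."""
    plants, counties = sorted(supply), sorted(demand)
    ranges = [range(min(supply[p], demand[c]) + 1)
              for p in plants for c in counties]
    best_cost, best_plan = None, None
    for flows in product(*ranges):
        plan = {(p, c): flows[i * len(counties) + j]
                for i, p in enumerate(plants) for j, c in enumerate(counties)}
        # capacity constraint at each plant
        if any(sum(plan[p, c] for c in counties) > supply[p] for p in plants):
            continue
        # demand must be met exactly at each county
        if any(sum(plan[p, c] for p in plants) != demand[c] for c in counties):
            continue
        total = sum(plan[arc] * cost[arc] for arc in plan)
        if best_cost is None or total < best_cost:
            best_cost, best_plan = total, plan
    return best_cost, best_plan

# invented two-plant, two-county instance with per-unit arc costs
supply = {"A": 3, "B": 3}
demand = {"x": 2, "y": 2}
cost = {("A", "x"): 1, ("A", "y"): 4, ("B", "x"): 2, ("B", "y"): 1}
best_cost, best_plan = cheapest_shipment(supply, demand, cost)
```

Real instances of this model are far too large for enumeration, which is why a MILP solver such as the one behind PROC OPTMODEL is used.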
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The Longitudinal Study of Ocular Complications of AIDS (LSOCA) was a 15-year multi-center observational study that collected demographic, medical history, treatment, and vision-related data at quarterly visits from 2,392 patients with AIDS. Each SAS dataset in this collection corresponds to the cumulative patient-visits from a particular LSOCA form. For example, va.sas7bdat is the SAS dataset for the visual acuity data. Use the appropriate LSOCA form and the SAS labels from SAS PROC CONTENTS output to decode each data item.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mortality rates were calculated as defined in the text. Summary statistics: Black cervical cancer mortality by year in thirteen U.S. states.
Multienvironment trials (METs) enable the evaluation of the same genotypes under a variety of environments and management conditions. We present META (Multi Environment Trial Analysis), a suite of 31 SAS programs that analyze METs with complete or incomplete block designs, with or without adjustment by a covariate. The entire program is run through a graphical user interface. The program can produce boxplots or histograms for all traits, as well as univariate statistics. It also calculates best linear unbiased estimators (BLUEs) and best linear unbiased predictors (BLUPs) for the main response variable and BLUEs for all other traits. For all traits, it calculates variance components by restricted maximum likelihood, least significant difference, coefficient of variation, and broad-sense heritability using PROC MIXED. The program can analyze each location separately, combine the analysis by management conditions, or combine all locations. The flexibility and simplicity of use of this program make it a valuable tool for analyzing METs in breeding and agronomy. The META program can be used by any researcher who knows only a few fundamental principles of SAS.
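As one concrete example of the quantities such a MET analysis reports, broad-sense heritability on an entry-mean basis is often computed from the REML variance components as below. This is a common textbook formula assumed here for illustration; META's exact parameterization may differ:

```python
def broad_sense_heritability(var_g, var_ge, var_e, n_loc, n_rep):
    """Broad-sense heritability on an entry-mean basis:
    H2 = Vg / (Vg + Vge/L + Ve/(L*R)), where Vg is the genotypic
    variance, Vge the genotype-by-environment variance, Ve the
    residual variance, L the number of locations, R the number
    of replicates per location."""
    return var_g / (var_g + var_ge / n_loc + var_e / (n_loc * n_rep))

# invented variance components: Vg=4, Vge=2, Ve=6, 2 locations, 3 reps
h2 = broad_sense_heritability(4.0, 2.0, 6.0, 2, 3)
```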
Public Domain Mark https://creativecommons.org/share-your-work/public-domain/pdm
This data collection contains Supplemental Nutrition Assistance Program (SNAP) SAS PROC CONTENTS (metadata-only) files for Arizona (AZ), Hawaii (HI), Illinois (IL), Kentucky (KY), New Jersey (NJ), New York (NY), Oregon (OR), Tennessee (TN), and Virginia (VA).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results from a mixed-model analysis of variance in SAS PROC GLM testing differences in F1 scores across algorithms and models. The ANOVA model included test sets nested within models as a categorical random factor, and algorithms and models as fixed categorical factors. A Scheffé adjustment for multiple comparisons was used to control the type I error rate.
The data set is a crosswalk file for working with 2020 Census block group boundaries and Philadelphia Police Department districts and police service areas (PSAs). Census block group population centroids were situated in police geographies using SAS PROC GINSIDE. The data facilitate demographic approximations of the residential population within Philadelphia police districts and PSAs.
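PROC GINSIDE's core operation, deciding which polygon contains each centroid, can be sketched with the standard ray-casting test (a minimal illustration, not the SAS implementation):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (x, y) crosses; an odd count means the point is inside.
    `polygon` is a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# toy "police district": the unit square
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
inside = point_in_polygon(0.5, 0.5, square)  # True: centroid falls in the district
```

A production crosswalk would of course use real district boundaries and handle edge cases such as centroids lying exactly on a border.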
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example of the code used to test statistical significance for phenotype and other variables.
We tested whether the probability of a visit was a function of treatment (dietary N content as a continuous variable) using logistic regression in SAS (PROC GLIMMIX with a binomial distribution and logit link function, SAS 9.4). Day (fixed effect), site (random effect), and feeding station nested within site (random effect) were also included in the model. We then analysed the effect of treatment (dietary N content as a continuous variable) on visit length (min), each behaviour (% of total time), and GUD (count) separately using the generalized linear mixed model (GLMM) procedure in SAS (PROC GLIMMIX with lognormal distribution and identity link function, SAS 9.4). Day (1-4) was included in the models as a fixed effect, and site and feeding station (nested within site) were random effects. To analyse our VOC data, we examined the odours of the diets using a canonical analysis of principal coordinates (CAP) in the PERMANOVA+ add-on of PRIMER v6 to determine whether the multivariate VOC data could differentiate the diets along a continuous (dietary nitrogen content) gradient, similar to analyses of VOCs from other plant/food material. We applied a dispersion weighting followed by a square-root transformation to the VOC peak area values, then performed the CAP analysis on the Bray-Curtis resemblance matrix of the transformed data. To tease apart the contributing VOCs, we then applied the CAP analysis using diet as a class variable. We also isolated the specific volatile signature of the highest-quality diet using Random Forests (RF). We analysed the data with RF, using a one-treatment-versus-the-rest approach with the VSURF package (version 1.0.3) in R (version 3.1.2; R Core Team, 2015). Before analysis, TQPA data were transformed using the centred log-ratio method in CoDaPack v. 2.01.15.
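A binomial model with a logit link, as described above, models the log-odds of a visit as a linear function of the predictors; the inverse link maps the linear predictor back to a probability. A minimal sketch (the coefficients are invented for illustration, not the study's fitted values):

```python
import math

def inv_logit(eta):
    """Inverse of the logit link: maps a linear predictor (log-odds)
    onto a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

# hypothetical coefficients: log-odds of a visit as a linear
# function of dietary N content (ignoring the random effects)
intercept, slope = -1.0, 0.8
p_visit = inv_logit(intercept + slope * 2.5)  # diet with 2.5% N
```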
The files submitted here contain data collected for the thesis titled "Relative preference for pecking blocks and its association with keel status and eggshell quality in laying hens housed in enriched cages." The purpose of this research was to determine the pecking block preferences of White- and Brown-feathered laying hen strains, and whether there is a time-of-day effect on pecking block use. We then investigated the association between pecking block preference, pecking block use, keel status, and eggshell quality. We also investigated whether laying hens are consistent in their pecking block preference over time. Weekly pecking block disappearance, the number of hens using pecking blocks across the day, eggshell quality, and keel status in focal birds were also assessed. Data were analyzed using SAS PROC GLIMMIX, and consistency data were analyzed using SAS PROC FREQ.
Custom license: https://archive.data.jhu.edu/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.7281/T1/PXEROL
This is the limited-access database for the Study to Understand Fall Reduction and Vitamin D in You (STURDY) randomized response-adaptive clinical trial. The database includes baseline, treatment, and post-randomization data. It comprises a set of files pertaining to the full study population (688 randomized participants plus screenees who were not randomized) and a set of files pertaining to the burn-in cohort (the 406 participants randomized prior to the first adjustment of the randomization probabilities). The database also includes files that support the analyses in the primary outcome paper published in the Annals of Internal Medicine (2021;174(2):145-156). Each data file in the database corresponds to a specific data collection form or type of data. This documentation notebook includes a SAS PROC CONTENTS listing for each SAS file and a copy of the relevant form where applicable. Each variable on each SAS data file has an associated SAS label. Several STURDY documents, including the final versions of the screening and trial consent statements, the Protocol, and the Manual of Procedures, are included with this documentation notebook to assist with understanding and navigating the STURDY data. Notes on analysis questions and issues are also included, as is a list of STURDY publications.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The first block of code calls PROC MIXED with the QTL effect treated as a random effect; the second block calls PROC MIXED with the QTL effect treated as a fixed effect. (SAS)
In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale Bayesian networks by composition. This compositional approach reflects how (often redundant) subsystems are architected to form systems such as electrical power systems. We develop high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems. The largest among these 24 Bayesian networks contains over 1,000 random variables. Another BN represents the real-world electrical power system ADAPT, which is representative of electrical power systems deployed in aerospace vehicles. In addition to demonstrating the scalability of the compositional approach, we briefly report on experimental results from the diagnostic competition DXC, where the ProADAPT team, using techniques discussed here, obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the industrial track. While we consider diagnosis of power systems specifically, we believe this work is relevant to other system health management problems, in particular in dependable systems such as aircraft and spacecraft.

Reference: O. J. Mengshoel, S. Poll, and T. Kurtoglu. "Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft." Proc. of the IJCAI-09 Workshop on Self-* and Autonomous Systems (SAS): Reasoning and Integration Challenges, 2009.

BibTeX reference:
@inproceedings{mengshoel09developing,
  title = {Developing Large-Scale {Bayesian} Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft},
  author = {Mengshoel, O. J. and Poll, S. and Kurtoglu, T.},
  booktitle = {Proc. of the IJCAI-09 Workshop on Self-$\star$ and Autonomous Systems (SAS): Reasoning and Integration Challenges},
  year = {2009}
}
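At its smallest, the diagnostic query such a network answers is a Bayes' rule update. The sketch below is a two-node Health -> Reading network with invented probabilities, far simpler than the 1,000-variable networks in the paper, but the same kind of posterior computation:

```python
def posterior_health(p_healthy, cpt, reading):
    """Two-node diagnostic Bayesian network Health -> Reading:
    apply Bayes' rule to get the posterior probability that the
    component is healthy after a sensor reading is observed."""
    num = p_healthy * cpt[("healthy", reading)]
    den = num + (1.0 - p_healthy) * cpt[("faulty", reading)]
    return num / den

# invented conditional probability table P(Reading | Health)
cpt = {("healthy", "nominal"): 0.95, ("healthy", "anomalous"): 0.05,
       ("faulty", "nominal"): 0.20, ("faulty", "anomalous"): 0.80}

# prior belief in health 0.99; an anomalous reading lowers it
p = posterior_health(0.99, cpt, "anomalous")
```

In the compositional setting the paper describes, many such subsystem fragments are joined and inference runs over the combined network (via clique trees or arithmetic circuits) rather than node by node.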
The current study examined how racial/ethnic self-identification combines with gender to shape self-reports of everyday discrimination among youth in the U.S. as they transition to adulthood. Data came from seven waves of the Panel Study of Income Dynamics Transition into Adulthood Supplement (TAS). The sample included individuals with two or more observations who identified as White, Black, or Hispanic (n = 2,532). The data include average everyday discrimination scale scores over 9 time periods (i.e., ages 18 to 27) as well as pattern variables for race/ethnicity and sex groups and family SES, proxied by the highest level of education in the household at baseline. Developmental trajectories of everyday discrimination across ages 18 to 27 were estimated using multilevel longitudinal models with SAS PROC MIXED.
A water impoundment facility was used to control the duration of soil flooding (0, 45, or 90 days) and shade houses were used to control light availability (high = 72%, intermediate = 33%, or low = 2% of ambient light) received by L. melissifolia established on native soil of the MAV. A completely randomized, split-plot design was used to evaluate the effects of soil flooding and light availability on L. melissifolia reproductive intensity and mode. Analyses were conducted on plot means using PROC GLIMMIX with an adjustment in the error term for the whole-plot factor (SAS 9.4, SAS Institute, Inc., Cary, North Carolina, USA). PROC UNIVARIATE was used to test data normality for each response variable, and residual errors were normalized with Box-Cox, natural log, or square root transformations where appropriate prior to the PROC GLIMMIX analyses. Significance was accepted at α = 0.05, and we used the least significant difference (LSD) test to separate significant treatment effect means...
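The Box-Cox transformation mentioned above has a simple closed form, sketched here as a generic implementation (the study's fitted λ values are not given, so the example uses an arbitrary λ = 0.5):

```python
import math

def box_cox(values, lam):
    """Box-Cox power transformation: (x**lam - 1)/lam for lam != 0,
    and ln(x) in the limit lam == 0. Applied to positive data to
    pull skewed residuals toward normality before model fitting."""
    if lam == 0:
        return [math.log(x) for x in values]
    return [(x ** lam - 1) / lam for x in values]

transformed = box_cox([1.0, 4.0], 0.5)  # square-root-like transform
```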