Output from programming code written to summarize data describing 2018 MCSP Trial monitoring sites in FWS Legacy Regions 2 and 3, acquired using SOP 1 (see ServCat reference 103364). Monitoring sites were selected using a custom GRTS draw conducted by USGS in 2017, within monitoring areas associated with select NWRS stations. Areas monitored included Balcones Canyonlands (TX), Hagerman (TX), Washita (OK), and Neal Smith (IA) NWRs, several locations near the town of Lamoni, Iowa, and private lands in northern Missouri.
Output from programming code written to summarize 2018 monarch butterfly abundance from monitoring data acquired using a modified Pollard walk at custom 2017 GRTS draw sites within select monitoring areas of FWS Legacy Regions 2 and 3 (see SOP 2 in ServCat reference 103367 for methods). Areas monitored included Balcones Canyonlands (TX), Hagerman (TX), Washita (OK), and Neal Smith (IA) NWRs, several locations near the town of Lamoni, Iowa, and northern Missouri. The input data file is named 'FWS_2018_MM_SOP2_for_SAS.csv' and is stored in ServCat reference 136485. See SM 5 (ServCat reference 103388) for a dictionary of data fields in the input data file.
Output from programming code written to summarize data describing 2017 MCSP Trial monitoring sites in FWS Legacy Regions 2 and 3, acquired using SOP 1 (see ServCat reference 103364). 2017 monitoring sites were selected using a custom GRTS draw conducted by USGS, within monitoring areas associated with select NWRS stations. Areas monitored included Balcones Canyonlands (TX), Hagerman (TX), Washita (OK), Neal Smith (IA), and Necedah (WI) NWRs, several locations near the town of Lamoni, Iowa, and private lands in northern Missouri.
SAS statistical software output from the a priori model for elk best supported by the data.
Output from programming code written to summarize immature monarch butterfly, milkweed, and nectar plant abundance from monitoring data acquired using a grid of 1 square-meter quadrats at custom 2017 GRTS draw sites within select monitoring areas of FWS Legacy Regions 2 and 3 (see SOP 3 in ServCat reference 103368 for methods). Areas monitored included Balcones Canyonlands (TX), Hagerman (TX), Washita (OK), Neal Smith (IA), and Necedah (WI) NWRs, several locations near the town of Lamoni, Iowa, and northern Missouri. The input data file is named 'FWS_2017_MonMonSOP3DS1_forSAS.csv' and is stored in ServCat reference 137700. See SM 5 (ServCat reference 103388) for a dictionary of data fields in the input data file.
Output from programming code written to summarize red imported fire ant (RIFA) abundance from monitoring along transects at custom 2017 GRTS draw sites within select monitoring areas of FWS Legacy Regions 2 and 3 (see SOP 6 in ServCat reference 103385 for methods). Areas monitored included Balcones Canyonlands (TX) and Hagerman (TX) NWRs. The spreadsheet labeled SOP 6 Metrics presents the different estimates in separate worksheets. Each worksheet can be used for additional analysis.
A SAS program for inputting data and estimating parameters under the model presented in Expression 2, and an interpretation of the output of the program.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
analyze the current population survey (cps) annual social and economic supplement (asec) with r

the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population.

the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
- download the fixed-width file containing household, family, and person records
- import by separating this file into three tables, then merge 'em together at the person-level
- download the fixed-width file containing the person-level replicate weights
- merge the rectangular person-level file with the replicate weights, then store it in a sql database
- create a new variable - one - in the data table

2012 asec - analysis examples.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- perform a boatload of analysis examples

replicate census estimates - 2011.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- match the sas output shown in the png file below

2011 asec replicate weight sas output.png
- statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
- the census bureau's current population survey page
- the bureau of labor statistics' current population survey page
- the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
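To make the workflow above concrete, here is a minimal, hedged R sketch of the same idea: use the SAScii package to turn an NBER SAS importation script into a column layout, read the fixed-width file with it, and store the result in a SQLite database. It is not the repository's actual scripts, and the two URLs are placeholders you would swap for the real NBER .dat and .sas locations.

library(SAScii)    # parse.SAScii() and read.SAScii()
library(DBI)
library(RSQLite)

dat.url <- "http://example.com/cpsmar2012.dat"   # placeholder: NBER fixed-width data file
sas.url <- "http://example.com/cpsmar2012.sas"   # placeholder: NBER SAS importation script

# inspect the column layout implied by the SAS INPUT statement
layout <- parse.SAScii( sas.url )
head( layout )                       # varname, width, char, divisor

# read the fixed-width file using that layout (slow for a full CPS-ASEC year)
asec <- read.SAScii( dat.url , sas.url )

# store the rectangular person-level table in a SQLite database for later use
db <- dbConnect( SQLite() , "asec.db" )
dbWriteTable( db , "asec12" , asec )
dbDisconnect( db )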
This package contains two files designed to help read individual-level DHS data into Stata. The first file addresses the problem that versions of Stata before Version 7/SE will read in only up to 2,047 variables, and most of the individual files have more variables than that. The file reads in the .do, .dct and .dat files and outputs new .do and .dct files with only a subset of the variables specified by the user. The second file deals with earlier DHS surveys for which .do and .dct files do not exist and only .sps and .sas files are provided. It reads in the .sas and .sps files and outputs a .dct and .do file. If necessary, the first file can then be run again to select a subset of variables.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The research article is on the effects of seed rate and inter-row spacing on growth, phenological characteristics, yield, and lodging severity.
SAS statistical software output from the a priori model for bison best supported by the data.
Sanchez SAS diesel output export and import data. Follow the Eximpedia platform for HS codes, importer-exporter records, and customs shipment details.
Output from programming code written to summarize monarch butterfly abundance from monitoring data acquired using a modified Pollard walk at custom 2017 GRTS draw sites within select monitoring areas of FWS Legacy Regions 2 and 3 (see SOP 2 in ServCat reference 103367 for methods). Areas monitored included Balcones Canyonlands (TX), Hagerman (TX), Washita (OK), Neal Smith (IA), and Necedah (WI) NWRs, several locations near the town of Lamoni, Iowa, and private lands in northern Missouri.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
File List
NBvsPoi_FINAL.sas -- SAS code
SSEAK98_FINAL.txt -- Harbor seal data used by SAS code
Description
The NBvsPoi_FINAL SAS program uses a SAS macro to analyze the data in SSEAK98_FINAL.txt. The SAS program and macro are commented for further explanation.
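The entry above does not spell out the macro's method, but the file name suggests a comparison of negative binomial and Poisson models for count data. As a hedged, stand-alone illustration of that kind of comparison (not the NBvsPoi_FINAL.sas macro itself, and using simulated counts rather than the SSEAK98 harbor seal data), an R analogue might look like this:

library(MASS)   # glm.nb() for the negative binomial model

set.seed(1)
d <- data.frame(
  year  = factor(rep(1:5, each = 40)),
  count = rnbinom(200, mu = 20, size = 2)   # deliberately overdispersed counts
)

pois_fit <- glm(count ~ year, family = poisson, data = d)
nb_fit   <- glm.nb(count ~ year, data = d)

# AIC comparison: a much lower AIC for the negative binomial model points to
# overdispersion that the Poisson model cannot accommodate
AIC(pois_fit, nb_fit)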
This workshop takes you on a quick tour of Stata, SPSS, and SAS. It examines a data file using each package. Is one more user-friendly than the others? Are there significant differences in the codebooks created? This workshop also looks at creating a frequency table and a cross-tabulation table in each. Which output screen is easiest to read and interpret? The goal of this workshop is to give you an overview of these products and provide you with the information you need to determine which package fits your requirements and those of your users.
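The workshop itself covers only Stata, SPSS, and SAS. Purely as an illustration of the two tables it discusses (and not part of the workshop materials), the same exercise in R on a built-in dataset would be:

# one-way frequency table of number of cylinders in the built-in mtcars data
data(mtcars)
table(mtcars$cyl)

# cross-tabulation of cylinders by transmission type, with row and column totals
addmargins(table(cyl = mtcars$cyl, am = mtcars$am))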
This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it to produce a file that can be used for differential mortality analyses. Information from the original D, R, H, and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user; the others are simply executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables that are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and, by default, also in CSV format.
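As a hedged illustration of the two core steps described above (not the EU-SILC SAS macros themselves), the following R sketch merges person-level information across files by person ID and estimates time at risk from quarterly date information between the first interview and death or censoring. Everything except the cited target variable RB110 is invented for illustration, including the toy data and the assumed coding of RB110.

# toy person-level extracts standing in for the merged D/R/H/P information
p_file <- data.frame(pid = 1:4, sex = c("F", "M", "F", "M"), educ = c(2, 3, 1, 2))
r_file <- data.frame(
  pid           = 1:4,
  rb110         = c(1, 1, 6, 1),   # assumed coding: 6 = died (check the UDB documentation)
  first_year    = c(2012, 2012, 2013, 2013), first_quarter = c(1, 3, 2, 4),
  last_year     = c(2015, 2016, 2014, 2016), last_quarter  = c(2, 4, 3, 1)
)

d <- merge(p_file, r_file, by = "pid")   # merge information per person

# convert a (year, quarter) pair to an approximate mid-quarter decimal year
qmid <- function(year, quarter) year + (quarter - 0.5) / 4

d$time_at_risk <- qmid(d$last_year, d$last_quarter) - qmid(d$first_year, d$first_quarter)
d$died <- as.integer(d$rb110 == 6)

d[, c("pid", "sex", "educ", "time_at_risk", "died")]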
File List
Code_and_Data_Supplement.zip (md5: dea8636b921f39c9d3fd269e44b6228c)
Description
The supplementary material provided includes all code and data files necessary to replicate the simulation models and other demographic analyses presented in the paper. MATLAB code is provided for the simulations, and SAS code is provided to show how model parameters (vital rates) were estimated. The principal programs are Figure_3_4_5_Elasticity_Contours.m and Figure_6_Contours_Stochastic_Lambda.m, which perform the elasticity analyses and run the stochastic simulation, respectively. The files are presented in a zipped folder called Code_and_Data_Supplement. When uncompressed, users may run the MATLAB programs by opening them from within this directory. Subdirectories contain the data files and supporting MATLAB functions necessary to complete execution. The programs are written to find the necessary supporting functions in the Code_and_Data_Supplement directory. If users copy these MATLAB files to a different directory, they must add the Code_and_Data_Supplement directory and its subdirectories to their search path to make the supporting files available. More details are provided in the README.txt file included in the supplement. The file and directory structure of the entire zipped supplement is shown below.

Folder PATH listing:
Code_and_Data_Supplement/
    Figure_3_4_5_Elasticity_Contours.m
    Figure_6_Contours_Stochastic_Lambda.m
    Figure_A1_RefitG2.m
    Figure_A2_PlotFecundityRegression.m
    README.txt
    FinalDataFiles/
    Make Tables/
        README.txt
        Table_lamANNUAL.csv
        Table_mgtProbPredicted.csv
    ParameterEstimation/
        Categorical Model output.xls
        Fecundity/
            Appendix_A3_Fecundity_Breakpoint.sas
            fec_Cat_Indiv.sas
            Mean_Fec_Previous_Study.m
        G1/
            G1_Cat.sas
        G2/
            G2_Cat.sas
        Model Ranking/
            Categorical Model Ranking.xls
        Seedlings/
            sdl_Cat.sas
        SS/
            SS_Cat.sas
        SumSrv/
            sum_Cat.sas
        WinSrv/
            modavg.m
            winCatModAvgfitted.m
            winCatModAvgLinP.m
            winCatModAvgMu.m
            win_Cat.sas
    ProcessedDatafiles/
        fecdat_gm_param_est_paper.mat
        hierarchical_parameters.mat
        refitG2_param_estimation.mat
    Required_Functions/
        hline.m
        hmstoc.m
        Jeffs_Figure_Settings.m
        Jeffs_startup.m
        newbootci.m
        sem.m
        senstuff.m
        vline.m
        export_fig/
            change_value.m
            eps2pdf.m
            export_fig.m
            fix_lines.m
            ghostscript.m
            license.txt
            pdf2eps.m
            pdftops.m
            print2array.m
            print2eps.m
        lowess/
            license.txt
            lowess.m
        Multiprod_2009/
            Appendix A - Algorithm.pdf
            Appendix B - Testing speed and memory usage.pdf
            Appendix C - Syntaxes.pdf
            license.txt
            loc2loc.m
            MULTIPROD Toolbox Manual.pdf
            multiprod.m
            multitransp.m
            Testing/
                arraylab13.m
                arraylab131.m
                arraylab132.m
                arraylab133.m
                genop.m
                multiprod13.m
                readme.txt
                sysrequirements_for_testing.m
                testing_memory_usage.m
                testMULTIPROD.m
                timing_arraylab_engines.m
                timing_matlab_commands.m
                timing_MX.m
                Data/
                    Memory used by MATLAB statements.xls
                    Timing results.xlsx
                    timing_MX.txt
        province/
            PROVINCE.DBF
            province.prj
            PROVINCE.SHP
            PROVINCE.SHX
            README.txt
        SubAxis/
            parseArgs.m
            subaxis.m
        suplabel/
            license.txt
            suplabel.m
            suplabel_test.m
        tight_subplot/
            license.txt
            tight_subplot.m
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This online supplement contains data files and computer code, enabling the public to reproduce the results of the analysis described in the report titled “Thrifty Food Plan Cost Estimates for Alaska and Hawaii” published by USDA FNS in July 2023. The report is available at: https://www.fns.usda.gov/cnpp/tfp-akhi. The online supplement contains a user guide, which describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis; a data file in CSV format, which contains the most detailed information on food price differentials between the mainland U.S. and Alaska and Hawaii derived from Circana (formerly Information Resources Inc) retail scanner data as could be released without disclosing proprietary information; SAS and R code, which use the provided data file to reproduce the results of the report; and an Excel spreadsheet containing the reproduced results from the SAS or R code. For technical inquiries, contact: FNS.FoodPlans@usda.gov. Resources in this dataset:
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement User Guide File name: TFPCostEstimatesForAlaskaAndHawaii-UserGuide.pdf Resource description: The online supplement user guide describes the contents of the online supplement in detail, provides a data dictionary, and outlines the methodology used in the analysis.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Data File File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementDataFile.csv Resource description: The online supplement data file contains food price differentials between the mainland United States and Anchorage and Honolulu derived from Circana (formerly Information Resources Inc) retail scanner data. The data was aggregated to prevent disclosing proprietary information.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement R Code File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementRCode.R Resource description: The online supplement R code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the R programming language.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement SAS Code (zipped) File name: TFPCostEstimatesforAlaskaandHawaii-OnlineSupplementSASCode.zip Resource description: The online supplement SAS code enables users to read in the online supplement data file and reproduce the results of the analysis as described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report using the SAS programming language. This SAS file is provided in zip format for compatibility with Ag Data Commons; users will need to unzip the file prior to its use.
Resource title: Thrifty Food Plan Cost Estimates for Alaska and Hawaii Online Supplement Reproduced Results File name: TFPCostEstimatesforAlaskaandHawaii-ReproducedResults.xlsx Resource description: The online supplement reproduced results are output from either the online supplement R or SAS code and contain the results of the analysis described in the Thrifty Food Plan Cost Estimates for Alaska and Hawaii report.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Retrospective dietary exposure assessments were conducted for two groups of pesticides that have acute effects on the nervous system:
brain and/or erythrocyte acetylcholinesterase inhibition (CAG-NAN);
functional alterations of the motor division (CAG-NAM).
The pesticides considered in this assessment were identified and characterised in the scientific report on the establishment of cumulative assessment groups of pesticides for their effects on the nervous system (here).
The exposure calculations used monitoring data collected by Member States under their official pesticide monitoring programmes in 2014, 2015 and 2016 and individual food consumption data from ten populations of consumers from different countries and from different age groups. Regarding the selection of relevant food commodities, the assessment included water, foods for infants and young children and 30 raw primary commodities of plant origin that are widely consumed within Europe.
Exposure estimates were obtained with SAS® software using a 2-dimensional Monte Carlo simulation, which is composed of an inner-loop execution and an outer-loop execution. Variability within the population is modelled through the inner-loop execution and is expressed as a percentile of the exposure distribution. The outer-loop execution is used to derive 95% confidence intervals around those percentiles (reflecting the sampling uncertainty of the input data).
Furthermore, calculations were carried out according to a tiered approach. While the first-tier calculations (Tier I) use very conservative assumptions for an efficient screening of the exposure with low risk for underestimation, the second-tier assessment (Tier II) includes assumptions that are more refined but still conservative. For each scenario, exposure estimates were obtained for different percentiles of the exposure distribution and the total margin of exposure (MOET, i.e. the ratio of the toxicological reference dose to the estimated exposure) was calculated at each percentile.
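As a hedged sketch of the 2-dimensional Monte Carlo idea described above (the actual assessment was implemented in SAS®; the exposure model, sample sizes and reference dose below are invented purely for illustration), the inner and outer loops can be expressed in R as follows:

set.seed(42)

ref_dose    <- 10                                        # hypothetical toxicological reference dose
consumption <- rlnorm(500, meanlog = 0,  sdlog = 0.6)    # toy food consumption observations
residue     <- rlnorm(500, meanlog = -2, sdlog = 1.0)    # toy pesticide residue observations

inner_loop <- function(cons, res, n = 10000) {
  # variability within the population: simulate individual exposures and
  # summarise them as a high percentile of the exposure distribution
  expo <- sample(cons, n, replace = TRUE) * sample(res, n, replace = TRUE)
  quantile(expo, 0.999)
}

outer_loop <- replicate(200, {
  # sampling uncertainty of the input data: bootstrap the inputs, rerun the inner loop
  inner_loop(sample(consumption, replace = TRUE), sample(residue, replace = TRUE))
})

p999_ci <- quantile(outer_loop, c(0.025, 0.5, 0.975))    # 95% confidence interval around the percentile
moet_ci <- ref_dose / p999_ci                            # MOET at that percentile; note that the upper
p999_ci                                                  # exposure bound gives the lower MOET bound
moet_ci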
The input and output data for the exposure assessment are reported in the following annexes:
Annex A.1 – Input data for the exposure assessment of CAG-NAN
Annex A.2 – Input data for the exposure assessment of CAG-NAM
Annex B.1 – Output data from the Tier I exposure assessment of CAG-NAN
Annex B.2 – Output data from the Tier I exposure assessment of CAG-NAM
Annex C.1 – Output data from the Tier II exposure assessment of CAG-NAN
Annex C.2 – Output data from the Tier II exposure assessment of CAG-NAM
Further information on the data, methodologies and interpretation of the results is provided in the scientific report on the cumulative dietary exposure assessment of pesticides that have acute effects on the nervous system using SAS® software (here).
The results reported in this assessment only refer to the exposure and are not an estimation of the actual risks. These exposure estimates should therefore be considered as documentation for the final scientific report on the cumulative risk assessment of dietary exposure to pesticides for their effects on the nervous system (here). The latter combines the hazard assessment and exposure assessment into a consolidated risk characterisation, including all related uncertainties.
Output from programming code written to summarize fates of immature monarch butterflies collected and raised in captivity following SOP 4 (ServCat reference 103368). Collection and raising were conducted by crews from Neal Smith (IA) and Necedah (WI) NWRs and near the town of Lamoni, Iowa. Results are given in tabular format in the Excel file labeled 2017 Metrics. Additional output from the SAS analysis code is given in the .mht file.