We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos. JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
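The classification logic described above (90th and 50th percentile quantile regression predictions used as depth-adjusted cutoffs) can be sketched with off-the-shelf quantile regression. The snippet below is a minimal illustration using statsmodels, with invented data and hypothetical column names; it is not the published models or the tabulated predictions.

```python
# Illustrative sketch of depth-adjusted condition-class cutoffs via quantile
# regression, in the spirit of the approach described above. Data and names
# are hypothetical, not taken from the published dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

# One row per sample: site depth (m) and one benthos metric (e.g. taxa richness)
benthos = pd.DataFrame({
    "depth_m": np.random.uniform(1, 20, 200),
    "taxa_richness": np.random.poisson(12, 200),
})

X = sm.add_constant(benthos["depth_m"])
y = benthos["taxa_richness"]

ref_model = QuantReg(y, X).fit(q=0.90)  # at or above this prediction = "reference"
mid_model = QuantReg(y, X).fit(q=0.50)  # below this prediction = "poor"

def condition_class(depth, metric):
    """Classify a new sample relative to the depth-adjusted cutoffs."""
    x_new = np.array([[1.0, depth]])
    if metric >= ref_model.predict(x_new)[0]:
        return "reference"
    elif metric >= mid_model.predict(x_new)[0]:
        return "fair"
    return "poor"

print(condition_class(depth=5.0, metric=15))
```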
This SAS code extracts data from EU-SILC User Database (UDB) longitudinal files and edits it such that a file is produced that can be further used for differential mortality analyses. Information from the original D, R, H and P files is merged per person and possibly pooled over several longitudinal data releases. Vital status information is extracted from target variables DB110 and RB110, and time at risk between the first interview and either death or censoring is estimated based on quarterly date information. Apart from path specifications, the SAS code consists of several SAS macros. Two of them require parameter specification from the user; the others are simply executed. The code was written in Base SAS, Version 9.4. By default, the output file contains several variables that are necessary for differential mortality analyses, such as sex, age, country, year of first interview, and vital status information. In addition, the user may specify the analytical variables by which mortality risk should be compared later, for example educational level or occupational class. These analytical variables may be measured either at the first interview (the baseline) or at the last interview of a respondent. The output file is available in SAS format and, by default, also in CSV format.
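The quarterly time-at-risk estimation described above can be illustrated outside SAS. The sketch below is a simplified Python illustration of the idea, using quarter midpoints as an approximation; it is not the original SAS macros, and the function and variable names are hypothetical.

```python
# Illustrative sketch (not the original SAS macros): estimating time at risk
# between the first interview and death/censoring when only quarterly date
# information is available. Quarters are represented by their midpoints.
def quarter_midpoint_in_years(year: int, quarter: int) -> float:
    """Represent a quarter by its midpoint, expressed in decimal years."""
    return year + (quarter - 0.5) / 4.0

def time_at_risk(first_year, first_quarter, end_year, end_quarter):
    """Years between the first-interview quarter and the death/censoring quarter."""
    start = quarter_midpoint_in_years(first_year, first_quarter)
    end = quarter_midpoint_in_years(end_year, end_quarter)
    return max(end - start, 0.0)

# Example: first interview in Q2 2005, death or censoring in Q4 2009
print(round(time_at_risk(2005, 2, 2009, 4), 2))  # -> 4.5
```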
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Semantic Artist Similarity dataset consists of two datasets of artist entities with their corresponding biography texts, and the list of the top-10 most similar artists within each dataset, used as ground truth. The dataset is composed of a corpus of 268 artists and a slightly larger one of 2,336 artists, both gathered from Last.fm in March 2015. The former is mapped to the MIREX Audio and Music Similarity evaluation dataset, so that its similarity judgments can be used as ground truth. For the latter corpus we use the similarity between artists as provided by the Last.fm API. For every artist there is a list with the top-10 most related artists. In the MIREX dataset there are 188 artists with at least 10 similar artists; the other 80 artists have fewer than 10 similar artists. In the Last.fm API dataset all artists have a list of 10 similar artists.

There are 4 files in the dataset. mirex_gold_top10.txt and lastfmapi_gold_top10.txt contain the top-10 lists of artists for every artist of both datasets. Artists are identified by MusicBrainz ID. The format of the file is one line per artist, with the artist mbid separated by a tab from the list of top-10 related artists, identified by their mbids separated by spaces:

artist_mbid \t artist_mbid_top10_list_separated_by_spaces

mb2uri_mirex and mb2uri_lastfmapi.txt contain the list of artists. In each line there are three fields separated by tabs: the first field is the MusicBrainz ID, the second field is the Last.fm name of the artist, and the third field is the DBpedia URI:

artist_mbid \t lastfm_name \t dbpedia_uri

There are also 2 folders in the dataset with the biography texts of each dataset. Each .txt file in the biography folders is named with the MusicBrainz ID of the biographied artist. Biographies were gathered from the Last.fm wiki page of every artist.

Using this dataset: we would highly appreciate it if scientific publications of works partly based on the Semantic Artist Similarity dataset quote the following publication: Oramas, S., Sordo, M., Espinosa-Anke, L., & Serra, X. (In Press). A Semantic-based Approach for Artist Similarity. 16th International Society for Music Information Retrieval Conference. We are interested in knowing if you find our datasets useful! If you use our dataset please email us at mtg-info@upf.edu and tell us about your research. https://www.upf.edu/web/mtg/semantic-similarity
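For readers who want to load the two file layouts described above, a minimal parsing sketch follows. It assumes the tab- and space-separated formats exactly as described; the helper function names are hypothetical.

```python
# Minimal parsing sketch for the two file layouts described in the dataset
# description (tab-separated fields, space-separated MusicBrainz IDs).
def load_top10(path):
    """Return {artist_mbid: [top-10 similar artist mbids]} from a *_gold_top10.txt file."""
    top10 = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            artist_mbid, rest = line.rstrip("\n").split("\t", 1)
            top10[artist_mbid] = rest.split(" ")
    return top10

def load_mb2uri(path):
    """Return a list of (mbid, lastfm_name, dbpedia_uri) tuples from an mb2uri file."""
    rows = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            mbid, lastfm_name, dbpedia_uri = line.rstrip("\n").split("\t")
            rows.append((mbid, lastfm_name, dbpedia_uri))
    return rows

# Usage (file names as given in the description):
# gold = load_top10("lastfmapi_gold_top10.txt")
# artists = load_mb2uri("mb2uri_lastfmapi.txt")
```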
This package contains two files designed to help read individual level DHS data into Stata. The first file addresses the problem that versions of Stata before Version 7/SE will read in only up to 2,047 variables, and most of the individual files have more variables than that. The file will read in the .do, .dct and .dat files and output new .do and .dct files with only a subset of the variables specified by the user. The second file deals with earlier DHS surveys in which .do and .dct files do not exist and only .sps and .sas files are provided. The file will read in the .sas and .sps files and output a .dct and .do file. If necessary, the first file can then be run again to select a subset of variables.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The coefficients a_i, b_i, c_i, d_i, e_i, f_i of the affine transforms w_n (first column) for the deterministic algorithm in Eq (10), and the probabilities p_n (last column) for the random iteration algorithm in Eq (9).
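For context, coefficients and probabilities of this kind parameterize an iterated function system: each affine map acts as w_n(x, y) = (a x + b y + e, c x + d y + f), applied either deterministically (Eq 10) or chosen at random with probability p_n (Eq 9). The sketch below shows the generic random iteration ("chaos game"); the coefficients are the classic Sierpinski-triangle placeholders, not the values tabulated here.

```python
# Generic random-iteration (chaos game) sketch showing how affine maps
# w_n(x, y) = (a*x + b*y + e, c*x + d*y + f) and probabilities p_n are used.
# The coefficients below are the standard Sierpinski-triangle IFS, used only
# as a placeholder; they are not the values from the associated table.
import random

maps = [  # (a, b, c, d, e, f)
    (0.5, 0.0, 0.0, 0.5, 0.00, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.50, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]
probs = [1 / 3, 1 / 3, 1 / 3]

def random_iteration(n_points=10_000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f = random.choices(maps, weights=probs, k=1)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = random_iteration()
```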
Millions of farmers in India have made significant contributions in providing food and nutrition to the entire nation, while also providing livelihoods to millions of people in the country. During the past five decades of planned economic development, India has moved from food shortage and imports to self-sufficiency and exports. Food security and the well-being of the farmer appear to be major areas of concern for the planners and policy makers of Indian agriculture. In order to have a comprehensive picture of the farming community at the commencement of the third millennium, and to analyze the impact of the transformation induced by public policy, investments and technological change on the farmers' access to resources and income, as well as their well-being, the Ministry of Agriculture decided to collect information on Indian farmers through a Situation Assessment Survey (SAS) and entrusted the job of conducting the survey to the National Sample Survey Organisation (NSSO).
The SAS 2003 was the first of its kind to be conducted by the NSSO. Though information on a majority of the items covered by the SAS had been collected in some round or other of the NSS, an integrated schedule (Schedule 33), covering some basic characteristics of farming households and their access to basic and modern farming resources, was canvassed for the first time in the SAS. Moreover, information on consumption of various goods and services was also collected in an abridged form to give an idea of the pattern of consumption expenditure of the farming households.
Schedule 33 was designed for collecting information on aspects relating to farming and other socio-economic characteristics of farming households. The information was collected in two visits to the same set of sample households. The first visit was made during January to August 2003 and the second, during September to December 2003. The survey was conducted in rural areas only. It was canvassed in the Central Sample except for the States of Maharashtra and Meghalaya where it was canvassed in both State and Central samples.
National Coverage
Households
Sample survey data [ssd]
A stratified multi-stage sampling design was adopted for the SAS 2003 (NSS 59th round). The first stage unit (FSU), also known as the primary sampling unit, was the census village in the rural sector and the UFS block in the urban sector. The ultimate stage units (USUs) were households in both sectors. Hamlet-groups / sub-blocks constituted an intermediate stage where these were formed in the selected area.
The list of villages (panchayat wards for Kerala) based on the Population Census of 1991 constituted the sampling frame for FSUs in rural areas, while the latest UFS frame was the sampling frame used for urban areas. For stratification of towns by size class, the provisional population of towns as per Census 2001 was used. A detailed description of the sampling strategy can be found in the estimation procedure document attached in the documentation/external resources.
Face-to-face paper [f2f]
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is associated with the paper 'SAS: A speaker verification spoofing database containing diverse attacks', which presents the first version of a speaker verification spoofing and anti-spoofing database, named the SAS corpus. The corpus includes nine spoofing techniques, two based on speech synthesis and seven on voice conversion. We design two protocols, one for standard speaker verification evaluation and the other for producing spoofing materials. Hence, they allow the speech synthesis community to produce spoofing materials incrementally without knowledge of speaker verification spoofing and anti-spoofing. To provide a set of preliminary results, we conducted speaker verification experiments using two state-of-the-art systems. Without any anti-spoofing techniques, the two systems are extremely vulnerable to the spoofing attacks implemented in our SAS corpus. N.B. the files in the following fileset should also be taken as part of the same dataset as those provided here: Wu et al. (2017). Key files for Spoofing and Anti-Spoofing (SAS) corpus v1.0, [dataset]. University of Edinburgh. The Centre for Speech Technology Research (CSTR). http://hdl.handle.net/10283/2741
SAS technology exemplifies recent advances in geophysical survey technology that will revolutionize maritime archaeological remote sensing. Applied Signal Technology (AST) has combined its SAS with the MacArtney FOCUS-2 ROTV to create the ultimate towed acoustic imaging device, PROSAS Surveyor. Capable of an area coverage rate of 2.5 kilometer/hour with a resolution of 3 centimeters, PROSAS Surveyor will greatly expand capabilities to locate even the oldest archaeological sites on the continental shelf, particularly where sedimentation is limited. Large area seafloor mapping at a resolution capable of imaging very small targets is a tremendously expensive proposition for submerged land managers responsible for bottom lands from 30 to 300 meters in depth. Daily operating costs for a suitable research vessel and personnel limit the area that can be investigated. This project will for the first time apply commercially available SAS technology to the search for historic shipwrecks. The rapidity and resolution of this project's survey will represent as much as a four-fold increase in area covered compared to conventional marine archaeological remote sensing surveys.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
SAS-Bench: A Fine-Grained Benchmark for Evaluating Short Answer Scoring with Large Language Models
Dataset | Chinese version | Paper | Code
🔍 Overview
SAS-Bench represents the first specialized benchmark for evaluating Large Language Models (LLMs) on Short Answer Scoring (SAS) tasks. Utilizing authentic questions from China's National College Entrance Examination (Gaokao)… See the full description on the dataset page: https://huggingface.co/datasets/aleversn/SAS-Bench.
https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.11588/DATA/3NKHAY
This dataset contains input structures and parameters for coarse-grained molecular dynamics simulation of SAS-6 protein oligomers as well as post-processing files and analysis scripts. Abstract of related publication: Discovering mechanisms governing organelle assembly is a fundamental pursuit in the life sciences. The centriole is an evolutionarily conserved organelle with a signature 9-fold symmetrical chiral arrangement of microtubules imparted onto the cilium it templates. The first structure in nascent centrioles is a cartwheel, which comprises stacked 9-fold symmetrical SAS-6 ring polymers and emerging orthogonal to a surface surrounding resident centrioles. The mechanisms through which SAS-6 polymerization ensures centriole organelle architecture remain elusive. We deployed photothermally-actuated off-resonance tapping high-speed atomic force microscopy (PORT-HS-AFM) to decipher surface SAS-6 self-assembly mechanisms. We discovered that the surface shifts the reaction equilibrium by ~10^4 compared to solution. Moreover, coarse-grained molecular dynamics simulations and PORT-HS-AFM revealed that the surface converts the inherently helical propensity of SAS-6 polymers into 9-fold rings with residual asymmetry, which may guide ring stacking and impart chiral features to centrioles and cilia. Overall, our work reveals fundamental design principles governing centriole assembly.
In order to have a comprehensive picture of the farming community and to analyze the impact of the transformation induced by public policy, investments and technological change on the farmers' access to resources and income, as well as the well-being of farmer households, it was decided to collect information on Indian farmers through the “Situation Assessment Survey” (SAS). The areas of interest for conducting the SAS would include the economic well-being of farmer households as measured by consumer expenditure, income and productive assets, and indebtedness; their farming practices and preferences; resource availability; and their awareness of technological developments and access to modern technology in the field of agriculture. In this survey, detailed information would be collected on receipts and expenses of households' farm and non-farm businesses, to arrive at their income from these sources. Income from other sources would also be ascertained, as would the consumption expenditure of the households.
National, State, Rural, Urban
Households
All households of the following types: (1) self-employed in agriculture, (2) self-employed in non-agriculture, (3) regular wage/salary earning, (4) casual labour in agriculture, (5) casual labour in non-agriculture, (6) others.
Sample survey data [ssd]
Total sample size (FSUs): 8,042 FSUs have been allocated for the central sample at the all-India level; for the state sample, 8,998 FSUs have been allocated for all-India. Sample design: A stratified multi-stage design has been adopted for the 70th round survey. The first stage units (FSUs) are the census villages (Panchayat wards in the case of Kerala) in the rural sector and Urban Frame Survey (UFS) blocks in the urban sector. The ultimate stage units (USUs) are households in both sectors. In the case of large FSUs, one intermediate stage of sampling is the selection of two hamlet-groups (hgs) / sub-blocks (sbs) from each rural/urban FSU.
Sampling Frame for First Stage Units: For the rural sector, the list of 2001 census villages updated by excluding the villages urbanised and including the towns de-urbanised after 2001 census (henceforth the term 'village' would mean Panchayat wards for Kerala) constitutes the sampling frame. For the urban sector, the latest updated list of UFS blocks (2007-12) is considered as the sampling frame.
Stratification:
(a) Strata have been formed at the district level. Within each district of a State/UT, generally speaking, two basic strata have been formed: (i) a rural stratum comprising all rural areas of the district and (ii) an urban stratum comprising all urban areas of the district. However, if there were one or more towns in a district with a population of 10 lakhs or more as per Population Census 2011, each of them formed a separate basic stratum and the remaining urban areas of the district were considered as another basic stratum.
(b) However, a special stratum in the rural sector only was formed at the State/UT level, before district strata were formed, in each of the following 20 States/UTs: Andaman & Nicobar Islands, Andhra Pradesh, Assam, Bihar, Chhattisgarh, Delhi, Goa, Gujarat, Haryana, Jharkhand, Karnataka, Lakshadweep, Madhya Pradesh, Maharashtra, Odisha, Punjab, Rajasthan, Tamil Nadu, Uttar Pradesh and West Bengal. This stratum comprises all the villages of the State with a population of less than 50 as per Census 2001.
(c) In case of rural sectors of Nagaland one special stratum has been formed within the State consisting of all the interior and inaccessible villages. Similarly, for Andaman & Nicobar Islands, one more special stratum has been formed within the UT consisting of all inaccessible villages. Thus for Andaman & Nicobar Islands, two special strata have been formed at the UT level:
(i) special stratum 1 comprising all the interior and inaccessible villages (ii) special stratum 2 containing all the villages, other than those in special stratum 1, having population less than 50 as per census 2001.
Sub-stratification:
Rural sector: Different sub-stratifications are done for 'hilly' States and other States. Ten (10) States are considered as hilly States. They are: Jammu & Kashmir, Himachal Pradesh, Uttarakhand, Sikkim, Meghalaya, Tripura, Mizoram, Manipur, Nagaland and Arunachal Pradesh.
(a) Sub-stratification for hilly States: If r is the sample size allocated to a rural stratum, the number of sub-strata formed was r/2. The villages within a district as per the frame were first arranged in ascending order of population. Then sub-strata 1 to r/2 were demarcated in such a way that each sub-stratum comprised a group of villages of the arranged frame and had more or less equal population.
(b) Sub-stratification for other States (non-hilly States except Kerala): The villages within a district as per the frame were first arranged in ascending order of the proportion of irrigated area in the cultivated area of the village. Then sub-strata 1 to r/2 were demarcated in such a way that each sub-stratum comprised a group of villages of the arranged frame and had more or less equal cultivated area. The information on irrigated area and cultivated area was obtained from the village directory of Census 2001.
(c) Sub-stratification for Kerala: Although Kerala is a non-hilly State, sub-stratification by proportion of irrigated area was not possible because of the non-availability of information on irrigation at the FSU (Panchayat ward) level. Hence the sub-stratification procedure for Kerala was the same as that for the hilly States.
Urban sector: There was no sub-stratification for the strata of million plus cities. For other strata, each district was divided into 2 sub-strata as follows:
sub-stratum 1: all towns of the district with population less than 50000 as per census 2011
sub-stratum 2: remaining non-million plus towns of the district
Allocation of total sample to States and UTs: The total number of sample FSUs has been allocated to the States and UTs in proportion to population as per Census 2011, subject to a minimum sample allocation to each State/UT.
Allocation to strata: Within each sector of a State/ UT, the respective sample size has been allocated to the different strata in proportion to the population as per census 2011. Allocations at stratum level are adjusted to multiples of 2 with a minimum sample size of 2.
Allocation to sub-strata:
1 Rural: Allocation is 2 for each sub-stratum in rural.
2 Urban: Stratum allocations have been distributed among the two sub-strata in proportion to the number of FSUs in the sub-strata, with a minimum allocation of 2 for each sub-stratum.

Selection of FSUs: For the rural sector, the required number of sample villages has been selected from each stratum x sub-stratum by Simple Random Sampling Without Replacement (SRSWOR). For the urban sector, FSUs have likewise been selected by SRSWOR from each stratum x sub-stratum. Both rural and urban samples were drawn in the form of two independent sub-samples, and an equal number of samples was allocated between the two sub-rounds.
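As a rough illustration of the allocation rule described above (proportional to Census 2011 population, adjusted to a multiple of 2 with a minimum of 2), the sketch below uses invented population figures; it is not part of the NSS documentation.

```python
# Illustrative sketch of the stratum-allocation rule described above:
# proportional to population, adjusted to a multiple of 2 with a minimum of 2.
# Population figures are invented for the example; totals may need a final
# adjustment after rounding, as in any such allocation.
def allocate_fsus(total_fsus, stratum_populations, minimum=2):
    total_pop = sum(stratum_populations.values())
    allocation = {}
    for stratum, pop in stratum_populations.items():
        raw = total_fsus * pop / total_pop          # proportional share
        allocation[stratum] = max(minimum, 2 * round(raw / 2))  # multiple of 2, at least 2
    return allocation

print(allocate_fsus(64, {
    "rural": 1_500_000,
    "urban_non_million": 400_000,
    "million_plus_town": 900_000,
}))
```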
For details, refer to the external resource "Note on Sample Design and Estimation Procedure of NSS 70th Round", page 2.
There was no deviation from the original sampling design.
Face-to-face [f2f]
There are 17 blocks in Visit 1. Each sample FSU will be visited twice during this round (Visits 1 and 2). Since the workload of the first visit (Visit 1) will be greater, the first visit will continue till the end of July 2013. Thus, the period of the first visit will be January to July 2013 and that of the second visit (Visit 2) will be August to December 2013.
Note: the listed quantities are mean values (Mean ± SD) calculated for the first 2.5 cm of the model length. Max was also calculated for the first 2.5 cm of the model length.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Supplementary Materials and SAS data files for Studies 1-3. The first half of the SAS files include the Hayes Process Macro. The syntax and program commands for the specific data set can be found at the end of the file.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains all the materials needed to reproduce the results in "Which Panel Data Estimator Should I Use?: A Corrigendum and Extension". Please read the README document first. The results were obtained using SAS/IML software, and the files consist of SAS data sets and SAS programs.
The main objective of the Seasonal Agricultural Survey is to provide timely, accurate, reliable and comprehensive agricultural statistics that describe the structure of agriculture in Rwanda mainly in terms of land use, crop area, yield and crop production to monitor current agricultural and food supply conditions and to facilitate evidence-based decision making for the development of the agricultural sector.
In this regard, the National Institute of Statistics of Rwanda conducted the Seasonal Agriculture Survey (SAS) from September 2018 to August 2019 to gather up-to-date information for monitoring progress on agriculture programs and policies. The 2019 SAS covered the main agricultural seasons: Season A (which runs from September to February of the following year) and Season B (which runs from March to June). Season C is the small agricultural season, mainly for vegetables and sweet potato grown in swamps and Irish potato grown in the volcanic agro-ecological zone. The survey provides data on farm characteristics (area, yield and production), agricultural practices, agricultural inputs and use of crop production.
National coverage allowing district-level estimation of key indicators
This seasonal agriculture survey focused on the following units of analysis: Small scale agricultural farms and large scale farms
The SAS 2019 targeted potential agricultural land and large scale farmers
Sample survey data [ssd]
Out of 10 strata, only 4 are considered to represent the country's land potential for agriculture, and they cover a total area of 1,787,571.2 hectares (ha). Those strata are: 1.0 (tea plantations), 1.1 (intensive agriculture land on hillsides), 2.0 (intensive agriculture land in marshlands) and 3.0 (rangelands). The remainder of the land use strata represents all the non-agricultural land in Rwanda. Stratum 1.0, which represents tea plantations, is assumed to be well monitored through administrative records by the National Agriculture Export Board (NAEB), an institution whose main mission is to promote agricultural export commodities. Thus, the SAS is conducted on 3 strata (1.1, 2.0 and 3.0) to cover the other major crops. Within each district, the agriculture strata (1.1, 2.0 and 3.0) were divided into larger sampling units called first-step or primary sampling units (PSUs) (as shown in Figure 2). Strata 1.1 and 2.0 were divided into PSUs of around 100 ha, while stratum 3.0 was divided into PSUs of around 500 ha. After sample size determination, PSUs were sampled by a systematic sampling method with probability proportional to size, and a given number of PSUs to be selected for each stratum was assigned in every district. The 2018 SAS sample of 780 segments was kept the same for SAS 2019 in Seasons A and B.
At first stage, 780 PSUs sampled countrywide were proportionally allocated in different levels of stratification (Hill side, marshland and rangeland strata) for 30 districts of Rwanda, to allow publication of results at district level. Sampled PSUs in each stratum were systematically selected from the frame with probability of selection proportional to the size of the PSU.
At the second stage, the 780 sampled PSUs were divided into secondary sampling units (SSUs), also called segments. Each segment is estimated to be around 10 ha for strata 1.1 and 2.0 and 50 ha for stratum 3.0 (as shown in Figure 3). For each PSU, only one SSU is selected by a random sampling method without replacement; this is why, for the 2019 SAS Seasons A and B, the same number of 780 SSUs was selected. In addition, a list frame of large-scale farmers (LSF), with at least 10 hectares of agricultural holdings, was compiled to complement the area frame, covering crops mostly grown by large-scale farmers that cannot easily be covered in the area frame.
At the last sampling stage, in strata 1.1 and 2.0 each segment of an average size of 10 ha (100,000 square meters) was divided into around 1,000 grid squares of 100 square meters each, while in stratum 3.0 each segment was divided into around 5,000 grid squares of 100 square meters each. A point was placed at the center of every grid square and named a grid point (a grid point is a geographical location at the center of a grid square). A random sample of 5% of the total grid points was selected in each segment of strata 1.1 and 2.0, whereas a random sample of 2% of the total grid points was selected in each segment of stratum 3.0. Grid points are the reporting units within a segment: enumerators go to every sampled grid point, locate and delineate the plots in which the grid point falls, and collect records of land use and related information. The recorded information represents the characteristics of the whole segment, which are extrapolated to the stratum level; the combination of strata within each district then provides district-level area statistics.
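The grid arithmetic described above works out to roughly 50 sampled grid points per 10 ha segment (5% of about 1,000) and 100 per 50 ha segment in stratum 3.0 (2% of about 5,000). A minimal sketch of that calculation, with a purely illustrative random selection, is given below.

```python
# Worked arithmetic for the grid-point sampling described above (segment sizes
# and sampling rates as stated in the design; the random draw is illustrative).
import random

def grid_points_per_segment(segment_ha, grid_m2=100):
    """Number of 100 m2 grid squares (and hence grid points) in a segment."""
    return int(segment_ha * 10_000 / grid_m2)  # 1 ha = 10,000 m2

def sample_grid_points(segment_ha, sampling_rate):
    n_grids = grid_points_per_segment(segment_ha)
    n_sample = round(n_grids * sampling_rate)
    return random.sample(range(n_grids), n_sample)

# Strata 1.1 and 2.0: 10 ha segments, 5% of grid points -> about 50 points
print(len(sample_grid_points(10, 0.05)))   # 50
# Stratum 3.0: 50 ha segments, 2% of grid points -> about 100 points
print(len(sample_grid_points(50, 0.02)))   # 100
```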
Face-to-face [f2f]
There were two types of questionnaires used for this survey, namely a screening questionnaire and a plot questionnaire. The screening questionnaire was used to collect information that enabled identification of a plot and its land use using the plot questionnaire. For point sampling, the plot questionnaire covers the collection of data on crop identification, crop production and use of production, inputs (seeds, fertilizers and pesticides), agricultural practices and land tenure. All the survey questionnaires used were published in English.
The CAPI method of data collection allows the enumerators in the field to collect and enter data with their tablets and then synchronize to the server at headquarters, where data are received by NISR staff, checked for consistency at NISR and thereafter transmitted to analysts for tabulation using STATA software and reporting using Microsoft Excel and Word.
Data collection was done in 780 segments and 222 large-scale farmer holdings for Season A, whereas in Season C data were collected in 232 segments; the response rate was 100% of the sample.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Retrospective dietary exposure assessments were conducted for two groups of pesticides that have chronic effects on the thyroid.
The pesticides considered in this assessment were identified and characterised in the scientific report on the establishment of cumulative assessment groups of pesticides for their effects on the thyroid (here).
The exposure calculations used monitoring data collected by Member States under their official pesticide monitoring programmes in 2014, 2015 and 2016 and individual food consumption data from ten populations of consumers from different countries and from different age groups. Regarding the selection of relevant food commodities, the assessment included water, foods for infants and young children and 30 raw primary commodities of plant origin that are widely consumed within Europe.
Exposure estimates were obtained with SAS® software using a 2-dimensional probabilistic method, which is composed of an inner-loop execution and an outer-loop execution. Variability within the population is modelled through the inner-loop execution and is expressed as a percentile of the exposure distribution. The outer-loop execution is used to derive 95% confidence intervals around those percentiles (reflecting the sampling uncertainty of the input data).
Furthermore, calculations were carried out according to a tiered approach. While the first-tier calculations (Tier I) use very conservative assumptions for an efficient screening of the exposure with low risk for underestimation, the second-tier assessment (Tier II) includes assumptions that are more refined but still conservative. For each scenario, exposure estimates were obtained for different percentiles of the exposure distribution and the total margin of exposure (MOET, i.e. the ratio of the toxicological reference dose to the estimated exposure) was calculated at each percentile.
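The inner-loop/outer-loop idea and the MOET ratio described above can be sketched as a simple two-dimensional Monte Carlo calculation. The snippet below uses invented exposure values and an invented reference dose purely for illustration; it is not the EFSA SAS® implementation or its input data.

```python
# Minimal sketch of the two-dimensional probabilistic idea described above:
# the inner loop expresses variability as a percentile of the exposure
# distribution, and an outer (bootstrap) loop yields a 95% confidence interval
# around that percentile. Numbers and distributions are invented.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical individual daily exposures (mg/kg bw per day) for one population
exposures = rng.lognormal(mean=-6.0, sigma=1.0, size=5_000)
reference_dose = 0.01  # hypothetical toxicological reference dose

def percentile_and_ci(values, q=99.9, n_boot=500):
    point = np.percentile(values, q)                       # inner-loop result
    boot = [np.percentile(rng.choice(values, size=values.size, replace=True), q)
            for _ in range(n_boot)]                        # outer-loop resamples
    return point, np.percentile(boot, [2.5, 97.5])

p999, ci = percentile_and_ci(exposures)
moet = reference_dose / p999   # total margin of exposure at that percentile
print(p999, ci, moet)
```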
The input and output data for the exposure assessment are reported in the accompanying annexes.
Further information on the data, methodologies and interpretation of the results are provided in the scientific report on the cumulative dietary exposure assessment of pesticides that have chronic effects on the thyroid using SAS® software (here).
The results reported in this assessment only refer to the exposure and are not an estimation of the actual risks. These exposure estimates should therefore be considered as documentation for the final scientific report on the cumulative risk assessment of dietary exposure to pesticides for their effects on the thyroid (here). The latter combines the hazard assessment and exposure assessment into a consolidated risk characterisation, including all related uncertainties.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Many proteins consist of folded domains connected by regions with higher flexibility. The details of the resulting conformational ensemble play a central role in controlling interactions between domains and with binding partners. Small-Angle Scattering (SAS) is well suited to studying the conformational states adopted by proteins in solution. However, analysis is complicated by the limited information content of SAS data, and care must be taken to avoid constructing overly complex ensemble models and fitting to noise in the experimental data. To address these challenges, we developed a method based on Bayesian statistics that infers conformational ensembles from a structural library generated by all-atom Monte Carlo simulations. The first stage of the method involves a fast model selection based on variational Bayesian inference that maximizes the model evidence of the selected ensemble. This is followed by a complete Bayesian inference of population weights in the selected ensemble. Experiments with simulated ensembles demonstrate that model evidence is capable of identifying the correct ensemble and that the correct number of ensemble members can be recovered up to a high level of noise. Using experimental data, we demonstrate how the method can be extended to include data from Nuclear Magnetic Resonance (NMR) and structural energies of conformers extracted from the all-atom energy functions. We show that data from SAXS, NMR chemical shifts and energies calculated from conformers can work synergistically to improve the definition of the conformational ensemble.
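As a toy illustration of the ensemble-fitting problem described above (not the authors' variational Bayesian method), the sketch below scores candidate population weights by the chi-square agreement between a weighted average of conformer scattering profiles and a synthetic experimental curve; all arrays are placeholders.

```python
# Toy illustration only (not the published variational Bayesian method):
# score candidate ensembles by the reduced chi-square between a
# population-weighted average of conformer scattering profiles and a
# synthetic "experimental" SAS curve. All arrays are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_q, n_conformers = 200, 5
q = np.linspace(0.01, 0.5, n_q)
conformer_profiles = rng.random((n_conformers, n_q)) + 1.0   # placeholder I_k(q)
true_weights = np.array([0.6, 0.4, 0.0, 0.0, 0.0])
sigma = 0.02
I_exp = true_weights @ conformer_profiles + rng.normal(0, sigma, n_q)

def chi2(weights):
    """Reduced chi-square of the weighted ensemble profile against I_exp."""
    I_model = weights @ conformer_profiles
    return np.sum(((I_exp - I_model) / sigma) ** 2) / n_q

# Compare a two-state ensemble with a single-conformer "ensemble"
print(chi2(np.array([0.6, 0.4, 0.0, 0.0, 0.0])),
      chi2(np.array([1.0, 0.0, 0.0, 0.0, 0.0])))
```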
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary statistics for participants in interviews at month-one and month-six.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Baseline summary statistics of all patients that were enrolled in the study.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
File List
Jena_dataset.pdf: worked example of model fitting for the Jena dataset
Jena_dataset.sas: SAS code for analysis of the Jena dataset
Jena_dataset.r: R code for analysis of the Jena dataset
Jena_data.csv: Jena data
Ireland_site_biodepth.csv: data for the Ireland site (BIODEPTH)
Description: The supplements are designed to assist the reader to implement the methods using the statistical packages SAS and R. The first supplement (Worked example of model fitting for the Jena dataset, Jena_dataset.pdf) provides a detailed description of the application and interpretation of a range of models using the Jena dataset. The second and third supplements (Jena_dataset.sas and Jena_dataset.r) provide SAS and R code to implement the method using the Jena dataset. The data for the two sites are provided in Jena_data.csv and Ireland_site_biodepth.csv.

Hash values for supplements Jena_data.csv and Ireland_site_biodepth.csv, calculated by HASHCALC:
MD5 hash value for Jena_data.csv: 6b86c280a15bbd4aae08b5b4c91363ee
MD5 hash value for Ireland_site_biodepth.csv: 9b60c32ceca9259e47d7ee42b9ae5f16
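Readers wishing to verify the MD5 values listed above against their downloaded copies can do so with any MD5 tool (such as the HASHCALC tool named above); a short standard-library sketch is shown below, with the expected digests copied from the list.

```python
# Quick way to verify the MD5 values listed above, using only the Python
# standard library (equivalent to what an MD5 tool such as HASHCALC reports).
import hashlib

def md5_of_file(path, chunk_size=8192):
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# print(md5_of_file("Jena_data.csv"))               # expect 6b86c280a15bbd4aae08b5b4c91363ee
# print(md5_of_file("Ireland_site_biodepth.csv"))   # expect 9b60c32ceca9259e47d7ee42b9ae5f16
```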