Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We show that the expected value of the largest order statistic in Gaussian samples can be accurately approximated as (0.2069 ln(ln(n)) + 0.942)^4, where n ∈ [2, 10^8] is the sample size, while the standard deviation of the largest order statistic can be approximated as −0.4205 arctan(0.5556 [ln(ln(n)) − 0.9148]) + 0.5675. We also provide an approximation of the probability density function of the largest order statistic, which in turn can be used to approximate its higher-order moments. The proposed approximations are computationally efficient, and improve previous approximations of the mean and standard deviation given by Chen and Tyler (1999).
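As a quick illustration, the two approximations above can be evaluated directly; the sketch below is a minimal NumPy rendering (the helper names are ours, not from the paper) and includes a Monte Carlo spot check.

```python
import numpy as np

def approx_max_mean(n):
    """Approximate E[max] of n i.i.d. N(0,1) samples (stated valid for n in [2, 1e8])."""
    return (0.2069 * np.log(np.log(n)) + 0.942) ** 4

def approx_max_std(n):
    """Approximate SD of the max of n i.i.d. N(0,1) samples (stated valid for n in [2, 1e8])."""
    return -0.4205 * np.arctan(0.5556 * (np.log(np.log(n)) - 0.9148)) + 0.5675

# Quick Monte Carlo check for n = 1000
n = 1000
mc = np.random.standard_normal((10_000, n)).max(axis=1)
print(approx_max_mean(n), mc.mean())  # both close to ~3.24
print(approx_max_std(n), mc.std())    # both close to ~0.35
```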
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents median household incomes for various household sizes in State Line City, IN, as reported by the U.S. Census Bureau. The dataset highlights the variation in median household income with the size of the family unit, offering valuable insights into economic trends and disparities within different household sizes, aiding in data analysis and decision-making.
Key observations
Chart: State Line City, IN median household income, by household size (in 2022 inflation-adjusted dollars) (https://i.neilsberg.com/ch/state-line-city-in-median-household-income-by-household-size.jpeg)
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Good to know
Margin of Error
Data in this dataset are based on estimates and are therefore subject to sampling variability and a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for a research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for State Line City median household income. You can refer to the main dataset here.
Version 1 is the current version of the dataset. This collection MODFDS_SDV_GLB_L3 provides the level 3 standard deviation of the climatological monthly frequency of dust storms (FDS) over land from 175°W to 175°E and 80°S to 80°N at a spatial resolution of 0.1° x 0.1°. It is derived from the Level 2 Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue aerosol products, Collection 6.1, from Terra (MOD04_L2). The dataset is the standard deviation of the climatological monthly mean for each month over 2000 to 2022. The FDS is calculated as the number of days per month when the daily dust optical depth is greater than a threshold optical depth (e.g., 0.025), with two quality flags: the lowest (1) and highest (3). It is advised to use flag 1, which is of lower quality, over dust source regions, and flag 3 over remote areas or polluted regions. Eight thresholds (0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1, 2) are saved separately in eight files. If you have any questions, please read the README document first and post your question to the NASA Earthdata Forum (forum.earthdata.nasa.gov) or email the GES DISC Help Desk (gsfc-dl-help-disc@mail.nasa.gov).
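To illustrate the FDS definition quoted above (days per month with daily dust optical depth above a threshold), here is a minimal sketch; the variable names and the random placeholder data are assumptions for illustration, not part of the product.

```python
import numpy as np

def monthly_fds(daily_dod, threshold=0.025):
    """Count days in a month with daily dust optical depth above `threshold`.

    daily_dod : 1-D array of daily dust optical depth for one grid cell and
                one month (NaN where no valid retrieval).
    """
    valid = ~np.isnan(daily_dod)
    return int(np.sum(daily_dod[valid] > threshold))

# Hypothetical month of daily retrievals for a single 0.1-degree cell
rng = np.random.default_rng(0)
daily_dod = rng.gamma(shape=0.5, scale=0.05, size=31)
for thr in (0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1, 2):
    print(thr, monthly_fds(daily_dod, thr))
```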
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average best fitness and standard deviation (presented in brackets) for the p53 negative feedback loop model.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average best fitness and standard deviation (presented in brackets) for the arginine catabolism model.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Within-subject standard deviations (cell entries) calculated for 36 different scenarios in the simulation study. The values are obtained by multiplying fixed ratios of within-subject to between-subject standard deviation (rows) by different between-subject standard deviations (columns).
This dataset identifies all regions in which the full 95% confidence interval is greater than 0.5 mg/m3, combined across the months available in each hemisphere, for the blue mussel. The chlorophyll 2 data includes the mean chlorophyll 2 level per month, the standard deviation, and the number of observations used to calculate the mean. Based on these values, the 95% upper and lower confidence limits about the mean for each month have been generated.
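For orientation, confidence limits of the kind described above can be computed from a monthly mean, standard deviation, and observation count; the sketch below assumes the standard normal-approximation formula (mean ± 1.96 SD/√n) and placeholder numbers, since the dataset's exact procedure is not spelled out here.

```python
import numpy as np

def ci95(mean, sd, n_obs):
    """Approximate 95% confidence limits about a monthly mean (normal approximation)."""
    half_width = 1.96 * sd / np.sqrt(n_obs)
    return mean - half_width, mean + half_width

# Placeholder monthly values (mg/m3) for one region
mean, sd, n_obs = 0.8, 0.6, 20
lower, upper = ci95(mean, sd, n_obs)
print(lower, upper, lower > 0.5)  # True if the full 95% CI exceeds 0.5 mg/m3
```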
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The first step to develop a quantitative structure–activity relationship (QSAR) model is to identify a set of chemicals with known activities/properties, which can be either collected from the published studies or measured experimentally. A key challenge in this process is how to determine which chemicals are used to train a QSAR model, and, of those chemicals, which should be prioritized in experimental trials to ensure that the obtained models have large applicability domains (ADs). In this study, we employ uncertainty-based active learning (AC) to address this challenge. We use the Gaussian process (GP) to develop QSAR models for three public datasets, Koc, solubility, and k•OH, each with a number of chemicals represented by molecular descriptors, in which the GP can offer prediction uncertainty (by means of standard deviation) for the model’s prediction. The training chemicals of each dataset are selected in two different ways: (1) random splitting (RS) and (2) uncertainty-based AC. Uncertainty-based AC iteratively identifies chemicals with the highest uncertainty and selects them for model training. We demonstrate that the chemicals selected by AC are more diverse than those selected by RS and that AC-based QSAR models have better generalizability than those derived from RS. We then use these two types of models to predict the properties of chemicals in the REACH dataset (>300,000 chemicals) and assess their ADs using five different AD determination methods. We demonstrate that the AD of AC-based QSAR models for all AD methods is significantly larger than those of RS-based models (up to 24 times larger). This study provides a novel method to enlarge the AD of QSAR models, which can guide model development and improve the property prediction reliability for more REACH dataset chemicals while minimizing the development cost and time.
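A minimal sketch of the uncertainty-based selection loop described above, using scikit-learn's GaussianProcessRegressor on made-up descriptor arrays; the batch size, number of rounds, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(42)
X_pool = rng.normal(size=(500, 10))          # placeholder molecular descriptors
y_pool = X_pool[:, 0] - 0.5 * X_pool[:, 1]   # placeholder property values

# Start from a small random seed set, then iteratively add the most uncertain chemicals.
labeled = list(rng.choice(len(X_pool), size=20, replace=False))
for _ in range(5):                            # 5 acquisition rounds (arbitrary)
    gpr = GaussianProcessRegressor().fit(X_pool[labeled], y_pool[labeled])
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    _, std = gpr.predict(X_pool[unlabeled], return_std=True)
    # Pick the 10 pool chemicals with the largest predictive standard deviation.
    picks = [unlabeled[j] for j in np.argsort(std)[-10:]]
    labeled.extend(picks)

print(f"training set size after active learning: {len(labeled)}")
```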
Version 1 is the current version of the dataset. This collection MYDFDS_SDV_GLB_L3 provides the level 3 standard deviation of the climatological monthly frequency of dust storms (FDS) over land from 175°W to 175°E and 80°S to 80°N at a spatial resolution of 0.1° x 0.1°. It is derived from the Level 2 Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue aerosol products, Collection 6.1, from Aqua (MYD04_L2). The dataset is the standard deviation of the climatological monthly mean for each month over 2003 to 2022. The FDS is calculated as the number of days per month when the daily dust optical depth is greater than a threshold optical depth (e.g., 0.025), with two quality flags: the lowest (1) and highest (3). It is advised to use flag 1, which is of lower quality, over dust source regions, and flag 3 over remote areas or polluted regions. Eight thresholds (0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1, 2) are saved separately in eight files. If you have any questions, please read the README document first and post your question to the NASA Earthdata Forum (forum.earthdata.nasa.gov) or email the GES DISC Help Desk (gsfc-dl-help-disc@mail.nasa.gov).
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Sediment depth is given in mbsf. All 14C errors are reported at 1 sigma. Reservoir age uncertainty is estimated to be ±100 years. The reported uncertainty (anal. uncert.) and reservoir age uncertainty (res. corr. uncert.) were added in quadrature to obtain a total 14C uncertainty for each date (total uncert.). The 1 sigma total 14C uncertainty has been added and subtracted from the reservoir-corrected 14C ages to provide 'Cariaco+' and 'Cariaco-', respectively. To match a sample 14C date, first add and subtract 1 sigma uncertainties (including reservoir age uncertainty, if applicable) from the sample 14C age, providing 'Sample+' and 'Sample-', respectively. The limits of the 14C age match are given by the shallowest depth at which 'Cariaco+' is greater than 'Sample-', and the deepest depth at which 'Sample+' is greater than 'Cariaco-'. The depths can then be translated to the sediment reflectance record for precise palaeoclimatic context.
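The matching recipe above can be expressed in a few lines; the sketch below assumes simple NumPy arrays for the Cariaco record and a single sample date, with purely illustrative numbers.

```python
import numpy as np

# Illustrative Cariaco record: depth (mbsf), reservoir-corrected 14C age, total 1-sigma uncertainty
depth        = np.array([10.0, 10.5, 11.0, 11.5, 12.0])
cariaco_age  = np.array([11800, 12050, 12300, 12550, 12800])
cariaco_unc  = np.array([120, 130, 125, 140, 150])
cariaco_plus, cariaco_minus = cariaco_age + cariaco_unc, cariaco_age - cariaco_unc

# Illustrative sample date: analytical and reservoir-age uncertainties added in quadrature
sample_age, anal_unc, res_unc = 12400, 90, 100
sample_unc = np.hypot(anal_unc, res_unc)
sample_plus, sample_minus = sample_age + sample_unc, sample_age - sample_unc

# Depth limits of the match: shallowest depth where Cariaco+ > Sample-,
# and deepest depth where Sample+ > Cariaco-.
upper_limit = depth[np.argmax(cariaco_plus > sample_minus)]
lower_limit = depth[np.nonzero(sample_plus > cariaco_minus)[0][-1]]
print(upper_limit, lower_limit)  # 11.0 11.5 for these illustrative numbers
```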
To select sites that provide options to species in a changing climate, the project identifies those areas that have the highest landscape diversity as well as local connectedness. The landscape diversity metric derived for this project estimates the number of microclimates and climatic gradients available within a given area. It is measured by counting the variety of landforms, the elevation range, the diversity of soil types, and the density and configuration of wetlands present in a small area. The assumption is that microclimate diversity buffers against climatic effects, allowing species to persist in an area of high landscape diversity more effectively than in areas of lower diversity within the same setting. Local connectedness is defined as the number of barriers and the degree of fragmentation within an area. It is assumed that a highly permeable landscape promotes resilience by facilitating population movements and the reorganization of communities. The resilience score is the average of the landscape diversity and local connectedness scores, i.e. (landscape diversity + local connectedness) / 2. To ensure that each of the scores is given equal weight, they are transformed to standardized normalized scores (z-scores) so that each has a mean of zero and a standard deviation of one (this prevents the factor with a larger mean or variance from having more influence). The resilience raster scores were multiplied by 1000 in order to convert the decimal SD to integers (e.g. a value of 1000 = 1 SD away from the mean).
Rescaling the results for Massachusetts: The regional resilience layer that is available at the TNC data clearinghouse is scaled across regions that are far larger than just Massachusetts, or even New England. If we used just that data layer, the values that we are using to prioritize for land protection would be those areas that are most important across a region far greater than just Massachusetts. For this reason, the resilience values were rescaled within Massachusetts at two levels: statewide and within each ecoregion. The Massachusetts statewide and ecoregion scores were also transformed into z-scores, so that each has a mean of zero and a standard deviation of one. The statewide and ecoregional results were "integrated" into a combined layer in the following way: the ecoregional score was always used unless the state score was 1) greater than 500 and 2) greater than the ecoregional score. This allows the user to identify the most resilient sites within each of the very different ecoregions in Massachusetts, while being confident that any statewide high-priority areas are identified as well.
A note on the scores:
The Massachusetts re-scaled layer has values that vary from -8223 to 5058. These values are 1000 x the number of standard deviations away from the mean for the distribution of resilience values within Massachusetts. For example, a value of 2000 is 2 standard deviations away from the mean resilience value, which was assigned a value of zero. The TNC project symbolized their results using 3 categories of above-average resilience: 0.5 – 1 SD, 1 – 2 SD, and > 2 SD. Complete documentation for these data can be found in this report: Anderson, M.G., A. Barnett, M. Clark, C. Ferree, A. Olivero Sheldon, J. Prince. 2016. Resilient Sites for Terrestrial Conservation in Eastern North America. The Nature Conservancy, Eastern Conservation Science.
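A small sketch of the scoring arithmetic described above (z-scores scaled by 1000, equal-weight averaging, and the statewide/ecoregional integration rule); the arrays and function names are placeholders of our own, and a real workflow would operate on rasters rather than small arrays.

```python
import numpy as np

def z1000(values):
    """Z-score an array (mean 0, SD 1), then multiply by 1000 so that 1000 = 1 SD from the mean."""
    return 1000.0 * (values - values.mean()) / values.std()

# Placeholder landscape-diversity and local-connectedness values for five cells
diversity     = np.array([3.0, 7.0, 5.5, 9.0, 2.0])
connectedness = np.array([0.4, 0.9, 0.7, 0.8, 0.2])

# Resilience = average of the two equally weighted z-scores, in 1000-x-SD units
resilience = (z1000(diversity) + z1000(connectedness)) / 2.0

def integrate(state_score, ecoregion_score):
    """Use the ecoregional score unless the state score is > 500 AND exceeds the ecoregional score."""
    use_state = (state_score > 500) & (state_score > ecoregion_score)
    return np.where(use_state, state_score, ecoregion_score)

# Placeholder statewide vs. ecoregional rescaled scores for the same cells
state     = np.array([-300.0, 600.0, 450.0, 1200.0, 200.0])
ecoregion = np.array([ 100.0, 900.0, 700.0,  800.0, -50.0])
print(resilience)
print(integrate(state, ecoregion))  # [ 100.  900.  700. 1200.  -50.]
```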
The best fit values of the signal strength modifier for the different processes. The uncertainties, corresponding to one standard deviation...
Burial of organic carbon in marine sediments has a profound influence on marine biogeochemical cycles and provides a sink for greenhouse gases such as CO2 and CH4. However, tracing organic carbon from primary production sources as well as its transformations in the sediment record remains challenging. Here we examine a novel but growing tool for tracing the biosynthetic origin of amino acid carbon skeletons, based on naturally occurring stable carbon isotope patterns in individual amino acids (d13C_AA). We focus on two important aspects of d13C_AA utility in sedimentary paleoarchives: first, the fidelity of source-diagnostic algal d13C_AA patterns across different oceanographic growth conditions, and second, the ability of d13C_AA patterns to record the degree of subsequent microbial amino acid synthesis after sedimentary burial. Using the marine diatom Thalassiosira weissflogii, we tested under controlled conditions how d13C_AA patterns respond to changing environmental conditions, including light, salinity, temperature, and pH. Our findings show that while differing oceanic growth conditions can change macromolecular cellular composition, d13C_AA isotopic patterns remain largely invariant. These results emphasize that d13C_AA patterns should accurately record biosynthetic sources across widely disparate oceanographic conditions. We also explored how d13C_AA patterns change as a function of age, total nitrogen and organic carbon content after burial, in a marine sediment core from a coastal upwelling area off Peru. Based on the four most informative amino acids for distinguishing between diatom and bacterial sources (i.e., isoleucine, lysine, leucine and tyrosine), bacterially derived amino acids ranged from 10 to 15 % in the sediment layers from the last 5000 years, and up to 35 % during the last glacial period. The greater bacterial contributions in older sediments indicate that bacterial activity and amino acid resynthesis progressed, approximately as a function of sediment age, to a substantially larger degree than suggested by changes in total organic nitrogen and carbon content. It is uncertain whether archaea may have contributed to the sedimentary d13C_AA patterns we observe, and controlled culturing studies will be needed to investigate whether d13C_AA patterns can differentiate bacterial from archaeal sources. Further research efforts are also needed to understand how closely d13C_AA patterns derived from hydrolyzable amino acids represent total sedimentary proteinaceous material, and more broadly sedimentary organic nitrogen. Overall, however, both our culturing and sediment studies suggest that d13C_AA patterns in sediments will represent a novel proxy for understanding both primary production sources and the direct bacterial role in the ultimate preservation of sedimentary organic matter.
This dataset identifies all regions in which the full 95% confidence interval is greater than 1 mg/m3 for all 12 months. The chlorophyll 2 data includes the mean chlorophyll 2 level per month, the standard deviation, and the number of observations used to calculate the mean. Based on these values, the 95% upper and lower confidence limits about the mean for each month have been generated.
Ocean acidification is receiving increasing attention because of its potential to affect marine ecosystems. Rare CO2 vents offer a unique opportunity to investigate the response of benthic ecosystems to acidification. However, the benthic habitats investigated so far are mainly found in very shallow water (less than or equal to 5 m depth) and therefore are not representative of the broad range of continental shelf habitats. Here, we show that a decrease from pH 8.1 to 7.9 observed in a CO2 vent system at 40 m depth leads to a dramatic shift in highly diverse and structurally complex habitats. Forests of the kelp Laminaria rodriguezii, usually found at larger depths (greater than 65 m), replace the otherwise dominant habitats (i.e. coralligenous outcrops and rhodolith beds), which are mainly characterized by calcifying organisms. Only the aragonite-calcifying algae are able to survive in acidified waters, while high-magnesium-calcite organisms are almost completely absent. Although a long-term survey of the venting area would be necessary to fully understand the effects of the variability of pH and other carbonate parameters on the structure and functioning of the investigated mesophotic habitats, our results suggest that, in addition to significant changes at the species level, moderate ocean acidification may entail major shifts in the distribution and dominance of key benthic ecosystems at the regional scale, which could have broad ecological and socio-economic implications.
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Model overview
The indicative Australian Urban Development Risk Model is based on an assumption that recent-past trends in urban expansion (i.e. the transition from non-urban land use to urban land use) will continue linearly, and that parameters associated with past expansion are valid predictors of future expansion. The model is underpinned by a conceptual logic, derived within ERIN, based on known datasets and their reasonable association with patterns of urbanisation. Specifically, we predict a higher urban development risk for non-urban locations with:
- proximity to existing high urban development areas
- a high increasing trend in street address density
- land uses evidently prone to urbanisation, and
- attractive geomorphology.
The model is stratified by Australia’s 109 Significant Urban Areas and eight Greater Capital City Statistical Areas (ABS, 2016), and the model output is limited to these zones. Users should note there are likely to be areas of high urban development risk beyond these zones, as discussed in the limitations section below. The model draws on multiple datasets to derive values for the parameters above and then combines them into a single index, with a value for every cell in a 9-second grid (about 1.2 million 250 x 250 m cells). Derivation of parameter values is described below, followed by the approaches used to combine them into the index, classify values for mapping, and combine with non-index masks to make the model spatially complete.
Model parameters
The model is based on four parameters or predictor variables. For each parameter the field name (in the GIS data spatial attribute table) is provided in square brackets.
1. Proximity to existing high urban development areas [NEAR_DIST]
This parameter assumes continuation of 2006-2016 trends in urban development within a given Significant Urban Area or Greater Capital City Statistical Area. Locations close to an urban fringe which had expanded significantly during this period are at higher risk of urbanisation. Identifying past change from non-urban to urban involved comparing 2006 and 2016 ABS mesh blocks data. These datasets use a land use classification comprising 10 categories, which were reclassified into urban (commercial, education, medical, industrial, residential, transport) and non-urban (parkland, water, primary production, other). Centre points of all 2016 urban mesh blocks were compared with 2006 non-urban mesh blocks to identify new urban mesh blocks, i.e. those which had changed from non-urban to urban. These new urban mesh blocks were used to attribute individual cells in the 9-second grid. Distances were then calculated between each new urban cell and its nearest 2006 urban mesh block. Means (2006 dist. mean) and standard deviations for these distances were derived within each of Australia’s 109 Significant Urban Areas and eight Greater Capital City Statistical Areas (Areas). Larger mean values indicate greater urban expansion over the period for the Area in question. The standard deviations indicate how variable the expansion was within the Area and, as shown in the table below, were used to account for uncertainty. To extrapolate a risk rating, all 2016 non-urban cells were converted to points and analysed for their distance to the closest 2016 urban cells. This distance was then compared to the relevant Area mean (2006 dist. mean) as described above.
Where a 2016 non-urban cell is closer to an urban cell than the mean distance for conversion in the period 2006-2016, it is rated at higher risk, and particularly so for an Area where standard deviations are lower. The following table shows thresholds and parameter values.
Conditions for cell | Parameter value
2016 dist. ≥ [2006 dist. mean + standard deviation] | 0.1
2016 dist. ≥ [2006 dist. mean] but < [2006 dist. mean + standard deviation] | 0.4
2016 dist. ≥ [2006 dist. mean - standard deviation] but < [2006 dist. mean] | 0.7
2016 dist. < [2006 dist. mean - standard deviation] | 1
2. Increasing trend in street address density [setGnaf]
A density of 60 addresses per 250 m cell roughly equates to a ‘quarter acre block’ urban landscape. This parameter assumes the continuation of trends in street address densification apparent during 2009-2016. Lower density locations (less than 30/cell) are considered non-urban and not at risk. Moderate density locations demonstrating significant increase during 2009-2016 are considered at high risk of urbanisation. The Geocoded National Address File (GNAF) dataset was used to derive both the number of addresses/cell for February 2016, and the increase from May 2009 to February 2016. The following thresholds and parameter values were applied.
Conditions for cell | Parameter value
Low density, low densification areas (i.e. all other than the following) | 0
2016 GNAF density ≥ 30 addresses/cell and density change (2009-2016) ≥ 20 | 1
2016 GNAF density ≥ 40 addresses/cell and density change (2009-2016) ≥ 10 | 1
2016 GNAF density ≥ 60 addresses/cell | 1
3. Land uses evidently prone to urbanisation [setLandUse]
This parameter builds on the analysis used for Parameter 1. It assumes that past high likelihoods for urbanisation associated with certain land use types in different Areas will persist. The 2016 new urban cells from Parameter 1, above, were compared to a 9-second grid of the 2006 land use categories (derived from 2006 mesh blocks). For each land use, in each Area (i.e. a Significant Urban Area or Greater Capital City Statistical Area), the proportion urbanised was calculated as a number between zero and one. This number was directly applied as a parameter value for all 2016 non-urban cells. For example, a non-urban cell on a land use which had demonstrated a 60% chance of conversion to urban in the period 2006-2016 would be scored at 0.6 for this parameter.
4. Attractive geomorphology [setSlope]
This parameter assumes that past preferences for urbanisation of lower slope areas will continue, given lower costs associated with developing such sites. Slope was calculated, from Geoscience Australia’s 1-second digital elevation model, as the mean slope across each 9-second cell. The following thresholds and parameter values were applied, and are based on a limited research effort into accessible building codes for new dwellings.
Conditions for cell | Parameter value
Slope ≥ 20 | 0.1
Slope ≥ 12 and < 20 | 0.4
Slope ≥ 6 and < 12 | 0.7
Slope < 6 | 1
Derivation of the index
Parameters were combined with equal weight on the assumption that each makes an equal contribution to our capacity to predict future urban expansion. However, individual parameter values are included in the GIS dataset to allow weights to be adjusted to suit particular analyses.
Index Value = 0.25 x (proximity to high urban development) + 0.25 x (street address density) + 0.25 x (urbanising land use) + 0.25 x (attractive geomorphology)
Classification of index values
The index derives values for all cells between zero and one.
These were classified into five equal-sized categories from “very low” to “very high” risk.
Derivation of mapping units
Mapping units comprise the five risk categories, masked by non-index values for protected areas and existing-urban areas, as follows: Cells identified as protected, either through their inclusion in the Collaborative Australian Protected Areas Database or as ‘offset’ areas in an EPBC Strategic Assessment area, are ascribed a value of zero. Cells assessed as 2016 likely-urban for the NEAR_DIST parameter, attributed as ‘residential’ in the Mesh Block layer, and with greater than 60 GNAF addresses are predicted to be existing-urban areas and ascribed a value of 1.
Non-index mask / Index value | Risk category | Suggested RGB for map colours
Protected from development | | 38, 115, 0
0 - 0.2 | Very low | 56, 148, 0
0.2 - 0.4 | Low | 152, 230, 0
0.4 - 0.6 | Moderate | 255, 255, 0
0.6 - 0.8 | High | 255, 163, 43
0.8 - 1 | Very high | 255, 0, 0
Existing urban | | 239, 228, 190
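To make the combination and classification concrete, here is a compact sketch under the equal-weight scheme and thresholds quoted above; the function names and input values are ours and purely illustrative.

```python
def urban_risk_index(near_dist, gnaf, land_use, slope):
    """Equal-weight combination of the four parameter scores (each in [0, 1])."""
    return 0.25 * near_dist + 0.25 * gnaf + 0.25 * land_use + 0.25 * slope

def slope_score(mean_slope):
    """Parameter 4 (attractive geomorphology), using the thresholds quoted above."""
    if mean_slope >= 20:
        return 0.1
    if mean_slope >= 12:
        return 0.4
    if mean_slope >= 6:
        return 0.7
    return 1.0

def risk_category(index):
    """Classify an index value in [0, 1] into the five equal-width categories."""
    labels = ["Very low", "Low", "Moderate", "High", "Very high"]
    return labels[min(int(index // 0.2), 4)]

# Placeholder parameter scores for one 250 m cell
idx = urban_risk_index(near_dist=0.7, gnaf=1.0, land_use=0.6, slope=slope_score(4.0))
print(idx, risk_category(idx))  # 0.825 -> "Very high"
```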
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Reliable, predictable engineering of cellular behavior is one of the key goals of synthetic biology. As the field matures, biological engineers will become increasingly reliant on computer models that allow for the rapid exploration of design space prior to the more costly construction and characterization of candidate designs. The efficacy of such models, however, depends on the accuracy of their predictions, the precision of the measurements used to parametrize the models, and the tolerance of biological devices for imperfections in modeling and measurement. To better understand this relationship, we have derived an Engineering Error Inequality that provides a quantitative mathematical bound on the relationship between predictability of results, model accuracy, measurement precision, and device characteristics. We apply this relation to estimate measurement precision requirements for engineering genetic regulatory networks given current model and device characteristics, recommending a target standard deviation of 1.5-fold. We then compare these requirements with the results of an interlaboratory study to validate that these requirements can be met via flow cytometry with matched instrument channels and an independent calibrant. On the basis of these results, we recommend a set of best practices for quality control of flow cytometry data and discuss how these might be extended to other measurement modalities and applied to support further development of genetic regulatory network engineering.
This data set contains model output from the first multi-year run of the Bering Sea Ecosystem Study-Nutrient, Phytoplankton and Zooplankton (BEST-NPZ) model. The model was run from 1998 (spin-up year) through the end of 2009. CORE2 atmospheric forcing was used from 1998 through 2004, and Climate Forecast System Reanalysis (CFSR) forcing was used from 2004 through 2009. Mean and standard deviation for model output were determined for each of the sixteen Bering Sea Project regions. Model output are in three files, all Excel format. BEST-NPZ_40m_run1: Model state variables averaged over the upper 40 meters; BEST-NPZ_100m_run1: Model state variables averaged over the upper 100 meters; BEST-NPZ_production_run1: Production integrated over the upper 100 meters.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Stimulation and movement interact non-linearly across multiple scales—a point empirically and quantitatively available through multifractal structure. Multifractal movements might implicate multifractal stimulation. Previous correlational modeling of accelerometry-measured torso movements during blindwalking indicated that multifractality in movement predicts distance-perceptual judgments. We now experimentally tested whether multifractal stimulation interacted with multifractal torso movement to support distance perception. We edited music-like sounds to manipulate multifractal non-linearity in auditory stimulation delivered through headphones to participants perceiving distance by blindwalking—with eyes closed both during Lap-1 distance instruction and Lap-2 distance replication. Stimulation non-linearity significantly interacted with torso movements to predict distance-replication judgments. Low stimulation non-linearity showed no effect. Medium stimulation non-linearity interacted with Lap-2 torso non-linearity to produce over-/underestimation of shorter/longer distances. With higher auditory non-linearity, Lap-2 distance replication enlisted longer time scales, drawing on prior torso non-linearity (i.e. from Lap 1) and producing overestimation in proportion to a greater standard deviation of torso movements. Distance perception by blindwalking appears more accurate under medium rather than excessive non-linearity, resonating with previous evidence that walking engenders less multifractality than other movements. Movement-dependence of stimulation is also unsurprising to ecological-psychology wisdom. However, to our knowledge, the present results constitute an unprecedented step toward extending movement-dependence to multifractal-geometrical approaches to perception.
In the near future, the marine environment is likely to be subjected to simultaneous increases in temperature and decreased pH. The potential effects of these changes on intertidal, meiofaunal assemblages were investigated using a mesocosm experiment. Artificial Substrate Units containing meiofauna from the extreme low intertidal zone were exposed for 60 days to eight experimental treatments (four replicates for each treatment) comprising four pH levels: 8.0 (ambient control), 7.7 & 7.3 (predicted changes associated with ocean acidification), and 6.7 (CO2 point-source leakage from geological storage), crossed with two temperatures: 12 °C (ambient control) and 16 °C (predicted). Community structure, measured using major meiofauna taxa was significantly affected by pH and temperature. Copepods and copepodites showed the greatest decline in abundance in response to low pH and elevated temperature. Nematodes increased in abundance in response to low pH and temperature rise, possibly caused by decreased predation and competition for food owing to the declining macrofauna density. Nematode species composition changed significantly between the different treatments, and was affected by both seawater acidification and warming. Estimated nematode species diversity, species evenness, and the maturity index, were substantially lower at 16 °C, whereas trophic diversity was slightly higher at 16 °C except at pH 6.7. This study has demonstrated that the combination of elevated levels of CO2 and ocean warming may have substantial effects on structural and functional characteristics of meiofaunal and nematode communities, and that single stressor experiments are unlikely to encompass the complexity of abiotic and biotic interactions. At the same time, ecological interactions may lead to complex community responses to pH and temperature changes in the interstitial environment.