This dataset consists of mathematical question and answer pairs covering a range of question types at roughly school-level difficulty. It is designed to test the mathematical learning and algebraic reasoning skills of learning models.
## Example questions
Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.
Answer: 4
Question: Calculate -841880142.544 + 411127.
Answer: -841469015.544
Question: Let x(g) = 9*g + 1. Let q(c) = 2*c + 1. Let f(i) = 3*i - 39. Let w(j) = q(x(j)). Calculate f(w(a)).
Answer: 54*a - 30
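The composition example above can be checked mechanically. Below is a small sanity check using sympy; it is not part of the dataset, just a verification of the listed answer.

```python
# Sanity check of the composition example above (not part of the dataset).
import sympy as sp

a = sp.symbols('a')
x = lambda g: 9*g + 1
q = lambda c: 2*c + 1
f = lambda i: 3*i - 39
w = lambda j: q(x(j))

print(sp.expand(f(w(a))))  # prints 54*a - 30, matching the listed answer
```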
It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length and answers to 30 characters. Note that the training data for each question type is split into "train-easy", "train-medium", and "train-hard", which allows training models via a curriculum. The data can also be mixed together uniformly from these training datasets to obtain the results reported in the paper. Categories:
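A minimal sketch of the uniform mixing described above follows. The file layout (one text file per module and split, with alternating question/answer lines) and the module file name are assumptions; adjust them to match the actual release.

```python
# Hedged sketch: uniformly mix the three difficulty splits of one module.
# Assumes one text file per split with alternating question/answer lines;
# paths and the module file name are placeholders.
import random

def load_pairs(path):
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    return list(zip(lines[0::2], lines[1::2]))  # (question, answer) pairs

module = "algebra__linear_2d.txt"  # hypothetical module file name
pairs = []
for split in ("train-easy", "train-medium", "train-hard"):
    pairs.extend(load_pairs(f"{split}/{module}"))
random.shuffle(pairs)  # uniform mixture across difficulties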
The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
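The statistical descriptors named above reduce to simple operations on the hourly stress time series. A hedged sketch follows; the file name, units, and critical-stress value are placeholders, not the USGS processing code.

```python
# Illustrative sketch, not the USGS processing code: summary statistics of
# an hourly bottom shear stress time series, plus a simple mobility estimate.
import numpy as np

stress = np.loadtxt("bottom_stress_hourly.txt")  # hypothetical input, in Pa

median_stress = np.median(stress)        # median descriptor in the database
p95_stress = np.percentile(stress, 95)   # 95th-percentile descriptor

tau_crit = 0.2  # Pa, placeholder critical stress from sediment texture data
mobility = np.mean(stress > tau_crit)    # fraction of time sediment is mobile
```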
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset is about books, filtered to the title "A gyrokinetic calculation of transmission & reflection of the fast wave in the ion cyclotron range of frequencies". It features 7 columns, including author, BNB id, book, book publisher, and ISBN. The preview is ordered by publication date (descending).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Example figure detailing the calculation of climate tracking metrics for the contemporary data set using forest inventory and analysis data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset contains the DFT calculations used in the paper "Anion-polarisation–directed short-range-order in antiperovskite Li2FeSO". It consists of three parts: cluster expansion training calculations, relaxation of large shells, and Wannierisation analysis of structures of interest. These calculations feed into an additional Python workflow, located on GitHub and associated with the DOI 10.5281/zenodo.7828910 (see data link below).
Studies utilizing Global Positioning System (GPS) telemetry rarely result in 100% fix success rates (FSR). Many assessments of wildlife resource use do not account for missing data, either assuming data loss is random or because of a lack of practical treatments for systematic data loss. Several studies have explored how the environment, technological features, and animal behavior influence rates of missing data in GPS telemetry, but previous spatially explicit models developed to correct for sampling bias have been specified to small study areas, to a small range of data loss, or to single species, limiting their general utility. Here we explore environmental effects on GPS fix acquisition rates across a wide range of environmental conditions and detection rates for bias correction of terrestrial GPS-derived, large mammal habitat use. We also evaluate patterns in missing data that relate to potential animal activities that change the orientation of the antennae, and characterize home-range probability of GPS detection for 4 focal species: cougars (Puma concolor), desert bighorn sheep (Ovis canadensis nelsoni), Rocky Mountain elk (Cervus elaphus ssp. nelsoni), and mule deer (Odocoileus hemionus).

Part 1, Positive Openness Raster (raster dataset): Openness is an angular measure of the relationship between surface relief and horizontal distance. For angles less than 90 degrees it is equivalent to the internal angle of a cone with its apex at a DEM location, constrained by neighboring elevations within a specified radial distance. A 480 m search radius was used for this calculation of positive openness. Openness incorporates the terrain line-of-sight, or viewshed, concept and is calculated from multiple zenith and nadir angles, here along eight azimuths. Positive openness measures openness above the surface, with high values for convex forms and low values for concave forms (Yokoyama et al. 2002). We calculated positive openness using a custom Python script, following the methods of Yokoyama et al. (2002), with a USGS National Elevation Dataset as input.

Part 2, Northern Arizona GPS Test Collar (csv): Bias correction in GPS telemetry datasets requires a strong understanding of the mechanisms that result in missing data. We tested wildlife GPS collars in a variety of environmental conditions to derive a predictive model of fix acquisition. We found terrain exposure and tall overstory vegetation to be the primary environmental features that affect GPS performance. Model evaluation showed a strong correlation (0.924) between observed and predicted fix success rates (FSR) and little bias in predictions. The model's predictive ability was evaluated using two independent datasets from stationary test collars of different makes/models and fix interval programming, placed at different study sites. No statistically significant differences (95% CI) were found between predicted and observed FSRs, suggesting that changes in technological factors have minor influence on the model's ability to predict FSR in new study areas in the southwestern US. The model training data are provided here as fix attempts by hour. This table can be linked with the site location shapefile using the site field.

Part 3, Probability Raster (raster dataset): This raster represents the probability of GPS fix acquisition predicted by the model described in Part 2. We evaluated GPS telemetry datasets by comparing the mean probability of a successful GPS fix across study animals' home ranges to the observed FSR of GPS-downloaded collars deployed on cougars, desert bighorn sheep, Rocky Mountain elk, and mule deer. Comparing the mean probability of acquisition within study animals' home ranges with the observed FSRs of GPS-downloaded collars resulted in an approximately 1:1 linear relationship (r-sq = 0.68).

Part 4, GPS Test Collar Sites (shapefile): The locations of the stationary test collars used to derive and evaluate the predictive model of fix acquisition described in Part 2.

Part 5, Cougar Home Ranges (shapefile): Cougar home ranges were calculated to compare the mean probability of GPS fix acquisition across the home range to the actual fix success rate (FSR) of the collar, as a means of evaluating whether characteristics of an animal's home range affect observed FSR. We estimated home ranges using the Local Convex Hull (LoCoH) method with the 90th isopleth. Only data obtained by direct GPS download of retrieved units were used; satellite-delivered data were omitted from the analysis for animals whose collars were lost or damaged, because satellite delivery tends to lose an additional ~10% of data. Comparisons with home-range mean probability of fix were also used as a reference for assessing whether the frequency with which animals use areas of low GPS acquisition rates plays a role in observed FSRs.

Part 6, Cougar Fix Success Rate by Hour (csv): Cougar GPS collar fix success varied by hour of day, suggesting that circadian rhythms, with bouts of rest during daylight hours, may change the orientation of the GPS receiver and affect the ability to acquire fixes. Raw data of overall fix success rates (FSR) and FSR by hour were used to predict relative reductions in FSR. The data include only direct GPS download datasets; satellite-delivered data were omitted for animals whose collars were lost or damaged, because satellite delivery tends to lose approximately an additional 10% of data.

Part 7, Openness Python Script version 2.0: This Python script was used to calculate positive openness from a 30 m digital elevation model for a large geographic area in Arizona, California, Nevada, and Utah, in support of the study described above exploring environmental effects on GPS fix acquisition rates for bias correction of terrestrial GPS-derived, large mammal habitat use.
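For reference, a simplified sketch of the positive-openness computation described in Parts 1 and 7 (after Yokoyama et al. 2002) is below. This is not the authors' script: it assumes a projected DEM with square cells, omits the latitude-dependent spacing correction, and does not handle edge cells specially.

```python
# Simplified positive openness (after Yokoyama et al. 2002), illustrative only.
# Assumes a square-cell DEM in metres; edge cells are not specially handled.
import numpy as np

def positive_openness(dem, cellsize, radius):
    n_cells = int(radius // cellsize)
    azimuths = [(0, 1), (1, 1), (1, 0), (1, -1),
                (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    rows, cols = dem.shape
    out = np.full(dem.shape, np.nan)
    for r in range(rows):
        for c in range(cols):
            zeniths = []
            for dr, dc in azimuths:
                max_angle = -np.pi / 2
                for k in range(1, n_cells + 1):
                    rr, cc = r + k * dr, c + k * dc
                    if not (0 <= rr < rows and 0 <= cc < cols):
                        break
                    dist = k * cellsize * np.hypot(dr, dc)
                    angle = np.arctan2(dem[rr, cc] - dem[r, c], dist)
                    max_angle = max(max_angle, angle)
                # zenith angle along this azimuth; large values = open terrain
                zeniths.append(np.pi / 2 - max_angle)
            out[r, c] = np.degrees(np.mean(zeniths))  # mean over 8 azimuths
    return out
```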
https://object-store.os-api.cci2.ecmwf.int:443/cci2-preprod-catalogue/licences/creative-commons-attribution-4-0-international-public-licence/creative-commons-attribution-4-0-international-public-licence_78edae52daa6e91c3370229e180badad7d6e8e5e440957e4417cf288b6556922.pdf
ERA5–Drought is a global reconstruction of drought indices from 1940 to present. The dataset comprises two standardised drought indices: the Standardised Precipitation Index (SPI) and the Standardised Precipitation-Evapotranspiration Index (SPEI). The SPI measures the precipitation deficit accumulated over the preceding months and evaluates the deficit with respect to a reference period. The SPEI is an extension of the SPI that incorporates potential evapotranspiration to capture the impact of temperature on drought. SPI and SPEI values are in units of standard deviation from the standardised mean, i.e., negative values indicate drier-than-usual periods while positive values correspond to wetter-than-usual periods. Both indices can be used to identify the onset and end of drought events as well as their severity.

In ERA5–Drought, SPI and SPEI are calculated using precipitation and potential evapotranspiration from the fifth generation of the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalyses (ERA5). ERA5 combines model data with observations from across the world to provide a globally complete and consistent description of the atmosphere. Drought indices are calculated for a range of accumulation windows (1/3/6/12/24/36/48 months) using the 1991–2020 reference period. All data are regridded to a regular 0.25-degree grid, making the dataset suitable for many common applications. SPI and SPEI are calculated using both the ERA5 reanalysis (a single realisation from the monthly means of daily means (moda) stream) and the ensemble of the reanalysis (10 realisations from the monthly means of daily means for ensemble members (edmo) stream), enabling uncertainty assessment of drought occurrence and intensity. The quality of the derived indices is evaluated using significance testing.

The dataset currently covers 1940 to near-real time and is updated monthly. The consolidated data set is updated 2-3 months behind real time, while the intermediate data set is updated with a 1-month delay. New versions of the dataset are published as settings, such as the reference period, are updated or bug fixes are applied. Bug fixes will be released as minor revisions (e.g. v1.1), while changes to the reference period will be released as major revisions (e.g. v2.0). Bug fixes will be published to the Known Issues area on the Documentation tab. A more detailed description of the ERA5–Drought dataset and comparisons to other drought indices can be found in the associated dataset paper (see Documentation tab). Information on access and usage examples, e.g. how to calculate the area in drought, is provided in these guidelines. The dataset is produced by ECMWF.
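As a rough illustration of how a standardised index like the SPI is constructed (fit a distribution to accumulations over the reference period, then map the fitted CDF onto a standard normal), a minimal sketch follows. The production ERA5–Drought code fits per calendar month and handles zero-precipitation cases, both of which are omitted here.

```python
# Minimal SPI sketch, not the ERA5-Drought production code. Fits a gamma
# distribution to accumulated precipitation over a reference period and maps
# its CDF onto a standard normal (values = std. deviations from the mean).
import numpy as np
from scipy import stats

def spi(precip_monthly, window=3, ref_slice=slice(None)):
    # accumulate over the preceding `window` months
    acc = np.convolve(precip_monthly, np.ones(window), mode="valid")
    ref = acc[ref_slice]                              # reference period values
    shape, loc, scale = stats.gamma.fit(ref, floc=0)  # fit on reference period
    cdf = stats.gamma.cdf(acc, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)  # negative = drier than usual, positive = wetter
```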
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
In this report, we describe the development and validation of ABCG2, a new charge model with milestone free energy accuracy that allows instantaneous atomic charge assignment for arbitrary organic molecules. In combination with the second-generation general AMBER force field (GAFF2), ABCG2 led to a root-mean-square error (RMSE) of 0.99 kcal/mol on the hydration free energy calculation of all 642 solutes in the FreeSolv database, for the first time meeting the chemical accuracy threshold through physics-based molecular simulation against this gold-standard data set. Against the Minnesota Solvation Database, the solvation free energy calculation on 2068 pairs of a range of organic solutes in diverse solvents led to an RMSE of 0.89 kcal/mol. The 1913 data points of transfer free energies from aqueous solution to organic solvents gave an RMSE of 0.85 kcal/mol, corresponding to 0.63 log units for logP. The benchmark on densities of neat liquids for 1839 organic molecules and heats of vaporization of 874 organic liquids achieved performance comparable to the default restrained electrostatic potential (RESP) charge method of GAFF2. The fluctuations of assigned partial atomic charges over different input conformations from ABCG2 are demonstrated to be much smaller than those of RESP, based on statistics over 96 real drug molecules. The validation results demonstrate not only the accuracy but also the transferability and generality of the GAFF2/ABCG2 combination.
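For orientation, the headline numbers above are root-mean-square errors in kcal/mol, and the quoted logP figure is the transfer free energy divided by 2.303RT. A minimal sketch with hypothetical arrays:

```python
# Minimal sketch: RMSE of calculated vs experimental free energies, and the
# kcal/mol -> log-unit conversion behind the quoted logP figure.
import numpy as np

def rmse(calc, expt):
    return np.sqrt(np.mean((np.asarray(calc) - np.asarray(expt)) ** 2))

RT = 0.0019872 * 298.15     # kcal/mol at 298.15 K
print(0.85 / (2.303 * RT))  # ~0.62 log units, consistent with the stated 0.63
```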
https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/licence-to-use-copernicus-products/licence-to-use-copernicus-products_b4b9451f54cffa16ecef5c912c9cebd6979925a956e3fa677976e0cf198c2c18.pdf
ERA5 is the fifth generation ECMWF reanalysis for the global climate and weather for the past 8 decades. Data is available from 1940 onwards. ERA5 replaces the ERA-Interim reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. This principle, called data assimilation, is based on the method used by numerical weather prediction centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way, but at reduced resolution to allow for the provision of a dataset spanning back several decades. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations, and when going further back in time, to allow for the ingestion of improved versions of the original observations, which all benefit the quality of the reanalysis product. This catalogue entry provides post-processed ERA5 hourly single-level data aggregated to daily time steps. In addition to the data selection options found on the hourly page, the following options can be selected for the daily statistic calculation:
- The daily aggregation statistic (daily mean, daily max, daily min, daily sum*)
- The sub-daily frequency sampling of the original data (1 hour, 3 hours, 6 hours)
- The option to shift to any local time zone in UTC (no shift means the statistic is computed from UTC+00:00)
*The daily sum is only available for the accumulated variables (see ERA5 documentation for more details). Users should be aware that the daily aggregation is calculated during the retrieval process and is not part of a permanently archived dataset. For more details on how the daily statistics are calculated, including demonstrative code, please see the documentation. For more details on the hourly data used to calculate the daily statistics, please refer to the ERA5 hourly single-level data catalogue entry and the documentation found therein.
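A hedged sketch of the daily-statistic logic follows (the real computation happens server-side during retrieval, as noted above). It aggregates hourly data to daily values, optionally shifting to a local time zone first; the file and variable names are placeholders.

```python
# Hedged sketch, not the CDS retrieval code: aggregate hourly ERA5 data to
# daily statistics, optionally shifting to a local time zone first.
import numpy as np
import xarray as xr

ds = xr.open_dataset("era5_hourly_t2m.nc")  # hypothetical hourly file
utc_offset_hours = 0                        # 0 = statistics computed from UTC+00:00

shifted = ds.assign_coords(time=ds.time + np.timedelta64(utc_offset_hours, "h"))
daily_mean = shifted["t2m"].resample(time="1D").mean()
daily_max = shifted["t2m"].resample(time="1D").max()
```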
https://doi.org/10.5061/dryad.j3tx95xfb
This is a global database which comprises data on avian haemosporidian parasites from across the world. For each parasite lineage, we computed five metrics: phylogenetic host-range, environmental range, geographical range, and their mean local and total number of observations in the database.
The data that support the findings of this study are openly available in MalAvi at http://130.235.244.92/Malavi/ (Bensch et al. 2009).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement. This dataset contains the hydraulic conductivity and transmissivity data for the Queensland part of the Clarence-Moreton Basin, organized by geological formation. The data were sourced from the Queensland state groundwater databases. Most records in the pumping test database do not have a hydraulic conductivity entry. The hydraulic conductivity was therefore derived from the pumping test data by a two-step method to support the groundwater modelling: (i) transmissivity was estimated from the original test readings using the TGUESS approach; and (ii) hydraulic conductivity was calculated based on the estimated transmissivity and the screen information. The estimated hydraulic conductivity was mainly available for the alluvium and volcanics and varies over a range of six orders of magnitude.

Dataset History: This dataset was created through the following process: filter the data using the spatial extent of the Clarence-Moreton Basin; check data quality; calculate transmissivity using the TGUESS approach; compute hydraulic conductivity using the calculated transmissivity and the screen information in the state database; assign aquifers.

Dataset Citation: Bioregional Assessment Programme (2014) CLM - Hydraulic conductivity QLD. Bioregional Assessment Derived Dataset. Viewed 28 September 2017, http://data.bioregionalassessments.gov.au/dataset/1e181f78-d670-4e1b-ae74-67ee66e28e80.

Dataset Ancestors:
- Derived From Bioregional Assessment areas v02
- Derived From Natural Resource Management (NRM) Regions 2010
- Derived From Bioregional Assessment areas v03
- Derived From QLD Department of Natural Resources and Mines Groundwater Database Extract 20142808
- Derived From Bioregional Assessment areas v01
- Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
- Derived From GEODATA TOPO 250K Series 3
- Derived From NSW Catchment Management Authority Boundaries 20130917
- Derived From Geological Provinces - Full Extent
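Step (ii) of the two-step method described above is a simple ratio: hydraulic conductivity is transmissivity divided by the screened interval length (K = T / b). A hedged sketch follows; step (i), the TGUESS estimate of transmissivity from the original test readings, is treated as already done, and the column names are placeholders.

```python
# Hedged sketch of step (ii) only: hydraulic conductivity from estimated
# transmissivity and screen length (K = T / b). Column names are placeholders.
import pandas as pd

df = pd.read_csv("pumping_tests.csv")  # hypothetical extract of the QLD database
df["K_m_per_day"] = df["transmissivity_m2_per_day"] / df["screen_length_m"]
```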
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The elevation range measures the full range of elevations within a circular window and can be used as a representation of local relief.
The 300 m elevation range product was derived from the Smoothed Digital Elevation Model (DEM-S; ANZCW0703014016), which was derived from the 1 arc-second resolution SRTM data acquired by NASA in February 2000.
This collection includes data at 1 arc-second and 3 arc-second resolutions.
The 3 arc-second resolution product was generated from the 1 arc-second 300 m elevation range product and masked by the 3” water and ocean mask datasets.

Lineage: Source data
1. 1 arc-second SRTM-derived Smoothed Digital Elevation Model (DEM-S; ANZCW0703014016)
2. 1 arc-second 300 m elevation range product
3. 3 arc-second resolution SRTM water body and ocean mask datasets
300 m focal range elevation calculation: Elevation range is the full range of elevation within a circular window (Gallant and Wilson, 2000). Focal range using a 300 m window was calculated for each grid point from DEM-S using a 300 m kernel. The different spacing in the E-W and N-S directions due to the geographic projection of the data was accounted for by using the actual spacing in metres of the grid points, and recalculating the grid points included within the kernel extent for each 1° change in latitude.
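An illustrative sketch of the focal-range operation follows. It is not the production code: it assumes a projected DEM with square cells and omits the latitude-dependent kernel recalculation described above.

```python
# Illustrative focal-range sketch (not the production code): elevation range
# within a circular window on a projected DEM with square cells.
import numpy as np
from scipy import ndimage

def focal_range(dem, cellsize_m, window_m=300.0):
    n = int((window_m / 2) // cellsize_m)  # kernel radius in cells
    y, x = np.ogrid[-n:n + 1, -n:n + 1]
    footprint = (x * x + y * y) <= n * n   # circular kernel
    hi = ndimage.maximum_filter(dem, footprint=footprint)
    lo = ndimage.minimum_filter(dem, footprint=footprint)
    return hi - lo                          # local relief
```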
The 300 m focal range elevation calculation was performed on 1° x 1° tiles, with overlaps to ensure correct values at tile edges.
The 3 arc-second resolution version was generated from the 1 arc-second 300 m elevation range product. This was done by aggregating the 1” data over a 3 x 3 grid cell window and taking the maximum of the nine values that contributed to each 3” output grid cell. The 3” 300 m elevation range data were then masked using the SRTM 3” ocean and water body datasets.
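A hedged sketch of the 1” to 3” aggregation just described: take the maximum of the nine 1-arc-second values contributing to each 3-arc-second cell.

```python
# Hedged sketch of the 1" -> 3" aggregation: the maximum over each
# non-overlapping 3 x 3 block of 1-arc-second cells.
import numpy as np

def aggregate_3x3_max(grid):
    r, c = (s - s % 3 for s in grid.shape)  # trim to multiples of 3
    blocks = grid[:r, :c].reshape(r // 3, 3, c // 3, 3)
    return blocks.max(axis=(1, 3))
```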
References Gallant, J.C. and Wilson, J.P. (2000) Primary topographic attributes, chapter 3 in Wilson, J.P. and Gallant, J.C. Terrain Analysis: Principles and Applications, John Wiley and Sons, New York.
[Updated 28/01/25 to fix an issue in the ‘Lower’ values, which were not fully representing the range of uncertainty. ‘Median’ and ‘Higher’ values remain unchanged. The size of the change varies by grid cell and fixed period/global warming level, but the average difference between the 'lower' values before and after this update is 0.21°C.]

What does the data show?
This dataset shows the change in winter average temperature for a range of global warming levels, including the recent past (2001-2020), compared to the 1981-2000 baseline period. Here, winter is defined as December-January-February. Note, as the values in this dataset are averaged over a season they do not represent possible extreme conditions.

The dataset uses projections of daily average air temperature from UKCP18 which are averaged over the winter period to give values for the 1981-2000 baseline, the recent past (2001-2020) and global warming levels. The warming levels available are 1.5°C, 2.0°C, 2.5°C, 3.0°C and 4.0°C above the pre-industrial (1850-1900) period. The recent past value and global warming level values are stated as a change (in °C) relative to the 1981-2000 value. This enables users to compare winter average temperature trends for the different periods. In addition to the change values, values for the 1981-2000 baseline (corresponding to 0.51°C warming) and recent past (2001-2020, corresponding to 0.87°C warming) are also provided. This is summarised in the table below.

| Period | Description |
| --- | --- |
| 1981-2000 baseline | Average temperature (°C) for the period |
| 2001-2020 (recent past) | Average temperature (°C) for the period |
| 2001-2020 (recent past) change | Temperature change (°C) relative to 1981-2000 |
| 1.5°C global warming level change | Temperature change (°C) relative to 1981-2000 |
| 2°C global warming level change | Temperature change (°C) relative to 1981-2000 |
| 2.5°C global warming level change | Temperature change (°C) relative to 1981-2000 |
| 3°C global warming level change | Temperature change (°C) relative to 1981-2000 |
| 4°C global warming level change | Temperature change (°C) relative to 1981-2000 |

What is a global warming level?
The Winter Average Temperature Change is calculated from the UKCP18 regional climate projections using the high emissions scenario (RCP 8.5) where greenhouse gas emissions continue to grow. Instead of considering future climate change during specific time periods (e.g. decades) for this scenario, the dataset is calculated at various levels of global warming relative to the pre-industrial (1850-1900) period. The world has already warmed by around 1.1°C (between 1850-1900 and 2011-2020), whilst this dataset allows for the exploration of greater levels of warming.

The global warming levels available in this dataset are 1.5°C, 2°C, 2.5°C, 3°C and 4°C. The data at each warming level were calculated using a 21 year period. These 21 year periods are calculated by taking 10 years either side of the first year at which the global warming level is reached. This time will be different for different model ensemble members. To calculate the value for the Winter Average Temperature Change, an average is taken across the 21 year period.

We cannot provide a precise likelihood for particular emission scenarios being followed in the real world future. However, we do note that RCP8.5 corresponds to emissions considerably above those expected with current international policy agreements.
The results are also expressed for several global warming levels because we do not yet know which level will be reached in the real climate: it will depend on future greenhouse gas emission choices and the sensitivity of the climate system, which is uncertain. Estimates based on the assumption of current international agreements on greenhouse gas emissions suggest a median warming level in the region of 2.4-2.8°C, but it could be either higher or lower than this.

What are the naming conventions and how do I explore the data?
These data contain a field for each warming level and the 1981-2000 baseline. They are named 'tas winter change' (change in air 'temperature at surface'), the warming level or baseline, and 'upper', 'median' or 'lower' as per the description below, e.g. 'tas winter change 2.0 median' is the median value for winter for the 2.0°C warming level. Decimal points are included in field aliases but not in field names, e.g. 'tas winter change 2.0 median' is named 'tas_winter_change_20_median'. To understand how to explore the data, refer to the New Users ESRI Storymap. Please note, if viewing in ArcGIS Map Viewer, the map will default to ‘tas winter change 2.0°C median’ values.

What do the 'median', 'upper', and 'lower' values mean?
Climate models are numerical representations of the climate system. To capture uncertainty in projections for the future, an ensemble, or group, of climate models are run. Each ensemble member has slightly different starting conditions or model set-ups. Considering all of the model outcomes gives users a range of plausible conditions which could occur in the future. For this dataset, the model projections consist of 12 separate ensemble members. To select which ensemble members to use, the Winter Average Temperature Change was calculated for each ensemble member and they were then ranked in order from lowest to highest for each location. The ‘lower’ fields are the second lowest ranked ensemble member, the ‘upper’ fields are the second highest ranked ensemble member, and the ‘median’ field is the central value of the ensemble. This gives a median value and a spread of the ensemble members indicating the range of possible outcomes in the projections. This spread of outputs can be used to infer the uncertainty in the projections: the larger the difference between the lower and upper fields, the greater the uncertainty. ‘Lower’, ‘median’ and ‘upper’ values are also given for the baseline period, as these values also come from the model that was used to produce the projections. This allows a fair comparison between the model projections and the recent past.

Useful links
For further information on the UK Climate Projections (UKCP). Further information on understanding climate data within the Met Office Climate Data Portal.
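A hedged sketch of the 'lower'/'median'/'upper' selection described above: rank the 12 ensemble members at each grid cell, then take the second lowest, the central value, and the second highest.

```python
# Hedged sketch of the ensemble selection described above (placeholder data).
import numpy as np

members = np.random.rand(12, 100, 100)  # placeholder: 12 members on a grid
ranked = np.sort(members, axis=0)       # rank per grid cell, lowest to highest
lower = ranked[1]                       # second lowest of 12
upper = ranked[-2]                      # second highest of 12
median = np.median(members, axis=0)     # central value of the ensemble
```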
GLAH05 Level-1B waveform parameterization data include output parameters from the waveform characterization procedure and other parameters required to calculate surface slope and relief characteristics. GLAH05 contains parameterizations of both the transmitted and received pulses and other characteristics from which elevation and footprint-scale roughness and slope are calculated. The received pulse characterization uses two implementations of the retracking algorithms: one tuned for ice sheets, called the standard parameterization, used to calculate surface elevation for ice sheets, oceans, and sea ice; and another for land (the alternative parameterization). Each data granule has an associated browse product.
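As a rough illustration of waveform parameterization, the sketch below fits a single Gaussian to a synthetic received pulse. This is not the operational GLAH05 procedure, which fits multiple Gaussians plus a noise floor; it only shows the basic building block (the centroid relates to range/elevation, the width to footprint-scale roughness and slope).

```python
# Hedged sketch: fit one Gaussian to a synthetic pulse (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma, floor):
    return floor + amp * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

t = np.arange(200.0)  # sample bins
waveform = gaussian(t, 1.0, 90.0, 6.0, 0.05) + 0.01 * np.random.randn(t.size)
popt, _ = curve_fit(gaussian, t, waveform, p0=[1.0, 100.0, 5.0, 0.0])
amp, mu, sigma, floor = popt  # mu -> range/elevation; sigma -> roughness/slope
```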
Raw data and code for the rate of adaptation analyses:
- dataall.csv: raw dataset for the rate of adaptation calculations (Figure 1) and related statistics.
- Competition Analysis.R: R code to analyze the raw rate of adaptation data.
- datacount.csv: raw data to calculate effective population sizes.
- Cell Count Ne.R: R code used to analyze effective population sizes (Figure 2).
- what is h.R: R code to determine our best estimate of the dominance coefficient in each environment and to produce figures 3, S4 and S5. Note: the competition and effective population size R code must be run first in the same session.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This index enables users to identify the extent of the relationship grids provided on LDS, which are used to convert heights provided in terms of one of 13 historic local vertical datums to NZVD2016. The polygons comprising the index show the extent of the conversion grids. Users can view the following polygon attributes:
- Shape_VDR: Vertical Datum Relationship grid area
- LVD: Local Vertical Datum
- Control: Number of control marks used to compute the relationship grid
- Mean: Mean vertical datum relationship value at control points
- Std: Standard deviation of the vertical datum relationship value at control points
- Min: Minimum vertical datum relationship value at control points
- Max: Maximum vertical datum relationship value at control points
- Range: Range of the vertical datum relationship value at control points
- Ref: Reference control mark for the local vertical datum
- Ref_value: Vertical datum relationship value at the reference mark
- Grid: Formal grid id

Users should note that the values represented in this dataset have been calculated with outliers excluded. These same outliers were excluded during the computation of the relationship grids, but were included when calculating the 95% confidence intervals. More information on converting heights between vertical datums can be found on the LINZ website.
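A hedged sketch of applying such a relationship grid is below. The grid layout, axes, and sign convention (grid value assumed to be NZVD2016 height minus LVD height) are illustrative assumptions only; consult the LINZ documentation for the actual definitions.

```python
# Hedged sketch: convert a local-vertical-datum height to NZVD2016 by
# bilinear interpolation of a relationship grid. Grid axes, values, and the
# sign convention are placeholders for illustration, not the LINZ spec.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lats = np.linspace(-48.0, -34.0, 281)       # placeholder grid axes
lons = np.linspace(166.0, 179.0, 261)
offsets = np.zeros((lats.size, lons.size))  # placeholder grid values (m)

interp = RegularGridInterpolator((lats, lons), offsets)

def to_nzvd2016(h_lvd, lat, lon):
    return h_lvd + float(interp([[lat, lon]]))  # apply interpolated offset
```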
This calculator is a handy tool for interested parties to estimate two key life cycle metrics, the fossil energy consumption (Etot) and greenhouse gas emission (ghgtot) ratios, for geothermal electric power production. It is based solely on data developed by Argonne National Laboratory for DOE's Geothermal Technologies Office. The calculator permits the user to explore the impact of a range of key geothermal power production parameters, including plant capacity, lifetime, capacity factor, geothermal technology, well numbers and depths, field exploration, and others, on the two metrics just mentioned. Estimates of variations in the results are also available to the user.
The Magnetic Field Properties Calculator computes estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly component, easterly component, horizontal intensity, or total intensity) for a specific location, elevation, and date or range of dates, based on the current International Geomagnetic Reference Field (IGRF). The calculated result is a grid that contains the calculated component and the annual change of the component over the geographical area specified. Declination is calculated using the current World Magnetic Model (WMM) or International Geomagnetic Reference Field (IGRF) model. While results are typically accurate to 30 minutes of arc, users should be aware that several environmental factors can cause disturbances in the magnetic field.
- Soil_conditioning_data: data used to calculate soil conditioning effects.
- Biomass_growth_data: data used to analyze growth differences and calculate plant-soil feedback effects.
The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 0.04-0.06 degree (5-7 km, depending on latitude) resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.