License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Home-range estimation is an important application of animal tracking data that is frequently complicated by autocorrelation, sampling irregularity, and small effective sample sizes. We introduce a novel, optimal weighting method that accounts for temporal sampling bias in autocorrelated tracking data. This method corrects for irregular and missing data, such that oversampled times are downweighted and undersampled times are upweighted to minimize error in the home-range estimate. We also introduce computationally efficient algorithms that make this method feasible with large datasets. Generally speaking, there are three situations where weight optimization improves the accuracy of home-range estimates: with marine data, where the sampling schedule is highly irregular; with duty-cycled data, where the sampling schedule changes during the observation period; and when a small number of home-range crossings are observed, making the beginning and end times more independent and informative than the intermediate times. Using both simulated data and empirical examples including reef manta ray, Mongolian gazelle, and African buffalo, optimal weighting is shown to reduce the error and increase the spatial resolution of home-range estimates. With a conveniently packaged and computationally efficient software implementation, this method broadens the array of datasets with which accurate space-use assessments can be made.
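As a toy illustration of the downweighting idea in Python (this is not the paper's optimal-weighting algorithm, which derives weights by minimizing the estimator's error directly; here each fix is simply weighted by the time span it represents):

```python
import numpy as np

# Hypothetical irregular sampling: a dense burst followed by sparse fixes
rng = np.random.default_rng(0)
t = np.sort(np.concatenate([rng.uniform(0, 10, 200),     # oversampled burst
                            rng.uniform(10, 100, 90)]))  # sparse remainder

# Local spacing around each fix; oversampled times get small weights,
# undersampled times get large weights
gaps = np.gradient(t)
w = gaps / gaps.sum()

print(w[:5])   # small weights inside the burst
print(w[-5:])  # larger weights where sampling is sparse
```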
This point feature class contains 81,481 points arranged in a 270-meter spaced grid that covers the Spring Mountains and Sheep Range in Clark County, Nevada. Points are attributed with hydroclimate variables and ancillary data compiled to support efforts to characterize ecological zones.
The sample included in this dataset represents five children who participated in a number line intervention study. Originally six children were included in the study, but one of them fulfilled the criterion for exclusion after missing several consecutive sessions. Thus, their data is not included in the dataset.
All participants were attending Year 1 of primary school at an independent school in New South Wales, Australia. To be eligible to participate, children had to present with low mathematics achievement, performing at or below the 25th percentile in the Maths Problem Solving and/or Numerical Operations subtests from the Wechsler Individual Achievement Test III (WIAT III A & NZ, Wechsler, 2016). Children were excluded if, as reported by their parents, they had any other diagnosed disorder such as attention deficit hyperactivity disorder, autism spectrum disorder, intellectual disability, developmental language disorder, cerebral palsy, or uncorrected sensory disorders.
The study followed a multiple baseline case series design, with a baseline phase, a treatment phase, and a post-treatment phase. The baseline phase varied between two and three measurement points, the treatment phase varied between four and seven measurement points, and all participants had one post-treatment measurement point.
Measurement points were distributed across participants as follows:
Participant 1 – 3 baseline, 6 treatment, 1 post-treatment
Participant 3 – 2 baseline, 7 treatment, 1 post-treatment
Participant 5 – 2 baseline, 5 treatment, 1 post-treatment
Participant 6 – 3 baseline, 4 treatment, 1 post-treatment
Participant 7 – 2 baseline, 5 treatment, 1 post-treatment
In each session across all three phases children were assessed in their performance on a number line estimation task, a single-digit computation task, a multi-digit computation task, a dot comparison task and a number comparison task. Furthermore, during the treatment phase, all children completed the intervention task after these assessments. The order of the assessment tasks varied randomly between sessions.
Number Line Estimation. Children completed a computerised bounded number line task (0-100). The number line was presented in the middle of the screen, with the target number presented above the start point of the number line to avoid signalling the midpoint (Dackermann et al., 2018). Target numbers included two non-overlapping sets (trained and untrained) of 30 items each. Untrained items were assessed in all phases of the study. Trained items were assessed independently of the intervention during the baseline and post-treatment phases, and performance on the intervention was used to index performance on the trained set during the treatment phase. Within each set, numbers were equally distributed throughout the number range, with three items within each ten (0-10, 11-20, 21-30, etc.). Target numbers were presented in random order. Participants did not receive performance-based feedback. Accuracy is indexed by percent absolute error (PAE): [|estimated number - target number| / scale of the number line] x 100.
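As a worked example of the formula (a minimal sketch, not the study's analysis code):

```python
def percent_absolute_error(estimate: float, target: float, scale: float = 100.0) -> float:
    """PAE = |estimate - target| / number-line scale * 100."""
    return abs(estimate - target) / scale * 100.0

# Estimating 47 when the target is 52 on a 0-100 number line
print(percent_absolute_error(47, 52))  # 5.0
```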
Single-Digit Computation. The task included ten additions with single-digit addends (1-9) and single-digit results (2-9). The order was counterbalanced so that half of the additions presented the lowest addend first (e.g., 3 + 5) and half presented the highest addend first (e.g., 6 + 3). The task also included ten subtractions with single-digit minuends (3-9), subtrahends (1-6) and differences (1-6). The items were presented horizontally on the screen, accompanied by a sound, and participants were required to give a verbal response. Participants did not receive performance-based feedback. Performance on this task was indexed by item-based accuracy.
Multi-digit computational estimation. The task included eight additions and eight subtractions presented with double-digit numbers and three response options. None of the response options represented the correct result; participants were asked to select the option closest to the correct result. In half of the items the calculation involved two double-digit numbers, and in the other half one double-digit and one single-digit number. The distance between the correct response option and the exact result of the calculation was two for half of the trials and three for the other half. The calculation was presented vertically on the screen with the three options shown below it. The calculations remained on the screen until participants responded by clicking on one of the options. Participants did not receive performance-based feedback. Performance on this task was indexed by item-based accuracy.
Dot Comparison and Number Comparison. Both tasks included the same 20 items, which were presented twice, counterbalancing left and right presentation. Magnitudes to be compared were between 5 and 99, with four items for each of the following ratios: .91, .83, .77, .71, .67. The two quantities were presented horizontally side by side, and participants were instructed to press one of two keys (F or J), as quickly as possible, to indicate the larger one. Items were presented in random order and participants did not receive performance-based feedback. In the non-symbolic comparison task (dot comparison) the two sets of dots remained on the screen for a maximum of two seconds (to prevent counting). Overall area and convex hull for both sets of dots were kept constant following Guillaume et al. (2020). In the symbolic comparison task (Arabic numbers), the numbers remained on the screen until a response was given. Performance on both tasks was indexed by accuracy.
During the intervention sessions, participants estimated the position of 30 Arabic numbers on a 0-100 bounded number line. As a form of feedback, within each item, the participant’s estimate remained visible and the correct position of the target number appeared on the number line. When the estimate’s PAE was lower than 2.5, a message appeared on the screen that read “Excellent job”; when PAE was between 2.5 and 5 the message read “Well done, so close!”; and when PAE was higher than 5 the message read “Good try!” Numbers were presented in random order.
Age = age in ‘years, months’ at the start of the study
Sex = female/male/non-binary or third gender/prefer not to say (as reported by parents)
Math_Problem_Solving_raw = Raw score on the Math Problem Solving subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
Math_Problem_Solving_Percentile = Percentile equivalent on the Math Problem Solving subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
Num_Ops_Raw = Raw score on the Numerical Operations subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
Num_Ops_Percentile = Percentile equivalent on the Numerical Operations subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
The remaining variables refer to participants’ performance on the study tasks. Each variable name is composed of three parts. The first refers to the phase and session: for example, Base1 refers to the first measurement point of the baseline phase, Treat1 to the first measurement point of the treatment phase, and post1 to the first measurement point of the post-treatment phase.
The second part of the variable name refers to the task, as follows:
DC = dot comparison
SDC = single-digit computation
NLE_UT = number line estimation (untrained set)
NLE_T = number line estimation (trained set)
CE = multidigit computational estimation
NC = number comparison
The final part of the variable name refers to the type of measure being used (i.e., acc = total correct responses and pae = percent absolute error).
Thus, variable Base2_NC_acc corresponds to accuracy on the number comparison task during the second measurement point of the baseline phase and Treat3_NLE_UT_pae refers to the percent absolute error on the untrained set of the number line task during the third session of the Treatment phase.
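A small helper like the following (hypothetical, not part of the dataset) can split these variable names into their documented parts:

```python
import re

# Pattern built from the naming scheme described above:
# phase+session, task code, measure type
PATTERN = re.compile(
    r"^(Base|Treat|Post)(\d+)_(DC|SDC|NLE_UT|NLE_T|CE|NC)_(acc|pae)$",
    re.IGNORECASE,
)

def parse_variable(name: str) -> dict:
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"Unrecognised variable name: {name}")
    phase, session, task, measure = m.groups()
    return {"phase": phase, "session": int(session), "task": task, "measure": measure}

print(parse_variable("Base2_NC_acc"))
print(parse_variable("Treat3_NLE_UT_pae"))
```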
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
By ddrg (From Huggingface) [source]
Across its six columns, covering the formula pair (formula1, formula2), a binary label, and accompanying formula name IDs, the dataset provides all the information needed for comprehensive analysis and evaluation.
The train.csv file contains a subset of the dataset specifically curated for training purposes. It includes an extensive range of math formula pairs along with their corresponding labels and unique ID names. This allows researchers and data scientists to construct models that can predict whether two given formulas fall within the same category or not.
On the other hand, test.csv serves as an evaluation set. It consists of additional pairs of math formulas accompanied by their respective labels and unique IDs. By evaluating model performance on this test set after training it on train.csv data, researchers can assess how well their models generalize to unseen instances.
By leveraging this informative dataset, researchers can unlock new possibilities in mathematics-related fields, such as developing pattern recognition algorithms or enhancing educational tools that involve automatic identification and categorization based on mathematical formulas.
Introduction
Dataset Description
train.csv
The train.csv file contains a set of math formula pairs along with their corresponding labels and formula name IDs. It consists of the following columns:
- formula1: The first mathematical formula in the pair (text).
- formula2: The second mathematical formula in the pair (text).
- label: The classification label indicating whether the pair of formulas belong to the same category (binary). A label value of 1 indicates that both formulas belong to the same category, while a label value of 0 indicates different categories.

test.csv
The purpose of the test.csv file is to provide a set of formula pairs along with their labels and formula name IDs for testing and evaluation purposes. It has an identical structure to train.csv, containing the same formula1, formula2, and label columns.

Task
The main task using this dataset is binary classification, where your objective is to predict whether two mathematical formulas belong to the same category or not based on their textual representation. You can use various machine learning algorithms such as logistic regression, decision trees, random forests, or neural networks for training models on this dataset.
Exploring & Analyzing Data
Before building your model, it's crucial to explore and analyze your data. Here are some steps you can take:
- Load both CSV files (train.csv and test.csv) into your preferred data analysis framework or programming language (e.g., Python with libraries like pandas).
- Examine the dataset's structure, including the number of rows, columns, and data types.
- Check for missing values in the dataset and handle them accordingly.
- Visualize the distribution of labels to understand whether it is balanced or imbalanced.
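A minimal pandas sketch of these steps (assuming both CSV files are in the working directory):

```python
import pandas as pd

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

print(train.shape, test.shape)                       # rows and columns
print(train.dtypes)                                  # data types per column
print(train.isna().sum())                            # missing values per column
print(train["label"].value_counts(normalize=True))   # label balance
```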
Model Building
Once you have analyzed and preprocessed your dataset, you can start building your classification model using various machine learning algorithms:
- Split your train.csv data into training and validation sets for model evaluation during training.
- Choose a suitable classification algorithm (e.g., logistic regression, a random forest, or a neural network) and train it on the training split, as sketched below.
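One possible baseline for these steps, using character n-gram TF-IDF over the concatenated pair and logistic regression (the "[SEP]" joiner and all hyperparameters are illustrative choices, not part of the dataset):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

train = pd.read_csv("train.csv")

# Represent each pair as one string; a simple baseline representation
X = train["formula1"] + " [SEP] " + train["formula2"]
y = train["label"]
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_tr), y_tr)

pred = clf.predict(vec.transform(X_val))
print("validation accuracy:", accuracy_score(y_val, pred))
```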
- Math Formula Similarity: This dataset can be used to develop a model that classifies whether two mathematical formulas are similar or not. This can be useful in various applications such as plagiarism detection, identifying duplicate formulas in databases, or suggesting similar formulas based on user input.
- Formula Categorization: The dataset can be used to train a model that categorizes mathematical formulas into different classes or categories. For example, the model can classify formulas into algebraic expressions, trigonometric equations, calculus problems, or geometric theorems. This categorization can help organize and search through large collections of mathematical formulas.
- Formula Recommendation: Using this dataset, one could build a recommendation system that suggests related math formulas based on user input. By analyzing the similarities between different formula pairs and their corresponding labels, the system could provide recommendations for relevant mathematical concepts that users may need while solving problems or studying specific topics in mathematics.
This dataset consists of mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models.
## Example questions
Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.
Answer: 4
Question: Calculate -841880142.544 + 411127.
Answer: -841469015.544
Question: Let x(g) = 9*g + 1. Let q(c) = 2*c + 1. Let f(i) = 3*i - 39. Let w(j) = q(x(j)). Calculate f(w(a)).
Answer: 54*a - 30
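These answers can be spot-checked with SymPy; this is just a verification sketch, not part of the dataset:

```python
import sympy as sp

r, c, a = sp.symbols("r c a")

# Question 1: solve the linear system for r (and c)
print(sp.solve([sp.Eq(-42*r + 27*c, -1167), sp.Eq(130*r + 4*c, 372)], [r, c]))
# {r: 4, c: -37}

# Question 2: plain arithmetic
print(-841880142.544 + 411127)  # -841469015.544 (up to float rounding)

# Question 3: function composition
x = lambda g: 9*g + 1
q = lambda t: 2*t + 1
f = lambda i: 3*i - 39
w = lambda j: q(x(j))
print(sp.expand(f(w(a))))  # 54*a - 30
```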
It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length, and answers to 30 characters in length. Note the training data for each question type is split into "train-easy", "train-medium", and "train-hard". This allows training models via a curriculum. The data can also be mixed together uniformly from these training datasets to obtain the results reported in the paper. Categories:
GLAH05 Level-1B waveform parameterization data include output parameters from the waveform characterization procedure and other parameters required to calculate surface slope and relief characteristics. GLAH05 contains parameterizations of both the transmitted and received pulses and other characteristics from which elevation and footprint-scale roughness and slope are calculated. The received pulse characterization uses two implementations of the retracking algorithms: one tuned for ice sheets, called the standard parameterization, used to calculate surface elevation for ice sheets, oceans, and sea ice; and another for land (the alternative parameterization). Each data granule has an associated browse product.
Summary and methods used to calculate the physical characteristics used to compare the home range estimators.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
This derived dataset contains basic statistical products derived from the eReefs CSIRO hydrodynamic model v2.0 outputs at both 1 km and 4 km resolution, and v4.0 outputs at 4 km, for both daily and monthly aggregation periods. The statistics generated are the daily minimum, maximum, mean, and range. For monthly aggregations there are the monthly means of the daily minimum, maximum, and range, and the monthly minimum, maximum, and range. Statistics are calculated only for temperature and water elevation (eta).
These are generated by the AIMS eReefs Platform (https://ereefs.aims.gov.au/). These statistical products are derived from the original hourly model outputs available via the National Computing Infrastructure (NCI) (https://thredds.nci.org.au/thredds/catalogs/fx3/catalog.html).
The data is re-gridded from the original curvilinear grid used by the eReefs model onto a regular grid so the data files can be easily loaded into standard GIS software. These products are made available via a THREDDS server (https://thredds.ereefs.aims.gov.au/thredds/) in NetCDF format.
This data set contains two (2) products, based on the periods over which the statistics are determined: daily, and monthly.
Method:
Data files are processed in two stages. The daily files are calculated from the original hourly files, then the monthly files are calculated from the daily files. See Technical Guide to Derived Products from CSIRO eReefs Models for details on the regridding process.
Data Dictionary:
Daily statistics:
The following variables can be found in the Daily statistics product:
- temp_mean: mean temperature for each grid cell for the day.
- temp_min: minimum temperature for each grid cell for the day.
- temp_max: maximum temperature for each grid cell for the day.
- temp_range: difference between maximum and minimum temperatures for each grid cell for the day.
- eta_mean: mean surface elevation for each grid cell for the day.
- eta_min: minimum surface elevation for each grid cell for the day.
- eta_max: maximum surface elevation for each grid cell for the day.
- eta_range: difference between maximum and minimum surface elevation for each grid cell for the day.
Depths:
Depths at 1 km resolution: -2.35 m, -5.35 m, -18.0 m, -49.0 m
Depths at 4 km resolution: -1.5 m, -5.55 m, -17.75 m, -49.0 m
Monthly statistics:
The following variables can be found in the Monthly statistics product:
- temp_min_min: the minimum value of the "temp_min" variable from the Daily statistics product. This equates to the minimum temperature for each grid cell for the corresponding month.
- temp_min_mean: the mean value of the "temp_min" variable from the Daily statistics product. This equates to the mean minimum temperature for each grid cell for the corresponding month.
- temp_max_max: the maximum value of the "temp_max" variable from the Daily statistics product. This equates to the maximum temperature for each grid cell for the corresponding month.
- temp_max_mean: the mean value of the "temp_max" variable from the Daily statistics product. This equates to the mean maximum temperature for each grid cell for the corresponding month.
- temp_mean: the mean value of the "temp_mean" variable from the Daily statistics product. This equates to the mean temperature for each grid cell for the corresponding month.
- temp_range_mean: the mean value of the "temp_range" variable from the Daily statistics product. This equates to the mean range of temperatures for each grid cell for the corresponding month.
- eta_min_min: the minimum value of the "eta_min" variable from the Daily statistics product. This equates to the minimum surface elevation for each grid cell for the corresponding month.
- eta_min_mean: the mean value of the "eta_min" variable from the Daily statistics product. This equates to the mean minimum surface elevation for each grid cell for the corresponding month.
- eta_max_max: the maximum value of the "eta_max" variable from the Daily statistics product. This equates to the maximum surface elevation for each grid cell for the corresponding month.
- eta_max_mean: the mean value of the "eta_max" variable from the Daily statistics product. This equates to the mean maximum surface elevation for each grid cell for the corresponding month.
- eta_mean: the mean value of the "eta_mean" variable from the Daily statistics product. This equates to the mean surface elevation for each grid cell for the corresponding month.
- eta_range_mean: the mean value of the "eta_range" variable from the Daily statistics product. This equates to the mean range of surface elevations for each grid cell for the corresponding month.
Depths:
Depths at 1 km resolution: -2.35 m, -5.35 m, -18.0 m, -49.0 m
Depths at 4 km resolution: -1.5 m, -5.55 m, -17.75 m, -49.0 m
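The daily-from-hourly and monthly-from-daily aggregation described above can be sketched with xarray, assuming an hourly NetCDF input with a "time" coordinate and "temp"/"eta" variables as in the data dictionary (actual eReefs file and variable names may differ):

```python
import xarray as xr

ds = xr.open_dataset("ereefs_hourly.nc")  # hypothetical input file name

# Daily statistics from the hourly data
daily = xr.Dataset()
for var in ["temp", "eta"]:
    daily[f"{var}_mean"] = ds[var].resample(time="1D").mean()
    daily[f"{var}_min"] = ds[var].resample(time="1D").min()
    daily[f"{var}_max"] = ds[var].resample(time="1D").max()
    daily[f"{var}_range"] = daily[f"{var}_max"] - daily[f"{var}_min"]

# Monthly statistics derived from the daily product
monthly = xr.Dataset()
for var in ["temp", "eta"]:
    monthly[f"{var}_min_min"] = daily[f"{var}_min"].resample(time="1MS").min()
    monthly[f"{var}_min_mean"] = daily[f"{var}_min"].resample(time="1MS").mean()
    monthly[f"{var}_max_max"] = daily[f"{var}_max"].resample(time="1MS").max()
    monthly[f"{var}_max_mean"] = daily[f"{var}_max"].resample(time="1MS").mean()
    monthly[f"{var}_mean"] = daily[f"{var}_mean"].resample(time="1MS").mean()
    monthly[f"{var}_range_mean"] = daily[f"{var}_range"].resample(time="1MS").mean()
```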
What does this dataset show:
The temperature statistics show that inshore areas along the coast get significantly warmer in summer and cooler in winter than offshore areas. The daily temperature range is lower in winter, with most areas experiencing a 0.2-0.3 degree Celsius temperature change. In summer months the daily temperature range approximately doubles, with upwelling areas in the Capricorn Bunker group, off the outer edge of the Pompey sector of reefs, and on the east side of Torres Strait seeing daily temperature ranges between 0.7 and 1.2 degrees Celsius.
Limitations:
This dataset is based on spatial and temporal models and so is an estimate of the environmental conditions. It is not based on in-water measurements, and thus will have a spatially varying level of error in the modelled values. It is important to consider whether the model results are fit for the intended purpose.
Change Log:
2025-10-29: Updated the metadata title from 'eReefs AIMS-CSIRO Statistics of hydrodynamic model outputs' to 'Daily and monthly minimum, maximum and range of eReefs hydrodynamic model outputs - temperature, water elevation (AIMS, Source: CSIRO)'. Improved the introduction text. Corrected a deprecated link to NCI THREDDS. Added a description of what the dataset shows.
The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Dataset and codes for "Observation of Acceleration and Deceleration Periods at Pine Island Ice Shelf from 1997–2023".
The MATLAB codes and related datasets are used for generating the figures for the paper "Observation of Acceleration and Deceleration Periods at Pine Island Ice Shelf from 1997–2023".
Files and variables
File 1: Data_and_Code.zip
Directory: Main_function
Description: Includes MATLAB scripts and functions. Each script includes a description that guides the user on how to use it and how to find the dataset used for processing.
MATLAB Main Scripts: These cover the full workflow to process the data, output figures, and output videos.
Script_1_Ice_velocity_process_flow.m
Script_2_strain_rate_process_flow.m
Script_3_DROT_grounding_line_extraction.m
Script_4_Read_ICESat2_h5_files.m
Script_5_Extraction_results.m
MATLAB functions: Files containing the MATLAB functions that support the main scripts:
1_Ice_velocity_code: MATLAB functions for ice velocity post-processing, including outlier removal, filtering, correction for atmospheric and tidal effects, inverse-weighted averaging, and error estimation.
2_strain_rate: MATLAB functions for strain rate calculation.
3_DROT_extract_grounding_line_code: MATLAB functions that convert the range offset results output from GAMMA to differential vertical displacement and use the result to extract the grounding line.
4_Extract_data_from_2D_result: MATLAB functions used to extract profiles from 2D data.
5_NeRD_Damage_detection: Modified code from Izeboud et al. 2023. When applying this code please also cite Izeboud et al. 2023 (https://www.sciencedirect.com/science/article/pii/S0034425722004655).
6_Figure_plotting_code: MATLAB functions for the figures in the paper and supporting information.
Directory: data_and_result
Description: Includes directories that store the results output from MATLAB. Users only need to modify the paths in the MATLAB scripts to their own paths.
1_origin: Sample data ("PS-20180323-20180329", "PS-20180329-20180404", "PS-20180404-20180410") output from the GAMMA software in GeoTIFF format that can be used to calculate DROT and velocity. Includes displacement, theta, phi, and ccp.
2_maskccpN: Removes outliers (ccp < 0.05) and converts displacement to velocity (m/day).
3_rockpoint: Extracts velocities in non-moving regions.
4_constant_detrend: Removes orbit error.
5_Tidal_correction: Removes atmospheric and tidally induced error.
6_rockpoint: Extracts non-aggregated velocities in non-moving regions.
6_vx_vy_v: Transforms velocities from va/vr to vx/vy.
7_rockpoint: Extracts aggregated velocities in non-moving regions.
7_vx_vy_v_aggregate_and_error_estimate: Inverse-weighted average of three ice velocity maps and calculation of the error maps (illustrated in the sketch after this listing).
8_strain_rate: Strain rate calculated from the aggregated ice velocity.
9_compare: Stores the results before and after tidal correction and aggregation.
10_Block_result: Time series results extracted from the 2D data.
11_MALAB_output_png_result: Stores .png files and time series results.
12_DROT: Differential Range Offset Tracking results.
13_ICESat_2: ICESat-2 .h5 and .mat files can be put here (this directory only includes the samples from tracks 0965 and 1094).
14_MODIS_images: MODIS images can be stored here.
shp: grounding line, rock region, ice front, and other shape files.
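The aggregation step in 7_vx_vy_v_aggregate_and_error_estimate is implemented in MATLAB in the repository; as a language-neutral illustration of inverse-variance weighted averaging (a sketch, not the repository code), in Python:

```python
import numpy as np

def weighted_average(maps, errors):
    """Inverse-variance weighted average of velocity maps.

    maps, errors: lists of 2-D arrays (velocity and 1-sigma error per map).
    Returns the weighted mean map and its propagated error map.
    """
    v = np.stack(maps)
    w = 1.0 / np.stack(errors) ** 2              # inverse-variance weights
    v_avg = np.nansum(w * v, axis=0) / np.nansum(w, axis=0)
    v_err = np.sqrt(1.0 / np.nansum(w, axis=0))  # error of the weighted mean
    return v_avg, v_err
```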
File 2: PIG_front_1947_2023.zip
Includes ice front position shapefiles from 1947 to 2023, used for plotting Figure 1 in the paper.
File 3: PIG_DROT_GL_2016_2021.zip
Includes grounding line position shapefiles from 2016 to 2021, used for plotting Figure 1 in the paper.
Data was derived from the following sources:
These links can be found in the MATLAB scripts or in the paper's "Open Research" section.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The files with simulation results for the ECOC 2023 submission "Analysis of the Scalar and Vector Random Coupling Models For a Four Coupled-Core Fiber". The "4CCF_eigenvectorsPol" file is the Mathematica code for calculating the supermodes (eigenvectors of M(w)) and their propagation constants of a four-coupled-core fiber (4CCF). These results are uploaded to the Python notebook "4CCF_modelingECOC" in order to plot them and obtain Fig. 2 in the paper. "TransferMatrix" is the Python file with the functions used for modeling, simulation, and plotting. It is also loaded in the Python notebook "4CCF_modelingECOC", where all the calculations for the figures in the paper are presented.
! UPD 25.09.2023: There is an error in the birefringence calculation. It is in the function "CouplingCoefficients" in the "TransferMatrix" file. There, the variable "birefringence" has to be calculated according to formula (19) of [A. Ankiewicz, A. Snyder, and X.-H. Zheng, "Coupling between parallel optical fiber cores - critical examination", Journal of Lightwave Technology, vol. 4, no. 9, pp. 1317-1323, 1986]: (4*U**2*W*spec.k0(W)*spec.kn(2, W_)/(spec.k1(W)*V**4))*((spec.iv(1, W)/spec.k1(W))-(spec.iv(2, W)/spec.k0(W))). The correct formula gives almost the same result (the difference is ~10^-5), but one has to use the correct formula anyway.
! UPD 9.12.2023: I have noticed that in the published version of the code I forgot to change the wavelength range for the impulse response calculation, so instead of the nice shape shown in the paper you will see a resolution-limited shape. To fix this, just change the range of wavelengths: you can add "wl = [1545e-9, 1548e-9]" in the first cell after "Total power impulse response". P.S. In case of any questions or suggestions you are welcome to write me an email: ekader@chalmers.se
License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Aim: Despite the wide distribution of many parasites around the globe, the range of individual species varies significantly even among phylogenetically related taxa. Since parasites need suitable hosts to complete their development, parasite geographical and environmental ranges should be limited to communities where their hosts are found. Parasites may also suffer from a trade-off between being locally abundant or widely dispersed. We hypothesize that the geographical and environmental ranges of parasites are negatively associated with their host specificity and their local abundance.
Location: Worldwide
Time period: 2009 to 2021
Major taxa studied: Avian haemosporidian parasites
Methods: We tested these hypotheses using a global database which comprises data on avian haemosporidian parasites from across the world. For each parasite lineage, we computed five metrics: phylogenetic host range, environmental range, geographical range, and their mean local and total number of observations in the database. Phylogenetic generalized least squares models were run to evaluate the influence of phylogenetic host range and total and local abundances on geographical and environmental range. In addition, we analysed separately the two regions with the largest amount of available data: Europe and South America.
Results: We evaluated 401 lineages from 757 localities and observed that generalism (i.e. phylogenetic host range) associates positively with both the parasites' geographical and environmental ranges at the global and European scales. For South America, generalism only associates with geographical range. Finally, mean local abundance (mean local number of parasite occurrences) was negatively related to geographical and environmental range. This pattern was detected worldwide and in South America, but not in Europe.
Main Conclusions: We demonstrate that parasite specificity is linked to both their geographical and environmental ranges. The fact that locally abundant parasites present restricted ranges indicates a trade-off between these two traits. This trade-off, however, only becomes evident when sufficiently heterogeneous host communities are considered.
Methods
We compiled data on haemosporidian lineages from the MalAvi database (http://130.235.244.92/Malavi/, Bensch et al. 2009), including all the data available from the "Grand Lineage Summary" representing the Plasmodium and Haemoproteus genera from wild birds that contained information regarding location. After checking for duplicated sequences, this dataset comprised a total of ~6200 sequenced parasites representing 1602 distinct lineages (775 Plasmodium and 827 Haemoproteus) collected from 1139 different host species and 757 localities from all continents except Antarctica (Supplementary Figure 1, Supplementary Table 1). The parasite lineages deposited in MalAvi are based on a cyt b fragment of 478 bp. This dataset was used to calculate the parasites' geographical, environmental and phylogenetic ranges.
Geographical range
All analyses in this study were performed using R version 4.0.2. In order to estimate the geographical range of each parasite lineage, we applied the R package "GeoRange" (Boyle, 2017) and chose the variable minimum spanning tree distance (i.e., the shortest total distance of all lines connecting each locality where a particular lineage has been found).
Using the function "create.matrix" from the "fossil" package, we created a matrix of lineages and coordinates and employed the function "GeoRange_MultiTaxa" to calculate the minimum spanning tree distance for each parasite lineage (i.e. the shortest total distance in kilometers of all lines connecting each locality). As at least two distinct sites are necessary to calculate this distance, parasites observed in a single locality could not have their geographical range estimated. For this reason, only parasites observed in two or more localities were considered in our phylogenetic generalized least squares (PGLS) models.
Host and Environmental diversity
Traditionally, ecologists use Shannon entropy to measure diversity in ecological assemblages (Pielou, 1966). The Shannon entropy of a set of elements is related to the degree of uncertainty someone would have about the identity of a randomly selected element of that set (Jost, 2006). Thus, Shannon entropy matches our intuitive notion of biodiversity, as the more diverse an assemblage is, the more uncertainty there is regarding which species a randomly selected individual belongs to. Shannon diversity increases with both the assemblage's richness (e.g., the number of species) and evenness (e.g., uniformity in abundance among species). To compare the diversity of assemblages that vary in richness and evenness in a more intuitive manner, we can normalize diversities by Hill numbers (Chao et al., 2014b). The Hill number of an assemblage represents the effective number of species in the assemblage, i.e., the number of equally abundant species that would be needed to give the same value of the diversity metric in that assemblage. Hill numbers can be extended to incorporate phylogenetic information; in that case, instead of species, we are measuring the effective number of phylogenetic entities in the assemblage. Here, we computed phylogenetic host range as the phylogenetic Hill number associated with the assemblage of hosts found infected by a given parasite. Analyses were performed using the function "hill_phylo" from the "hillr" package (Chao et al., 2014a). Hill numbers are parameterized by a parameter "q" that determines the sensitivity of the metric to relative species abundance. Different "q" values produce Hill numbers associated with different diversity metrics. We set q = 1 to compute the Hill number associated with Shannon diversity. Here, low Hill numbers indicate specialization on a narrow phylogenetic range of hosts, whereas higher Hill numbers indicate generalism across a broader phylogenetic spectrum of hosts. We also used Hill numbers to compute the environmental range of sites occupied by each parasite lineage. First, we collected the 19 bioclimatic variables from WorldClim version 2 (http://www.worldclim.com/version2) for all sites used in this study (N = 713). Then, we standardized the 19 variables by centering and scaling them by their respective means and standard deviations. Thereafter, we computed the pairwise Euclidean environmental distance among all sites and used this distance to compute a dissimilarity cluster. Finally, as for the phylogenetic Hill number, we used this dissimilarity cluster to compute the environmental Hill number of the assemblage of sites occupied by each parasite lineage. The environmental Hill number for each parasite can be interpreted as the effective number of environmental conditions in which a parasite lineage occurs. Thus, the higher the environmental Hill number, the more generalist the parasite is regarding the environmental conditions in which it can occur.
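For intuition, here is a minimal Python sketch of the ordinary (non-phylogenetic) Hill number with q = 1, i.e. the exponential of Shannon entropy; the study's hill_phylo additionally weights by phylogenetic branch lengths:

```python
import numpy as np

def hill_shannon(abundances):
    """Hill number of order q = 1: exp(Shannon entropy), the effective
    number of equally abundant species in the assemblage."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()
    return float(np.exp(-np.sum(p * np.log(p))))

print(hill_shannon([10, 10, 10, 10]))  # 4.0: four equally abundant hosts
print(hill_shannon([97, 1, 1, 1]))     # ~1.18: effectively one dominant host
```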
Parasite phylogenetic tree
A Bayesian phylogenetic reconstruction was performed. We built a tree for all parasite sequences for which we were able to estimate the parasite's geographical, environmental and phylogenetic ranges (see above); this represented 401 distinct parasite lineages. This inference was produced using MrBayes 3.2.2 (Ronquist & Huelsenbeck, 2003) with the GTR + I + G model of nucleotide evolution, as recommended by ModelTest (Posada & Crandall, 1998), which selects the best-fit nucleotide substitution model for a set of genetic sequences. We ran four Markov chains simultaneously for a total of 7.5 million generations, sampled every 1000 generations. The first 25% of sampled trees were discarded as burn-in, and the remaining trees were used to calculate the posterior probabilities of each estimated node in the final consensus tree. Our final tree obtained a cumulative posterior probability of 0.999. Leucocytozoon caulleryi was used as the outgroup to root the phylogenetic tree, as Leucocytozoon spp. represent a basal group within avian haemosporidians (Pacheco et al., 2020).
Transient killer whales inhabit the West Coast of the United States. Their range and movement patterns are difficult to ascertain, but are vital to understanding killer whale population dynamics and abundance trends. Satellite tagging of West Coast transient killer whales to determine range and movement patterns will provide data to assist in understanding transient killer whale populations.
The following exercise contains questions based on the housing dataset.
How many houses have a waterfront? a. 21000 b. 21450 c. 163 d. 173
How many houses have 2 floors? a. 2692 b. 8241 c. 10680 d. 161
How many houses built before 1960 have a waterfront? a. 80 b. 7309 c. 90 d. 92
What is the price of the most expensive house having more than 4 bathrooms? a. 7700000 b. 187000 c. 290000 d. 399000
For instance, if the ‘price’ column contains outliers, how can you clean the data and remove the redundancies? a. Calculate the IQR and drop the values outside the range. b. Calculate the p-value and remove the values less than 0.05. c. Calculate the correlation coefficient of the price column and remove the values less than the correlation coefficient. d. Calculate the Z-score of the price column and remove the values less than the z-score.
What are the various parameters that can be used to identify the variables that determine the price of the house in the housing data? a. Correlation coefficients b. Z-score c. IQR Range d. Range of the Features
If we get the r2 score as 0.38, what inferences can we make about the model and its efficiency? a. The model is 38% accurate, and shows poor efficiency. b. The model is showing 0.38% discrepancies in the outcomes. c. Low difference between observed and fitted values. d. High difference between observed and fitted values.
If the metrics show that the p-value for the grade column is 0.092, what inferences can we make about the grade column? a. Significant in the presence of other variables. b. Highly significant in the presence of other variables. c. Insignificant in the presence of other variables. d. None of the above.
If the Variance Inflation Factor value for a feature is considerably higher than the other features, what can we say about that column/feature? a. High multicollinearity b. Low multicollinearity c. Both A and B d. None of the above
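For the outlier question above, the IQR approach (option a) is conventionally implemented as follows; a minimal pandas sketch, with `houses` as an assumed DataFrame name:

```python
import pandas as pd

def drop_iqr_outliers(df: pd.DataFrame, col: str, k: float = 1.5) -> pd.DataFrame:
    """Keep rows whose `col` value lies within [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    return df[df[col].between(q1 - k * iqr, q3 + k * iqr)]

# Usage (hypothetical): houses = drop_iqr_outliers(houses, "price")
```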
License: MIT (https://api.github.com/licenses/mit)
This dataset contains Python numerical computation code for studying acoustic superradiance and Hawking radiation in specific rotating acoustic black hole models. The code is based on the radial wave equation of a scalar field (acoustic disturbance) on the effective acoustic metric background derived in the analysis. Dataset generation process and processing methods: The core code is written in Python, using the standard scientific computing libraries NumPy and SciPy. The main steps are: (1) define the model parameters (such as A, B, m) and the calculation ranges (frequency $\omega$ from 0.01 to 2.0, tortoise coordinate $r^*$ from -20 to 20); (2) implement the conversion functions between the radial coordinate $r$ and the tortoise coordinate $r^*$, where the inversion of $r^*(r)$ is solved numerically using SciPy's optimize.root_scalar function (e.g., Brent's method), with special attention to calculations near the horizon $r_H = |A|/c$ to ensure stability; (3) calculate the effective potential $V_0(r^*, \omega)$, which depends on $r(r^*)$; (4) convert the second-order radial wave equation into a system of four first-order real-valued ordinary differential equations; (5) solve the ODE system using SciPy's integrate.solve_ivp function (an adaptive step-size RK45 method with relative and absolute error tolerances set to $10^{-8}$), applying purely ingoing boundary conditions (normalized unit transmission) at the horizon and the appropriate asymptotic behavior at infinity; (6) extract the reflection coefficient $\mathcal{R}$ and transmission coefficient $\mathcal{T}$ from the numerical solution; (7) calculate the Hawking radiation power spectrum $P_\omega$ from the derived Hawking temperature $T_H$, the event horizon angular velocity $\Omega_H$, Bose-Einstein statistics, and the greybody factor $|\mathcal{T}|^2$. The calculations use natural units ($\hbar = k_B = c = 1$) and set the characteristic length $r_0 = 1$. Dataset content: This dataset mainly includes a Python script file (the code for the numerical study of superradiance and Hawking radiation of rotating acoustic black holes, .py) and a README documentation file (README.md). The Python script implements the complete calculation process described above. The README file explains the code's functionality, the dependency libraries required to run it (Python 3, NumPy, SciPy), how to run it, and the meaning of the parameters. This dataset does not contain any raw experimental data; it is theoretical calculation code only. Data accuracy and validation: The reliability of the code has been validated through two key checks: (1) the flux conservation relation $|\mathcal{R}|^2 + [(\omega - m\Omega_H)/\omega]|\mathcal{T}|^2 = 1$ holds numerically within the calculated frequency range (with a deviation typically on the order of $10^{-8}$ or less); (2) under the superradiance condition $0 < \omega < m\Omega_H$, the reflection coefficient satisfies $|\mathcal{R}|^2 > 1$, consistent with theoretical expectations. File format and software: The code is in standard Python 3 (.py) format and can run in any standard Python 3 environment with the NumPy and SciPy libraries installed. The README file is in Markdown (.md) format and can be opened with any text editor or Markdown viewer. No special or niche software is required.
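Steps (4)-(6) can be illustrated with a self-contained sketch. The potential barrier and angular-velocity profile below are schematic stand-ins for the model's actual $V_0(r^*, \omega)$ and flow profile (so the numbers are illustrative, not the dataset's results); the boundary conditions, RK45 tolerances, and the flux-conservation check mirror the description above:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, Omega_H = 1, 0.5            # assumed azimuthal number and horizon angular velocity
omega = 0.3                    # superradiant regime: 0 < omega < m * Omega_H
wtil = omega - m * Omega_H     # effective near-horizon frequency

def V(rs):
    # Schematic potential barrier vanishing at the horizon and at infinity
    return 0.4 * np.exp(-rs**2)

def Omega(rs):
    # Schematic angular-velocity profile: Omega_H at the horizon, 0 far away
    return 0.5 * Omega_H * (1.0 - np.tanh(rs))

def rhs(rs, y):
    # y = [Re(phi), Im(phi), Re(phi'), Im(phi')]
    # phi'' = [V - (omega - m*Omega)^2] * phi
    phi, dphi = y[0] + 1j * y[1], y[2] + 1j * y[3]
    ddphi = (V(rs) - (omega - m * Omega(rs))**2) * phi
    return [dphi.real, dphi.imag, ddphi.real, ddphi.imag]

rs0, rs1 = -20.0, 20.0
phi0 = np.exp(-1j * wtil * rs0)   # purely ingoing wave at the horizon (unit transmission)
dphi0 = -1j * wtil * phi0
sol = solve_ivp(rhs, (rs0, rs1),
                [phi0.real, phi0.imag, dphi0.real, dphi0.imag],
                method="RK45", rtol=1e-8, atol=1e-8)

phi = sol.y[0, -1] + 1j * sol.y[1, -1]
dphi = sol.y[2, -1] + 1j * sol.y[3, -1]
# Decompose at large r*: phi = A_in * exp(-i*omega*r*) + A_out * exp(+i*omega*r*)
A_in = 0.5 * (phi - dphi / (1j * omega)) * np.exp(1j * omega * rs1)
A_out = 0.5 * (phi + dphi / (1j * omega)) * np.exp(-1j * omega * rs1)
R, T = A_out / A_in, 1.0 / A_in

print("flux conservation:", abs(R)**2 + (wtil / omega) * abs(T)**2)  # ~ 1
print("|R|^2:", abs(R)**2)  # > 1 in the superradiant regime
```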
This dataset provides equation of state data for lead over the temperature and pressure range from room temperature to 10 MK, and from atmospheric pressure to 10^7 GPa. The thermodynamic properties of the shock Hugoniot line, the 300 K isotherm, the melting line, and the warm dense transition zone were calculated.
Deer group locations and sizes are used in assessing deer populations living on the ‘open range’. ‘Open range’ generally means open areas of habitat used mainly by red deer (for example, heather moorland). From the outset it is important to be clear that although the terms ‘count’ or ‘census’ are used, open range counting enables a population estimate to be made, but with associated error margins. Research has shown that, normally, estimates will vary by between 5 and 16%. In other words, if you count 415 deer then the population estimate is at best between 348 and 481 (or at very best between 394 and 435). Open range population counts (and their resulting estimates) are therefore most likely to be useful for setting broad targets or giving an index of deer numbers, as opposed to very precise population models. They are also useful for indicating trends in a series of counts. Count information can be obtained by joining the table DEER_COUNT_INDEX on the COUNT_ID columns. Both helicopter and ground counts are included in the data. The majority of the data were collected in ‘white ground’ conditions, where the contrast between deer and the background of snow is maximised, enabling deer to be more easily spotted. Summer counts of ‘Priority’ sites are also included, where sites have been counted more intensively.
Attribute Name / Item Name / Description
DIGI_STAG / Digital Stags / Counted from a digital photo.
DIGI_HINDS / Digital Hinds / Counted from a digital photo.
DIGI_CALVS / Digital Calves / Counted from a digital photo.
DIGI_UNCL / Digital Unclassified / Counted from a digital photo; unclassified, so generally hinds and calves combined.
DIGI_TOTAL / Digital Total / Counted from a digital photo.
VIS_STAG / Visual Stags / Counted visually during the count.
VIS_HINDS / Visual Hinds / Counted visually during the count.
VIS_CALVS / Visual Calves / Counted visually during the count.
VIS_UNCL / Visual Unclassified / Counted visually during the count; unclassified, so generally hinds and calves combined.
VIS_TOTAL / Visual Total / Counted visually during the count.
SUM_STAGS / SUM Stags / DIGI + VIS combined.
SUM_HINDS / SUM Hinds / DIGI + VIS combined.
SUM_CALVES / SUM Calves / DIGI + VIS combined.
SUM_UNCL / SUM Unclassified / DIGI + VIS combined; unclassified, so generally hinds and calves combined.
SUM_TOTAL / SUM Total / Overall total for that group (not necessarily for the 1 km2, as there may be 3 or 4 groups in the 1 km2 at that point in time).
COUNT_ID / COUNT_ID / Provides the link to the accompanying csv file.
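A minimal pandas sketch of the COUNT_ID join and the 5-16% margins described above (file names are hypothetical):

```python
import pandas as pd

groups = pd.read_csv("deer_groups.csv")        # one row per observed deer group
index = pd.read_csv("DEER_COUNT_INDEX.csv")    # count-level metadata

# Join group observations to their count metadata via the documented key
counts = groups.merge(index, on="COUNT_ID", how="left")

# Total deer per count, with the widest (16%) estimation margin applied
totals = counts.groupby("COUNT_ID")["SUM_TOTAL"].sum()
bounds = pd.DataFrame({"low": (totals * 0.84).round(),
                       "high": (totals * 1.16).round()})
print(bounds.head())
```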
License: U.S. Government Works (https://www.usa.gov/government-works)
License information was derived automatically
Understanding species abundances and distributions, especially at local to landscape scales, is critical for land managers and conservationists to prioritize management decisions and informs the effort and expense that may be required. The metrics of range size and local abundance reflect aspects of the biology and ecology of a given species, and together with its per capita (or per unit area) effects on other members of the community comprise a well-accepted theoretical paradigm describing invasive species. Although these metrics are readily calculated from vegetation monitoring data, they have not generally been applied to native species (effect in particular). We describe how metrics defining invasions may be more broadly applied to both native and invasive species in vegetation management, supporting their relevance to local scales of species conservation and management. We then use a sample monitoring dataset to compare range size, local abundance and effect, as well as summary calculations of landscape penetration (range size × local abundance) and impact (landscape penetration × effect), for native and invasive species in the mixed-grass plant community of western North Dakota, USA. This paper uses these summary statistics to quantify the impact for 13 of 56 commonly encountered species, with statistical support for effects of 6 of the 13 species. Our results agree with knowledge of invasion severity and natural history of native species in the region. We contend that when managers are using invasion metrics in monitoring, extending them to common native species is biologically and ecologically informative, with little additional investment.
Resources in this dataset:
- Resource Title: Supporting Data (xlsx). File Name: Espeland-Sylvain-BiodivConserv-2019-raw-data.xlsx. Description: Occurrence data per quadrangle, site, and transect. Species codes and habitat identifiers are defined in a separate sheet.
- Resource Title: Data Dictionary. File Name: Espeland-Sylvain-BiodivConserv-2019-data-dictionary.csv. Description: Details species and habitat codes for the abundance data collected.
- Resource Title: Supporting Data (csv). File Name: Espeland-Sylvain-BiodivConserv-2019-raw-data.csv. Description: Occurrence data per quadrangle, site, and transect.
- Resource Title: Supplementary Table S1.1. File Name: 10531_2019_1701_MOESM1_ESM.docx. Description: Scientific name, common name, life history group, family, status (N = native, I = introduced), percent of plots present, and average cover when present of 56 vascular plant species recorded in 1196 undisturbed plots in federally-managed grasslands of western North Dakota. Life history groups: C3 = cool season perennial grass, C4 = warm season perennial grass, SE = sedge, SH = shrub, PF = perennial forb, BF = biennial forb, APF = annual, biennial, or perennial forb.
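A toy calculation of the two summary metrics defined above (landscape penetration = range size × local abundance; impact = landscape penetration × effect), with made-up numbers:

```python
import pandas as pd

# Illustrative values only: range size as the proportion of plots occupied,
# local abundance as mean cover when present, and a per-unit-area effect
species = pd.DataFrame({
    "species": ["A", "B"],
    "range_size": [0.60, 0.10],
    "local_abundance": [0.05, 0.40],
    "effect": [0.8, 1.5],
})
species["landscape_penetration"] = species["range_size"] * species["local_abundance"]
species["impact"] = species["landscape_penetration"] * species["effect"]
print(species)
```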
Terms: https://dataful.in/terms-and-conditions
The dataset contains year-wise and month-wise filing counts for different taxpayer types, categorized by income range.
Note: The returns data corresponds to the ITRs submitted in the selected financial year up to the end of the selected month. For example, if FY 2025-26 and the month of January are selected, then the summary contains the total number of e-Returns submitted for different Assessment Years, including the current Assessment Year, filed in FY 2025-26 up to the end of January.