Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Format
index.csv.gz - CSV (comma-separated) file with 3 columns: repository name, flag, readme file name. For example: src-d/go-git,s,README.md
The flag is either "s" (readme found) or "r" (readme does not exist at the root directory level). The readme file name may be any of the following:
"README.md", "readme.md", "Readme.md", "README.MD", "README.txt", "readme.txt", "Readme.txt", "README.TXT", "README", "readme", "Readme", "README.rst", "readme.rst", "Readme.rst", "README.RST"
100 part-r-00xxx files are in "new" Hadoop API format with the following settings:
inputFormatClass is org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat (the files are SequenceFiles, written with org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat)
keyClass is org.apache.hadoop.io.Text - repository name
valueClass is org.apache.hadoop.io.BytesWritable - gzipped readme file
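For orientation, a minimal R sketch for reading the index file (not part of the dataset; the column names are our assumptions based on the description above, since the file itself has no header row):

```r
# A minimal sketch, assuming the three columns described above
# (repository name, flag, readme file name); the file has no header row.
index <- read.csv(gzfile("index.csv.gz"), header = FALSE,
                  col.names = c("repository", "flag", "readme_name"))
head(index[index$flag == "s", ])  # repositories where a readme was found
```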
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data and code in support of Klarevas-Irby, Wikelski, and Farine (2021), in addition to a readme file with further information. All scripts are written for implementation in R ver. 4.0 and are numbered in the order in which they are implemented (order does not matter between multiple scripts with the same number).
After admixture, recombination breaks down genomic blocks of contiguous ancestry. The breakdown of these blocks forms a new 'molecular clock' that ticks at a much faster rate than the mutation clock, enabling accurate dating of admixture events in the recent past. However, existing theory on the breakdown of these blocks, or the accumulation of delineations between blocks, so-called 'junctions', has mostly been limited to using regularly spaced markers on phased data. Here, we present an extension to the theory of junctions using the Ancestral Recombination Graph that describes the expected number of junctions for any distribution of markers along the genome. Furthermore, we provide a new framework to infer the time since admixture using unphased data. We demonstrate both the phased and unphased methods on simulated data and show that our new extensions have improved accuracy with respect to previous methods, especially for smaller population sizes and more ancient admixture tim...
Dataset1.zip
Samples were stored unfiltered in the dark at 4 °C for approximately five months after sampling.
On the day of measurements, specific volumes of the samples were transferred to 2 mL Eppendorf vials so that 11.25 µg of carbon was present in each sample vial, while 2 mL of each blank was transferred.
The water in samples and blanks was subsequently removed by vacuum evaporation at 45 °C, after which samples were reconstituted in 150 µL of 1 % (v/v) formic acid to a final concentration of 75 mg/L carbon.
Reverse-phase chromatography separations were performed on an Agilent 1100 series instrument with an Agilent PLRP-S series column (150 × 1 mm, 3 µm bed size, 100 Å pore size). Eighty µL of sample was loaded at a flow rate of 100 µL min-1 in 0.1 % formic acid, 0.05 % ammonia, and 5 % acetonitrile. The elution of DOM was achieved through a step-wise increase in concentrat...
The attached files contain the necessary descriptions for understanding and working with data used in the construction of Tapper et al. 2020 (Royal Society Open Science). The RSOS_dryad_readme.docx file contains descriptions of each file name attached to this data repository, and descriptions of the data contained within each column of a data file. Information pertaining to the R code, which has been provided in this data repository as a .Rmd (R Markdown) file, has not been provided within this readme file, because necessary descriptions for understanding our code are provided within the R code file itself.
The attached files contain the necessary descriptions for understanding and working with data used in the construction of Tapper et al. 2020, "Body temperature is a repeatable trait in a free-ranging passerine bird" (submitted to Journal of Experimental Biology). The Repeatability_dryad_readme.docx file contains descriptions of each file name attached to this data repository, and descriptions of the data contained within each column of a data file. Information pertaining to the R code, which has been provided in this data repository as a .Rmd (R Markdown) file, has not been provided within this readme file, because necessary descriptions for understanding our code are provided within the R code file itself.
hklmirs

This repository contains the data and code for our two manuscripts (in preparation):

Henning Teickner and Klaus-Holger Knorr (in preparation): Improving Models to Predict Holocellulose and Klason Lignin Contents for Peat Soil Organic Matter with Mid Infrared Spectra.
Henning Teickner and Klaus-Holger Knorr (in preparation): Predicting Absolute Holocellulose and Klason Lignin Contents for Peat Remains Challenging.

How to cite

Please cite this compendium as: Henning Teickner and Klaus-Holger Knorr (2022). Compendium of R code and data for “Improving Models to Predict Holocellulose and Klason Lignin Contents for Peat Soil Organic Matter with Mid Infrared Spectra” and “Predicting Absolute Holocellulose and Klason Lignin Contents for Peat Remains Challenging”. Accessed 03 Mar 2022. Online at https://github.com/henningte/hklmirs/

Contents

The analysis directory contains:

:file_folder: paper: R Markdown source documents needed to reproduce the manuscripts, including figures and tables. The main script is 001-paper-main.Rmd; it produces both manuscripts and the corresponding supplementary information. Additional scripts are:
- 002-paper-m-original-models.Rmd: Computes the original models used in Hodgkins et al. (2018) and models with the same model structure, but as Bayesian models.
- 003-paper-m-gaussian-beta.Rmd: Computes models assuming a Beta distribution for holocellulose and Klason lignin contents and compares them to the original models.
- 004-paper-m-reduce-underfitting.Rmd: Extends the Beta regression models by including additional variables (additional peaks) or using a different approach (measured spectral intensities of binned spectra instead of extracted peaks), and validates these models using LOO-CV.
- 005-paper-m-minerals.Rmd: Uses the models from 003-paper-m-gaussian-beta.Rmd to test how accurate a model for holocellulose content is when it is also calibrated on training samples with higher mineral contents.
- 006-paper-m-prediction-domain.Rmd: Analyzes the prediction domain (Wadoux et al. 2021) of the original and modified models and identifies under which conditions the models extrapolate for peat and vegetation samples from Hodgkins et al. (2018).
- 007-paper-m-prediction-differences.Rmd: Compares predictions for the training data and the peat and vegetation data from Hodgkins et al. (2018) between the original models from Hodgkins et al. (2018) and the modified models from 004-paper-m-reduce-underfitting.Rmd.
- 008-paper-supplementary.Rmd: Computes supplementary analyses and figures for the first manuscript.
- 001-reply-main.Rmd: The main script for manuscript 2. It is run from within 001-paper-main.Rmd and produces the supplementary information for manuscript 2.
- 002-reply-main.Rmd: Produces the document for manuscript 2. It is run from within 001-reply-main.Rmd.

:file_folder: data: Data used in the analysis. Note that raw data are not stored in :file_folder: raw_data (an empty folder), but in :file_folder: /inst/extdata. :file_folder: derived_data contains derived data computed by the scripts. The raw data are derived from Hodgkins et al. (2018).
:file_folder: stan_models: The Stan model used in 001-reply-main.Rmd.

The other folders in this directory follow the standard naming scheme and function of folders in R packages. There are the following directories and files:
- README.md/README.Rmd: Readme for the compendium.
- DESCRIPTION: The R package DESCRIPTION file for the compendium.
- NAMESPACE: The R package NAMESPACE file for the compendium.
- LICENSE.md: Details on the license for the code in the compendium.
- CONTRIBUTING.md and CONDUCT.md: Files with information on how to contribute to the compendium.
- Dockerfile: Dockerfile to build a Docker image for the compendium.
- .Rbuildignore, .gitignore, .dockerignore: Files to ignore during R package building, by Git, and while building a Docker image, respectively.
- renv.lock: renv lock file (lists all R package dependencies and versions and can be used to restore the R package library using renv). renv.lock was created by running renv::snapshot() in the R package directory and uses the information included in the DESCRIPTION file.
- .Rprofile: Code to run upon opening the R project.
- R, man, inst, data-raw, data, src: Default folders for making the R package run.
- inst/extdata: Folder with the raw data used for the analyses. All files in this folder are derived from Hodgkins et al. (2018).

How to run in your browser or download and run locally

You can download the compendium as a zip from this URL: https://github.com/henningte/hklmirs/
Or you can install this compendium as an R package, hklmirs, from GitHub with: remotes::install_github("henningte/hklmirs")

How to use

Reproduce the analyses: To reproduce the analyses for the paper, open the RStudio project included in this research compendium and run the R Markdown script in analysis/paper/001-paper-main.Rmd, as sketched below. Running the whole script takes about 12 ho...
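As a compact illustration of the two routes just described (a sketch only; using rmarkdown::render to run the main script is our assumption of one convenient route):

```r
# Option 1: install the compendium as an R package from GitHub.
remotes::install_github("henningte/hklmirs")

# Option 2: download the zip from https://github.com/henningte/hklmirs/,
# open the RStudio project, restore the pinned package library, and
# render the main manuscript script (reportedly takes about 12 hours).
renv::restore()
rmarkdown::render("analysis/paper/001-paper-main.Rmd")
```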
The attached files contain the necessary descriptions for understanding and working with data used in the construction of Tapper et al. 2021 (Physiological and Biochemical Zoology). The PBZ_dryad_readme.docx file contains descriptions of each file name attached to this data repository and descriptions of the data contained within each column of a data file. Information pertaining to the R code, which has been provided in this data repository as a .R file, has not been provided within this readme file, because necessary descriptions for understanding our code are provided within the R code file itself.
The attached files contain the necessary descriptions for understanding and working with data used in the construction of Tapper et al. 2020 (Journal of Experimental Biology). The JEB_dryad_readme.docx file contains descriptions of each file name attached to this data repository, and descriptions of the data contained within each column of a data file. Information pertaining to the R code, which has been provided in this data repository as a .Rmd (R Markdown) file, has not been provided within this readme file, because necessary descriptions for understanding our code are provided within the R code file itself.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary
-------
This is the repository containing the R code and data to produce the analyses and figures in the manuscript 'Colour biases in learned foraging preferences in Trinidadian guppies'. R version 3.6.2 was used for this project. Here, we explain how to reproduce the results, provide the location of the metadata for the data sheets, and give descriptions of the root directory and folder contents. This material is adapted from the README file of the project, README.md, which is located in the root directory.

How to reproduce the results
----------------------------
This project uses the renv package from RStudio to manage package dependencies and ensure reproducibility through time. To ensure results are reproduced based on the versions of the packages used at the time this project was created, you will need to install renv using install.packages("renv") in R.

If you want to reproduce the results, it is best to download the entire repository onto your system. This can be done by clicking the Download button on the FigShare repository (DOI: 10.6084/m9.figshare.14404868). This will download a zip file of the entire repository; unzip it to get access to the project files. Once the repository is downloaded onto your system, navigate to the root directory and open guppy-colour-learning-project.Rproj. It is important to open the project using the .Rproj file to ensure the working directory is set correctly. Then install the package dependencies onto your system using renv::restore(). Running renv::restore() will install the correct versions of all the packages needed to reproduce our results. Packages are installed in a stand-alone library for this project and will not affect your installed R packages anywhere else.

If you want to reproduce specific results from the analyses, you can open either analysis-experiment-1.Rmd for results from experiment 1 or analysis-experiment-2.Rmd for results from experiment 2. Both are located in the root directory. You can select the Run All option under the Code option in the navbar of RStudio to execute all the code chunks. You can also run chunks independently, though we advise doing so sequentially, since variables necessary for the analysis are created as the script progresses.

Metadata
--------
Data are available in the data/ directory.
- colour-learning-experiment-1-data.csv contains the data for experiment 1
- colour-learning-experiment-2-full-data.csv contains the data for experiment 2
We provide the variable descriptions for the data sets in the file metadata.md, located in the data/ directory. The packages required to conduct the analyses and construct the website, as well as their versions and citations, are provided in the file required-r-packages.md.

Directory structure
-------------------
- data/ contains the raw data used to conduct the analyses
- docs/ contains the reader-friendly html write-up of the analyses; the GitHub pages site is built from this folder
- R/ contains custom R functions used in the analysis
- references/ contains reference information and formatting for citations used in the project
- renv/ contains an activation script and configuration files for the renv package manager
- figs/ contains the individual files for the figures and residual diagnostic plots produced by the analysis scripts. This directory is created and populated by running analysis-experiment-1.Rmd, analysis-experiment-2.Rmd, and combined-figures.Rmd

Root directory contents
-----------------------
The root directory contains Rmd scripts used to conduct the analyses, create figures, and render the website pages. Below we describe the contents of these files as well as the additional files contained in the root directory.
- analysis-experiment-1.Rmd is the R code and documentation for the experiment 1 data preparation and analysis. This script generates the Analysis 1 page of the website.
- analysis-experiment-2.Rmd is the R code and documentation for the experiment 2 data preparation and analysis. This script generates the Analysis 2 page of the website.
- protocols.Rmd contains the protocols used to conduct the experiments and generate the data. This script generates the Protocols page of the website.
- index.Rmd creates the Homepage of the project site.
- combined-figures.Rmd is the R code used to create figures that combine data from experiments 1 and 2. Not used in the project site.
- treatment-object-side-assignment.Rmd is the R code used to assign treatments and object sides during trials for experiment 2. Not used in the project site.
- renv.lock is a JSON-formatted plain text file which contains package information for the project. renv will install the packages listed in this file upon executing renv::restore()
- required-r-packages.md is a plain text file containing the versions and sources of the packages required for the project.
- styles.css contains the CSS formatting for the rendered html pages
- LICENSE.md contains the license indicating the conditions under which the code can be reused
- guppy-colour-learning-project.Rproj is the R project file which sets the working directory of the R instance to the root directory of this repository. If trying to run the code in this repository to reproduce results, it is important to open R by clicking on this .Rproj file.
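Condensed into a short sketch, the reproduction workflow above looks like this (rendering the Rmd files with rmarkdown::render is our assumption of a convenient route; the README itself describes running chunks in RStudio):

```r
# A sketch of the workflow described above, run after opening
# guppy-colour-learning-project.Rproj in RStudio.
install.packages("renv")  # once, if renv is not already installed
renv::restore()           # installs the package versions pinned in renv.lock

# Rendering the analysis scripts this way assumes the rmarkdown package
# is among the restored dependencies.
rmarkdown::render("analysis-experiment-1.Rmd")  # experiment 1 results
rmarkdown::render("analysis-experiment-2.Rmd")  # experiment 2 results
```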
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the GitHub repository for analysing the data using R and RStudio.
All information, and how to interpret it, can be found in the README and in the workflow script.
This dataset contains hydrometeor size spectra data for the Seeded and Natural Orographic Wintertime clouds – the Idaho Experiment (SNOWIE). There is one file for each UW King Air (UWKA) research flight. The files contain particle size spectra from all of the particle probes that were operational on the UWKA for that flight. No attempt has been made to combine size spectra. Note: these data are version 1 and there are known issues to be resolved; please refer to the included readme file for detailed information on variables, naming convention, and missing data.
README File: README_MASSARO_2022_DATA_updated04mar2022.txt
R-code for data analysis: CodeforCorrelatesofBoundaryPatrols.R
Datasets: PPdata.xlsx, PatrolsandPeriph.xlsx, PP5yearPlots.xlsx, WholeStudyPatrolRate.xlsx
We do not provide access to the raw data used in some of these analyses, as these raw data represent a substantial fraction of the long-term data from Gombe, which are not publicly available at this time due to multiple ongoing studies, but are available from the corresponding author on reasonable request.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Required data to prepare and execute analysis of the data included in this published study. This includes (1) mapping daily mean exposure of total ambient, WF, RX, and fRx PM2.5; (2) calculating annual person-days of exposure; (3) calculating and mapping the fraction of population exposure (in person-days) in the Historical Period (2008-2016) & Future Rx Scenario; (4) calculating the fraction of PM2.5-attributed cardiorespiratory emergency department (ED) visits by HYSPLIT-modeled PM2.5 smoke strata (μg/m3) in California; (5) calculating and plotting average daily PM2.5-attributed burden rates for cardiorespiratory ED visits by HYSPLIT-modeled PM2.5 smoke strata (μg/m3); (6) estimating annual prescribed fire smoke-attributed cardiorespiratory burden rates per 100,000 persons in the Historical Period (2008-2016) and Future Prescribed Fire Scenario, and mapping the change in annual prescribed fire-related burden rates between the Historical Period and Future Scenario.
Please note, raw health data are not provided due to the confidentiality of personal health information used for research. Aggregated estimates of PM2.5-attributed ED visit counts and rates per 100,000 are provided by strata of smoke PM2.5.
Data documentation (ReadMe) files for the data and R scripts are provided. See:
1.) HYSPLIT_Exposure.README
2.) HYSPLIT_Future_Rx_Exp.README
3.) PHIRE_Rx_Impacts_README_v2
For the original HYSPLIT Smoke Modeling datasets, see Kramer et al. (2023), published on Zenodo:
Kramer, Samantha J., Huang, ShihMing, McClure, Crystal D., Chaveste, Melissa R., & Lurmann, Fred. (2023). Projected Smoke Impacts from Increased Prescribed Fire Activity (PHIRE) Smoke Modeling Datasets (Version v1) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7535621
Publication citation:
Rosenberg, A., Hoshiko, S., Buckman, J. R., Yeomans, K. R., Hayashi, T., Kramer, S. J., et al. (2024). Health impacts of future prescribed fire smoke: Considerations from an exposure scenario in California. Earth's Future, 12, e2023EF003778. https://doi.org/10.1029/2023EF003778
Ecological meta-analyses usually exhibit high relative heterogeneity of effect size: most among-study variation in effect size represents true variation in mean effect size, rather than sampling error. This heterogeneity arises from both methodological and ecological sources. Methodological heterogeneity is a nuisance that complicates the interpretation of data syntheses. One way to reduce methodological heterogeneity is via coordinated distributed experiments, in which investigators conduct the same experiment at different sites, using the same methods. We tested whether coordinated distributed experiments in ecology exhibit a) low heterogeneity in effect size, and b) lower heterogeneity than meta-analyses, using data on 17 effects from eight coordinated distributed experiments, and 406 meta-analyses. Consistent with our expectations, among-site heterogeneity typically comprised <50% of the variance in effect size in distributed experiments. In contrast, heterogeneity within and amo...

# Coordinated distributed experiments in ecology do not consistently reduce heterogeneity in effect size
Included here are a data file for a distributed experiment and code that analyses the heterogeneity of many coordinated distributed experiments and meta-analyses. The R code file, called meta-analyses vs distd expts - R code for sharing v 2.R, reproduces the results of this study.
Data File:
rousk et al 2013 table 3 data - INCREASE.csv: data from the INCREASE distributed experiment by Rousk et al. (2013)
All other data used in the code are automatically sourced from URLs, but the relevant variables are still described below.
Other variables in datasets were not used in our analysis, and so are not explained in this README file. Cells with missing data have "NA" values.
Variables used in code:
Costello & Fox variables:
meta.analysis.id: Unique ID number for each meta-analysis
eff.size: Effect size
var.eff.size: Variance in e...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
rxivist.org allows readers to sort and filter the tens of thousands of preprints posted to bioRxiv. Rxivist uses a custom web crawler to index all papers on biorxiv.org; this is a snapshot of the Rxivist production database. The version number indicates the date on which the snapshot was taken. See the included "README.md" file for instructions on how to use the "rxivist.backup" file to import the data into a PostgreSQL database server.
Please note this is a different repository than the one used for the Rxivist manuscript—that is in a separate Zenodo repository. You're welcome (and encouraged!) to use this data in your research, but please cite our paper, now published in eLife.
Going forward, this information will also be available pre-loaded into Docker images, available at blekhmanlab/rxivist_data.
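Once imported, the snapshot can be queried from R; here is a minimal sketch, where the database name and connection details are assumptions (the actual import steps are in the bundled README.md):

```r
# A minimal sketch, assuming the snapshot was restored into a local
# PostgreSQL database named "rxivist"; see README.md for the import steps.
library(DBI)
con <- dbConnect(RPostgres::Postgres(), dbname = "rxivist",
                 host = "localhost", user = "postgres")
dbListTables(con)   # inspect which tables the snapshot provides
dbDisconnect(con)
```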
Version notes:
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
César E. Corona-González, Claudia Rebeca De Stefano-Ramos, Juan Pablo Rosado-Aíza, Fabiola R Gómez-Velázquez, David I. Ibarra-Zarate, Luz María Alonso-Valerdi
César E. Corona-González
https://orcid.org/0000-0002-7680-2953
a00833959@tec.mx
Psychophysiological data from Mexican children with learning difficulties who strengthen reading and math skills by assistive technology
2023
The current dataset consists of psychometric and electrophysiological data from children with reading or math learning difficulties. These data were collected to evaluate improvements in reading or math skills resulting from using an online learning method called Smartick.
The psychometric evaluations for children with reading difficulties encompassed: spelling tests, where 1) orthographic and 2) phonological errors were considered, 3) reading speed, expressed in words read per minute, and 4) reading comprehension, where multiple-choice questions were given to the children. The last two parameters were determined according to the standards of the Ministry of Public Education (Secretaría de Educación Pública in Spanish) in Mexico. The assessments for the math difficulties group, in turn, comprised: 1) an assessment of general mathematical knowledge, as well as 2) the percentage of hits and 3) the reaction time in an arithmetical task. Additionally, selective attention and intelligence quotient (IQ) were evaluated.
Then, individuals underwent an EEG experimental paradigm where two conditions were recorded: 1) a 3-minute eyes-open resting state and 2) performing either reading or mathematical activities. EEG recordings from the reading experiment consisted of reading a text aloud and then answering questions about the text. Alternatively, EEG recordings from the math experiment involved the solution of two blocks with 20 arithmetic operations (addition and subtraction). Subsequently, each child was randomly subcategorized as 1) the experimental group, who were asked to engage with Smartick for three months, and 2) the control group, who were not involved with the intervention. Once the 3-month period was over, every child was reassessed as described before.
The dataset contains a total of 76 subjects (sub-), from two study groups: 1) reading difficulties (R) and 2) math difficulties (M). Each individual was then subcategorized into the experimental subgroup (e), whose children committed to engaging with Smartick, or the control subgroup (c), who did not get involved with any intervention.
Every subject was followed up on for three months. During this period, each subject underwent two EEG sessions, representing the PRE-intervention (ses-1) and the POST-intervention (ses-2).
The EEG recordings from the reading difficulties group consisted of a resting state condition (run-1) and while performing active reading and reading comprehension activities (run-2). On the other hand, EEG data from the math difficulties group was collected from a resting state condition (run-1) and when solving two blocks of 20 arithmetic operations (run-2 and run-3). All EEG files were stored in .set format. The nomenclature and description from filenames are shown below:
Nomenclature | Description |
---|---|
sub- | Subject |
M | Math group |
R | Reading group |
c | Control subgroup |
e | Experimental subgroup |
ses-1 | PRE-intervention |
ses-2 | POST-Intervention |
run-1 | EEG for baseline |
run-2 | EEG for reading activity, or the first block of math |
run-3 | EEG for the second block of math |
Example: the file sub-Rc11_ses-1_task-SmartickDataset_run-2_eeg.set is related to: - The 11th subject from the reading difficulties group, control subgroup (sub-Rc11). - EEG recording from the PRE-intervention (ses-1) while performing the reading activity (run-2)
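To make the nomenclature concrete, here is a short R sketch (illustrative only, not part of the dataset's own code) that unpacks these fields from a file name:

```r
# Parse the nomenclature described above from an EEG file name.
fname <- "sub-Rc11_ses-1_task-SmartickDataset_run-2_eeg.set"
m <- regmatches(fname,
                regexec("sub-([MR])([ce])([0-9]+)_ses-([12])_.*_run-([0-9])", fname))[[1]]
group    <- c(M = "math difficulties", R = "reading difficulties")[m[2]]
subgroup <- c(c = "control", e = "experimental")[m[3]]
session  <- c("1" = "PRE-intervention", "2" = "POST-intervention")[m[4]]
run      <- m[5]  # per the table above: baseline, reading/first math block, or second math block
```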
Psychometric data from the reading difficulties group:
Psychometric data from the math difficulties group:
Psychometric data can be found in the 01_Psychometric_Data.xlsx file
Engagement percentages can be found in the 05_SessionEngagement.xlsx file
Seventy-six Mexican children between 7 and 13 years old were enrolled in this study.
The sample was recruited through non-profit foundations that support learning and foster care programs.
g.USBamp RESEARCH amplifier
The stimuli nested folder contains all stimuli employed in the EEG experiments.
Level 1
- Math: Images used in the math experiment.
- Reading: Images used in the reading experiment.
Level 2
- Math
* POST_Operations: arithmetic operations from the POST-intervention.
* PRE_Operations: arithmetic operations from the PRE-intervention.
- Reading
* POST_Reading1: text 1 and text-related comprehension questions from the POST-intervention.
* POST_Reading2: text 2 and text-related comprehension questions from the POST-intervention.
* POST_Reading3: text 3 and text-related comprehension questions from the POST-intervention.
* PRE_Reading1: text 1 and text-related comprehension questions from the PRE-intervention.
* PRE_Reading2: text 2 and text-related comprehension questions from the PRE-intervention.
* PRE_Reading3: text 3 and text-related comprehension questions from the PRE-intervention.
Level 3
- Math
* Operation01.jpg to Operation20.jpg: arithmetical operations solved during the first block of the math
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is an analysis of the data outputs of 240 randomly selected research papers from 12 top-ranked journals published in early 2023. We investigate author compliance with recommended (but not compulsory) data policies, whether there is evidence to suggest that authors apply FAIR data guidance in their data publishing, and whether the existence of specific recommendations for publishing NMR data by some journals encourages compliance. Files in the data package have been provided in both human- and machine-readable forms. The main dataset is available in the Excel file Data worksheet.XLSX, the contents of which can also be found in Main_dataset.CSV, Data_types.CSV, and Article_selection.CSV, with explanations of the variable coding used in the studies in Variable_names.CSV, Codes.CSV, and FAIR_variable_coding.CSV. The R code used for the article selection can be found in Article_selection.R. Data about article types from the journals that contain original research data is in Article_types.CSV. Data collected for analysis in our sister paper [4] can be found in Extended_Adherence.CSV, Extended_Crystallography.CSV, Extended_DAS.CSV, Extended_File_Types.CSV, and Extended_Submission_Process.CSV. A full list of files in the data package, with a short description of each, is given in README.TXT.
CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html
Here, we provide the data and R-scripts used in:
Bård-Jørgen Bårdsen and Jan Ove Bustnes (2022). Multiple stressors: negative effects of nest predation on the viability of a threatened gull in different environmental conditions. Journal of Avian Biology. https://doi.org/10.1111/jav.02953
Bård-Jørgen Bårdsen and Jan Ove Bustnes (2023). Correction to 'Multiple stressors: negative effects of nest predation on the viability of a threatened gull in different environmental conditions'. Journal of Avian Biology. https://doi.org/10.1111/jav.12915
This study assessed the viability of a population of the lesser black-backed gull (Larus fuscus fuscus) using data collected from 2005-2020 in a nature reserve in Northern Norway. The study merged results from statistical analyses of empirical data with a Leslie model. Here, we provide the underlying data and the R-scripts used to analyse the data and run the model. The data set includes information about reproduction at several stages (laying, hatching and fledging), nest predation, and individual capture histories (used to estimate apparent survival; see Bårdsen and Bustnes 2022). We discovered a misspecification error in the matrix model in Bårdsen and Bustnes (2022). This error did not change the overall conclusions or the results of the original article's empirical analyses. Here, we present an updated version of our scripts, i.e., the scripts used by Bårdsen and Bustnes (2023). In the correction, we also highlight which part of the original article was affected by this mistake.
Methods
Bårdsen and Bustnes (2022), including the online Supplementary Material (Appendix S1-2), provide a detailed description of the study area and the empirical data. In the downloadable software ('ToBePublished.zip'), we provide data, metadata, and R-scripts for the statistical analyses and the models. Please consult the 'README.txt' in 'ToBePublished.zip' for more information. We also include the data (without the scripts) from our study area as a downloadable dataset ('Data.zip'; see the included 'README.txt' for details).
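For readers unfamiliar with Leslie models, a generic projection sketch in R (illustrative values only; the actual parameterization and the corrected matrix are in the scripts within 'ToBePublished.zip'):

```r
# A generic two-age-class Leslie matrix projection (illustrative values,
# not the parameter estimates from Bårdsen and Bustnes 2022/2023).
L <- matrix(c(0.0, 0.9,    # top row: fecundities of age classes 1 and 2
              0.8, 0.0),   # subdiagonal: survival from age class 1 to 2
            nrow = 2, byrow = TRUE)
n <- c(100, 50)                     # initial numbers in each age class
for (t in 1:10) n <- L %*% n        # project the population 10 years ahead
lambda <- max(Re(eigen(L)$values))  # asymptotic population growth rate
```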
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Original dataset used for the analyses presented in Reichard et al. (sex differences in lifespan in Nothobranchius), a Journal of Animal Ecology paper (2021/2022). It includes a readme file, R scripts, and the basic data.