30 datasets found
  1. Current Population Survey (CPS)

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Damico, Anthony (2023). Current Population Survey (CPS) [Dataset]. http://doi.org/10.7910/DVN/AK4FDD
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Damico, Anthony
    Description

    analyze the current population survey (cps) annual social and economic supplement (asec) with r

    the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no.

    despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population.

    the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

    this new github repository contains three scripts:

    2005-2012 asec - download all microdata.R
    - download the fixed-width file containing household, family, and person records
    - import by separating this file into three tables, then merge 'em together at the person-level
    - download the fixed-width file containing the person-level replicate weights
    - merge the rectangular person-level file with the replicate weights, then store it in a sql database
    - create a new variable - one - in the data table

    2012 asec - analysis examples.R
    - connect to the sql database created by the 'download all microdata' program
    - create the complex sample survey object, using the replicate weights
    - perform a boatload of analysis examples

    replicate census estimates - 2011.R
    - connect to the sql database created by the 'download all microdata' program
    - create the complex sample survey object, using the replicate weights
    - match the sas output shown in the png file below

    2011 asec replicate weight sas output.png: statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document. click here to view these three scripts.

    for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
    - the census bureau's current population survey page
    - the bureau of labor statistics' current population survey page
    - the current population survey's wikipedia article

    notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

    confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
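
    For orientation, here is a minimal sketch of the import idea the description refers to (parse an NBER SAS input script, read the fixed-width file, store it in SQLite); the SAS script URL and local file name are placeholders, not the repository's actual scripts:

    ```r
    # minimal sketch of the parse.SAScii + RSQLite idea described above;
    # the SAS script URL and the local file name are assumptions, not the repo's code
    library(SAScii)    # parse.SAScii() / read.SAScii()
    library(RSQLite)

    sas_script <- "https://data.nber.org/data/progs/cps/cpsmar2012.sas"  # assumed NBER script location
    layout <- parse.SAScii(sas_script)                  # column names and widths from the SAS INPUT block
    asec   <- read.SAScii("asec2012_pubuse.dat",        # assumed local fixed-width file
                          sas_script)

    con <- dbConnect(SQLite(), "cps_asec.sqlite")       # the "schnazzy database"
    dbWriteTable(con, "asec12", asec)
    dbDisconnect(con)
    ```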

  2. Data from: Dataset from: “A new statistical workflow (R-packages based) to...

    • openagrar.de
    Updated Jul 21, 2023
    Cite
    Paola Ferrario; Achim Bub; Lara Frommherz; Ralf Krüger; Manuela Rist; Bernhard Watzl (2023). Dataset from: “A new statistical workflow (R-packages based) to investigate associations between one variable of interest and the metabolome" [Dataset]. http://doi.org/10.25826/Data20230721-100335-0
    Explore at:
    Dataset updated
    Jul 21, 2023
    Dataset provided by
    Max Rubner-Institut (MRI), Federal Research Institute of Nutrition and Food, Department of Safety and Quality of Fruit and Vegetables, Germany
    Max Rubner-Institut (MRI), Federal Research Institute of Nutrition and Food, Department of Physiology and Biochemistry of Nutrition, Germany
    Authors
    Paola Ferrario; Achim Bub; Lara Frommherz; Ralf Krüger; Manuela Rist; Bernhard Watzl
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains a part of the underlying research data for the paper: Ferrario PG, Bub A, Frommherz L, Krüger R, Rist MJ, Watzl B. “A new statistical workflow (R-packages based) to investigate associations between one variable of interest and the metabolome". These research data consist of measurements of the participants from the Karlsruhe Metabolomics and Nutrition (KarMeN) Study. Specifically, the data concern age, BMI and two plasma metabolites: glycoursodeoxycholic acid (GUDCA) and decanoic acid (C10:0). GUDCA is a bile acid and was quantified by LC-MS (liquid chromatography-mass spectrometry). C10:0 is a saturated fatty acid and was quantified by GC-MS (gas chromatography-mass spectrometry).

  3. Surface underway measurements of fugacity of carbon dioxide (fCO2),...

    • catalog.data.gov
    Updated Jul 1, 2025
    Cite
    (Point of Contact) (2025). Surface underway measurements of fugacity of carbon dioxide (fCO2), temperature, salinity and other variables collected during the R/V Thomas G. Thompson Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP) Section I06S cruise TN366 (EXPOCODE 325020190403) in the Southern and Indian Oceans from 2019-04-07 to 2019-05-14 (NCEI Accession 0208231) [Dataset]. https://catalog.data.gov/dataset/surface-underway-measurements-of-fugacity-of-carbon-dioxide-fco2-temperature-salinity-and-other1
    Explore at:
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    (Point of Contact)
    Area covered
    Indian Ocean
    Description

    This dataset consists of Surface underway measurements of fugacity of carbon dioxide (fCO2), temperature, salinity and other variables collected during the R/V Thomas G. Thompson Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP) Section I06S cruise TN366 (EXPOCODE 325020190403) in the Southern and Indian Oceans from 2019-04-07 to 2019-05-14. The cruise began and ended in Cape Town, South Africa and collected data in the Southern and Indian Oceans. This effort was conducted in support of the Global Ocean Ship-Based Hydrographic Investigations Program (GO-SHIP) and NOAA's Climate Program Office (CPO).

  4. Data from: INDILACT – Extended voluntary waiting period in primiparous dairy...

    • researchdata.se
    Updated Mar 13, 2025
    + more versions
    Cite
    Anna Edvardsson Rasmussen (2025). INDILACT – Extended voluntary waiting period in primiparous dairy cows. Part 2: Customized VWP – Metadata and R–scripts with statistical calculations [Dataset]. https://researchdata.se/en/catalogue/dataset/2024-424
    Explore at:
    Dataset updated
    Mar 13, 2025
    Dataset provided by
    Swedish University of Agricultural Sciences (SLU)
    Authors
    Anna Edvardsson Rasmussen
    Time period covered
    Jan 1, 2019 - Oct 27, 2022
    Area covered
    Sweden
    Description

    This is part 2 of INDILACT, part 1 is published separately.

    The objective of this study is to investigate how a customized voluntary waiting period before first insemination would affect milk production, fertility and health of primiparous dairy cows during their first calving interval.

    The data were registered between January 2019 and October 2022.

    These data are archived:
    - Metadata (publicly available)
    - Raw data (.txt files) from the Swedish national herd recording scheme (SNDRS), operated by Växa Sverige: access restricted due to agreements with the principal owners of the data, Växa Sverige and the farms. Code lists are available in INDILACT part 1.
    - Aggregated data (Excel files): access restricted due to agreements with the principal owners of the data, Växa Sverige and the farms.
    - R scripts with statistical calculations (openly available)

    Metadata (3 files):
    - Metadata gentypning: the only new file type compared to INDILACT Part 1; a description of how this data category has been handled. The other file types have been handled in the same way as in INDILACT Part 1.
    - Metadata - del 2: a general summary of the initial data handling for aggregating files of the same type (dates etc.) to create the Excel files used in the R scripts.
    - DisCodes: divisions of the diagnoses into categories.

    Raw data:
    - 59 .txt files containing data retrieved from SNDRS on 8 separate occasions.
    - Data from 18 Swedish farms from Jan 2019 to Oct 2022.

    Aggregated data:
    - 29 Excel files. The text files have been converted to Excel format, and all data from the same file type are aggregated into one file.
    - Data collected from the farms by email and phone contact, about individual cows enrolled in the trial, from Oct 2020 to Oct 2022.
    - One merged script derived from the initial data handling in R, where relevant variables were calculated and aggregated for use in the statistical calculations.

    R scripts with data handling and statistical calculations:
    - "Data analysis part 2 - final": data handling to create the file used in the statistical calculations.
    - "Part 2 - Binomial models - Fertility": statistical calculations of variables using binomial models.
    - "Part 2 - glmmTMB models - Fertility": statistical calculations of variables using glmmTMB models.
    - "Part 2 - linear models - Fertility": statistical calculations of fertility variables using linear models.
    - "Part 2 - linear models": statistical calculations of milk variables using linear models.

    Running the R scripts requires access to the restricted files. The files should be unpacked in a subdirectory "data" relative to the working directory for the scripts. See also the file "sessionInfo.txt" for information on R packages used.
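
    For orientation only, a minimal sketch of the kind of glmmTMB call the script names above refer to; the file name, variable names and model terms are assumptions, not the archived code:

    ```r
    # minimal sketch, not the archived scripts; file, variable and model terms are assumed
    library(glmmTMB)

    d <- read.csv("data/fertility_part2.csv")           # assumed file in the "data" subdirectory
    m <- glmmTMB(pregnant_first_ai ~ vwp_group + calving_month + (1 | herd),
                 family = binomial, data = d)           # binomial model with a random herd effect
    summary(m)
    ```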

  5. Data from: A dataset to model Levantine landcover and land-use change...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Dec 16, 2023
    Cite
    Kempf, Michael (2023). A dataset to model Levantine landcover and land-use change connected to climate change, the Arab Spring and COVID-19 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10396147
    Explore at:
    Dataset updated
    Dec 16, 2023
    Dataset authored and provided by
    Kempf, Michael
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Levant
    Description

    Overview

    This dataset is the repository for the following paper submitted to Data in Brief:

    Kempf, M. A dataset to model Levantine landcover and land-use change connected to climate change, the Arab Spring and COVID-19. Data in Brief (submitted: December 2023).

    The Data in Brief article contains the supplement information and is the related data paper to:

    Kempf, M. Climate change, the Arab Spring, and COVID-19 - Impacts on landcover transformations in the Levant. Journal of Arid Environments (revision submitted: December 2023).

    Description/abstract

    The Levant region is highly vulnerable to climate change, experiencing prolonged heat waves that have led to societal crises and population displacement. Since 2010, the area has been marked by socio-political turmoil, including the Syrian civil war and currently the escalation of the so-called Israeli-Palestinian Conflict, which strained neighbouring countries like Jordan due to the influx of Syrian refugees and increases population vulnerability to governmental decision-making. Jordan, in particular, has seen rapid population growth and significant changes in land-use and infrastructure, leading to over-exploitation of the landscape through irrigation and construction. This dataset uses climate data, satellite imagery, and land cover information to illustrate the substantial increase in construction activity and highlights the intricate relationship between climate change predictions and current socio-political developments in the Levant.

    Folder structure

    The main folder after download contains all data; the following subfolders are stored as zipped files:

    “code” stores the 9 code chunks (described under “Code structure” below) used to read, extract, process, analyse, and visualize the data.

    “MODIS_merged” contains the 16-days, 250 m resolution NDVI imagery merged from three tiles (h20v05, h21v05, h21v06) and cropped to the study area, n=510, covering January 2001 to December 2022 and including January and February 2023.

    “mask” contains a single shapefile, which is the merged product of administrative boundaries, including Jordan, Lebanon, Israel, Syria, and Palestine (“MERGED_LEVANT.shp”).

    “yield_productivity” contains .csv files of yield information for all countries listed above.

    “population” contains two files with the same name but different format. The .csv file is for processing and plotting in R. The .ods file is for enhanced visualization of population dynamics in the Levant (Socio_cultural_political_development_database_FAO2023.ods).

    “GLDAS” stores the raw data of the NASA Global Land Data Assimilation System datasets that can be read, extracted (variable name), and processed using code “8_GLDAS_read_extract_trend” from the respective folder. One folder contains data from 1975-2022 and a second the additional January and February 2023 data.

    “built_up” contains the landcover and built-up change data from 1975 to 2022. This folder is subdivided into two subfolders, which contain the raw data and the already processed data. “raw_data” contains the unprocessed datasets and “derived_data” stores the cropped built_up datasets at 5-year intervals, e.g., “Levant_built_up_1975.tif”.

    Code structure

    1_MODIS_NDVI_hdf_file_extraction.R

    This is the first code chunk; it covers the extraction of MODIS data from the .hdf file format. The following packages must be installed, and the raw data must be downloaded using a simple mass downloader, e.g., from Google Chrome. Packages: terra. Download MODIS data after registration from: https://lpdaac.usgs.gov/products/mod13q1v061/ or https://search.earthdata.nasa.gov/search (MODIS/Terra Vegetation Indices 16-Day L3 Global 250m SIN Grid V061, last accessed 9 October 2023). The code reads a list of files, extracts the NDVI, and saves each file as a single .tif file with the indication “NDVI”. Because the study area is quite large, we have to load three spatially different time series and merge them later. Note that the time series are temporally consistent.
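
    A minimal sketch of this extraction step with terra (the subdataset index is an assumption; check names() on the granule, and note that reading HDF4 depends on the local GDAL build):

    ```r
    # minimal sketch of the .hdf -> NDVI .tif step; subdataset selection is an assumption
    library(terra)

    hdf_files <- list.files(pattern = "\\.hdf$", full.names = TRUE)
    for (f in hdf_files) {
      granule <- sds(f)                     # all MOD13Q1 subdatasets (requires GDAL HDF4 support)
      ndvi    <- granule[[1]]               # NDVI is typically the first subdataset; check names(granule)
      writeRaster(ndvi, sub("\\.hdf$", "_NDVI.tif", f), overwrite = TRUE)
    }
    ```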

    2_MERGE_MODIS_tiles.R

    In this code, we load and merge the three different stacks to produce a large and consistent time series of NDVI imagery across the study area. We further use the package gtools to load the files in numerical order (1, 2, 3, 4, 5, 6, etc.). Here, we have three stacks, from which we merge the first two (stack 1, stack 2) and store them. We then merge this stack with stack 3. We produce single files named NDVI_final_*consecutivenumber*.tif. Before saving the final output of single merged files, create a folder called “merged” and set the working directory to this folder, e.g., setwd("your directory_MODIS/merged").
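
    A minimal sketch of the merge step, assuming the per-tile NDVI files sit in three folders named after the tiles (the folder layout is an assumption); gtools::mixedsort() keeps the 1, 2, 3, ... ordering mentioned above:

    ```r
    # minimal sketch; folder names are assumptions
    library(terra)
    library(gtools)

    t1 <- mixedsort(list.files("h20v05", pattern = "_NDVI\\.tif$", full.names = TRUE))
    t2 <- mixedsort(list.files("h21v05", pattern = "_NDVI\\.tif$", full.names = TRUE))
    t3 <- mixedsort(list.files("h21v06", pattern = "_NDVI\\.tif$", full.names = TRUE))

    dir.create("merged", showWarnings = FALSE)
    for (i in seq_along(t1)) {
      m <- merge(merge(rast(t1[i]), rast(t2[i])), rast(t3[i]))   # stack 1 + stack 2, then + stack 3
      writeRaster(m, file.path("merged", sprintf("NDVI_final_%d.tif", i)), overwrite = TRUE)
    }
    ```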

    3_CROP_MODIS_merged_tiles.R

    Now we want to crop the derived MODIS tiles to our study area. We are using a mask, which is provided as a .shp file in the repository, named "MERGED_LEVANT.shp". We load the merged .tif files and crop the stack with the vector. Saving to individual files, we name them “NDVI_merged_clip_*consecutivenumber*.tif”. We have now produced single cropped NDVI time series data from MODIS. The repository provides the already clipped and merged NDVI datasets.
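
    A minimal sketch of the cropping step (paths assumed as above):

    ```r
    # minimal sketch; file paths are assumptions
    library(terra)
    library(gtools)

    levant <- vect("MERGED_LEVANT.shp")   # study-area mask from the repository
    merged <- mixedsort(list.files("merged", pattern = "^NDVI_final_.*\\.tif$", full.names = TRUE))

    for (i in seq_along(merged)) {
      r <- mask(crop(rast(merged[i]), levant), levant)   # crop to the extent, then mask to the outline
      writeRaster(r, sprintf("NDVI_merged_clip_%d.tif", i), overwrite = TRUE)
    }
    ```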

    4_TREND_analysis_NDVI.R

    Now, we want to perform trend analysis on the derived data. The data we load are tricky, as they contain 16-day return periods across a year for a period of 22 years. Growing season sums contain MAM (March-May), JJA (June-August), and SON (September-November). December is represented as a single file, which means that the period DJF (December-February) is represented by 5 images instead of 6. For the last DJF period (December 2022), the data from January and February 2023 can be added. The code selects the respective images from the stack, depending on which period is under consideration. From these stacks, individual annually resolved growing season sums are generated and the slope is calculated. We can then extract the p-values of the trend and characterize all values with a high confidence level (0.05). Using the ggplot2 package and the melt function from the reshape2 package, we can create a plot of the reclassified NDVI trends together with a local smoother (LOESS) of value 0.3. To increase comparability and understand the amplitude of the trends, z-scores were calculated and plotted, which show the deviation of the values from the mean. This has been done for the NDVI values as well as the GLDAS climate variables as a normalization technique.
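
    A much-simplified sketch of the per-pixel trend idea (the real script also builds the seasonal subsets, the LOESS plot and the z-scores); `gs_sum` below is a dummy stand-in for a SpatRaster with one growing-season sum per year:

    ```r
    # simplified per-pixel trend sketch; gs_sum is a dummy example, not the real NDVI stack
    library(terra)

    years  <- 2001:2022
    gs_sum <- rast(ncols = 20, nrows = 20, nlyrs = length(years))
    values(gs_sum) <- rnorm(ncell(gs_sum) * nlyr(gs_sum))

    vals <- values(gs_sum)                              # matrix: cells x years
    fit1 <- function(v) {
      if (anyNA(v)) return(c(NA, NA))
      fit <- lm(v ~ years)
      c(coef(fit)[2], summary(fit)$coefficients[2, 4])  # slope and its p-value
    }
    trend <- rast(gs_sum, nlyrs = 2)                    # empty raster with the same geometry
    values(trend) <- t(apply(vals, 1, fit1))
    names(trend)  <- c("slope", "p_value")
    signif_slope  <- mask(trend$slope, trend$p_value < 0.05, maskvalues = 0)  # keep p < 0.05 cells
    ```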

    5_BUILT_UP_change_raster.R

    Let us look at the landcover changes now. We are working with the terra package and get raster data from here: https://ghsl.jrc.ec.europa.eu/download.php?ds=bu (last accessed 3 March 2023, 100 m resolution, global coverage). Here, one can download the temporal coverage that is aimed for and reclassify it using the code after cropping to the individual study area. Here, I summed up the different rasters to characterize the built-up change in continuous values between 1975 and 2022.
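
    A minimal sketch of the crop-and-sum idea for the GHSL built-up epochs (the epoch file names follow the pattern given above and are assumptions):

    ```r
    # minimal sketch; epoch file names are assumptions following the pattern above
    library(terra)

    levant <- vect("MERGED_LEVANT.shp")
    epochs <- sprintf("Levant_built_up_%d.tif", seq(1975, 2020, by = 5))   # assumed epoch years
    built  <- mask(crop(rast(epochs), levant), levant)
    built_change <- sum(built, na.rm = TRUE)            # continuous built-up change surface
    writeRaster(built_change, "Levant_built_up_change.tif", overwrite = TRUE)
    ```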

    6_POPULATION_numbers_plot.R

    For this plot, one needs to load the .csv-file “Socio_cultural_political_development_database_FAO2023.csv” from the repository. The ggplot script provided produces the desired plot with all countries under consideration.
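
    A minimal ggplot sketch of this step (the column names are assumptions; check the header of the .csv first):

    ```r
    # minimal sketch; column names (Year, Population, Country) are assumptions
    library(ggplot2)

    pop <- read.csv("Socio_cultural_political_development_database_FAO2023.csv")
    ggplot(pop, aes(x = Year, y = Population, colour = Country)) +
      geom_line() +
      labs(x = NULL, y = "Population")
    ```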

    7_YIELD_plot.R

    In this section, we are using the country productivity data from the supplement in the repository “yield_productivity” (e.g., "Jordan_yield.csv"). Each of the single-country yield datasets is plotted with ggplot and combined using the patchwork package in R.
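
    A minimal sketch of combining the per-country yield plots with patchwork (file and column names are assumptions):

    ```r
    # minimal sketch; file and column names are assumptions
    library(ggplot2)
    library(patchwork)

    plot_yield <- function(file, country) {
      d <- read.csv(file)
      ggplot(d, aes(x = Year, y = Yield)) + geom_line() + ggtitle(country)
    }

    p_jordan  <- plot_yield("yield_productivity/Jordan_yield.csv", "Jordan")
    p_lebanon <- plot_yield("yield_productivity/Lebanon_yield.csv", "Lebanon")
    p_jordan + p_lebanon    # patchwork combines ggplot objects with `+`
    ```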

    8_GLDAS_read_extract_trend

    The last code provides the basis for the trend analysis of the climate variables used in the paper. The raw data can be accessed at https://disc.gsfc.nasa.gov/datasets?keywords=GLDAS%20Noah%20Land%20Surface%20Model%20L4%20monthly&page=1 (last accessed 9 October 2023). The raw data come in .nc file format, and the various variables can be extracted using the [“^a variable name”] command from the spatraster collection. Each time you run the code, this variable name must be adjusted to meet the requirements for the variables (see this link for abbreviations: https://disc.gsfc.nasa.gov/datasets/GLDAS_CLSM025_D_2.0/summary, last accessed 9 October 2023; or see the respective code chunk when reading a .nc file with the ncdf4 package in R), or run print(nc) from the code, or use names() on the spatraster collection. Choosing one variable, the code uses the MERGED_LEVANT.shp mask from the repository to crop and mask the data to the outline of the study area. From the processed data, trend analyses are conducted and z-scores are calculated following the code described above. However, annual trends require the frequency of the time series analysis to be set to value = 12. Regarding, e.g., rainfall, which is measured as annual sums and not means, the chunk r.sum=r.sum/12 has to be removed or set to r.sum=r.sum/1 to avoid calculating annual mean values (see other variables). Seasonal subsets can be calculated as described in the code. Here, 3-month subsets were chosen for the growing seasons, e.g. March-May (MAM), June-August (JJA), September-November (SON), and DJF (December-February, including Jan/Feb of the consecutive year). From the data, mean values of 48 consecutive years are calculated and trend analyses are performed as described above. In the same way, p-values are extracted and 95 % confidence level values are marked with dots on the raster plot. This analysis can be performed with a much longer time series, other variables, and different spatial extents across the globe due to the availability of the GLDAS variables.
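
    A minimal sketch of the read/extract/crop part of this step; the variable name below is an example only (check the GLDAS abbreviation list or names() on the collection):

    ```r
    # minimal sketch; "Tair_f_inst" is an example GLDAS variable name, not necessarily the one used
    library(terra)

    nc_files <- list.files("GLDAS", pattern = "\\.nc4?$", full.names = TRUE)
    tair   <- rast(nc_files[1], subds = "Tair_f_inst")  # one monthly file shown; loop over nc_files in practice
    levant <- vect("MERGED_LEVANT.shp")
    tair   <- mask(crop(tair, levant), levant)          # crop and mask to the study area
    ```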

    9_workflow_diagramme: This simple code can be used to plot a workflow diagram and is detached from the actual analysis.

    Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization, Supervision, Project administration, and Funding acquisition: Michael Kempf

  6. Dissolved inorganic carbon (DIC), total alkalinity, temperature, salinity...

    • catalog.data.gov
    • s.cnmilf.com
    Updated Jul 1, 2025
    + more versions
    Cite
    (Point of Contact) (2025). Dissolved inorganic carbon (DIC), total alkalinity, temperature, salinity and other variables collected from discrete samples and profile observations during the R/V Celtic Explorer cruise CE17007 along the Global Ocean Ship-Based Hydrographic Investigation Program (GO-SHIP) Section A02 (EXPOCODE 45CE20170427) in the North Atlantic Ocean from 2017-04-27 to 2017-05-22 (NCEI Accession 0208599) [Dataset]. https://catalog.data.gov/dataset/dissolved-inorganic-carbon-dic-total-alkalinity-temperature-salinity-and-other-variables-collec5
    Explore at:
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    (Point of Contact)
    Area covered
    Atlantic Ocean, North Atlantic Ocean
    Description

    This dataset includes discrete profile measurements of dissolved inorganic carbon, total alkalinity, temperature, salinity, oxygen, nutrients, chlorofluorocarbon 12 (CFC-12) and sulfur hexafluoride (SF6) made during the R/V Celtic Explorer cruise CE17007 along the Global Ocean Ship-Based Hydrographic Investigation Program (GO-SHIP) Section A02 (EXPOCODE 45CE20170427) in the North Atlantic Ocean from 2017-04-27 to 2017-05-22. Hydrographic measurements along this section were made under the direction of the GO-SHIP. This reoccupation of the A02 section was supported by the Irish Marine Institute and funded under the Marine Research Programme by the Irish Government. Individual teams were supported by the European Union’s Horizon 2020 and the Canada Excellence Research Chair in Ocean Science and Technology.

  7. Data from: A neural implementation of cognitive reserve: insights from a...

    • search.dataone.org
    Updated Dec 14, 2024
    Cite
    Fatemeh Hasanzadeh; Christian Habeck; Yunglin Gazes; Yaakov Stern (2024). A neural implementation of cognitive reserve: insights from a longitudinal fMRI study of set-switching in aging [Dataset]. http://doi.org/10.5061/dryad.qz612jmrm
    Explore at:
    Dataset updated
    Dec 14, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Fatemeh Hasanzadeh; Christian Habeck; Yunglin Gazes; Yaakov Stern
    Description

    The dataset (Data.mat) comprises variables extracted from demographic information, cognitive task performance, and MRI/fMRI imaging of a longitudinal study of 52 adults aged 60–71, evaluated at baseline and after a 5-year follow-up. The analysis code is structured into two main scripts: (A) Main Analysis in MATLAB (Code1_MainAnalysis_Matlab.m), which utilizes the primary dataset (Data.mat) to examine brain variables, model cognitive trajectories, and prepare data for further interaction analysis; and (B) Interaction Visualization in R (Code2_Johnson-Neyman_R.R), which produces interaction effect visualizations using the output from the MATLAB analysis. This dataset does not contain any direct fMRI data. The fMRI data in our study were analyzed using a specific technique called Ordinal Trend Canonical Variates Analysis (OrT CVA). Through this approach, we derived a measure known as the OrT score, which was calculated for both single and dual-task conditions. This OrT score is the only fMRI-related variable included in the dataset that we uploaded. To provide further details, OrT CVA is a multivariate data-driven technique that identifies patterns of regional functional activation that show a monotonic change across multiple experimental conditions (in the current study, single and dual conditions). The extracted functional activation patterns, called ordinal trends (OrT), indicate sustained activity across a graduated increase in task demand (Habeck, Krakauer, et al., 2005; Habeck, Rakitin, et al., 2005). The technique utilizes a specialized design matrix to enhance variance contributions from patterns that exhibit within-subject increase...

    # A neural implementation of cognitive reserve: insights from a longitudinal fMRI study of set-switching in aging

    This document provides documentation for the dataset and code used in the paper by Fatemeh Hasanzadeh, Christian Habeck, Yunglin Gazes, and Yaakov Stern.

    Dataset: Data.mat

    The following section describes the variables contained in the Data.mat file, which includes longitudinal measurements from 52 older adults at baseline and 5-year follow-up. The dataset contains demographic, behavioral, and neuroimaging data collected as part of the study.

    Demographic and Study Variables

    Variable: Description
    - subid: Unique participant identifier
    - TimePoint: Study timepoint (1 = baseline, 2 = 5-year follow-up)
    - scannerID: MRI scanner identifier (1 = Philips, 3 = PRISMA)
    ...
  8. Data from: Dataset of a randomized controlled depression prevention trial...

    • datacatalogue.cessda.eu
    • ssh.datastations.nl
    Updated Jul 15, 2023
    Cite
    M. Poppelaars; A. Lichtwarck-Aschoff; R. Otten; I. Granic (2023). Dataset of a randomized controlled depression prevention trial investigating the efficacy of the commercial video game Journey [Dataset]. http://doi.org/10.17026/dans-zhq-2qmc
    Explore at:
    Dataset updated
    Jul 15, 2023
    Dataset provided by
    Radboud University
    Authors
    M. Poppelaars; A. Lichtwarck-Aschoff; R. Otten; I. Granic
    Description

    This data archive contains data on a pre-registered randomized controlled trial (RCT) testing the potential for the commercial video game Journey to prevent the exacerbation of depressive symptoms compared to an active and passive control condition. The dataset contains all anonymised raw data from the screening questionnaires of all screening participants, the raw data from all 5 questionnaires (screening, pre-test, post-test, 6-month follow-up and 12-month follow-up) and any logbooks of participants who consented to participate in the RCT, and the raw coding data of the narrative identity fragments that were coded by two coders to determine reliability of the coding process. Participants were 244 youth aged 15 to 20 years old with elevated depressive symptoms. Those who were randomized to the Journey or the active control game condition were given four weeks to play Journey or the control game. Furthermore, a number of action mechanisms that were hypothesized to affect depressive symptoms were tested. Additionally, secondary outcomes, logbook data, and other additional variables not used in the data analyses for the main outcome paper are included to facilitate the further utilization of this data. A guide to the included files can be found in the 2020_Poppelaars_RCT Journey_Read Me.pdf file. Syntax and the resulting data files are available for creating scale scores and other variables recoded or calculated from the raw data. Furthermore, syntax and the resulting data files are available for the analyses of the main outcome paper (Poppelaars, M., Lichtwarck-Aschoff, A., Otten, R., & Granic, I. (2020). Can a Commercial Video Game Prevent Depression? Null Results and Whole Sample Action Mechanisms in a Randomized Controlled Trial. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2020.575962). Finally, codebooks describing the project and variables, as well as a methods section are included.

  9. Data and code to explore annual cycle schedule adjustments in a long...

    • search.dataone.org
    • datadryad.org
    Updated Feb 27, 2025
    Cite
    Camilo Carneiro; Tómas Gunnarsson; José Alves (2025). Data and code to explore annual cycle schedule adjustments in a long distance migrant [Dataset]. http://doi.org/10.5061/dryad.vt4b8gtv9
    Explore at:
    Dataset updated
    Feb 27, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Camilo Carneiro; Tómas Gunnarsson; José Alves
    Time period covered
    May 12, 2022
    Description

    Matching the timing of annual cycle events with the required resources can have crucial consequences for individual fitness. But as the annual cycle is comprised of sequential events, a delay at any point may be carried over to the subsequent stage (or more, in a domino effect) and negatively influence individual performance. To investigate how migratory animals navigate their annual schedule, and where and when it may be adjusted, we used full annual cycle data of 38 Icelandic whimbrels Numenius phaeopus islandicus tracked over 7 years – a subspecies that typically performs long-distance migrations to West Africa. We found that individuals apparently used the wintering sites to compensate for delays that mostly arose due to previous successful breeding, and a domino effect was observed from spring departure to laying date, with the potential to affect breeding output. However, the total time saved during all stationary periods is apparently enough to avoid interannual effects bet...

    # Data and code to explore annual cycle schedule adjustments in a long distance migrant

    https://doi.org/10.5061/dryad.vt4b8gtv9

    Description of the data and file structure

    This study investigates how migratory animals navigate their annual schedule, and where and when they adjust it, using the Icelandic whimbrel (Numenius phaeopus islandicus) as a model.

    Individuals apparently compensated for delays at the wintering sites, which mostly arose due to previous successful breeding, and a domino effect was observed from spring departure to laying date, with the potential to affect breeding output.

    We would appreciate it if anyone interested in using these data contacted the authors.

    Files and variables

    Following the variable explanations, the corresponding R script is indicated, along with the R packages required (and the versions used) to analyse the dataset.

    All analyses were conducted in R version 4.0.3.

    dataSEM

    Variable explanation:

    • fledg...
  10. English Longitudinal Study of Ageing: Waves 0-11, 1998-2024

    • beta.ukdataservice.ac.uk
    Updated 2025
    + more versions
    Cite
    J. Banks; G. David Batty; J. Breedvelt; K. Coughlin; Crawford, R., Institute For Fiscal Studies (IFS); M. Marmot; J. Nazroo; Oldfield, Z., Institute For Fiscal Studies (IFS); N. Steel; A. Steptoe; M. Wood; P. Zaninotto (2025). English Longitudinal Study of Ageing: Waves 0-11, 1998-2024 [Dataset]. http://doi.org/10.5255/ukda-sn-5050-32
    Explore at:
    Dataset updated
    2025
    Dataset provided by
    UK Data Service (https://ukdataservice.ac.uk/)
    datacite
    Authors
    J. Banks; G. David Batty; J. Breedvelt; K. Coughlin; Crawford, R., Institute For Fiscal Studies (IFS); M. Marmot; J. Nazroo; Oldfield, Z., Institute For Fiscal Studies (IFS); N. Steel; A. Steptoe; M. Wood; P. Zaninotto
    Description

    The English Longitudinal Study of Ageing (ELSA) is a longitudinal survey of ageing and quality of life among older people that explores the dynamic relationships between health and functioning, social networks and participation, and economic position as people plan for, move into and progress beyond retirement. The main objectives of ELSA are to:

    • construct waves of accessible and well-documented panel data;
    • provide these data in a convenient and timely fashion to the scientific and policy research community;
    • describe health trajectories, disability and healthy life expectancy in a representative sample of the English population aged 50 and over;
    • examine the relationship between economic position and health;
    • investigate the determinants of economic position in older age;
    • describe the timing of retirement and post-retirement labour market activity; and
    • understand the relationships between social support, household structure and the transfer of assets.

    Further information may be found on the ELSA project website (https://www.elsa-project.ac.uk/) or the NatCen Social Research ELSA web pages.

    Wave 11 data has been deposited - May 2025

    For the 45th edition (May 2025) ELSA Wave 11 core and pension grid data and documentation were deposited. Users should note this dataset version does not contain the survey weights. A version with the survey weights along with IFS and financial derived datasets will be deposited in due course. In the meantime, more information about the data collection or the data collected during this wave of ELSA can be found in the Wave 11 Technical Report or the User Guide.

    Health conditions research with ELSA - June 2021

    The ELSA Data team have found some issues with historical data measuring health conditions. If you are intending to do any analysis looking at the following health conditions, then please read the ELSA User Guide or if you still have questions contact elsadata@natcen.ac.uk for advice on how you should approach your analysis. The affected conditions are: eye conditions (glaucoma; diabetic eye disease; macular degeneration; cataract), CVD conditions (high blood pressure; angina; heart attack; Congestive Heart Failure; heart murmur; abnormal heart rhythm; diabetes; stroke; high cholesterol; other heart trouble) and chronic health conditions (chronic lung disease; asthma; arthritis; osteoporosis; cancer; Parkinson's Disease; emotional, nervous or psychiatric problems; Alzheimer's Disease; dementia; malignant blood disorder; multiple sclerosis or motor neurone disease).

    For information on obtaining data from ELSA that are not held at the UKDS, see the ELSA Genetic data access and Accessing ELSA data webpages.

    Wave 10 Health data
    Users should note that in Wave 10, the health section of the ELSA questionnaire was revised and all respondents were asked anew about their health conditions, rather than following the prior approach of asking those who had taken part in past waves to confirm previously recorded conditions. For this reason, the health conditions feed-forward data were not archived for Wave 10, as they were in previous waves.

    Harmonized dataset:

    Users of the Harmonized dataset who prefer to use the Stata version will need access to Stata MP software, as the version G3 file contains 11,779 variables (the limit for the standard Stata 'Intercooled' version is 2,047).

    ELSA COVID-19 study:
    A separate ad-hoc study conducted with ELSA respondents, measuring the socio-economic effects/psychological impact of the lockdown on the aged 50+ population of England, is also available under SN 8688, English Longitudinal Study of Ageing COVID-19 Study.

  11. Better sharing through licenses

    • datacatalogue.cessda.eu
    • ssh.datastations.nl
    Updated Jul 4, 2023
    Cite
    A.R. Snijder (2023). Better sharing through licenses [Dataset]. http://doi.org/10.17026/dans-zpc-dmfb
    Explore at:
    Dataset updated
    Jul 4, 2023
    Dataset provided by
    OAPEN Foundation
    Authors
    A.R. Snijder
    Description

    The paper based on these data will investigate the influence of licenses on usage of online books in the OAPEN Library. The OAPEN Library platform logs usage data, starting from January 2011. Among the data logged is the number of times each monograph was downloaded in a month. We will use this as an indicator of successful dissemination: more downloads means a better result. For this paper, we will use the data captured over a period of 33 months: from January 2011 up until September 2013. During this time, 1734 different books were made available through the OAPEN Library. Of these monographs, 855 were disseminated under a Creative Commons license – in other words: libre OA – and 879 were distributed under a more restrictive regime.


    File list:
    - OAPEN_license_downloads.csv: data
    - Explanation of variables urn-nbn-nl-ui-13-8ut1-25.pdf: list of variables used in the data
    - oapen.excel-20140123: list of books mentioned in variable OAPEN-ID
    - R. Snijder, 'Better Sharing Through Licenses? Measuring the Influence of Creative Commons Licenses on the Usage of Open Access Monographs'. In JLSC vol 3 issue 1 2014-10-30.pdf: final article, available in this dataset as of 04-03-2015

  12. Data from: Flock size and structure influence reproductive success in four...

    • zenodo.org
    • explore.openaire.eu
    Updated Apr 24, 2025
    Cite
    Andrew Mooney; Andrew J. Teare; Johanna Staerk; Simeon Q. Smeele; Paul Rose; R. Harrison Edell; Catherine E. King; Laurie Conrad; Yvonne M. Buckley (2025). Data from: Flock size and structure influence reproductive success in four species of flamingo in 540 captive populations worldwide [Dataset]. http://doi.org/10.5281/zenodo.7504077
    Explore at:
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Andrew Mooney; Andrew J. Teare; Johanna Staerk; Simeon Q. Smeele; Paul Rose; R. Harrison Edell; Catherine E. King; Laurie Conrad; Yvonne M. Buckley
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary

    This dataset accompanies the publication "Flock size and structure influence reproductive success in four species of flamingo in 540 captive populations worldwide" published in Zoo Biology. It contains anonymised data from 540 captive flamingo populations, and includes the four species: Phoeniconaias minor, Phoenicopterus chilensis, Phoenicopterus roseus and Phoenicopterus ruber. Data were sourced from the Zoological Information Management System (ZIMS), operated by Species360 (https://www.species360.org/). ZIMS is the largest real-time database of comprehensive and standardized information spanning more than 1,200 zoological collections globally, and provides the number of institutions currently managing each flamingo species and both their current and historic population sizes. These data were used to investigate the relationship between reproductive success and both flock size, and structure, on a global scale.

    This dataset also contains climatic data provided by WorldClim, which were used to assess the influence of climatic variables on captive flamingo reproductive success globally. The WorldClim database averages 19 different climatic variables derived from monthly temperature and rainfall values at a 1 km spatial resolution for the period 1970-2000. Using geographic coordinates (latitude and longitude) we calculated several climatic metrics for each institution.

    Description of the Dataset

    One file is provided for each species (P. minor, P. chilensis, P. roseus and P. ruber) as a csv file. Each file contains the following 15 columns:

    • Institution Code: An anonymous code used to identify individual zoological institutions.
    • Country: The country where the institution is located.
    • Year: Current year (t).
    • Flock Size: Flock size in year t.
    • Males: The number of males in the flock in year t.
    • Females: The number of females in the flock in year t.
    • Unsexed: The number of unsexed individuals in the flock in year t.
    • Proportion of Females: The proportion of the flock made up of female individuals in year t.
    • Proportion of Unsexed: The proportion of the flock made up of unsexed individuals in year t.
    • Hatches: Number of birds hatched in year t.
    • Proportion of Additions: The proportion of the flock in year t made up of additions from year t-1 (not including new birds hatched into the flock).
    • MAP: Mean annual precipitation (mm).
    • MAT: Mean annual temperature (°C).
    • MAP Var: Mean annual variation in precipitation (MAP coefficient of variation).
    • MAT Var: Mean annual variation in temperature (MAT standard deviation).

    Note: Mean Annual Temperature (MAT) is provided by WorldClim as °C multiplied by 10, and similarly mean annual variation in temperature as MAT standard deviation multiplied by 100. In the corresponding publication, both were divided (by 10 and 100 respectively) prior to modelling to avoid confusion in the units used.
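
    A tiny sketch of that unit conversion (the file and column names are assumptions about the csv layout):

    ```r
    # tiny sketch of the unit conversion described above; file and column names are assumptions
    flock <- read.csv("P_roseus.csv")      # assumed species file name
    flock$MAT     <- flock$MAT / 10        # WorldClim ships MAT as degrees C x 10
    flock$MAT.Var <- flock$MAT.Var / 100   # and its standard deviation x 100
    ```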

    Acknowledgements

    We acknowledge and thank all Species360 member institutions for their continued support and data input. The research which data refers to was funded by the Irish Research Council Laureate Awards 2017/2018 IRCLA/2017/60 to Y.M.B. Additionally, S.Q.S. received funding from the International Max Planck Research School for Organismal Biology. The Species360 Conservation Science Alliance would like to thank their sponsors: the World Association of Zoos and Aquariums, Wildlife Reserves of Singapore, and Copenhagen Zoo.

    Disclaimer

    Despite our best efforts at screening the data for errors and inconsistencies, some information could be erroneous. Similarly, data contained within ZIMS are based on submitted records from individual institutions, and are not subject to editorial verification, potentially permitting errors or failure to update species holdings etc. Despite this, ZIMS represents the only global database of zoo collection composition records, and as a result, is used by the IUCN, Convention on International Trade in Endangered Species (CITES), the Wildlife Trade Monitoring Network (TRAFFIC), United States Fish and Wildlife Service (USFWS) and Department for Environment, Food and Rural Affairs (DEFRA).

    Credit

    If you use this dataset, please cite the corresponding publication:

    Mooney, A., Teare, J. A., Staerk, J., Smeele, S. Q., Rose, P., Edell, R. H., King, C. E., Conrad, L., & Buckley, Y. M. (2023). Flock size and structure influence reproductive success in four species of flamingo in 540 captive populations worldwide. Zoo Biology, 1–14. https://doi.org/10.1002/zoo.21753

  13. Confirmation dataset.

    • plos.figshare.com
    • figshare.com
    xls
    Updated May 19, 2025
    + more versions
    Cite
    Abdulsalam A. Al-Tamimi; Kenan Muhamedagic; Derzija Begic – Hajdarevic; Ajdin Vatres; Edin Kadric (2025). Confirmation dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0322628.t010
    Explore at:
    Available download formats: xls
    Dataset updated
    May 19, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Abdulsalam A. Al-Tamimi; Kenan Muhamedagic; Derzija Begic – Hajdarevic; Ajdin Vatres; Edin Kadric
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The application of additive manufacturing technologies for producing parts from polymer composite materials has gained significant attention due to the ability to create fully functional components that leverage the advantages of both polymer matrices and fiber reinforcements while maintaining the benefits of additive technology. Polymer composites are among the most advanced and widely used composite materials, offering high strength and stiffness with low mass and variable resistance to different media. This study aims to experimentally investigate the impact of selected process parameters, namely, wall thickness, raster angle, printing temperature, and build plate temperature, on the flexural properties of carbon fiber reinforced polyamide (CFrPA) fused deposition modeling (FDM) printed samples, as per ISO 178 standards. Additionally, regression and artificial neural network (ANN) models have been developed to predict these flexural properties. ANN models are developed for both normal and augmented inputs, with the architecture and hyperparameters optimized using a random search technique. Response surface methodology (RSM), which is based on a face-centered composite design, is employed to analyze the effects of process parameters. The RSM results indicate that the raster angle and build plate temperature have the greatest impact on the flexural properties, resulting in an increase of 51% in the flexural modulus. The performance metrics of the optimized RSM and ANN models, characterized by low MSE, RMSE, MAE, and MAPE values and high R2 values, suggest that these models provide highly accurate and reliable predictions of flexural strength and modulus for the CFrPA material. The study revealed that ANN models with augmented inputs outperform both RSM models and ANN models with normal inputs in predicting these properties.

  14. Behavioral responses of common dolphins to naval sonar

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Oct 4, 2024
    Cite
    Brandon Southall; John Durban (2024). Behavioral responses of common dolphins to naval sonar [Dataset]. http://doi.org/10.5061/dryad.ncjsxkt40
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 4, 2024
    Dataset provided by
    Southall Environmental Associates (United States)
    University of California, Santa Cruz
    Authors
    Brandon Southall; John Durban
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Despite strong interest in how noise affects marine mammals, little is known about the most abundant and commonly exposed taxa. Social delphinids occur in groups of hundreds of individuals that travel quickly, change behavior ephemerally, and are not amenable to conventional tagging methods, posing challenges in quantifying noise impacts. We integrated drone-based photogrammetry, strategically placed acoustic recorders, and broad-scale visual observations to provide complementary measurements of different aspects of behavior for short- and long-beaked common dolphins. We measured behavioral responses during controlled exposure experiments (CEEs) of military mid-frequency (3-4 kHz) active sonar (MFAS) using simulated and actual Navy sonar sources. We used latent-state Bayesian models to evaluate response probability and persistence in exposure and post-exposure phases. Changes in sub-group movement and aggregation parameters were commonly detected during different phases of MFAS CEEs but not control CEEs. Responses were more evident in short-beaked common dolphins (n=14 CEEs), and a direct relationship between response probability and received level was observed. Long-beaked common dolphins (n=20) showed less consistent responses, although contextual differences may have limited which movement responses could be detected. These are the first experimental behavioral response data for these abundant dolphins to directly inform impact assessments for military sonars.

    Methods

    We used complementary visual and acoustic sampling methods at variable spatial scales to measure different aspects of common dolphin behavior in known and controlled MFAS exposure and non-exposure contexts. Three fundamentally different data collection systems were used to sample group behavior. A broad-scale visual sampling of subgroup movement was conducted using theodolite tracking from shore-based stations. Assessments of whole-group and sub-group sizes, movement, and behavior were conducted at 2-minute intervals from shore-based and vessel platforms using high-powered binoculars and standardized sampling regimes. Aerial UAS-based photogrammetry quantified the movement of a single focal subgroup. The UAS consisted of a large (1.07 m diameter) custom-built octocopter drone launched and retrieved by hand from vessel platforms. The drone carried a vertically gimballed camera (at least 16MP) and sensors that allowed precise spatial positioning, allowing spatially explicit photogrammetry to infer movement speed and directionality. Remote-deployed (drifting) passive acoustic monitoring (PAM) sensors were strategically deployed around focal groups to examine both basic aspects of subspecies-specific common dolphin acoustic (whistling) behavior and potential group responses in whistling to MFAS on variable temporal scales (Casey et al., in press). This integration allowed us to evaluate potential changes in movement, social cohesion, and acoustic behavior and their covariance associated with the absence or occurrence of exposure to MFAS. The collective raw data set consists of several GB of continuous broadband acoustic data and hundreds of thousands of photogrammetry images. Three sets of quantitative response variables were analyzed from the different data streams: directional persistence and variation in speed of the focal subgroup from UAS photogrammetry; group vocal activity (whistle counts) from passive acoustic records; and number of sub-groups within a larger group being tracked by the shore station overlook.
    We fit separate Bayesian hidden Markov models (HMMs) to each set of response data, with the HMM assumed to have two states: a baseline state and an enhanced state that was estimated in sequential 5-s blocks throughout each CEE. The number of subgroups was recorded during periodic observations every 2 minutes and assumed constant across time blocks between observations. The number of subgroups was treated as missing data 30 seconds before each change was noted to introduce prior uncertainty about the precise timing of the change. For movement, two parameters relating to directional persistence and variation in speed were estimated by fitting a continuous time-correlated random walk model to spatially explicit photogrammetry data in the form of location tracks for focal individuals that were sequentially tracked throughout each CEE as a proxy for subgroup movement. Movement parameters were assumed to be normally distributed. Whistle counts were treated as normally distributed but truncated as positive because negative count data is not possible. Subgroup counts were assumed to be Poisson distributed as they were distinct, small values. In all cases, the response variable mean was modeled as a function of the HMM with a log link: log(Response_t) = l0 + l1 * Z_t, where at each 5-s time block t, the hidden state took the value Z_t = 0 to identify one state with a baseline response level l0, or Z_t = 1 to identify an “enhanced” state, with l1 representing the enhancement of the quantitative value of the response variable. A flat uniform (-30,30) prior distribution was used for l0 in each response model, and a uniform (0,30) prior distribution was adopted for each l1 to constrain enhancements to be positive. For whistle and subgroup counts, the enhanced state indicated increased vocal activity and more subgroups. A common indicator variable was estimated for the latent state for both the movement parameters, such that switching to the enhanced state described less directional persistence and more variation in velocity. Speed was derived as a function of these two parameters and was used here as a proxy for their joint responses, representing directional displacement over time.
    To assess differences in the behavior states between experimental phases, the block-specific latent states were modeled as a function of phase-specific probabilities, Z_t ~ Bernoulli(p_phase[t]), to learn about the probability p_phase of being in an enhanced state during each phase. For each pre-exposure, exposure, and post-exposure phase, this probability was assigned a flat uniform (0,1) prior probability. The model was programmed in R (R version 3.6.1; The R Foundation for Statistical Computing) with the nimble package (de Valpine et al. 2020) to estimate posterior distributions of model parameters using Markov Chain Monte Carlo (MCMC) sampling. Inference was based on 100,000 MCMC samples following a burn-in of 100,000, with chain convergence determined by visual inspection of three MCMC chains and corroborated by convergence diagnostics (Brooks and Gelman, 1998). To compare behavior across phases, we compared the posterior distribution of the p_phase parameters for each response variable, specifically by monitoring the MCMC output to assess the “probability of response” as the proportion of iterations for which p_exposure was greater or less than p_pre-exposure, and the “probability of persistence” as the proportion of iterations for which p_post-exposure was greater or less than p_pre-exposure. These probabilities of response and persistence thus estimated the extent of separation (non-overlap) between the distributions of pairs of p_phase parameters: if the two distributions of interest were identical, then p=0.5, and if the two were non-overlapping, then p=1. Similarly, we estimated the average values of the response variables in each phase by predicting phase-specific functions of the parameters: Mean.response_phase = exp(l0 + l1 * p_phase), and simply derived the average speed as the mean of the speed estimates for 5-second blocks in each phase.
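
    A heavily simplified sketch, in R with nimble, of the phase-dependent latent-state structure described above (synthetic data, a normal response only, none of the movement or truncation details; all variable names are placeholders, not the authors' code):

    ```r
    # simplified illustration of the phase-dependent latent-state model; not the authors' code
    library(nimble)

    set.seed(1)
    N         <- 60
    phase_idx <- rep(1:3, each = 20)               # 1 = pre-exposure, 2 = exposure, 3 = post-exposure
    y_obs     <- rnorm(N, mean = 5, sd = 1)        # synthetic stand-in for a response (e.g. whistle counts)

    hmm_code <- nimbleCode({
      l0 ~ dunif(-30, 30)                          # baseline level
      l1 ~ dunif(0, 30)                            # positive enhancement
      sigma ~ dunif(0, 100)
      for (ph in 1:3) { p_phase[ph] ~ dunif(0, 1) }
      for (t in 1:N) {
        Z[t] ~ dbern(p_phase[phase[t]])            # latent enhanced/baseline state per 5-s block
        log(mu[t]) <- l0 + l1 * Z[t]               # log link on the response mean
        y[t] ~ dnorm(mu[t], sd = sigma)
      }
    })

    fit <- nimbleMCMC(code = hmm_code,
                      constants = list(N = N, phase = phase_idx),
                      data = list(y = y_obs),
                      inits = list(l0 = 1, l1 = 1, sigma = 1,
                                   p_phase = rep(0.5, 3), Z = rep(0, N)),
                      monitors = c("p_phase", "l0", "l1"),
                      niter = 20000, nburnin = 10000)
    ```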

  15. Data from: Shorter CWR cycling tests as proxies for longer tests in highly...

    • researchdata.edu.au
    Updated Aug 15, 2023
    + more versions
    Cite
    Shorter CWR cycling tests as proxies for longer tests in highly trained cyclists [dataset] [Dataset]. https://researchdata.edu.au/shorter-cwr-cycling-cyclists-dataset/2765667
    Explore at:
    Dataset updated
    Aug 15, 2023
    Dataset provided by
    Edith Cowan University
    Authors
    Trish King; Mark Andrews; Lachlan Mitchell; Jodie Cochrane Wilkie; Chantelle du Plessis; Anthony Blazevich
    License

    CC0 1.0 Universal Public Domain Dedication
    https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Severe-intensity constant work rate (CWR) cycling tests simulate the high-intensity competition environment and are useful for monitoring training progression and adaptation, yet impose significant physiological and psychological strain, require substantial recovery, and may disrupt athlete training or competition preparation. A brief, minimally fatiguing test providing comparable information is desirable.

    Purpose: To determine whether physiological variables measured during, and functional decline in maximal power output immediately after, a 2-min CWR test can act as a proxy for 4-min test outcomes.

    Methods: Physiological stress (V̇O2 kinetics, heart rate, blood lactate concentrations ([La-]b)) was monitored and performance fatigability was estimated (as pre-to-post-CWR changes in 10-s sprint power) during 2- and 4-min CWR tests in 16 high-level cyclists (V̇O2peak = 64.4±6.0 ml∙kg-1∙min-1). The relationships between the 2- and 4-min CWR tests, and the physiological variables that best related to performance fatigability, were investigated.
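
    As a minimal illustration (not the authors' analysis) of the quantities described above, performance fatigability can be computed as the pre-to-post decline in 10-s sprint power for each test and then compared between tests. The data frame cwr and its column names are hypothetical.

      # Percent decline in 10-s sprint power after each CWR test
      cwr$decline_2min <- 100 * (cwr$pre_sprint_2min - cwr$post_sprint_2min) / cwr$pre_sprint_2min
      cwr$decline_4min <- 100 * (cwr$pre_sprint_4min - cwr$post_sprint_4min) / cwr$pre_sprint_4min

      # Agreement between the shorter and longer tests
      cor.test(cwr$decline_2min, cwr$decline_4min, method = "pearson")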

    Results: The 2-min CWR test evoked a smaller decline in sprint mechanical power (32% vs. 47%, p < 0.001). Both the physiological variables (r=0.66-0.96) and sprint mechanical power (r=0.67-0.92) were independently and strongly correlated between 2- and 4-min tests. Differences in V̇O2peak and [La-]b in both CWR tests were strongly associated with the decline in sprint mechanical power.

    Conclusion: Strong correlations between 2- and 4-min severe-intensity CWR test outcomes indicated that the shorter test can be used as a proxy for the longer test. A shorter test may be more practical within the elite performance environment due to lower physiological stress and performance fatigability and should have less impact on subsequent training and competition preparation.

  16. r

    Temperature, precipitation, birch and fireweed chemistry, and moose (Alces...

    • researchdata.se
    • demo.researchdata.se
    Updated Aug 7, 2023
    Cite
    Sheila Holmes; Kjell Danell; John Ball; Göran Ericsson (2023). Temperature, precipitation, birch and fireweed chemistry, and moose (Alces alces) calf mass in northern Sweden [Dataset]. http://doi.org/10.5878/j1fh-8w11
    Explore at:
    (2206), (7653), (561), (1403), (4302), (6738), (27757)
    Available download formats
    Dataset updated
    Aug 7, 2023
    Dataset provided by
    Swedish University of Agricultural Sciences
    Authors
    Sheila Holmes; Kjell Danell; John Ball; Göran Ericsson
    Time period covered
    1988 - 1997
    Area covered
    Sweden
    Description

    Data and R code used in piecewise structural equation modelling for a study that compared the direct and indirect impacts of temperature and precipitation on moose calf mass in northern Sweden. The study was initiated in 1988 in an effort to examine the impacts of climate change on common forage species of the economically and culturally important moose in Sweden. It ran until 1997 and was re-started in 2017.

    Temperature and precipitation variables are derived from SMHI weather station data. Average moose calf mass for study sites is derived from data from the Swedish Hunter's Association and individual hunting teams. Both weather and moose calf mass represent mean values within a 50km radius of each study site. Nitrogen and neutral detergent fibre measures are the result of near-infrared spectroscopy modelling, using 50 samples to calibrate the model. Samples were collected from 1-ha sites and included material from 30 individuals of either downy birch or fireweed.

    The dataset contains the following files.

    DataWeatherVegMoose.tsv is the data itself (TSV format, 236 rows × 10 columns). This includes the following variables:

    The growing season is defined here as starting on the first day of the first four consecutive days in a calendar year that each have a mean daily temperature of at least 5 degrees C, and ending on July 17 of that year (a short R sketch of this derivation follows the list). All weather variables are averages over all SMHI weather stations within a 50 km radius of a site.

    • Total precipitation (mm) over the growing season.
    • Mean daily average temperature over the growing season.
    • Proportion of growing-season days on which the maximum daily temperature was greater than or equal to 20 degrees C.
    • Neutral detergent fibre content of downy birch leaves at the site, based on a representative sample and calculated using Near Infrared Spectroscopy.
    • Neutral detergent fibre content of fireweed stems, leaves, and flowers at the site, based on a representative sample and calculated using Near Infrared Spectroscopy.
    • Nitrogen content of downy birch leaves at the site, based on a representative sample and calculated using Near Infrared Spectroscopy.
    • Nitrogen content of fireweed stems, leaves, and flowers at the site, based on a representative sample and calculated using Near Infrared Spectroscopy.
    • Mean date-adjusted moose calf slaughter weight for calves reportedly shot within 50 km of the site; values must represent the mean weight of at least 10 calves to be included.
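
    A minimal sketch of the growing-season derivation in base R (not the authors' code); daily is a hypothetical data frame holding one station-year of records, with columns date and mean_temp.

      growing_season_start <- function(daily, threshold = 5, run_length = 4) {
        warm   <- daily$mean_temp >= threshold              # days at or above 5 degrees C
        runs   <- rle(warm)                                 # runs of consecutive warm/cool days
        starts <- cumsum(runs$lengths) - runs$lengths + 1   # first index of each run
        i      <- which(runs$values & runs$lengths >= run_length)[1]
        if (is.na(i)) return(NA)                            # no qualifying warm run this year
        daily$date[starts[i]]                               # first day of the first 4-day warm run
      }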

    The documentation file Key_DataWeatherVegMoose.tsv contains detailed information about the variables in the dataset.

    The documentation file sites_no.tsv contains codes for the different sites where data was collected. It corresponds with the variable Site in the dataset DataWeatherVegMoose.tsv.

    R_code_piecewise_SEM.r is the R script used to calculate the piecewise structural equation models linking weather to moose calf mass directly and via forage chemistry.

    R_code_piecewise_SEM_log.txt is the output of the script, including session information. If R is installed with the packages nlme and piecewiseSEM, it can be regenerated by running the following from a shell: Rscript R_code_piecewise_SEM.r > R_code_piecewise_SEM_log.txt
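
    As a minimal sketch (not the authors' model specification), a piecewise SEM of this kind can be assembled from mixed models using nlme and piecewiseSEM. The column names birch_N, calf_mass, mean_temp, and total_precip are hypothetical stand-ins for variables in DataWeatherVegMoose.tsv, with Site as the grouping variable.

      library(nlme)
      library(piecewiseSEM)

      dat <- read.delim("DataWeatherVegMoose.tsv")

      sem <- psem(
        # weather -> forage chemistry
        lme(birch_N ~ mean_temp + total_precip, random = ~ 1 | Site, data = dat),
        # weather (direct path) and forage chemistry (indirect path) -> calf mass
        lme(calf_mass ~ birch_N + mean_temp + total_precip, random = ~ 1 | Site, data = dat)
      )
      summary(sem)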

  17. Data and code for the manuscript "Meta-analysis reveals negative but highly...

    • zenodo.org
    bin, csv
    Updated Dec 6, 2024
    Cite
    Grace Skinner; Grace Skinner (2024). Data and code for the manuscript "Meta-analysis reveals negative but highly variable impacts of invasive alien species on terrestrial insects". [Dataset]. http://doi.org/10.5281/zenodo.14290021
    Explore at:
    bin, csv
    Available download formats
    Dataset updated
    Dec 6, 2024
    Dataset provided by
    Zenodo
    http://zenodo.org/
    Authors
    Grace Skinner; Grace Skinner
    License

    Attribution 4.0 (CC BY 4.0)
    https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and R code provided alongside the manuscript "Meta-analysis reveals negative but highly variable impacts of invasive alien species on terrestrial insects". The files attached can be used to repeat the analysis conducted to investigate the impact of invasive alien species on terrestrial insects with a meta-analytic approach.

    Please note that data for one study could not be made publicly available due to data-sharing restrictions. Thus, results generated using the publicly available datasets may differ slightly from those reported in the manuscript.

    The code scripts can be run in order (a minimal run-through is sketched after this list):

    • 1_data_wrangling_and_summary.R uses the extracted_data_for_wrangling_and_analysis.csv data to wrangle and summarise the data. The script also produces the
      wrangled_data_for_meta_analytic_models.csv dataset.
    • 2_abundance_models.R uses the wrangled_data_for_meta_analytic_models.csv data to run the abundance meta-analytic models.
    • 3_species_richness_models.R uses the wrangled_data_for_meta_analytic_models.csv data to run the species richness meta-analytic models.
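
    A minimal run-through, assuming the three scripts and extracted_data_for_wrangling_and_analysis.csv sit in the working directory:

      source("1_data_wrangling_and_summary.R")    # writes wrangled_data_for_meta_analytic_models.csv
      source("2_abundance_models.R")              # abundance meta-analytic models
      source("3_species_richness_models.R")       # species richness meta-analytic models
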
  18. f

    Dataset.

    • plos.figshare.com
    xlsx
    Updated Mar 6, 2024
    Cite
    Caoimhe Lonergan; Seán R. Millar; Zubair Kabir (2024). Dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0299029.s001
    Explore at:
    xlsx
    Available download formats
    Dataset updated
    Mar 6, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Caoimhe Lonergan; Seán R. Millar; Zubair Kabir
    License

    Attribution 4.0 (CC BY 4.0)
    https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Cork and Kerry Diabetes and Heart Disease Study dataset. (XLSX)

  19. Z

    Weather modulates spider trophic interactions: the interactive effects of...

    • data.niaid.nih.gov
    Updated Apr 19, 2023
    Cite
    Vaughan, Ian P. (2023). Weather modulates spider trophic interactions: the interactive effects of changes in prey community structure, adaptive web building and prey choice - Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7291565
    Explore at:
    Dataset updated
    Apr 19, 2023
    Dataset provided by
    Windsor, Fredric M.
    Bell, James R.
    Cuff, Jordan P.
    Tercel, Maximillian P.T.G.
    Symondson, W.O.C.
    Vaughan, Ian P.
    License

    Attribution 4.0 (CC BY 4.0)
    https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Materials and Methods

    Fieldwork and sample processing

    Field collection and sample processing have been described previously by Cuff, Tercel, et al. (2022), with the exception of the weather variables. Extraction, amplification and sequencing of DNA, and the bioinformatic analysis, are described by Cuff, Tercel, et al. (2022) and Drake et al. (2022). The resultant sequencing read counts were converted to presence-absence data for each detected prey taxon in each individual spider.

    Weather data

    Weather data were taken from publicly available reports from the Cardiff Airport weather station (6.6 km from the study site) via “Wunderground” (Wunderground, 2020) from 1/1/2018 to 17/9/2018 (the last field collection). Weather data were also separately extracted for the week preceding each of the two 2017 collection dates (3/8/2017 to 9/8/2017 and 29/8/2017 to 4/9/2017). Specifically, daily average temperatures (°C), daily average dew point (°C), maximum daily wind speed (mph), daily sea level pressure (Hg) and day length (min; sunrise to sunset) were recorded. Precipitation data were downloaded via the UK Met Office Hadley Centre Observation Data (UK Met Office, 2020) as regional precipitation (mm) for South West England & Wales. Weather data were converted to mean values for seven days preceding the collection of spider samples to correspond with the longevity of DNA in the guts of spiders (Greenstone et al., 2014).
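
    As a minimal sketch (not the authors' code) of this aggregation step, daily records can be averaged over the seven days preceding each collection date. The data frames daily and collections, and their column names, are hypothetical.

      # Mean temperature over the 7 days preceding each spider collection date
      collections$temp_7day <- vapply(seq_len(nrow(collections)), function(i) {
        d <- collections$date[i]                               # collection date (class Date)
        window <- daily[daily$date >= d - 7 & daily$date < d, ]
        mean(window$temperature, na.rm = TRUE)
      }, numeric(1))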

    Statistical Analysis

    All analyses were conducted in R v4.0.3 (R Core Team, 2020). To assess how weather affects spider trophic interactions, we analysed dietary changes across weather gradients using multivariate models. To identify whether this was likely to be driven by changes in prey abundance, we assessed the corresponding changes in the prey communities and then used null models to ascertain whether spiders were responding to prey abundance changes through density-independent prey choice. Given the dependence of spiders on webs for foraging, we also compared web height and area over weather gradients to assess whether this may be a component of adaptive foraging. To assess the inter-annual consistency of prey choices in response to weather conditions, we also assessed whether prey preference data could be used to improve the predictive power of null models. For this, we generated null models for 2017 data with prey abundance weighted by prey preferences estimated with the 2018 data. This allowed us to assess the consistency of prey choice under similar conditions, but also provides insight as to whether this framework can be used to predict predator responses to diverse prey communities under dynamic conditions. We detail the specific stages of this analytical framework below.

    Sampling completeness and diversity assessment

    To assess the diversity represented by the dietary analysis and the invertebrate community sampling, and the completeness of those datasets, coverage-based rarefaction and extrapolation were carried out, and Hill diversity calculated (Chao et al., 2014; Roswell, Dushoff, & Winfree, 2021). This was performed using the ‘iNEXT’ package with species represented by frequency-of-occurrence across samples (Chao et al., 2014; Hsieh et al., 2016; Figures S4 & S6).

    Relationships between weather, spider trophic interactions and prey community composition

    Prey species that occurred in only one spider individual were removed before further analyses to prevent outliers skewing the results. Spider trophic interactions were related to temporal and weather variables in multivariate generalized linear models (MGLMs) with a binomial error family (Wang, Naumann, Wright, & Warton, 2012). Trophic interactions were related to temporal variables and their pairwise interactions (including spider genus to account for any confounding effect), weather variables and their pairwise interactions, and weather variables and their interactions with spider genus and time (to account for any confounding effects) in three separate MGLMs. These variables were separated into different models (Temporal model, Weather Interaction model and Confounding effects model) to improve model fit and reduce singularity. Invertebrate communities from suction sampling were related to temporal and weather variables in identically structured MGLMs (excluding the spider genus variable) with a Poisson error family.

    All MGLMs were fitted using the ‘manyglm’ function in the ‘mvabund’ package (Wang et al., 2012). ‘Temporal model’ independent variables were Julian day (day), mean day length in minutes for the preceding week (day length), spider genus (for dietary models only, to ascertain any effect of spider taxonomic differences on dietary differences over time and day lengths) and all two-way interactions between these variables. ‘Weather interaction model’ independent variables were mean temperature, precipitation, dewpoint, wind speed and pressure for the preceding week, and pairwise interactions between weather variables. ‘Confounding effects’ model independent variables were day (to investigate the interaction between time and weather), spider genus (for dietary models only, to ascertain any effect of spider taxonomic differences on dietary differences over time and day lengths), mean temperature, precipitation, dewpoint, wind speed and pressure for the preceding week, and two-way interactions of each weather variable with day and genus.
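
    A minimal sketch (not the authors' script) of the 'Weather interaction model' using mvabund; diet_pa (a spiders-by-prey presence-absence matrix) and env (per-spider weather covariates) are hypothetical objects.

      library(mvabund)

      diet <- mvabund(diet_pa)                 # multivariate response object

      weather_mod <- manyglm(diet ~ (temperature + precipitation + dewpoint +
                                     wind + pressure)^2,
                             data = env, family = "binomial")

      anova(weather_mod, nBoot = 999)          # resampling-based multivariate tests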

    Trophic interaction and community differences were visualised by non-metric multidimensional scaling (NMDS) using the 'metaMDS' function in the 'vegan' package (Oksanen et al., 2016) in two dimensions and 999 simulations, with Jaccard distance for spider diets and Bray-Curtis distance for invertebrate communities. For the dietary NMDS, outliers (n = 21; samples containing rare taxa) obscured variation on one axis and were thus removed to facilitate separation of samples and achieve minimum stress. For visualisation of the effect of continuous variables against the NMDS, surf plots were created with scaled coloured contours using the 'ordisurf' function in the 'vegan' package and plotted with the 'ggplot2' package (Wickham, 2016).
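
    A minimal sketch of the ordination step, assuming the vegan package; diet_pa and the temperature column of env are hypothetical names.

      library(vegan)

      nmds <- metaMDS(diet_pa, distance = "jaccard", k = 2, trymax = 999)

      # Smooth surface of a continuous weather variable over the ordination
      ordisurf(nmds ~ temperature, data = env)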

    Relationships between web characteristics and weather variables

    Web area and height were compared against weather and temporal variables using a multivariate linear model (MLM) with the ‘manylm’ command in ‘mvabund’ (Wang et al., 2012). Log-transformed web area and height comprised the multivariate dependent variable, and day, spider genus, temperature, precipitation, dewpoint, wind, pressure and two-way interactions between each of these and day and genus comprised the independent variables.

    Variation in spider prey choice across weather conditions

    To separately represent spiders from different weather conditions in prey choice analyses, sample dates for every spider were clustered based on the mean weather conditions (temperature, precipitation, dewpoint, wind and pressure) of the week before collection (7 days, to align approximately with spider gut DNA half-life; Greenstone et al., 2014). Alongside data from 2018 (n = 24 collection dates), two sampling periods from 2017 were included in the clustering to ascertain similarity of weather conditions for the additional inter-annual prey choice analyses described below. The clustering process is described in the Supplementary Information of the manuscript. Five weather clusters were generated from the 2018 data: High Pressure (HPR), Hot (HOT), Wet Low Dewpoint (WLD), Dry Windy (DWI) and Wet Moderate Dewpoint (WMD); the 2017 sampling periods formed an additional 2017 cluster.

    Prey preferences of spiders in each of the weather clusters were analysed using network-based null models in the 'econullnetr' package (Vaughan et al., 2018) with the 'generate_null_net' command. Consumer nodes in this case represented spiders belonging to each of the weather clusters. Econullnetr generates null models based on prey abundance, represented here by the suction sample data, to predict how consumers would forage based on resource abundance alone. These null models are then compared against the observed interactions of consumers (i.e., interactions of spiders within each weather cluster with their prey) to ascertain the extent to which resource choice deviated from random (i.e., density dependence). The trophic network was visualised with the associated prey choice effect sizes using 'igraph' (Csardi & Nepusz, 2006) with a circular layout, and as a bipartite network using 'ggnetwork' (Briatte, 2021; Wickham, 2016). The normalised degree of each weather cluster node was generated using the 'bipartite' package (Dormann, Gruber, & Fruend, 2008) and compared against the normalised degree of the same node in the null network to determine whether spiders were more or less generalist than expected at random. Prior to the prey choice analysis, a hemipteran prey taxon identified only to order level through the dietary analysis was removed because it could not be paired with certainty to any prey taxon present in the suction samples.

    Validating and predicting relationships between years

    To test how generalisable the results are and the extent to which weather drives prey preferences, we used a measure of prey preference (observed/expected values; observed interaction frequencies divided by interaction frequencies expected by null models) from the above prey choice analysis to assess whether we could more accurately predict observed trophic interactions under similar weather conditions for data from a linked study at the same location in 2017. These additional data represent a subset of the spider taxa analysed above (Tenuiphantes tenuis and Erigone spp.) collected using the same methods by the same researchers and in the same locality (Cuff, Drake, et al., 2021).

    The similarity in weather conditions between the 2017 study period and each of the five 2018 weather clusters was determined via NMDS of the weather data in two dimensions with

  20. f

    Pearson correlation coefficients (r) and corresponding significance p-values...

    • plos.figshare.com
    xls
    Updated May 28, 2025
    Cite
    Weixing Yang; Tingting Li; Bo Wen; Yuan Ren (2025). Pearson correlation coefficients (r) and corresponding significance p-values among different monitoring types are presented. [Dataset]. http://doi.org/10.1371/journal.pone.0324604.t005
    Explore at:
    xls
    Available download formats
    Dataset updated
    May 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Weixing Yang; Tingting Li; Bo Wen; Yuan Ren
    License

    Attribution 4.0 (CC BY 4.0)
    https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Pearson correlation coefficients (r) and corresponding significance p-values among different monitoring types are presented.
