100+ datasets found
  1. Data from: BEING A TREE CROP INCREASES THE ODDS OF EXPERIENCING YIELD...

    • zenodo.org
    bin, zip
    Updated Aug 8, 2023
    Cite
    Marcelo Adrián Aizen; Gabriela Gleiser; Thomas Kitzberger; Rubén Milla (2023). BEING A TREE CROP INCREASES THE ODDS OF EXPERIENCING YIELD DECLINES IRRESPECTIVE OF POLLINATOR DEPENDENCE [Dataset]. http://doi.org/10.5281/zenodo.7863825
    Explore at:
    zip, bin (available download formats)
    Dataset updated
    Aug 8, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Marcelo Adrián Aizen; Gabriela Gleiser; Thomas Kitzberger; Rubén Milla
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Marcelo A. Aizen, Gabriela R. Gleiser, Thomas Kitzberger, Ruben Milla. Being a tree crop increases the odds of experiencing yield declines irrespective of pollinator dependence (to be submitted to PCI)

    Data and R scripts to reproduce the analyses and the figures shown in the paper. All analyses were performed using R 4.0.2.

    Data

    1. FAOdata_21-12-2021.csv

    This file includes yearly data (1961-2020, column 8) on yield and cultivated area (columns 6 and 10) at the country, sub-regional, and regional levels (column 2) for each crop (column 4) drawn from the United Nations Food and Agriculture Organization database (data available at http://www.fao.org/faostat/en; accessed July 21-12-2021). [Used in Script 1 to generate the synthesis dataset]

    2. countries.csv

    This file provides information on the region (column 2) to which each country (column 1) belongs. [Used in Script 1 to generate the synthesis dataset]

    3. dependence.csv

    This file provides information on the pollinator dependence category (column 2) of each crop (column 1).

    4. traits.csv

    This file provides information on the traits of each crop other than pollinator dependence, including, besides the crop name (column 1), the type of harvested organ (column 5) and growth form (column 6). [Used in Script 1 to generate the synthesis dataset]

    5. dataset.csv

    The synthesis dataset generated by Script 1.

    6. growth.csv

    The yield growth dataset generated by Script 1 and used as input by Scripts 2 and 3.

    7. phylonames.csv

    This file lists all the crops (column 1) and their equivalent tip names in the crop phylogeny (column 2). [Used in Script 2 for the phylogenetically-controlled analyses]

    8. phylo137.tre

    File containing the phylogenetic tree.

    Scripts

    1. dataset

    This R script curates and merges all the individual datasets mentioned above into a single dataset, and adds to it the growth rate and the (log) cumulative harvested area for each crop and country over the period 1961-2020 (a minimal reading-and-merging sketch follows this list).

    2. analyses

    This R script includes all the analyses described in the article’s main text.

    3. figures

    This R script creates all the main and supplementary figures of this article.

    4. lme4_phylo_setup

    R function written by Li and Bolker (2019) to carry out phylogenetically-controlled generalized linear mixed-effects models as described in the main text of the article.
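
    A minimal sketch (not Script 1 itself) of how the data files listed above might be read and combined in base R; the join keys ("country", "crop") are assumptions, since the column headers are documented above only by position.

    # Read the inputs described above (file names as listed).
    fao        <- read.csv("FAOdata_21-12-2021.csv")
    countries  <- read.csv("countries.csv")
    dependence <- read.csv("dependence.csv")
    traits     <- read.csv("traits.csv")

    # Join keys are assumed; adjust them to the actual headers in the files.
    dataset <- merge(fao,     countries,  by = "country", all.x = TRUE)
    dataset <- merge(dataset, dependence, by = "crop",    all.x = TRUE)
    dataset <- merge(dataset, traits,     by = "crop",    all.x = TRUE)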

    References

    Li, M., and B. Bolker. 2019. wzmli/phyloglmm: First release of phylogenetic comparative analysis in lme4-verse. Zenodo. https://doi.org/10.5281/zenodo.2639887.

  2. A global database of long-term changes in insect assemblages

    • knb.ecoinformatics.org
    • search-dev.test.dataone.org
    • +4more
    Updated Jan 26, 2022
    Cite
    Roel van Klink; Diana E. Bowler; Jonathan M. Chase; Orr Comay; Michael M. Driessen; S.K. Morgan Ernest; Alessandro Gentile; Francis Gilbert; Konstantin Gongalsky; Jennifer Owen; Guy Pe'er; Israel Pe'er; Vincent H. Resh; Ilia Rochlin; Sebastian Schuch; Ann E. Swengel; Scott R. Swengel; Thomas L. Valone; Rikjan Vermeulen; Tyson Wepprich; Jerome Wiedmann (2022). A global database of long-term changes in insect assemblages [Dataset]. http://doi.org/10.5063/F1ZC817H
    Explore at:
    Dataset updated
    Jan 26, 2022
    Dataset provided by
    Knowledge Network for Biocomplexity
    Authors
    Roel van Klink; Diana E. Bowler; Jonathan M. Chase; Orr Comay; Michael M. Driessen; S.K. Morgan Ernest; Alessandro Gentile; Francis Gilbert; Konstantin Gongalsky; Jennifer Owen; Guy Pe'er; Israel Pe'er; Vincent H. Resh; Ilia Rochlin; Sebastian Schuch; Ann E. Swengel; Scott R. Swengel; Thomas L. Valone; Rikjan Vermeulen; Tyson Wepprich; Jerome Wiedmann
    Time period covered
    Jan 1, 1925 - Jan 1, 2018
    Area covered
    Pacific Ocean, North Pacific Ocean
    Variables measured
    End, Link, Year, Realm, Start, CRUmnC, CRUmnK, Metric, Number, Period, and 63 more
    Description

    UPDATED on October 15, 2020: after some mistakes in the data were found, we updated this data set. The changes to the data are detailed on Zenodo (http://doi.org/10.5281/zenodo.4061807), and an Erratum has been submitted.

    This data set, under a CC-BY license, contains time series of the total abundance and/or biomass of assemblages of insects, arachnids and Entognatha (grouped at the family level or higher taxonomic resolution), monitored by standardized means for ten or more years. The data were derived from 165 data sources, representing a total of 1668 sites from 41 countries. The time series for abundance and biomass represent the aggregated number of all individuals of all taxa monitored at each site. The data set consists of four linked tables, representing information at the study level, at the plot level, about sampling, and on the measured assemblage sizes. All references to the original data sources can be found in the PDF with references, and a Google Earth (kml) file presents the locations (including metadata) of all datasets. When using (parts of) this data set, please respect the original open-access licenses. This data set underlies all analyses performed in the paper 'Meta-analysis reveals declines in terrestrial, but increases in freshwater insect abundances', a meta-analysis of changes in insect assemblage sizes, and is accompanied by a data paper entitled 'InsectChange – a global database of temporal changes in insect and arachnid assemblages'. Consulting the data paper before use is recommended. Tables that can be used to calculate trends of specific taxa and for species richness will be added as they become available.

    The data set consists of four tables that are linked by the columns 'DataSource_ID' and 'Plot_ID', plus a table with references to the original research. In the table 'DataSources', descriptive data are provided at the dataset level: links to online repositories where the original data can be found, whether the dataset provides data on biomass, abundance or both, the invertebrate group under study, the realm, and the location of sampling at different geographic scales (continent to state). This table also contains a reference column; the full reference to the original data is found in the file 'References_to_original_data_sources.pdf'.

    In the table 'PlotData', more details on each site within each dataset are provided: the exact location of each plot, whether the plots were experimentally manipulated, and any spatial grouping of sites (column 'Location'). Additionally, this table contains all explanatory variables used for analysis, e.g. climate-change variables, land-use variables and protection status.

    The table 'SampleData' describes the exact source of the data (table X, figure X, etc.), the extraction methods, and the sampling methods (derived from the original publications). This includes the sampling method, sampling area, sample size, and how samples were aggregated, if reported. Any calculations we did on the original data (e.g. reverse log transformations) are also detailed here; more details are provided in the data paper. This table links to the table 'DataSources' by the column 'DataSource_ID'. Note that each data source may contain multiple entries in the 'SampleData' table if the data were presented in different figures or tables, or if there was any other need to split information on sampling details.

    The table 'InsectAbundanceBiomassData' provides the insect abundance or biomass numbers as analysed in the paper. It contains columns matching the tables 'DataSources' and 'PlotData', as well as the year of sampling, a descriptor of the period within the year of sampling (used as a random effect), the unit in which the number is reported (abundance or biomass), and the estimated abundance or biomass. In the column for Number, missing data are included (NA). Years with missing data were added because this was essential for the analysis performed, and were retained here because they are easier to remove than to add. Linking the table 'InsectAbundanceBiomassData.csv' with 'PlotData.csv' by column 'Plot_ID', and with 'DataSources.csv' by column 'DataSource_ID', will provide the full dataframe used for all analyses. Detailed explanations of all column headers and terms are available in the ReadMe file, and more details will be available in the forthcoming data paper.

    WARNING: Because of the disparate sampling methods and the various spatial and temporal scales used to collect the original data, this dataset should never be used to test for differences in insect abundance/biomass among locations (i.e. differences in intercept). The data can only be used to study temporal trends, by testing for differences in slopes. The data are standardized within plots to allow temporal comparison, but not necessarily among plots (even within one dataset).
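
    A minimal sketch in R of the linking step described above, using the file and key names given in the description:

    # Link the abundance/biomass records to their plot- and dataset-level metadata.
    counts  <- read.csv("InsectAbundanceBiomassData.csv")
    plots   <- read.csv("PlotData.csv")
    sources <- read.csv("DataSources.csv")

    # merge() without 'by' joins on all shared columns (Plot_ID and/or DataSource_ID).
    full <- merge(counts, plots)
    full <- merge(full, sources)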

  3. Food and Agriculture Biomass Input–Output (FABIO) database

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 8, 2022
    Cite
    Kuschnig, Nikolas (2022). Food and Agriculture Biomass Input–Output (FABIO) database [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_2577066
    Explore at:
    Dataset updated
    Jun 8, 2022
    Dataset provided by
    Kuschnig, Nikolas
    Bruckner, Martin
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This data repository provides the Food and Agriculture Biomass Input Output (FABIO) database, a global set of multi-regional physical supply-use and input-output tables covering global agriculture and forestry.

    The work is based on mostly freely available data from FAOSTAT, IEA, EIA, and UN Comtrade/BACI. FABIO currently covers 191 countries + RoW, 118 processes and 125 commodities (raw and processed agricultural and food products) for 1986-2013. All R codes and auxiliary data are available on GitHub. For more information please refer to https://fabio.fineprint.global.

    The database consists of the following main components, in compressed .rds format:

    Z: the inter-commodity input-output matrix, displaying the relationships of intermediate use of each commodity in the production of each commodity, in physical units (tons). The matrix has 24000 rows and columns (125 commodities x 192 regions), and is available in two versions, based on the method to allocate inputs to outputs in production processes: Z_mass (mass allocation) and Z_value (value allocation). Note that the row sums of the Z matrix (= total intermediate use by commodity) are identical in both versions.

    Y: the final demand matrix, denoting the consumption of all 24000 commodities by destination country and final use category. There are six final use categories (yielding 192 x 6 = 1152 columns): 1) food use, 2) other use (non-food), 3) losses, 4) stock addition, 5) balancing, and 6) unspecified.

    X: the total output vector of all 24000 commodities. Total output is equal to the sum of intermediate and final use by commodity.

    L: the Leontief inverse, computed as (I – A)^-1, where A is the matrix of input coefficients derived from Z and x. Again, there are two versions, depending on the underlying version of Z (L_mass and L_value).

    E: environmental extensions for each of the 24000 commodities, including four resource categories: 1) primary biomass extraction (in tons), 2) land use (in hectares), 3) blue water use (in m3), and 4) green water use (in m3).

    mr_sup_mass/mr_sup_value: For each allocation method (mass/value), the supply table gives the physical supply quantity of each commodity by producing process, with processes in the rows (118 processes x 192 regions = 22656 rows) and commodities in columns (24000 columns).

    mr_use: the use table captures the quantities of each commodity (rows) used as an input in each process (columns).

    A description of the included countries and commodities (i.e. the rows and columns of the Z matrix) can be found in the auxiliary file io_codes.csv. Separate lists of the country sample (including ISO3 codes and continental grouping) and commodities (including moisture content) are given in the files regions.csv and items.csv, respectively. For information on the individual processes, see auxiliary file su_codes.csv. RDS files can be opened in R. Information on how to read these files can be obtained here: https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/readRDS

    Except for X.rds, which contains a matrix, all variables are organized as lists, where each element contains a sparse matrix. Please note that values are always given in physical units, i.e. tonnes or head, as specified in items.csv. The suffixes value and mass only indicate the form of allocation chosen for the construction of the symmetric IO tables (for more details see Bruckner et al. 2019). Product, process and country classifications can be found in the file fabio_classifications.xlsx.
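
    A minimal sketch in R of loading two of these components; apart from X.rds, the exact .rds file names are assumptions inferred from the component names above.

    library(Matrix)                  # the list elements are sparse matrices

    Z_mass <- readRDS("Z_mass.rds")  # assumed file name; inter-commodity matrices (mass allocation)
    X      <- readRDS("X.rds")       # total output (a plain matrix)

    length(Z_mass)                   # presumably one element per year (1986-2013)
    dim(Z_mass[[1]])                 # expected 24000 x 24000 (125 commodities x 192 regions)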

    Footprint results are not contained in the database but can be calculated, e.g. by using this script: https://github.com/martinbruckner/fabio_comparison/blob/master/R/fabio_footprints.R

    How to cite:

    To cite FABIO work please refer to this paper:

    Bruckner, M., Wood, R., Moran, D., Kuschnig, N., Wieland, H., Maus, V., Börner, J. 2019. FABIO – The Construction of the Food and Agriculture Input–Output Model. Environmental Science & Technology 53(19), 11302–11312. DOI: 10.1021/acs.est.9b03554

    License:

    This data repository is distributed under the CC BY-NC-SA 4.0 License. You are free to share and adapt the material for non-commercial purposes using proper citation. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. In case you are interested in a collaboration, I am happy to receive enquiries at martin.bruckner@wu.ac.at.

    Known issues:

    The underlying FAO data have been manipulated to the minimum extent necessary. Data filling and supply-use balancing, however, required some adaptations. These are documented in the code and are also reflected in the balancing item in the final demand matrices. For a proper use of the database, I recommend distributing the balancing item over all other uses proportionally and running analyses with and without balancing to illustrate uncertainties.

  4. Data from: Projections of Definitive Screening Designs by Dropping Columns:...

    • tandf.figshare.com
    txt
    Updated Jun 1, 2023
    Cite
    Alan R. Vazquez; Peter Goos; Eric D. Schoen (2023). Projections of Definitive Screening Designs by Dropping Columns: Selection and Evaluation [Dataset]. http://doi.org/10.6084/m9.figshare.7624412.v2
    Explore at:
    txt (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Alan R. Vazquez; Peter Goos; Eric D. Schoen
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract–Definitive screening designs permit the study of many quantitative factors in a few runs more than twice the number of factors. In practical applications, researchers often require a design for m quantitative factors, construct a definitive screening design for more than m factors and drop the superfluous columns. This is done when the number of runs in the standard m-factor definitive screening design is considered too limited or when no standard definitive screening design (sDSD) exists for m factors. In these cases, it is common practice to arbitrarily drop the last columns of the larger design. In this article, we show that certain statistical properties of the resulting experimental design depend on the exact columns dropped and that other properties are insensitive to these columns. We perform a complete search for the best sets of 1–8 columns to drop from sDSDs with up to 24 factors. We observed the largest differences in statistical properties when dropping four columns from 8- and 10-factor definitive screening designs. In other cases, the differences are small, or even nonexistent.

  5. Data from: The Berth Allocation Problem with Channel Restrictions - Datasets...

    • researchdata.edu.au
    • researchdatafinder.qut.edu.au
    Updated 2018
    Cite
    Corry Paul; Bierwirth Christian (2018). The Berth Allocation Problem with Channel Restrictions - Datasets [Dataset]. http://doi.org/10.4225/09/5b306f6511d7c
    Explore at:
    Dataset updated
    2018
    Dataset provided by
    Queensland University of Technology
    Authors
    Corry Paul; Bierwirth Christian
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    These datasets relate to the computational study presented in the paper "The Berth Allocation Problem with Channel Restrictions", authored by Paul Corry and Christian Bierwirth. They consist of all the randomly generated problem instances along with the computational results presented in the paper.

    Results across all problem instances assume ship separation parameters of [delta_1, delta_2, delta_3] = [0.25, 0, 0.5].

    Excel Workbook Organisation:

    The data is organised into separate Excel files for each table in the paper, as indicated by the file description. Within each file, each row of data presented in the corresponding table (aggregating 10 replications) is captured in two worksheets, one with the problem instance data and the other with the solution data generated by the several solution methods described in the paper. For example, row 3 of Tab. 2 will have data for 10 problem instances on worksheet T2R3, and corresponding solution data on T2R3X.

    Problem Instance Data Format:

    On each problem instance worksheet (e.g. T2R3), each row of data corresponds to a different problem instance, and there are 10 replications on each worksheet.

    The first column provides a replication identifier which is referenced on the corresponding solution worksheet (e.g. T2R3X).

    Following this, there are n*(2c+1) columns (n = number of ships, c = number of channel segments) with headers p(i)_(j).(k), where i references the operation (channel transit/berth visit) id, j references the ship id, and k references the index of the operation within the ship. All indexing starts at 0. These columns define the transit or dwell times on each segment. A value of -1 indicates a segment on which a berth allocation must be applied, and hence the dwell time is unknown.

    There are then a further n columns with headers r(j), defining the release times of each ship.

    For ChSP problems, there are a final n columns with headers b(j), defining the berth to be visited by each ship. ChSP problems with fixed berth sequencing enforced have an additional n columns with headers toa(j), indicating the order in which ship j sits within its berth sequence. For BAP-CR problems, these columns are not present, but are replaced by n*m columns (m = number of berths) with headers p(j).(b) defining the berth processing time of ship j if allocated to berth b.

    Solution Data Format:

    Each row of data corresponds to a different solution.

    Column A references the replication identifier (from the corresponding instance worksheet) that the solution refers to.

    Column B defines the algorithm that was used to generate the solution.

    Column C shows the objective function value (total waiting and excess handling time) obtained.

    Column D shows the CPU time consumed in generating the solution, rounded to the nearest second.

    Column E shows the optimality gap as a proportion. A value of -1 or an empty value indicates that optimality gap is unknown.

    From column F onwards, there are n*(2c+1) columns with the previously described p(i)_(j).(k) headers. The values in these columns define the entry times at each segment.

    For BAP-CR problems only, following this there are a further 2n columns. For each ship j, there will be columns titled b(j) and p.b(j) defining the berth that was allocated to ship j, and the processing time on that berth respectively.
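
    A minimal sketch in R of pairing one instance worksheet with its solution worksheet; the workbook file name is an assumption, and the sheet names follow the T2R3/T2R3X example above.

    library(readxl)

    instances <- read_excel("Tab2_results.xlsx", sheet = "T2R3")   # problem instance data
    solutions <- read_excel("Tab2_results.xlsx", sheet = "T2R3X")  # solution data

    # Column A of each sheet is the replication identifier.
    names(instances)[1] <- "replication"
    names(solutions)[1] <- "replication"
    joined <- merge(solutions, instances, by = "replication")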

  6. VOYAGER 1 JUPITER POSITION RESAMPLED DATA 48.0 SECONDS

    • data.nasa.gov
    • catalog.data.gov
    application/rdfxml +5
    Updated May 21, 2021
    + more versions
    Cite
    (2021). VOYAGER 1 JUPITER POSITION RESAMPLED DATA 48.0 SECONDS [Dataset]. https://data.nasa.gov/Earth-Science/VOYAGER-1-JUPITER-POSITION-RESAMPLED-DATA-48-0-SEC/26c6-kmpc
    Explore at:
    tsv, xml, application/rdfxml, csv, json, application/rssxml (available download formats)
    Dataset updated
    May 21, 2021
    Description

    This data set includes Voyager 1 Jupiter encounter position data that have been generated at a 48.0 second sample rate using the NAIF SPICE kernels. The data set is composed of 4 columns: 1) ctime - this column contains the data acquisition time. The time is always output in the ISO standard spacecraft event time format (yyyy-mm-ddThh:mm:ss.sss) but is stored internally in Cline time, which is measured in seconds after 00:00:00.000 Jan 01, 1966, 2) r - this column contains the radial distance from Jupiter in Rj = 71398 km, 3) longitude - this column contains the east longitude of the spacecraft in degrees, 4) latitude - this column contains the latitude of the spacecraft in degrees. Position data are given in Minus System III coordinates.
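
    A minimal sketch in R of working with the four columns described above; the file name and delimiter are assumptions.

    pos <- read.csv("voyager1_jupiter_position_48s.csv")

    # Parse the spacecraft event time and convert radial distance to km.
    pos$ctime_posix <- as.POSIXct(pos$ctime, format = "%Y-%m-%dT%H:%M:%OS", tz = "UTC")
    pos$r_km        <- pos$r * 71398   # r is given in Jupiter radii (Rj = 71398 km)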

  7. Perception and Experience of Appraisals and Consumption Emotions in Reviews...

    • explore.openaire.eu
    • data.niaid.nih.gov
    • +1more
    Updated Jan 12, 2023
    + more versions
    Cite
    Gerard Christopher Yeo; Kokil Jaidka (2023). Perception and Experience of Appraisals and Consumption Emotions in Reviews (PEACE-Reviews) Pilot Dataset [Dataset]. http://doi.org/10.5281/zenodo.7528895
    Explore at:
    Dataset updated
    Jan 12, 2023
    Authors
    Gerard Christopher Yeo; Kokil Jaidka
    Description

    This pilot dataset contains review text responses and associated emotional ratings about the usage of particular products by participants (USA) recruited from a crowdsourcing platform called Prolific. This pilot study is part of a larger project to construct a dataset in which people's review texts about particular products are labelled with emotional variables such as cognitive appraisals and emotional intensity. The aim of this pilot is to test different methods of eliciting participants to write about their emotional experiences when using a product.

    Variables

    Column A - condition that the participant was assigned to [1 - presence of emotion prompts, review format; 2 - presence of emotion prompts, question format; 3 - absence of emotion prompts, review format; 4 - absence of emotion prompts, question format]
    Column B - product reviewed
    Column C - cost of the product reviewed
    Column D - emotion felt while using the product (conditions 1 and 3 only)

    Columns E - K are the text responses from participants.
    Column E - review text for the product (conditions 1 and 3 only)
    Column F - how important the product is, in text (conditions 2 and 4 only)
    Column G - how the participant felt when using the product, in text (conditions 2 and 4 only)
    Column H - why the participant felt the way they did when using the product, in text (condition 2 only)
    Column I - whether the product is consistent with what the participant wanted, in text (conditions 2 and 4 only)
    Column J - whether using the product was pleasant, in text (conditions 2 and 4 only)
    Column K - whether the participant understood what was happening when using the product, in text (conditions 2 and 4 only)

    Columns L - AE are the ratings of the cognitive appraisals on a Likert scale of 1-7 (7 means stronger endorsement of that appraisal). These ratings correspond to how the participants perceived the situation of using the product they wrote about. Missing data correspond to a rating of 'not applicable'.
    Column L - self-control - To what extent did you think you had control over the situation?
    Column M - pleasantness - To what extent did you think that the situation was pleasant?
    Column N - goal congruence - To what extent was the situation consistent with what you wanted?
    Column O - expectedness - To what extent did you expect the situation to occur?
    Column P - fairness - To what extent did you think the situation was fair?
    Column Q - certainty - To what extent did you understand what was happening in the situation?
    Column R - coping potential - To what extent were you able to cope with any negative consequences of the situation?
    Column S - goal relevance - To what extent did you think that the situation was relevant to what you wanted?
    Column T - other-agency - To what extent did you think that someone else other than you was responsible for what was happening in the situation?
    Column U - difficulty - To what extent did you think that the situation was difficult?
    Column V - self-agency - To what extent did you think that you were responsible for what was happening in the situation?
    Column W - attentional activity - To what extent did you think that you needed to attend to the situation further?
    Column X - circumstances-control - To what extent did you think that circumstances beyond anyone's control were controlling what was happening in the situation?
    Column Y - positive future expectancy - To what extent did you think that the situation would get better?
    Column Z - other-control - To what extent did you think that other people were controlling what was happening in the situation?
    Column AA - effort - To what extent did you think that you needed to exert effort to deal with the situation?
    Column AB - problems - To what extent did you think that there were problems that had to be solved before you could get what you wanted?
    Column AC - external normative significance - To what extent did you think that the situation was consistent with external and social norms?
    Column AD - circumstances-agency - To what extent did you think that circumstances beyond anyone's control were responsible for what was happening in the situation?
    Column AE - familiarity - To what extent did you think that the situation was familiar?

    Columns AF - AH are variables pertaining to the participants' experience of using the product, on a Likert scale of 1-7.
    Column AF - How much effort did you put into researching the product/service before purchasing it?
    Column AG - To what extent would you recommend the product/service that you have recalled to someone else?
    Column AH - To what extent would you purchase again the product/service that you have recalled?

    Columns AI - AP are the emotional intensity ratings for each emotion on a Likert scale of 1-7 (7 means that the participant strongly felt that emotion when using the product).
    Columns AQ - AW are demographic varia...
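
    A minimal sketch in R for summarizing the appraisal ratings; the file name is an assumption, and columns L-AE correspond to positions 12-31 when the table is read positionally.

    pilot <- read.csv("peace_reviews_pilot.csv")

    # Appraisal items occupy spreadsheet columns L-AE (positions 12-31);
    # 'not applicable' ratings are already recorded as missing values (NA).
    appraisals <- pilot[, 12:31]
    colMeans(appraisals, na.rm = TRUE)   # mean endorsement per appraisal item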

  8. Books called Extending R :

    • workwithdata.com
    Updated Oct 23, 2024
    Cite
    Work With Data (2024). Books called Extending R : [Dataset]. https://www.workwithdata.com/datasets/books?f=1&fcol0=book&fop0=%3D&fval0=Extending+R+%3A
    Explore at:
    Dataset updated
    Oct 23, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about books and is filtered to entries where the book is "Extending R :", featuring 7 columns including author, BNB id, book, book publisher, and ISBN. The preview is ordered by publication date (descending).

  9. Video Plankton Recorder data (formatted with taxa displayed in single...

    • search.dataone.org
    • bco-dmo.org
    Updated Dec 5, 2021
    + more versions
    Cite
    Carin J. Ashjian (2021). Video Plankton Recorder data (formatted with taxa displayed in single column); from R/V Columbus Iselin and R/V Endeavor cruises CI9407, EN259, EN262 in the Gulf of Maine and Georges Bank from 1994-1995 [Dataset]. https://search.dataone.org/view/sha256%3A566834df8a2123a0db852b01a896151843ebca2390357ad42aef9b4eb0a86032
    Explore at:
    Dataset updated
    Dec 5, 2021
    Dataset provided by
    Biological and Chemical Oceanography Data Management Office (BCO-DMO)
    Authors
    Carin J. Ashjian
    Area covered
    Gulf of Maine, Georges Bank
    Description

    This dataset includes ALL the abundance values, zero and non-zero. Taxonomic groups are displayed in the 'taxon' column, rather than in separate columns, with abundances in the 'abund_L' column. For the original presentation of the data, see VPR_ashjian_orig. For a version of the data with only non-zero data, see VPR_ashjian_nonzero. In the 'nonzero' dataset, values of 0 in the abund_L column (taxon abundance) have been removed.
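
    A minimal sketch in R reproducing the 'nonzero' view described above; the file name is an assumption, and the column names follow the description.

    vpr <- read.csv("VPR_ashjian.csv")

    vpr_nonzero <- subset(vpr, abund_L > 0)   # drop rows where taxon abundance is 0
    table(vpr_nonzero$taxon)                  # records per taxonomic group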

    Methodology
    The following information was extracted from C.J. Ashjian et al., Deep-Sea Research II 48 (2001) 245-282. An in-depth discussion of the data and sampling methods can be found there.

    The Video Plankton Recorder was towed at 2 m/s, collecting data from the surface to the bottom (towyo). The VPR was equipped with 2-4 cameras, temperature and conductivity probes, fluorometer and transmissometer. Environmental data was collected at 0.25 Hz (CI9407) or 0.5 Hz (EN259, EN262). Video images were recorded at 60 fields per second (fps).

    Video tapes were analyzed for plankton abundances using a semi-automated method discussed in Davis, C.S. et al., Deep-Sea Research II 43 (1996) 1946-1970. In-focus images were extracted from the video tapes and identified by hand to particle type, taxon, or species. Plankton and particle observations were merged with environmental and navigational data by binning the observations for each category into the time intervals at which the environmental data were collected (again see above Davis citation). Concentrations were calculated utilizing the total volume (liters) imaged during that period. For less-abundant categories, usually only a single organism was observed during each time interval so that the resulting concentrations are close to presence or absence data rather than covering a range of values.

  10. [Superseded] Intellectual Property Government Open Data 2019

    • researchdata.edu.au
    • devweb.dga.links.com.au
    • +1more
    Updated Jun 6, 2019
    + more versions
    Cite
    IP Australia (2019). [Superseded] Intellectual Property Government Open Data 2019 [Dataset]. https://researchdata.edu.au/superseded-intellectual-property-data-2019/2994670
    Explore at:
    Dataset updated
    Jun 6, 2019
    Dataset provided by
    data.gov.au
    Authors
    IP Australia
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    What is IPGOD?

    The Intellectual Property Government Open Data (IPGOD) includes over 100 years of registry data on all intellectual property (IP) rights administered by IP Australia. It also has derived information about the applicants who filed these IP rights, to allow for research and analysis at the regional, business and individual level. This is the 2019 release of IPGOD.

    How do I use IPGOD?

    IPGOD is large, with millions of data points across up to 40 tables, making them too large to open with Microsoft Excel. Furthermore, analysis often requires information from separate tables which would need specialised software for merging. We recommend that advanced users interact with the IPGOD data using the right tools with enough memory and compute power. This includes a wide range of programming and statistical software such as Tableau, Power BI, Stata, SAS, R, Python, and Scala.

    IP Data Platform

    IP Australia is also providing free trials of a cloud-based analytics platform with the capabilities to enable working with large intellectual property datasets, such as the IPGOD, through the web browser, without any installation of software.

    References

    The following pages can help you gain an understanding of intellectual property administration and processes in Australia to support your analysis of the dataset.
    * Patents
    * Trade Marks
    * Designs
    * Plant Breeder’s Rights

    Updates

    Tables and columns

    Due to changes in our systems, some tables have been affected.
    * We have added IPGOD 225 and IPGOD 325 to the dataset!
    * The IPGOD 206 table is not available this year.
    * Many tables have been re-built, and as a result may have different columns or different possible values. Please check the data dictionary for each table before use.

    Data quality improvements

    Data quality has been improved across all tables.
    * Null values are simply empty rather than '31/12/9999'.
    * All date columns are now in ISO format 'yyyy-mm-dd'.
    * All indicator columns have been converted to Boolean data type (True/False) rather than Yes/No, Y/N, or 1/0.
    * All tables are encoded in UTF-8.
    * All tables use the backslash \ as the escape character.
    * The applicant name cleaning and matching algorithms have been updated. We believe that this year's method improves the accuracy of the matches. Please note that the "ipa_id" generated in IPGOD 2019 will not match with those in previous releases of IPGOD.
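
    A minimal sketch in R of reading one IPGOD table using the conventions listed above (UTF-8 encoding, backslash escapes, empty strings for nulls); the table file name is an assumption.

    library(readr)

    ipgod_225 <- read_delim(
      "ipgod225.csv",                  # assumed file name for the IPGOD 225 table
      delim            = ",",
      escape_backslash = TRUE,         # tables use backslash as the escape character
      escape_double    = FALSE,
      na               = "",           # null values are simply empty
      locale           = locale(encoding = "UTF-8")
    )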

  11. Water column data from samples collected on R/V Hugh Sharp cruise HRS1803GL...

    • search.dataone.org
    • darchive.mblwhoilibrary.org
    • +1more
    Updated Mar 9, 2025
    + more versions
    Cite
    George W. Luther; Bradley M. Tebo (2025). Water column data from samples collected on R/V Hugh Sharp cruise HRS1803GL in the Chesapeake Bay during July-August 2018 [Dataset]. https://search.dataone.org/view/sha256%3Ac666e4d8584e5b75652724291335651de8db9ae2f73d49353d8a1fc0c22d02e2
    Explore at:
    Dataset updated
    Mar 9, 2025
    Dataset provided by
    Biological and Chemical Oceanography Data Management Office (BCO-DMO)
    Authors
    George W. Luther; Bradley M. Tebo
    Time period covered
    Jul 28, 2018 - Aug 3, 2018
    Area covered
    Description

    Water column data from samples collected on R/V Hugh Sharp cruise HRS1803GL in the Chesapeake Bay during July-August 2018. Samples were collected by CTD and from an in situ pump profiler system attached to the CTD rosette.

  12. Water-column environmental variables and accompanying discrete CTD...

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Water-column environmental variables and accompanying discrete CTD measurements collected off California and Oregon during NOAA Ship Lasker R-19-05 (USGS field activity 2019-672-FA) from October to November 2019 (ver. 2.0, July 2022) [Dataset]. https://catalog.data.gov/dataset/water-column-environmental-variables-and-accompanying-discrete-ctd-measurements-collected--441f7
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    Various water column variables, including salinity, dissolved inorganic nutrients, pH, total alkalinity, dissolved inorganic carbon, and radiocarbon isotopes, were measured in samples collected using a Niskin-bottle rosette at selected depths from sites offshore of California and Oregon from October to November 2019 during NOAA Ship Lasker R-19-05 (USGS field activity 2019-672-FA). CTD (Conductivity Temperature Depth) data were also collected at each depth that a Niskin-bottle sample was collected and are presented along with the water sample data. This data release supersedes version 1.0, published in August 2020 at https://doi.org/10.5066/P9ZS1JX8. Versioning details are documented in the accompanying VersionHistory_P9JKYWQU.txt file.

  13. Data from: Experimental and Numerical Analysis of Battened Built-up...

    • scielo.figshare.com
    jpeg
    Updated Jun 1, 2023
    Cite
    N. Divyah; R. Thenmozhi; M. Neelamegam (2023). Experimental and Numerical Analysis of Battened Built-up Lightweight Concrete Encased Composite Columns subjected to Axial Cyclic loading [Dataset]. http://doi.org/10.6084/m9.figshare.14325260.v1
    Explore at:
    jpegAvailable download formats
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO journals
    Authors
    N. Divyah; R. Thenmozhi; M. Neelamegam
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: In the recent era, built-up columns have been continuously used by engineers in the design and analysis of tall buildings and bridges. Vibration analysis of these types of columns is essential to understand their failure modes. In that respect, this study analyzes a concrete-encased built-up column made by configuring cold-formed steel angle sections connected by means of battens, encased in normal-weight and lightweight concrete with and without the inclusion of basalt fibre. Eight battened columns were simulated, encased with four different types of concrete and subjected to axial cyclic loading. The experimental results were correlated with a numerical investigation performed using FEA. The results indicated that the type of concrete dramatically influences the behaviour of the columns. Higher ultimate strength and ductility were observed for all specimens, owing to the lower shear capacity of the battens. The intensity of the axial cyclic load has a significant effect on the ultimate strength and deflection of the columns, but is less influential on the yield strength. It was concluded that the experimental and FEA results show good agreement with each other, with an error of 7.48%.

  14. Numerical code and data for the stellar structure and dynamical instability...

    • datadryad.org
    • zenodo.org
    zip
    Updated May 10, 2021
    Cite
    Arun Mathew; Malay K. Nandy (2021). Numerical code and data for the stellar structure and dynamical instability analysis of generalised uncertainty white dwarfs [Dataset]. http://doi.org/10.5061/dryad.dncjsxkzt
    Explore at:
    zipAvailable download formats
    Dataset updated
    May 10, 2021
    Dataset provided by
    Dryad
    Authors
    Arun Mathew; Malay K. Nandy
    Time period covered
    2021
    Description

    There is a total of 17 datasets to produce all the Figures in the article. There are mainly two different data files: GUP White Dwarf Mass-Radius (GUPWD_M-R) data and GUP White Dwarf Profile (GUPWD_Profile) data.

    The file GUPWD_M-R gives only the Mass-Radius relation with Radius (km) in the first column and Mass (solar mass) in the second.
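
    A minimal sketch in R for plotting one mass-radius curve; the file name is one of those listed under Figure 1 below, and the two columns follow the description above.

    mr <- read.table("GUPWD_M-R[Beta0=E42].dat",
                     col.names = c("radius_km", "mass_msun"))
    plot(mr$radius_km, mr$mass_msun, type = "l",
         xlab = "Radius (km)", ylab = "Mass (solar masses)")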

    On the other hand, GUPWD_Profile provides the complete profile with the following columns:

    column 1: Dimensionless central Fermi momentum $\xi_c$
    column 2: Central density $\rho_c$ (Log10[$\rho_c$ / (g cm$^{-3}$)])
    column 3: Radius $R$ (km)
    column 4: Mass $M$ (solar mass)
    column 5: Square of the fundamental frequency $\omega_0^2$ (sec$^{-2}$)

    =====================================================================================

    Figure 1 (a) gives Mass-Radius (M-R) curves for $\beta_0=10^{42}$, $10^{41}$ and $10^{40}$. The filenames of the corresponding datasets are

    GUPWD_M-R[Beta0=E42].dat GUPWD_M-R[Beta0=E41].dat GUPWD_M-R[Beta0...

  15. U: Modelled Isotopes and firn parameters using the Schwander et al. (1997)...

    • doi.pangaea.de
    html, tsv
    Updated May 2, 2022
    + more versions
    Cite
    Michael Döring (2022). U: Modelled Isotopes and firn parameters using the Schwander et al. (1997) firn model with temperature (dataset R, columns 4,7) and 200 km ice sheet margin retreat accumulation rate scenario (dataset Q, column 4) as inputs [Dataset]. http://doi.org/10.1594/PANGAEA.943619
    Explore at:
    html, tsv (available download formats)
    Dataset updated
    May 2, 2022
    Dataset provided by
    PANGAEA
    Authors
    Michael Döring
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1990
    Area covered
    Variables measured
    δ15N, Gas age, Ice age, δ40Ar/4, δ15N excess, Age, difference, Firn temperature gradient, Lock-in depth in ice equivalent
    Description

    This dataset is about: U: Modelled Isotopes and firn parameters using the Schwander et al. (1997) firn model with temperature (dataset R, columns 4,7) and 200 km ice sheet margin retreat accumulation rate scenario (dataset Q, column 4) as inputs. Please consult parent dataset @ https://doi.org/10.1594/PANGAEA.943597 for more information.

  16. Crangon crangon in the Sylt-Rømø bight in 2017

    • datadiscoverystudio.org
    893053
    Updated 2018
    + more versions
    Cite
    Wiltshire, Karen Helen; Rick, Johannes J; Asmus, Ragnhild; Kadel, Petra; Hussel, Birgit; Asmus, Harald (2018). Crangon crangon in the Sylt-Rømø bight in 2017 [Dataset]. http://doi.org/10.1594/PANGAEA.893053
    Explore at:
    893053 (available download formats)
    Dataset updated
    2018
    Authors
    Wiltshire, Karen Helen; Rick, Johannes J; Asmus, Ragnhild; Kadel, Petra; Hussel, Birgit; Asmus, Harald
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Area covered
    Description

    Technical Information: Due to changing temperature regimes in the North Sea and the Wadden Sea, a fish survey in the Sylt-Rømø bight (SRB) was established in 2007 for at least ten years. The aim is to investigate the Wadden Sea fish fauna with special interest in changes of migration behavior, species composition and feeding habits. Seven stations are sampled monthly inside the SRB. Two additional stations, one outside the bight and one close to the Danish border, are sampled as references four times a year. For sampling, a mini bottom trawl is used: total length 17 m, trawl opening 7 m, height 3 m, with a mesh size of 36 mm in the wings, 16 mm in the mid part and 6 mm in the cod end. At every station one haul in the water column and another at the bottom are sampled, for 15 minutes at a speed of approximately 2 knots. The data will help to give a more detailed picture of food chains and energy flows inside the Wadden Sea.

  17. CTD and water column profiles collected aboard R/V Point Sur cruise PS19_14...

    • search.dataone.org
    • data.griidc.org
    Updated Feb 5, 2025
    + more versions
    Cite
    Peterson, Richard (2025). CTD and water column profiles collected aboard R/V Point Sur cruise PS19_14 in the Gulf of Mexico from 2019-01-26 to 2019-01-28 [Dataset]. http://doi.org/10.7266/n7-wp6x-2736
    Explore at:
    Dataset updated
    Feb 5, 2025
    Dataset provided by
    GRIIDC
    Authors
    Peterson, Richard
    Description

    This dataset reports CTD and water column profiles near the GC-600 natural oil seep in the Gulf of Mexico, collected aboard R/V Point Sur cruise PS19_14 from 2019-01-26 to 2019-01-28. Water column profiles were collected at two sampling sites (GC-699 and GC-600) and the data include CTD, dissolved oxygen, Chlorophyll-a, Dissolved Organic Matter (DOM) fluorescence (fDOM), altimeter, Photosynthetically Available Radiation (PAR) and Surficial PAR (SPAR). During this cruise, we performed a series of CTD casts in and around both sites to constrain water column structure and radium isotopes. The main objective of the cruise was to use the ROV Odysseus from Pelagic Research Services to directly sample hydrocarbons emanating from MegaPlume at GC-600 (27° 22.199'N 90° 34.262'W). During our time at sea, we further aimed to sample radium isotopes in the water column profiles. The R/V Point Sur cruise PS19_14 was led by chief scientist Dr. Richard Peterson.

  18. Finite Element Analysis perturbation files for Rubin Observatory Simonyi...

    • zenodo.org
    application/gzip
    Updated Sep 28, 2023
    Cite
    Joshua E. Meyers (2023). Finite Element Analysis perturbation files for Rubin Observatory Simonyi Survey Telescope and LSST Camera [Dataset]. http://doi.org/10.5281/zenodo.8384326
    Explore at:
    application/gzip (available download formats)
    Dataset updated
    Sep 28, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Joshua E. Meyers
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ## Notes on FEA files


    # M1M3 Bending modes

    M1M3_1um_156_grid.fits.gz

    - source = IM/data/M1M3/M1M3_1um_156_grid.txt

    - shape = (5256, 159)

    - Each row is one of 5256 FEA nodes.

    - 0th column is M1M3 disambiguator

    - 1st and 2nd columns are FEA node x and y in M1M3 CS

    - Last 156 columns are bending modes; the z-displacement of each node for each mode.
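
    A minimal sketch in R of reading the bending-mode grid; this assumes the FITSio package and that the file has been decompressed first (e.g. gunzip M1M3_1um_156_grid.fits.gz). The column split follows the notes above.

    library(FITSio)

    grid    <- readFITS("M1M3_1um_156_grid.fits")$imDat   # 5256 x 159 array
    node_xy <- grid[, 2:3]    # FEA node x and y in the M1M3 coordinate system
    modes   <- grid[, 4:159]  # z-displacement of each node for each of the 156 modes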

    M1M3_1um_156_force.fits.gz

    - source = IM/data/M1M3/M1M3_1um_156_force.txt

    - shape = (156, 159)

    - Each row is one of 156 bending modes.

    - 0th column is actuator ID

    - 1st and 2nd columns are actuator x and y in M1M3 CS

    - Last 156 columns are forces in Newtons for each mode.


    # M1M3 print through

    M1M3_dxdydz_zenith.fits.gz

    - source = IM/data/M1M3/M1M3_dxdydz_zenith.npy

    - shape = (5256, 3)

    - Each row is one of 5256 FEA nodes.

    - Columns are dx, dy, dz in M1M3 CS.

    - This is the gravitational "print through" when mirror is zenith pointing

    M1M3_dxdydz_horizon.fits.gz

    - source = IM/data/M1M3/M1M3_dxdydz_horizon.npy

    - shape = (5256, 3)

    - Each row is one of 5256 FEA nodes.

    - Columns are dx, dy, dz in M1M3 CS.

    - This is the gravitational "print through" when mirror is horizon pointing

    M1M3_force_zenith.fits.gz

    - source = IM/data/M1M3/M1M3_force_zenith.npy

    - shape = (256,)

    - Each row is one of 256 actuators. (So we consider the x and y actuators here too.)

    - Columns are forces in Newtons.

    - These are the mirror support forces when the mirror is zenith pointing. (Is this after optimization? Include LUT or not?)

    M1M3_force_horizon.fits.gz

    - source = IM/data/M1M3/M1M3_force_horizon.npy

    - shape = (256,)

    - Each row is one of 256 actuators. (So we consider the x and y actuators here too.)

    - Columns are forces in Newtons.

    - These are the mirror support forces when the mirror is horizon pointing. (Is this after optimization? Include LUT or not?)


    # M1M3 Thermal

    M1M3_thermal_FEA.fits.gz

    - source = IM/data/M1M3/M1M3_thermal_FEA.npy

    - shape = (5244, 7)

    - Each row is one of 5244 FEA nodes. (Why aren't these the same as above? I don't know.)

    - Columns are:

    - 0: Unit-Normalized FEA x

    - 1: Unit-Normalized FEA y

    - 2: Bulk temperature dz coefficient

    - 3: x temperature gradient dz coefficient

    - 4: y temperature gradient dz coefficient

    - 5: z temperature gradient dz coefficient

    - 6: r temperature gradient dz coefficient


    # M1M3 Miscellany

    M1M3_influence_256.fits.gz

    - source = IM/data/M1M3/M1M3_influence_256.npy

    - shape = (5256, 256)

    - Each row is one of 5256 FEA nodes.

    - Each column is one of 256 actuators.

    - Values are dz/dF for each actuator/node.

    M1M3_LUT.fits.gz

    - source = IM/data/M1M3/M1M3_LUT.txt

    - shape = (257, 91)

    - First row is the elevation index in degrees (0-90 inclusive). Last 256 rows are forces in Newtons.

    - Each column is LUT for one value of the elevation index.

    M1M3_1000N_UL_shape_156.fits.gz

    - source = IM/data/M1M3/M1M3_1000N_UL_shape_156.npy

    - shape = (5256, 156)

    - Rows must be FEA nodes, columns must be bending modes.

    - Not sure what the purpose is of this one.


    # M2 Bending modes

    M2_1um_grid.fits.gz

    - source = IM/data/M2/M2_1um_grid.DAT

    - shape = (15984, 75)

    - Each row is one of 15984 FEA nodes.

    - 0th column is node index ?

    - 1st and 2nd columns are FEA node x and y in M2 CS

    - Last 72 columns are bending modes; the z-displacement of each node for each mode.

    M2_1um_force.fits.gz

    - source = IM/data/M2/M2_1um_force.DAT

    - shape = (72, 75)

    - Each row is one of 72 bending modes.

    - 0th column is actuator ID

    - 1st and 2nd columns are actuator x and y in M2 CS

    - Last 72 columns are forces in Newtons for each mode.

    # M2 print through / thermal

    M2_GT_FEA.fits.gz

    - source = IM/data/M2/M2_GT_FEA.txt

    - shape = (9084, 6)

    - Each row is one of 9084 FEA nodes. (Why aren't these the same as above? I don't know.)

    - Columns are:

    - 0: Unit-Normalized FEA x

    - 1: Unit-Normalized FEA y

    - 2: Zenith print through dz coefficient

    - 3: Horizon print through dz coefficient

    - 4: z temperature gradient dz coefficient

    - 5: r temperature gradient dz coefficient


  19. Cross-position activity recognition

    • kaggle.com
    Updated Dec 21, 2017
    Cite
    Jindong Wang (2017). Cross-position activity recognition [Dataset]. https://www.kaggle.com/datasets/jindongwang92/crossposition-activity-recognition/code
    Explore at:
    Croissant (available download format). Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Dec 21, 2017
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jindong Wang
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This directory contains the cross-position activity recognition datasets used in the following paper. Please consider citing this article if you want to use the datasets.

    Jindong Wang, Yiqiang Chen, Lisha Hu, Xiaohui Peng, and Philip S. Yu. Stratified Transfer Learning for Cross-domain Activity Recognition. 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom).

    These are secondary datasets constructed from three public datasets: OPPORTUNITY (opp) [1], PAMAP2 (pamap2) [2], and UCI DSADS (dsads) [3].

    Here is some useful information about this directory. Please feel free to contact jindongwang@outlook.com for more information.

    1. This is NOT the raw data, since I have performed feature extraction and normalized the features into [-1,1]. The code for feature extraction can be found here: https://github.com/jindongwang/activityrecognition/tree/master/code. Currently, there are 27 features for a single sensor and 81 features for a body part. More information can be found in the above PerCom-18 paper.

    2. There are 4 .mat files corresponding to the datasets: dsads.mat for UCI DSADS, opp_hl.mat and opp_loco.mat for OPPORTUNITY, and pamap.mat for PAMAP2. Note that opp_hl and opp_loco denote 'high-level' and 'locomotion' activities, respectively.
    (1) dsads.mat: 9120 x 408. Columns 1-405 are features, listed in the order 'Torso', 'Right Arm', 'Left Arm', 'Right Leg', and 'Left Leg'; each position contributes 81 columns of features. Columns 406-408 are labels: column 406 is the activity sequence indicating the execution of activities (usually not used in experiments), column 407 is the activity label (1-19), and column 408 denotes the person (1-8).
    (2) opp_hl.mat and opp_loco.mat: same as dsads.mat, but they contain more body parts: 'Back', 'Right Upper Arm', 'Right Lower Arm', 'Left Upper Arm', 'Left Lower Arm', 'Right Shoe (Foot)', and 'Left Shoe (Foot)'. (We did not use the data of both shoes in our paper.) Column 460 is the activity label (please refer to the OPPORTUNITY dataset for the meaning of those activities), column 461 is the activity drill (also check the dataset information), and column 462 denotes the person (1-4).
    (3) pamap.mat: 7312 x 245. Columns 1-243 are features, listed in the order 'Wrist', 'Chest', and 'Ankle'. Column 244 is the activity label. Column 245 denotes the person (1-9).

    3. There are another 3 datasets with the prefix 'cross_', containing only the 4 common classes of each dataset. These are for experimenting with cross-dataset activity recognition (see our PerCom-18 paper). The 4 common classes are lying, standing, walking, and sitting.
    (1) cross_dsads.mat: 1920 x 406. Columns 1-405 are features; column 406 is the label.
    (2) cross_opp.mat: 5022 x 460. Columns 1-459 are features; column 460 is the label.
    (3) cross_pamap.mat: 3063 x 244. Columns 1-243 are features; column 244 is the label.
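
    A minimal sketch in R of loading dsads.mat and splitting features from labels as described in point (1); this assumes the R.matlab package, and since the variable name stored inside the .mat file is not documented here, the first element is taken.

    library(R.matlab)

    mat   <- readMat("dsads.mat")
    dsads <- mat[[1]]              # 9120 x 408 matrix (variable name inside the file may differ)

    features <- dsads[, 1:405]     # 81 features per body position x 5 positions
    activity <- dsads[, 407]       # activity label (1-19)
    person   <- dsads[, 408]       # person id (1-8)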

    -------- Original references for the 3 datasets:

    [1] R. Chavarriaga, H. Sagha, A. Calatroni, S. T. Digumarti, G. Tröster, J. d. R. Millán, and D. Roggen, “The OPPORTUNITY challenge: A benchmark database for on-body sensor-based activity recognition,” Pattern Recognition Letters, vol. 34, no. 15, pp. 2033–2042, 2013.

    [2] A. Reiss and D. Stricker, “Introducing a new benchmarked dataset for activity monitoring,” in Wearable Computers (ISWC), 2012 16th International Symposium on. IEEE, 2012, pp. 108–109.

    [3] B. Barshan and M. C. Yüksek, “Recognizing daily and sports activities in two open source machine learning environments using body-worn sensor units,” The Computer Journal, vol. 57, no. 11, pp. 1649–1667, 2014.

  20. Data from: Water column sample data from predefined locations of the West...

    • catalog.data.gov
    • data.usgs.gov
    • +2more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Water column sample data from predefined locations of the West Florida Shelf: USGS Cruise 11BHM01 [Dataset]. https://catalog.data.gov/dataset/water-column-sample-data-from-predefined-locations-of-the-west-florida-shelf-usgs-cruise-1-fbbd6
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The United States Geological Survey (USGS) is conducting a study on the effects of climate change on ocean acidification within the Gulf of Mexico, dealing specifically with the effect of ocean acidification on marine organisms and habitats. To investigate this, the USGS participated in cruises in the West Florida Shelf and northern Gulf of Mexico regions aboard the R/V Weatherbird II, a ship of opportunity led by Dr. Kendra Daly of the University of South Florida (USF). This cruise occurred May 03-09, 2011, leaving from and returning to Saint Petersburg, Florida. The USGS collected data pertaining to pH, dissolved inorganic carbon (DIC), and total alkalinity in discrete samples. Thirty-four underway discrete samples were collected approximately hourly over a 1632-kilometer (km) track line; additionally, 44 discrete samples were taken at various depths at four stations. Flow-through conductivity-temperature-depth (CTD) data were collected, which include temperature, salinity, and pH. Corroborating the USGS data are the vertical CTD profiles collected by USF, using the following sensors: CTD, oxygen, chlorophyll fluorescence, optical backscatter, and transmissometer. Additionally, discrete depth samples for nutrients, chlorophyll, and particulate organic carbon/nitrogen were collected.
