31 datasets found
  1. Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

    • dataverse.harvard.edu
    Updated Jul 8, 2024
    Cite
    Georgios Boumis; Brad Peter (2024). Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends [Dataset]. http://doi.org/10.7910/DVN/ZZDYM9
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Georgios Boumis; Brad Peter
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends. TSMx is an R script developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations within the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year on the x-axis. In the example below, the cell in the top-right corner gives the direction of the slope for the temporal range 2001–2019. The red line corresponds to the temporal range 2010–2019, and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart: that cell represents the slope for the temporal range 2004–2014.

    This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will be needed to accommodate year ranges other than those provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

    [Figure: TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).]
    TSMx R script:

    # import packages
    library(dplyr)
    library(readr)
    library(ggplot2)
    library(tibble)
    library(tidyr)
    library(forcats)
    library(Kendall)

    options(warn = -1) # disable warnings

    # read data (.csv file with "Year" and "Value" columns)
    data <- read_csv("EVI.csv")

    # prepare row/column names for output matrices
    years <- data %>% pull("Year")
    r.names <- years[-length(years)]
    c.names <- years[-1]
    years <- years[-length(years)]

    # initialize output matrices
    sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
    pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
    slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

    # function to return remaining years given a start year
    getRemain <- function(start.year) {
      years <- data %>% pull("Year")
      start.ind <- which(data[["Year"]] == start.year) + 1
      remain <- years[start.ind:length(years)]
      return(remain)
    }

    # function to subset data for a start/end year combination
    splitData <- function(end.year, start.year) {
      keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
      batch <- data[keep, ]
      return(batch)
    }

    # function to fit linear regression and return slope direction
    fitReg <- function(batch) {
      trend <- lm(Value ~ Year, data = batch)
      slope <- coefficients(trend)[[2]]
      return(sign(slope))
    }

    # function to fit linear regression and return slope magnitude
    fitRegv2 <- function(batch) {
      trend <- lm(Value ~ Year, data = batch)
      slope <- coefficients(trend)[[2]]
      return(slope)
    }

    # function to implement Mann-Kendall (MK) trend test and return significance
    # the test is implemented only for n >= 8
    getMann <- function(batch) {
      if (nrow(batch) >= 8) {
        mk <- MannKendall(batch[['Value']])
        pval <- mk[['sl']]
      } else {
        pval <- NA
      }
      return(pval)
    }

    # function to return slope direction for all combinations given a start year
    getSign <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      signs <- lapply(combs, fitReg)
      return(signs)
    }

    # function to return MK significance for all combinations given a start year
    getPval <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      pvals <- lapply(combs, getMann)
      return(pvals)
    }

    # function to return slope magnitude for all combinations given a start year
    getMagn <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      magns <- lapply(combs, fitRegv2)
      return(magns)
    }

    # retrieve slope direction, MK significance, and slope magnitude
    signs <- lapply(years, getSign)
    pvals <- lapply(years, getPval)
    magns <- lapply(years, getMagn)

    # fill in output matrices
    dimension <- nrow(sign.matrix)
    for (i in 1:dimension) {
      sign.matrix[i, i:dimension] <- unlist(signs[i])
      pval.matrix[i, i:dimension] <- unlist(pvals[i])
      slope.matrix[i, i:dimension] <- unlist(magns[i])
    }
    sign.matrix <-...
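
    Any single cell of the matrix can be reproduced by hand. Below is a minimal sketch (not part of the dataset itself) that recomputes the highlighted 2004-2014 cell outside the matrix loops, assuming the same two-column EVI.csv used by the script above:

    library(readr)
    library(Kendall)

    data <- read_csv("EVI.csv")                    # "Year" and "Value" columns
    batch <- data[data$Year >= 2004 & data$Year <= 2014, ]

    trend <- lm(Value ~ Year, data = batch)
    slope <- coefficients(trend)[[2]]              # magnitude: the slope.matrix cell
    direction <- sign(slope)                       # -1/0/+1: the sign.matrix cell
    pval <- MannKendall(batch$Value)[["sl"]]       # significance: the pval.matrix cell (n = 11 >= 8)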

  2. Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm

    • plos.figshare.com
    docx
    Updated May 31, 2023
    Cite
    Tracey L. Weissgerber; Natasa M. Milic; Stacey J. Winham; Vesna D. Garovic (2023). Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm [Dataset]. http://doi.org/10.1371/journal.pbio.1002128
    Explore at:
    docx (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Tracey L. Weissgerber; Natasa M. Milic; Stacey J. Winham; Vesna D. Garovic
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
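
    As an illustration of the recommended alternative (in R rather than the authors' Excel templates, with hypothetical data), a univariate scatterplot for a small two-group study keeps every observation visible where a bar of group means would not:

    library(ggplot2)

    set.seed(42)
    df <- data.frame(
      group = rep(c("Control", "Treated"), each = 8),  # hypothetical n = 8 per group
      value = c(rnorm(8, mean = 10, sd = 2), rnorm(8, mean = 13, sd = 4))
    )

    # jittered points show each observation; crossbar marks the group median
    ggplot(df, aes(x = group, y = value)) +
      geom_jitter(width = 0.05, size = 2) +
      stat_summary(fun = median, fun.min = median, fun.max = median,
                   geom = "crossbar", width = 0.3)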

  3. Graph Input Data Example.xlsx

    • figshare.com
    xlsx
    Updated Dec 26, 2018
    Cite
    Dr Corynen (2018). Graph Input Data Example.xlsx [Dataset]. http://doi.org/10.6084/m9.figshare.7506209.v1
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Dec 26, 2018
    Dataset provided by
    figshare
    Authors
    Dr Corynen
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The various performance criteria applied in this analysis include the probability of reaching the ultimate target, the costs, elapsed times and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic and statistical data entered by a user, and required for the evaluation of the criteria. It also reports the results of all the computations.

  4. New Issues for New Sites Graph

    • data.cityofchicago.org
    application/rdfxml +5
    Updated Jun 29, 2025
    Cite
    City of Chicago (2025). New Issues for New Sites Graph [Dataset]. https://data.cityofchicago.org/w/mr4w-ra5c/3q3f-6823?cur=1U-alnuO2va
    Explore at:
    json, tsv, csv, xml, application/rdfxml, application/rssxml (available download formats)
    Dataset updated
    Jun 29, 2025
    Authors
    City of Chicago
    Description

    This dataset contains all current and active business licenses issued by the Department of Business Affairs and Consumer Protection. It contains a large number of records/rows and may not be viewable in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu, then open the file in an ASCII text editor, such as Notepad or WordPad, to view and search (see the loading sketch at the end of this entry).

    Data fields requiring description are detailed below.

    APPLICATION TYPE: 'ISSUE' is the record associated with the initial license application. 'RENEW' is a subsequent renewal record. All renewal records are created with a term start date and term expiration date. 'C_LOC' is a change-of-location record; it means the business moved. 'C_CAPA' is a change-of-capacity record; only a few license types may file this type of application. 'C_EXPA' applies only to businesses that have liquor licenses; it means the business location expanded.

    LICENSE STATUS: 'AAI' means the license was issued.

    Business license owners may be accessed at: http://data.cityofchicago.org/Community-Economic-Development/Business-Owners/ezma-pppn To identify the owner of a business, you will need the account number or legal name.

    Data Owner: Business Affairs and Consumer Protection

    Time Period: Current

    Frequency: Data is updated daily
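
    A minimal sketch of loading the full CSV export in R without Excel's row limit; the file name and exact column spellings below are assumptions:

    library(readr)
    library(dplyr)

    licenses <- read_csv("Business_Licenses.csv")   # file name assumed from the export

    # keep initial applications that were issued, per the field notes above
    issued <- licenses %>%
      filter(`APPLICATION TYPE` == "ISSUE",
             `LICENSE STATUS` == "AAI")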

  5. Data from: Increased Radon Exposure from Thawing of Permafrost Due to Climate Change

    • explore.openaire.eu
    Updated Jan 10, 2022
    Cite
    Paul William John Glover (2022). Increased Radon Exposure from Thawing of Permafrost Due to Climate Change [Dataset]. http://doi.org/10.5281/zenodo.5833918
    Explore at:
    Dataset updated
    Jan 10, 2022
    Authors
    Paul William John Glover
    Description

    This database contains one Excel data file with all data required to produce all plot figures in the eponymous paper in Earth's Future, as well as 5 modelling output video files, the stills from which contribute to the non-plot figures in the paper. This database also contains high-quality versions of all the display items in the eponymous paper. For further information please contact the author at p.w.j.glover@leeds.ac.uk.

  6. Data from: Low-Disturbance Manure Incorporation

    • agdatacommons.nal.usda.gov
    xlsx
    Updated Feb 8, 2024
    Cite
    Jessica Sherman; William Jokela; Carol Barford (2024). Low-Disturbance Manure Incorporation [Dataset]. http://doi.org/10.15482/USDA.ADC/1401975
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Feb 8, 2024
    Dataset provided by
    Ag Data Commons
    Authors
    Jessica Sherman; William Jokela; Carol Barford
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The LDMI experiment (Low-Disturbance Manure Incorporation) was designed to evaluate nutrient losses with conventional and improved liquid dairy manure management practices in a corn silage (Zea mays) / rye cover-crop (Secale cereale) system. The improved manure management treatments were designed to incorporate manure while maintaining crop residue for erosion control. Field observations included greenhouse gas (GHG) fluxes from soil, soil nutrient concentrations, crop growth and harvest biomass and nutrient content, as well as monitoring of soil physical and chemical properties. Observations from LDMI have been used for parameterization and validation of computer simulation models of GHG emissions from dairy farms (Gaillard et al., submitted). The LDMI experiment was performed as part of the Dairy CAP, described below.

    The experiment included ten different treatments: (1) broadcast manure with disk-harrow incorporation, (2) broadcast manure with no tillage incorporation, (3) manure application with "strip-tillage" (sweep injection ridged with paired disks), (4) aerator band manure application, (5) low-disturbance sweep injection of manure, (6) coulter injection of manure with sweep tillage, (7) no manure with urea to supply 60 lb N/acre (67 kg N/ha), (8) no manure with urea to supply 120 lb N/acre (135 kg N/ha), (9) no manure with urea to supply 180 lb N/acre (202 kg N/ha), and (10) a no-manure / no-fertilizer control. Manure was applied in the fall; fertilizer was applied in the spring. These ten treatments were replicated four times in a randomized complete block design.

    The LDMI experiment was conducted at the Marshfield Research Station of the University of Wisconsin and the USDA Agricultural Research Service (ARS) in Stratford, WI (Marathon County, latitude 44.7627, longitude -90.0938). Soils at the research station are from the Withee soil series (fine-loamy, mixed, superactive, frigid Aquic Glossudalf). Each experimental plot was approximately 70 square meters. A weather station was located at the south edge of the field site. A secondary weather station (MARS South), for snow and snow water equivalence data and for backup of the main weather station, was located at latitude 44.641445 and longitude -90.133526 (16,093 meters southwest of the field site).

    The experiment was initiated on November 28, 2011 with fall tillage and manure application in each plot according to its treatment type. Each spring, corn silage was planted in rows at a rate of 87,500 plants per hectare; the cultivar was Pioneer P8906HR. The LDMI experiment ended on November 30, 2015.

    The manure applied in this experiment was from the dairy herd at the Marshfield Research Station. Cows were fed a diet of 48% dry matter, 17.45% protein, and 72.8% total digestible nutrients. Liquid slurry manure, including feces, urine, and rain, was collected and stored in a lagoon on the site. Manure was withdrawn from the lagoon, spread on the plots, and sampled for analysis all on the same day, once per year. Manure samples were analyzed at the University of Wisconsin Soil and Forage Lab in Marshfield (NH4-N, total P and total K) and at the Marshfield ARS (pH, dry matter, volatile solids, total N and total C). GHG fluxes from soil (CO2, CH4, N2O) were measured using static chambers as described in Parkin and Venterea (2010). Measurements were made with the chambers placed across the rows of corn.
    Additional soil chemical and physical characteristics were measured as noted in the data dictionary and other metadata of the LDMI data set, included here. This experiment was part of "Climate Change Mitigation and Adaptation in Dairy Production Systems of the Great Lakes Region," also known as the Dairy Coordinated Agricultural Project (Dairy CAP), funded by the United States Department of Agriculture - National Institute of Food and Agriculture (award number 2013-68002-20525). The main goal of the Dairy CAP was to improve understanding of the magnitudes and controlling factors over GHG emissions from dairy production in the Great Lakes region. Using this knowledge, the Dairy CAP has improved life cycle analysis (LCA) of GHG production by Great Lakes dairy farms, developed farm management tools, and conducted extension, education and outreach activities.

    Resources in this dataset:
    Resource Title: Data_dictionary_DairyCAP_LDMI. File Name: Data_dictionary_DairyCAP_LDMI.xlsx. Description: Data dictionary for the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI (separate spreadsheet tabs). Recommended software: Microsoft Excel 2016 (https://products.office.com/en-us/excel).
    Resource Title: DairyCAP_LDMI. File Name: DairyCAP_LDMI.xlsx. Description: Data from the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI. Recommended software: Microsoft Excel 2016 (https://products.office.com/en-us/excel).
    Resource Title: Data Dictionary DairyCAP LDMI. File Name: Data_dictionary_DairyCAP_LDMI.csv. Description: Data dictionary for the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI.

    Resource Title: Biomass Data. File Name: LDMI_Biomass.csv
    Resource Title: Experimental Set-up Data. File Name: LDMI_Exp_setup.csv
    Resource Title: Gas Flux Data. File Name: LDMI_Gas_Fluxes.csv
    Resource Title: Management History Data. File Name: LDMI_Management_History.csv
    Resource Title: Manure Analysis Data. File Name: LDMI_Manure_Analysis.csv
    Resource Title: Soil Chemical Data. File Name: LDMI_Soil_Chem.csv
    Resource Title: Soil Physical Data. File Name: LDMI_Soil_Phys.csv
    Resource Title: Weather Data. File Name: LDMI_Weather.csv

  7. Quantitative questions - analysed data

    • figshare.com
    xlsx
    Updated Aug 24, 2023
    Cite
    Ashleigh Prince (2023). Quantitative questions - analysed data [Dataset]. http://doi.org/10.6084/m9.figshare.24029238.v1
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Aug 24, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ashleigh Prince
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Excel spreadsheet contains the quantitative questions (Questions 1, 3 and 4). Each question is analysed in the form of a frequency distribution table and a pie chart.
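
    For readers reproducing this style of summary outside Excel, a base-R sketch with hypothetical single-choice responses:

    responses <- c("Yes", "No", "Yes", "Unsure", "Yes", "No")   # hypothetical answers

    freq <- table(responses)                # frequency distribution table
    round(100 * prop.table(freq), 1)        # relative frequencies in percent
    pie(freq, main = "Question 1")          # pie chart of the same distribution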

  8. Exploring soil sample variability through principal component analysis (PCA) using excel data

    • metadatacatalogue.lifewatch.eu
    Updated Jun 1, 2024
    Cite
    (2024). Exploring soil sample variability through principal component analysis (PCA) using excel data [Dataset]. https://metadatacatalogue.lifewatch.eu/geonetwork/search?keyword=Scree%20plot
    Explore at:
    Dataset updated
    Jun 1, 2024
    Description

    The SoilExcel workflow is a tool designed to optimize soil data analysis. It covers data preparation, statistical analysis methods, and result visualization. SoilExcel integrates various environmental data types and applies advanced techniques to enhance accuracy in soil studies. The results demonstrate its effectiveness in interpreting complex data, aiding decision-making in environmental management projects.

    Background. Understanding the intricate relationships and patterns within soil samples is crucial for various environmental and agricultural applications. Principal Component Analysis (PCA) serves as a powerful tool in unraveling the complexity of multivariate soil datasets. Soil datasets often consist of numerous variables representing diverse physicochemical properties, making PCA an invaluable method for:
    ∙ Dimensionality reduction: simplifying the analysis without compromising data integrity by reducing the dimensionality of large soil datasets.
    ∙ Identification of dominant patterns: revealing dominant patterns or trends within the data, providing insights into key factors contributing to overall variability.
    ∙ Exploration of variable interactions: enabling the exploration of complex interactions between different soil attributes, enhancing understanding of their relationships.
    ∙ Interpretability of data variance: clarifying how much variance is explained by each principal component, aiding in discerning the significance of different components and variables.
    ∙ Visualization of data structure: facilitating intuitive comprehension of data structure through plots such as scatter plots of principal components, helping identify clusters, trends, and outliers.
    ∙ Decision support for subsequent analyses: providing a foundation for subsequent analyses by guiding decision-making, whether in identifying influential variables, understanding data patterns, or selecting components for further modeling.

    Introduction. The motivation behind this workflow is rooted in the need to conduct a thorough analysis of a diverse soil dataset characterized by an array of physicochemical variables. Comprising multiple rows, each representing a distinct soil sample, the dataset encompasses variables such as percentage of coarse sands, percentage of organic matter, hydrophobicity, and others. The intricacies of this dataset demand a strategic approach to preprocessing, analysis, and visualization. To lay the groundwork, the workflow begins with the transformation of an initial Excel file into CSV format, ensuring improved compatibility and ease of use throughout subsequent analyses. Furthermore, the workflow is designed to empower users in the selection of relevant variables, a task facilitated by user-defined parameters; this flexibility allows for a focused and tailored dataset, essential for meaningful analysis. Acknowledging the inherent challenges of missing data, the workflow offers options for data quality improvement, including optional interpolation of missing values or the removal of rows containing such values. Standardizing the dataset and specifying the target variable are crucial steps, establishing a robust foundation for subsequent statistical analyses. Incorporating PCA enables users to explore inherent patterns and structures within the data, and its adaptability allows users to customize the analysis by specifying the number of components or the desired variance. The workflow concludes with practical graphical representations, including covariance and correlation matrices, a scree plot, and a scatter plot, offering users valuable visual insights into the complexities of the soil dataset.

    Aims. The primary objectives of this workflow are tailored to address specific challenges and goals inherent in the analysis of diverse soil samples:
    ∙ Data transformation: efficiently convert the initial Excel file into CSV format to enhance compatibility and ease of use.
    ∙ Variable selection: empower users to extract relevant variables based on user-defined parameters, facilitating a focused and tailored dataset.
    ∙ Data quality improvement: provide options for interpolation or removal of missing values to ensure dataset integrity for downstream analyses.
    ∙ Standardization and target specification: standardize the dataset values and designate the target variable, laying the groundwork for subsequent statistical analyses.
    ∙ PCA: conduct PCA with flexibility, allowing users to specify the number of components or desired variance for a comprehensive understanding of data variance and patterns.
    ∙ Graphical representations: generate visual outputs, including covariance and correlation matrices, a scree plot, and a scatter plot, enhancing the interpretability of the soil dataset.

    Scientific questions. This workflow addresses critical scientific questions related to soil analysis:
    ∙ Variable importance: identify variables contributing significantly to principal components through the covariance matrix and PCA.
    ∙ Data structure: explore correlations between variables and gain insights from the correlation matrix.
    ∙ Optimal component number: determine the optimal number of principal components using the scree plot for effective representation of data variance.
    ∙ Target-related patterns: analyze how selected principal components correlate with the target variable in the scatter plot, revealing patterns based on target variable values.
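
    A condensed sketch of these steps in base R (the workflow itself is tool-driven; the CSV file name and column names below are hypothetical):

    df <- read.csv("soil_samples.csv")                       # Excel file already converted to CSV

    vars <- c("coarse_sand_pct", "organic_matter_pct", "hydrophobicity")  # user-selected variables
    X <- df[complete.cases(df[, vars]), vars]                # option: drop rows with missing values

    X.std <- scale(X)                                        # standardize the dataset
    pca <- prcomp(X.std)

    cov(X.std)                                               # covariance matrix
    cor(X)                                                   # correlation matrix
    screeplot(pca, type = "lines")                           # scree plot: pick the component count
    plot(pca$x[, 1], pca$x[, 2],
         xlab = "PC1", ylab = "PC2")                         # scatter plot of the first two PCs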

  9. S&P 500 - 100 Year Historical Chart

    • macrotrends.net
    csv
    Updated Jun 30, 2025
    Cite
    MACROTRENDS (2025). S&P 500 - 100 Year Historical Chart [Dataset]. https://www.macrotrends.net/2324/sp-500-historical-chart-data
    Explore at:
    csv (available download formats)
    Dataset updated
    Jun 30, 2025
    Dataset authored and provided by
    MACROTRENDS
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    1915 - 2025
    Area covered
    United States
    Description

    Interactive chart of the S&P 500 stock market index since 1927. Historical data is inflation-adjusted using the headline CPI and each data point represents the month-end closing value. The current month is updated on an hourly basis with today's latest value.
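
    The stated adjustment deflates each month-end close by the headline CPI. A sketch in R, where the file name and column names of the downloaded CSV are assumptions:

    sp500 <- read.csv("sp-500-historical-chart-data.csv")    # assumed column names: value, cpi

    latest.cpi <- tail(sp500$cpi, 1)
    sp500$real <- sp500$value * latest.cpi / sp500$cpi       # deflate to latest-month dollars
    plot(sp500$real, type = "l", ylab = "S&P 500 (inflation-adjusted)")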

  10. Data and Codes for the paper "A Simple, SIR-like but Individual-Based Epidemic Model: Application in Comparison of COVID-19 in New York City and Wuhan"

    • data.mendeley.com
    Updated Jul 2, 2020
    + more versions
    Cite
    Xiaoping Liu (2020). Data and Codes for the paper "A Simple, SIR-like but Individual-Based Epidemic Model: Application in Comparison of COVID-19 in New York City and Wuhan" [Dataset]. http://doi.org/10.17632/3vg2r3ymgk.2
    Explore at:
    Dataset updated
    Jul 2, 2020
    Authors
    Xiaoping Liu
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Wuhan, New York
    Description

    The author has calculated, simulated, and plotted all epidemic curves for the paper "A Simple, SIR-like but Individual-Based Epidemic Model: Application in Comparison of COVID-19 in New York City and Wuhan". All these calculated and simulated curves are shown in Figures 2-11, and all data and code for generating the figures are placed in separate sheets of one Excel file. Using this Excel file, you can calculate and plot all epidemic curves shown in Figures 2-11. The values of parameters l and i are placed in two cells marked in yellow, located in the top one or two rows on the left. After you change either or both of these parameters, the Excel file will recalculate A, I, R and T (for Figures 2-6) and plot a new epidemic curve (for Figures 7-11). The calculated data for each pair of selected l and i are listed in the columns under the two marked parameters. You can use these calculated data to plot epidemic curves in the Excel file yourself.
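
    For orientation only, a textbook discrete-time SIR recursion in R is sketched below. This is not the paper's individual-based model, and its beta/gamma parameters are not the paper's l and i; it merely illustrates the kind of epidemic curve the workbook produces:

    # generic discrete-time SIR recursion (illustrative, not the paper's model)
    sir <- function(beta, gamma, N, I0, days) {
      S <- N - I0; I <- I0; R <- 0
      out <- data.frame(day = 0, S = S, I = I, R = R)
      for (t in 1:days) {
        new.inf <- beta * S * I / N        # new infections this step
        new.rec <- gamma * I               # new recoveries this step
        S <- S - new.inf
        I <- I + new.inf - new.rec
        R <- R + new.rec
        out <- rbind(out, data.frame(day = t, S = S, I = I, R = R))
      }
      out
    }

    nyc <- sir(beta = 0.3, gamma = 0.1, N = 8.4e6, I0 = 10, days = 180)  # placeholder values
    plot(nyc$day, nyc$I, type = "l", xlab = "Day", ylab = "Infected")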

  11. Data from: Back into the past: Resurveying random plots to track community changes in Italian coastal dunes

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    bin
    Updated Jun 3, 2022
    Cite
    Marta Gaia Sperandii; Marta Gaia Sperandii; Alicia Teresa Rosario Acosta; Alicia Teresa Rosario Acosta (2022). Back into the past: Resurveying random plots to track community changes in Italian coastal dunes [Dataset]. http://doi.org/10.5061/dryad.np5hqbzr8
    Explore at:
    bin (available download formats)
    Dataset updated
    Jun 3, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Marta Gaia Sperandii; Marta Gaia Sperandii; Alicia Teresa Rosario Acosta; Alicia Teresa Rosario Acosta
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset includes two Excel sheets. The first contains vegetation data ("species_data", a matrix of 668 plots x 213 species) and the second contains plant functional trait data ("traits_data") that were used to evaluate temporal changes in taxonomic and functional diversity of Mediterranean coastal dune habitats.

    As to the first sheet ("species_data"): vegetation data were collected at two points in time (Time 0, hereafter T0: 2002-2007, and Time 1, hereafter T1: 2017-2018) in 334 randomly-sampled, georeferenced, standardized (4 m2) plots. Historical data used for the resurveying study were extracted from RanVegDunes (Sperandii et al. 2017). Details on the resurveying protocol can be found in Sperandii et al. (2019), but in short: resampling activities took place during the same months in which the original sampling was done, and plot positions were relocated using a GPS unit on which historical geographic coordinates were stored. Plots are located in coastal dune sites along the Tyrrhenian and Adriatic coasts of Central Italy, and belong to herbaceous communities classified into the following EU Habitats (sensu Annex I 92/43/EEC): upper beach (Habitat 1210), embryo dunes (Habitat 2110), shifting dunes (Habitat 2120), fixed dunes (Habitat 2210), and dune grasslands (Habitat 2230). A subset of plots could not be classified into an EU Habitat because they were highly disturbed or invaded by alien species ("NC-plots"). The matrix includes cover data, expressed as percentage (%) cover.

    As to the second sheet ("traits_data"): this sheet includes data on 3 plant functional traits, two of them quantitative (plant height, specific leaf area - SLA) and one qualitative (plant lifespan). Data for the quantitative traits represent species-level average trait values and were extracted from "TraitDunes", a database registered on the global platform TRY (Kattge et al., 2020). Functional trait data were collected in the same sites covered by the resurveying study. Functional trait data were originally measured on the most abundant species, and are available for a varying number of species depending on the trait.

    References:

    Kattge, J., Bönisch, G., Díaz, S., Lavorel, S., Prentice, I. C., Leadley, P., ... & Wirth, C. (2020). TRY plant trait database–enhanced coverage and open access. Global Change Biology.

    Sperandii, M.G., Prisco, I., Stanisci, A., & Acosta, A.T.R (2017). RanVegDunes-A random plot database of Italian coastal dunes. Phytocoenologia, 47(2), 231-232.

    Sperandii, M.G., Bazzichetto, M., Gatti, F., & Acosta, A.T.R. (2019). Back into the past: Resurveying random plots to track community changes in Italian coastal dunes. Ecological Indicators, 96, 572-578.

  12. Rainfall simulation experiments in the Southwestern USA using the Walnut Gulch Rainfall Simulator

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    • +3more
    Updated Apr 21, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). Rainfall simulation experiments in the Southwestern USA using the Walnut Gulch Rainfall Simulator [Dataset]. https://catalog.data.gov/dataset/rainfall-simulation-experiments-in-the-southwestern-usa-using-the-walnut-gulch-rainfall-si-cb5b2
    Explore at:
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service
    Area covered
    Southwestern United States, Walnut Gulch, United States
    Description

    Introduction. Preservation and management of semi-arid ecosystems requires understanding of the processes involved in soil erosion and their interaction with the plant community. Rainfall simulations on natural plots provide an effective way of obtaining a large amount of erosion data under controlled conditions in a short period of time. This dataset contains hydrological (rainfall, runoff, flow velocity), erosion (sediment concentration and rate), vegetation (plant cover), and other supplementary information from 272 rainfall simulation experiments conducted at 23 rangeland locations in Arizona and Nevada between 2002 and 2013. The dataset advances our understanding of basic hydrological and biological processes that drive soil erosion on arid rangelands. It can be used to quantify runoff, infiltration, and erosion rates on a variety of ecological sites in the Southwestern USA. Inclusion of wildfire and brush treatment locations combined with long term observations makes it important for studying vegetation recovery, ecological transitions, and the effect of management. It is also a valuable resource for erosion model parameterization and validation.

    Instrumentation. Rainfall was generated by a portable, computer-controlled, variable intensity simulator (Walnut Gulch Rainfall Simulator). The WGRS can deliver rainfall rates between 13 and 178 mm/h with a variability coefficient of 11% across a 2 by 6.1 m area. Estimated kinetic energy of simulated rainfall was 204 kJ/ha/mm and drop size ranged from 0.288 to 7.2 mm. A detailed description and design of the simulator is available in Stone and Paige (2003). Prior to each field season the simulator was calibrated over a range of intensities using a set of 56 rain gages. During the experiments windbreaks were set up around the simulator to minimize the effect of wind on rain distribution. On some of the plots, in addition to the rainfall-only treatment, run-on flow was applied at the top edge of the plot. The purpose of run-on water application was to simulate hydrological processes that occur on longer slopes (>6 m), where the upper portion of the slope contributes runoff onto the lower portion. Runoff rate from the plot was measured using a calibrated V-shaped supercritical flume equipped with a depth gage. Overland flow velocity on the plots was measured using electrolyte and fluorescent dye solution. Dye moving from the application point at 3.2 m distance to the outlet was timed with a stopwatch. Electrolyte transport in the flow was measured by resistivity sensors embedded in the edge of the outlet flume. Maximum flow velocity was defined as the velocity of the leading edge of the solution and was determined from the beginning of the electrolyte breakthrough curve and verified by visual observation (dye). Mean flow velocity was calculated using the mean travel time obtained from the electrolyte solution breakthrough curve using the moment equation (see the sketch at the end of this entry). Soil loss from the plots was determined from runoff samples collected during each run. The sampling interval was variable and aimed to represent rising and falling limbs of the hydrograph, any changes in runoff rate, and steady state conditions. This resulted in approximately 30 to 50 samples per simulation. Shortly before every simulation, plot surface and vegetative cover were measured at a 400-point grid using a laser and line-point intercept procedure (Herrick et al., 2005). Vegetative cover was classified as forbs, grass, and shrub. Surface cover was characterized as rock, litter, plant basal area, and bare soil. These 4 metrics were further classified as protected (located under plant canopy) and unprotected (not covered by the canopy). In addition, plant canopy and basal area gaps were measured on the plots over three lengthwise and six crosswise transects.

    Experimental procedure. Four to eight replicated 6.1 m by 2 m rainfall simulation plots were established on each site. The plots were bound by sheet metal borders hammered into the ground on three sides. On the downslope side a collection trough was installed to channel runoff into the measuring flume. If a site was revisited, repeat simulations were always conducted on the same long term plots. The experimental procedure was as follows. First, the plot was subjected to a 45 min, 65 mm/h intensity simulated rainfall (dry run) intended to create an initial saturated condition that could be replicated across all sites. This was followed by a 45 minute pause and a second simulation with varying intensity (wet run). During wet runs two modes of water application were used: rainfall or run-on. Rainfall wet runs typically consisted of a series of application rates (65, 100, 125, 150, and 180 mm/h) that were increased after runoff had reached steady state for at least five minutes. Runoff samples were collected on the rising and falling limbs of the hydrograph and during each steady state (a minimum of 3 samples). Overland flow velocities were measured during each steady state as previously described. When used, run-on wet runs followed the same procedure as rainfall runs, except water application rates varied between 100 and 300 mm/h. In approximately 20% of simulation experiments the wet run was followed by another simulation (wet2 run) after a 45 min pause. Wet2 runs were similar to wet runs and also consisted of a series of varying intensity rainfalls and/or run-on inputs.

    Resulting Data. The dataset contains hydrological, erosion, vegetation, and ecological data from 272 rainfall simulation experiments conducted on 12 sq. m plots at 23 rangeland locations in Arizona and Nevada. The experiments were conducted between 2002 and 2013, with some locations revisited multiple times.

    Resources in this dataset:
    Resource Title: Appendix A. Data dictionary. File Name: Data dictionary.csv. Description: Explanation of terms and units. Recommended software: MS Excel (https://products.office.com/en-us/excel) or MS Access (https://products.office.com/en-us/access).
    Resource Title: Appendix B. Lists of sites and general information. File Name: Rainfall Simulation Sites Summary.xlsx. Description: List of rainfall simulation sites and individual plots, their coordinates, topographic, soil, ecological and vegetation characteristics, and dates of simulation experiments. The sites are grouped by common geographic area. Recommended software: Microsoft Excel.
    Resource Title: Appendix C. Rainfall simulations. File Name: Rainfall simulation.csv. Description: Rainfall, runoff, sediment, and flow velocity data from rainfall simulation experiments; please see Appendix C. Rainfall simulations (revised) for data with errors corrected (11/27/2017). Recommended software: MS Excel or MS Access.
    Resource Title: Appendix C. Rainfall simulations (revised). File Name: Rainfall simulation (R11272017).csv. Description: Rainfall, runoff, sediment, and flow velocity data from rainfall simulation experiments (updated 11/27/2017). Recommended software: Microsoft Access.
    Resource Title: Appendix D. Ground and vegetation cover. File Name: Plot Ground and Vegetation Cover.csv. Description: Ground (rock, litter, basal, bare soil) cover, foliar cover, and basal gap on plots immediately prior to simulation experiments. Recommended software: Microsoft Excel or Microsoft Access.
    Resource Title: Appendix E. Simulation sites map. File Name: Rainfall Simulator Sites Map.zip. Description: Map of rainfall simulation sites with embedded images in Google Earth. Recommended software: Google Earth (https://www.google.com/earth/).
    Resource Title: Appendix F. Site pictures. File Name: Site photos.zip. Description: Pictures of rainfall simulation sites and plots.
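
    A sketch of the moment calculation mentioned under Instrumentation, in R: mean travel time as the first moment of the breakthrough curve divided by its zeroth moment, then mean velocity over the dye reach. The curve values below are synthetic; only the 3.2 m reach length comes from this entry.

    t <- seq(0, 120, by = 1)                        # seconds since electrolyte application
    conc <- dlnorm(t, meanlog = 3.0, sdlog = 0.5)   # synthetic breakthrough curve

    mean.time <- sum(t * conc) / sum(conc)          # first moment / zeroth moment
    mean.velocity <- 3.2 / mean.time                # m/s over the 3.2 m reach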

  13. [Vegetation Survey Data by Plot : Chase Lake National Wildlife Refuge : 2006]

    • data.amerigeoss.org
    • datadiscoverystudio.org
    pdf
    Updated Jan 1, 2006
    Cite
    United States (2006). [Vegetation Survey Data by Plot : Chase Lake National Wildlife Refuge : 2006] [Dataset]. https://data.amerigeoss.org/hr/dataset/17b80e00-9690-4d57-afed-d273a46d0cf0
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jan 1, 2006
    Dataset provided by
    United States
    Area covered
    Chase Lake
    Description

    This reference houses a collection of tabular datasets containing vegetation survey data for the Chase Lake National Wildlife Refuge in 2006. The data have been uploaded as a set of PDF Portfolios, each containing individual Excel files for each plot and for each section of each plot.

  14. Toolik Snow Depth [Oberbauer]

    • data.ucar.edu
    • demo.arcticdata.io
    • +2more
    ascii
    Updated Dec 26, 2024
    + more versions
    Cite
    Steven F. Oberbauer (2024). Toolik Snow Depth [Oberbauer] [Dataset]. http://doi.org/10.5065/D61V5C60
    Explore at:
    ascii (available download formats)
    Dataset updated
    Dec 26, 2024
    Dataset provided by
    University Corporation for Atmospheric Research
    Authors
    Steven F. Oberbauer
    Time period covered
    May 2, 1995 - May 3, 2001
    Area covered
    Description

    This dataset represents initial snow depths on the season-extension project study site on May 2nd or May 3rd for the years 1995-2001. Treatments were control, snow removal (May 2), and snow removal plus soil heating (May 4). There were 10 replicates of each treatment in randomized blocks located 3 m apart. Plots 1-30 represent plots initiated in 1995. Plots 36-60 represent plots initiated in 1997. This dataset is available in both ASCII and Excel formats; for data browsing, we suggest ordering the Excel version and using Excel to plot the data. NOTE: This dataset contains the data in ASCII format.

  15. Ultimate_Analysis

    • data.mendeley.com
    Updated Jan 28, 2022
    + more versions
    Cite
    Akara Kijkarncharoensin (2022). Ultimate_Analysis [Dataset]. http://doi.org/10.17632/t8x96g88p3.2
    Explore at:
    Dataset updated
    Jan 28, 2022
    Authors
    Akara Kijkarncharoensin
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This database studies performance inconsistency of biomass HHV models built on ultimate analysis. The research null hypothesis is that the rank of a biomass HHV model is consistent across datasets. Fifteen biomass models are trained and tested on four datasets; within each dataset, rank invariability of these 15 models indicates performance consistency.
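
    A sketch of what such a rank-consistency check looks like in R (the study itself uses MATLAB; the error matrix below is random placeholder data, not the study's results):

    set.seed(1)
    rmse <- matrix(runif(15 * 4), nrow = 15,
                   dimnames = list(paste0("model", 1:15),
                                   paste0("dataset", 1:4)))   # hypothetical test errors

    ranks <- apply(rmse, 2, rank)        # rank the 15 models within each dataset
    cor(ranks, method = "kendall")       # pairwise rank agreement across datasets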

    The database includes the datasets and source codes used to analyze the performance consistency of the biomass HHV models. The datasets are stored as tables in an Excel workbook. The source codes implement the biomass HHV machine learning models with MATLAB object-oriented programming (OOP). These machine learning models consist of eight regression models, four supervised learning models, and three neural networks.

    An Excel workbook, "BiomassDataSetUltimate.xlsx," collects the research datasets in six worksheets. The first worksheet, "Ultimate," contains 908 HHV data points from 20 pieces of literature. The column names of the worksheet indicate the elements of the ultimate analysis on a % dry basis, and the HHV column refers to the higher heating value in MJ/kg. The following worksheet, "Full Residuals," backs up the model testing residuals based on the 20-fold cross-validations. The article (Kijkarncharoensin & Innet, 2021) verifies the performance consistency through these residuals. The other worksheets present the literature datasets used to train and test the model performance.

    A file named "SourceCodeUltimate.rar" collects the MATLAB machine learning models implemented in the article. The folders in this file mirror the class structure of the machine learning models. These classes extend the features of MATLAB's Statistics and Machine Learning Toolbox to support, e.g., k-fold cross-validation. The MATLAB script named "runStudyUltimate.m" is the article's main program for analyzing the performance consistency of the biomass HHV model through the ultimate analysis. The script loads the datasets from the Excel workbook and automatically fits the biomass models through the OOP classes.

    The first section of the MATLAB script generates the most accurate model by optimizing the model's hyperparameters. The first run takes a few hours to train the machine learning models via this trial-and-error process. The trained models can be saved to a MATLAB .mat file and loaded back into the MATLAB workspace. The remaining script, separated by script section breaks, performs the residual analysis to inspect performance consistency. Furthermore, a 3D scatter plot of the biomass data and box plots of the prediction residuals are produced. Interpretations of these results are examined in the author's article.

    Reference: Kijkarncharoensin, A., & Innet, S. (2022). Performance inconsistency of the Biomass Higher Heating Value (HHV) Models derived from Ultimate Analysis [Manuscript in preparation]. University of the Thai Chamber of Commerce.

  16. Included studies characteristics for good knowledge, positive attitude, and good prevention practice among students in Ethiopia

    • plos.figshare.com
    xls
    Updated Dec 9, 2024
    Cite
    Tenagework Eseyneh Dagnaw; Amare Mebrat Delie; Tadele Derbew Kassie; Sileshi Berihun; Hiwot Tesfa; Amare Zewdie (2024). Included studies characteristics for good knowledge, positive attitude, and good prevention practice among students in Ethiopia. [Dataset]. http://doi.org/10.1371/journal.pone.0314451.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Dec 9, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Tenagework Eseyneh Dagnaw; Amare Mebrat Delie; Tadele Derbew Kassie; Sileshi Berihun; Hiwot Tesfa; Amare Zewdie
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Ethiopia
    Description

    Included studies characteristics for good knowledge, positive attitude, and good prevention practice among students in Ethiopia.

  17. Atqasuk CO2 Fluxes [Oberbauer]

    • data.ucar.edu
    • search.dataone.org
    • +1more
    ascii
    Updated Dec 26, 2024
    + more versions
    Cite
    Steven F. Oberbauer (2024). Atqasuk CO2 Fluxes [Oberbauer] [Dataset]. http://doi.org/10.5065/D6FF3QKW
    Explore at:
    ascii (available download formats)
    Dataset updated
    Dec 26, 2024
    Dataset provided by
    University Corporation for Atmospheric Research
    Authors
    Steven F. Oberbauer
    Time period covered
    Jun 1, 2001 - Aug 31, 2001
    Area covered
    Description

    This dataset represents six diurnal courses of carbon dioxide fluxes from the Atqasuk ITEX wet and dry sites in 2001 for control and experimental plots. Diurnal courses of net ecosystem carbon dioxide fluxes were assessed at approximately biweekly intervals on the study plots: 5 plots each of control and experimental (ITEX chamber) treatments on wet and dry sites, measured 6 times over the 2001 growing season. This dataset is available in both ASCII and Excel formats; for data browsing, we suggest ordering the Excel version and using Excel to plot the data. NOTE: This dataset contains the data in ASCII format.

  18. Barrow CO2 Fluxes [Oberbauer]

    • data.ucar.edu
    • arcticdata.io
    • +1more
    ascii
    Updated Dec 26, 2024
    + more versions
    Cite
    Steven F. Oberbauer (2024). Barrow CO2 Fluxes [Oberbauer] [Dataset]. http://doi.org/10.5065/D6W66J1D
    Explore at:
    ascii (available download formats)
    Dataset updated
    Dec 26, 2024
    Dataset provided by
    University Corporation for Atmospheric Research
    Authors
    Steven F. Oberbauer
    Time period covered
    Jun 1, 2001 - Aug 31, 2001
    Area covered
    Description

    This dataset represents six diurnal courses of carbon dioxide fluxes from the Barrow ITEX wet and dry sites in 2001 for control and experimental plots. Diurnal courses of net ecosystem carbon dioxide fluxes were assessed at approximately biweekly intervals on the study plots: 5 plots each of control and experimental (ITEX chamber) treatments on wet and dry sites, measured 6 times over the 2001 growing season. This dataset is available in both ASCII and Excel formats; for data browsing, we suggest ordering the Excel version and using Excel to plot the data. NOTE: This dataset contains the data in ASCII format.

  19. A meta-analysis of factors affecting good prevention practices towards COVID-19 among students in Ethiopia

    • plos.figshare.com
    bin
    Updated Dec 9, 2024
    Cite
    Tenagework Eseyneh Dagnaw; Amare Mebrat Delie; Tadele Derbew Kassie; Sileshi Berihun; Hiwot Tesfa; Amare Zewdie (2024). A meta-analysis of factors affecting good prevention practices towards COVID-19 among students in Ethiopia. [Dataset]. http://doi.org/10.1371/journal.pone.0314451.t006
    Explore at:
    bin (available download formats)
    Dataset updated
    Dec 9, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Tenagework Eseyneh Dagnaw; Amare Mebrat Delie; Tadele Derbew Kassie; Sileshi Berihun; Hiwot Tesfa; Amare Zewdie
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Ethiopia
    Description

    A meta-analysis of factors affecting good prevention practices towards COVID-19 among students in Ethiopia.

  20. Romperod, diameter growth in uneven-sized spruce stand | gimi9.com

    • gimi9.com
    Updated Feb 1, 2022
    + more versions
    Cite
    (2022). Romperod, diameter growth in uneven-sized spruce stand | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_https-doi-org-10-5878-wcbz-kq34/
    Explore at:
    Dataset updated
    Feb 1, 2022
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The data consist of long-term diameter growth observations of uneven-sized, Norway spruce dominated stands. The site has been managed with continuous individual tree selection for at least 100 years. There are two plots with different timing of cutting treatments, with height and age measurements of sample trees. One of the plots has coordinate-set tree positions. The initial revision is from 1989. The first plot consists of 209 tree observations and the second plot of 257 tree observations (at the first revision point). The Excel file [Romperöd uniform data copy of version 3.xlsx] contains all data from all revisions between 1989 and 2015; this data is without coordinates of the tree positions. The data file contains information that links the tree identities between the two data files. The Excel file [Romperöd level 1b copy.xlsx] contains data from an extended revision of the thinned plot where the trees were also coordinate-set. 74 of the trees have extended information on annual growth rings, root incidence, crown shape and height.
