CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

TSMx is an R script that was developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds with the temporal range 2010–2019 and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart—that cell represents the slope for the temporal range 2004–2014. This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will need to be made to accommodate year ranges other than what is provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).
TSMx R script:

# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
  batch <- data[keep, ]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[['Value']])
    pval <- mk[['sl']]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
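For orientation, the short sketch below (not part of the published script) shows how a single cell of the matrix relates to one ordinary regression: the slope for the highlighted 2004–2014 range. It assumes the same two-column EVI.csv used by the script above.

# sketch: slope for the 2004-2014 temporal range, matching one matrix cell
library(readr)
library(dplyr)

data <- read_csv("EVI.csv")                        # columns: Year, Value
batch <- filter(data, Year >= 2004, Year <= 2014)  # subset one start/end pair
slope <- coefficients(lm(Value ~ Year, data = batch))[[2]]

# rows of slope.matrix index start years (r.names) and columns index end
# years (c.names), so the same value appears at:
# slope.matrix[which(r.names == 2004), which(c.names == 2014)]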
Microsoft PowerPivot add-on for Excel can be used to handle larger data sets. The Microsoft PowerPivot add-on for Excel is available using the link in the 'Related Links' section: https://www.microsoft.com/en-us/download/details.aspx?id=43348

Once PowerPivot has been installed, to load the large files, please follow the instructions below:

1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV

Please read the notes below to ensure correct understanding of the data.

Fewer than 5 Items
Please be aware that I have decided not to release the exact number of items where the total number of items falls below 5 for certain drug/patient combinations. Where suppression has been applied, a * is shown in place of the number of items; please read this as 1-4 items. Suppression has been applied to items and NIC where items are lower than 5, and to quantity where both quantity and items are lower than 5, for the following drugs and identified genders as per the sensitive drug list:

When the BNF Paragraph Code is 60401 (Female Sex Hormones & Their Modulators) and the gender identified on the prescription is Male
When the BNF Paragraph Code is 60402 (Male Sex Hormones And Antagonists) and the gender identified on the prescription is Female
When the BNF Paragraph Code is 70201 (Preparations For Vaginal/Vulval Changes) and the gender identified on the prescription is Male
When the BNF Paragraph Code is 70202 (Vaginal And Vulval Infections) and the gender identified on the prescription is Male
When the BNF Paragraph Code is 70301 (Combined Hormonal Contraceptives/Systems) and the gender identified on the prescription is Male
When the BNF Paragraph Code is 70302 (Progestogen-only Contraceptives) and the gender identified on the prescription is Male
When the BNF Paragraph Code is 80302 (Progestogens) and the gender identified on the prescription is Male
When the BNF Paragraph Code is 70405 (Drugs For Erectile Dysfunction) and the gender identified on the prescription is Female
When the BNF Paragraph Code is 70406 (Drugs For Premature Ejaculation) and the gender identified on the prescription is Female

This is because the patients could be identified when this data is combined with other information that may be in the public domain or reasonably available. This information falls under the exemption in section 40, subsections 2 and 3A(a), of the Freedom of Information Act, because disclosure would breach the first data protection principle in that:
a. it is not fair to disclose patients' personal details to the world and is likely to cause damage or distress.
b. these details are not of sufficient interest to the public to warrant an intrusion into the privacy of the patients.
Please click the web link below to see the exemption in full.
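For users loading the released CSVs programmatically, the sketch below is one way to honour the suppression marker in R by flagging "*" cells and treating them as missing rather than as zero. This is an illustration, not NHSBSA-supplied code; the file name and the Items column name are assumptions.

# sketch: read a released CSV and handle the "*" suppression marker (1-4 items)
library(readr)
library(dplyr)

raw <- read_csv("prescribing.csv",                     # hypothetical file name
                col_types = cols(.default = col_character()))

clean <- raw %>%
  mutate(
    items_suppressed = Items == "*",   # TRUE where the count was withheld
    Items = as.numeric(if_else(items_suppressed, NA_character_, Items))
  )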
https://opendata.nhsbsa.net/dataset/foi-01204 - April 2023
https://opendata.nhsbsa.net/dataset/foi-01240 - May 2023
https://opendata.nhsbsa.net/dataset/foi-01310 - June 2023
https://opendata.nhsbsa.net/dataset/foi-01378 - July 2023
https://opendata.nhsbsa.net/dataset/foi-01424 - August 2023
https://opendata.nhsbsa.net/dataset/foi-01502 - September 2023
https://opendata.nhsbsa.net/dataset/foi-01550 - October 2023
https://opendata.nhsbsa.net/dataset/foi-01668 - November 2023
https://opendata.nhsbsa.net/dataset/foi-01669 - December 2023
https://opendata.nhsbsa.net/dataset/foi-01756

Some data sets are over 1 million rows, and you may need to use add-ons already existing in Microsoft Excel to view a data set in its entirety. The Microsoft PowerPivot add-on for Excel can be used to handle larger data sets and is available using the link in the 'Related Links' section below: https://www.microsoft.com/en-us/download/details.aspx?id=43348

Once PowerPivot has been installed, to load the large files, please follow the instructions below:

1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was derived by the Bioregional Assessment Programme. The parent datasets are identified in the Lineage statement in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
This dataset comprises interpreted elevation surfaces and contours for the major Triassic and Upper Permian units of the Galilee Geological Basin.
This dataset was created to provide formation extents for aquifers in the Galilee geological basin.
A Quality Assurance (QA) and validation process was conducted on the original well and bore data to choose wells/bores that are within 25 kilometres of the BA Galilee Region extent.
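The sketch below illustrates that selection rule in R with the sf package; the original work used ArcGIS, and the layer names and a metre-based projected CRS are assumptions.

# sketch: keep wells/bores within 25 km of the BA Galilee Region extent
library(sf)

wells   <- st_read("wells.gpkg")           # hypothetical point layer
galilee <- st_read("galilee_region.gpkg")  # hypothetical region extent

stopifnot(st_crs(wells) == st_crs(galilee))  # both in a projected CRS (metres)
buffer_25km <- st_buffer(galilee, dist = 25000)  # 25 km buffer
wells_keep  <- st_filter(wells, buffer_25km)     # points inside the buffer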
The QA/Validation process is as follows:
Well data
a. Obtained excel file "QPED_July_2013_galilee.xlsx" from GA
b. Based on stratigraphic information in the "BH_costrat" tab, formation names were regularised and simplified based on current naming conventions.
c. Simplified names were added to QPED_July_2013_galilee.xlsx as "Steve_geo" and "Steve_group"
d. Produced new file "GSQ_Geology.xlsx" containing decimal latitude and longitude, KB elevation, top of unit in metres from KB, top of unit in metres AHD, bottom of unit in metres from KB, bottom of unit in metres AHD, original geology, simplified geology, and simplified Group geology.
i. KB obtained from "BH_wellhist"
ii. Where no KB information was available (i.e. KB = 0), the 1S DEM was sampled at the well's location to obtain a height, and KB = DEM + 10 was used. The well was marked as having lower reliability.
iii. Calculated Top_m_AHD = KB - Top_m_KB
iv. Calculated Bottom_m_AHD = KB - Bottom_m_KB (a worked sketch of steps ii-iv follows the Well data list)
e. Brought GSQ_Geology.xlsx into ArcGIS
f. Selected wells based on "Steve_geo" field for each model layer to produce a geodatabase for each layer.
i. GSQ_basement_wells
ii. GSQ_top_joe_joe_group
iii. GSQ_top_bandanna_merge
iv. GSQ_rewan_group
v. GSQ_clematis
vi. GSQ_moolyember
g. Additional wells and reinterpreted tops added to appropriate geodatabase based on well completion reports
h. Additional wells added to coverages to help model building process
i. Well_name listed as Fake
ii. Exception being GSQ_top_basement_fake which was created as a separate geodatabase
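As referenced in steps d(ii)-d(iv), the elevation arithmetic can be summarised in a short R sketch. The column names follow the fields above; the wells data frame itself is an assumption.

# sketch: derive AHD elevations from KB, with the DEM fallback for KB = 0
library(dplyr)

wells <- wells %>%
  mutate(
    low_reliability = KB == 0,            # wells with no recorded KB
    KB = if_else(KB == 0, DEM + 10, KB),  # fallback: sampled 1S DEM height + 10
    Top_m_AHD    = KB - Top_m_KB,         # depth below KB -> elevation (m AHD)
    Bottom_m_AHD = KB - Bottom_m_KB
  )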
Bore data
a. Obtained QLD_DNRM_GroundwaterDatabaseExtract_20131111 from GA
b. Used files REGISTRATIONS.txt, ELEVATIONS.txt and AQUIFER.txt to build GW_stratigraphy.xlsx
i. Based on RN
ii. Latitude from GIS_LAT (REGISTRATIONS.txt)
iii. Longitude from GIS_LNG (REGISTRATIONS.txt)
iv. Elevation from (ELEVATIONS.txt)
v. FORM_DESC from (AQUIFER.txt)
vi. Top from (AQUIFER.txt)
vii. Bottom from (AQUIFER.txt)
c. Brought GW_stratigraphy.xlsx into ArcGIS
d. Created gw_bores_galilee_dem
i. Sampled 1S DEM to obtain ground level elevation column RASTERVALU
ii. Created column top_m_AHD by RASTERVALU - Top
e. Selected bores based on "FORM_DESC" field for each model layer to produce a geodatabase for each layer.
i. Gw_basement
ii. GW_bores_joe_joe_group
iii. GW_bores_bandanna
iv. Gw_bores_rewan
v. Gw_bores_clematis
vi. Gw_bores_moolyember
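An equivalent assembly of GW_stratigraphy and the top_m_AHD calculation in step d could be sketched in R as below. The comma delimiter, the join being on RN, and the pre-sampled RASTERVALU column are assumptions consistent with the fields quoted above.

# sketch: build the bore stratigraphy table from the three extract files
library(readr)
library(dplyr)

registrations <- read_csv("REGISTRATIONS.txt")  # RN, GIS_LAT, GIS_LNG, ...
elevations    <- read_csv("ELEVATIONS.txt")     # RN plus elevation fields
aquifer       <- read_csv("AQUIFER.txt")        # RN, FORM_DESC, Top, Bottom

gw_stratigraphy <- registrations %>%
  select(RN, GIS_LAT, GIS_LNG) %>%
  left_join(elevations, by = "RN") %>%
  left_join(select(aquifer, RN, FORM_DESC, Top, Bottom), by = "RN")

# step d: RASTERVALU is the ground elevation sampled from the 1S DEM
# (assumed to have been joined in already by that sampling step)
gw_bores <- mutate(gw_stratigraphy, top_m_AHD = RASTERVALU - Top)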
Georectified seismic surfaces
a. Extracted interpreted seismic surfaces for base Permian (interpreted as basement) and top Bandanna (in time) from the following seismic surveys
i. Y80A, W81A, Carmichael, Pendine, T81A, Quilpie, Ward and Powell Creek seismic surveys, downloaded from https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?searchType=general
ii. Brought TIF images into ArcGIS and georectified
iii. Digitised shape of contours and faults into geodatabase
1. Basement_contours and basement_faults
2. bandanna_contours_new_data and bandanna_faults
iv. Added field "contour" to geodatabase
v. Converted contours to depth in "contour" field based on well and bore data (top_m_AHD) and contour progression
vi. Used the shape and depth derived from OZ SEEBASE to help add additional contours and faults to the basement and bandanna datasets
Additional contour and fault surfaces were built, derived from underlying surfaces and well/bore data:
a. Joejoe_contours and joejoe_faults
b. Rewan_contour_clip (used bandanna_faults as fault coverage)
c. Clematis_contour and clematis_faults
d. Moolyember_contour (used clematis_faults as fault coverage)
Surface geology
a. Extracted surface geology from QUEENSLAND GEOLOGY_AUGUST_2012 using the Galilee BA region boundary with a 25 kilometre buffer to form geodatabase QLD_geology_galilee
b. Selected relevant surface geology from QLD_geology_galilee based on field "Name" for each model layer and created new geodatabase layers
i. Basement_geology: Argentine Metamorphics, Running River Metamorphics, Charters Towers Metamorphics; Bimurra Volcanics, Foyle Volcanics, Mount Wyatt Formation, Saint Anns Formation, Silver Hills Volcanics, Stones Creek Volcanics; Bulliwallah Formation, Ducabrook Formation, Mount Rankin Formation, Natal Formation, Star of Hope Formation; Cape River Metamorphics; Einasleigh Metamorphics; Gem Park Granite; Macrossan Province Cambrian-Ordovician intrusives; Macrossan Province Ordovician-Silurian intrusives; Macrossan Province Ordovician intrusives; Mount Formartine, unnamed plutonic units; Pama Province Silurian-Devonian intrusives; Seventy Mile Range Group; and Kirk River beds, Les Jumelles beds.
ii. Joe_joe_geology: Joe Joe Group
iii. Galilee_permian_geology: Back Creek Group, Betts Creek Group, Blackwater Group
iv. Rewan_geology: Rewan Group
1. Later also made dunda_beds_geology to be included in Rewan model: Dunda beds
v. Clematis_geology: Clematis Group
1. Later also made warang_sandstone_geology to be included in Clematis model: Warang Sandstone
vi. Moolyember_surface_geology: Moolyember Formation
DEM for each model layer
a. Using the surface geology geodatabase extent, extracted a grid from dem_s_1s to represent the top of the model layer at the surface
i. Basement_dem
ii. Joejoe_dem
iii. Bandanna_dem
iv. Rewan_dem and dunda_dem
v. Clematis_dem and warang_dem
vi. Moolyember_surface_dem
b. Used Contour tool in ArcGIS to obtain a 25 metre contour geodatabase from the relevant model DEM
i. Basement_dem_contours
ii. Joejoe_dem_contours
iii. Bandanna_dem_contours
iv. Rewan_dem_contours and dunda_dem_contours
v. Clematis_dem_contours and warang_dem_contours
vi. Moolyember_dem_contours
c. To guide the model building process, additional fields were added to each DEM contour geodatabase based on average thickness derived from groundwater bores and petroleum wells (a sketch follows this list).
i. Basement_dem_contours: Joejoe, bandanna, rewan, clematis, moolyember
ii. Joejoe_dem_contours: basement, bandanna
iii. Bandanna_dem_contours: joejoe, rewan
iv. Rewan_dem_contours and dunda_dem_contours: clematis, rewan
v. Clematis_dem_contours and warang_dem_contours: moolyember, rewan
vi. Moolyember_dem_contours: clematis
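The thickness-offset fields in step c could be derived along the lines of the R sketch below. The data frame and column names are illustrative, and the sign of each offset depends on whether the other unit sits above or below the layer in question.

# sketch: add a guiding field for an adjacent unit from average thickness
library(dplyr)

avg_thickness <- picks %>%                  # hypothetical well/bore unit picks
  group_by(unit) %>%
  summarise(thickness = mean(Top_m_AHD - Bottom_m_AHD, na.rm = TRUE))

joejoe_thk <- avg_thickness$thickness[avg_thickness$unit == "Joe Joe Group"]
basement_dem_contours <- mutate(basement_dem_contours,
                                joejoe = contour + joejoe_thk)  # unit above basement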
The model building process is as follows:
Used the Topo to Raster tool to create each surface based on the following rules (a template-grid sketch follows the input list):
a. Environment
i. Extent
1. Top: -19.7012030024424
2. Right: 148.891511819054
3. Bottom: -27.5812030024424
4. Left: 139.141511819054
ii. Output cell size: 0.01 degrees
iii. Drainage enforcement: No_enforce
b. Input
i. Basement
1. Basement_dem_contour; field - contour; type - contour
2. Joejoe_dem_contour; field - basement; type - contour
3. Basement_contour; field - contour; type - contour
4. GSQ_basement_wells; field - top_m_AHD; type - point elevation
5. GW_basement; field - top_m_AHD; type - point elevation
6. GSQ_top_basement_fake; field - top_m_AHD; type - point elevation
7. Basement_faults; type - cliff
ii. Joe Joe Group
1. Joejoe_dem_contour; field - basement; type - contour
2. Basement_dem_contour; field - joejoe; type - contour
3. permian_dem_contour; field - joejoe; type - contour
4. joejoe_contour; field - joejoe; type - contour
5. GSQ_top_joejoe_group; field - top_m_AHD; type - point elevation
6. GW_bores_joe_joe_group; field - top_m_AHD; type - point elevation
7. joejoe_faults; type - cliff
iii. Bandanna Group
1. Permian_dem_contour; field - contour; type - contour
2. Joejoe_dem_contour; field - bandanna; type - contour
3. Rewan_dem_contour; field - bandanna; type - contour
4. Dunda_dem_contour; field - bandanna; type - contour
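There is no exact R equivalent of the ArcGIS Topo to Raster tool, but the interpolation environment in step a can be reproduced as a template grid, sketched below with the terra package. The GDA94 CRS is an assumption; the source does not state the datum.

# sketch: 0.01-degree template grid over the stated model extent
library(terra)

template <- rast(
  xmin = 139.141511819054, xmax = 148.891511819054,
  ymin = -27.5812030024424, ymax = -19.7012030024424,
  resolution = 0.01,
  crs = "EPSG:4283"  # GDA94, assumed
)
# the point, contour and fault inputs listed above would then be
# interpolated onto this grid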
https://digital.nhs.uk/about-nhs-digital/terms-and-conditions
Warning: Large file size (over 1GB). Each monthly data set is large (over 4 million rows), but can be viewed in standard software such as Microsoft WordPad (save by right-clicking on the file name and selecting 'Save Target As', or equivalent on Mac OS X). It is then possible to select the required rows of data and copy and paste the information into another software application, such as a spreadsheet. Alternatively, add-ons to existing software, such as the Microsoft PowerPivot add-on for Excel, can be used to handle larger data sets. The Microsoft PowerPivot add-on for Excel is available from Microsoft: http://office.microsoft.com/en-gb/excel/download-power-pivot-HA101959985.aspx

Once PowerPivot has been installed, to load the large files, please follow the instructions below. Note that it may take at least 20 to 30 minutes to load one monthly file.

1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV

Once the data has been imported you can view it in a spreadsheet.

What does the data cover?
General practice prescribing data is a list of all medicines, dressings and appliances that are prescribed and dispensed each month. A record will only be produced when this has occurred; there is no record for a zero total. For each practice in England, the following information is presented at presentation level for each medicine, dressing and appliance (by presentation name):
- the total number of items prescribed and dispensed
- the total net ingredient cost
- the total actual cost
- the total quantity

The data covers NHS prescriptions written in England and dispensed in the community in the UK. Prescriptions written in England but dispensed outside England are included. The data includes prescriptions written by GPs and other non-medical prescribers (such as nurses and pharmacists) who are attached to GP practices. GP practices are identified only by their national code, so an additional data file, linked to the first by the practice code, provides further detail in relation to the practice. Presentations are identified only by their BNF code, so an additional data file, linked to the first by the BNF code, provides the chemical name for that presentation.
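Because practices and presentations are identified only by codes, an analysis typically starts by joining the three files; a minimal sketch in R is below. The file and column names are illustrative, not the published schema.

# sketch: link prescribing rows to practice detail and chemical names
library(readr)
library(dplyr)

prescribing <- read_csv("prescribing.csv")    # one row per practice x presentation
practices   <- read_csv("practices.csv")      # practice detail, keyed by practice code
bnf_lookup  <- read_csv("bnf_chemicals.csv")  # chemical names, keyed by BNF code

detail <- prescribing %>%
  left_join(practices,  by = "PRACTICE_CODE") %>%
  left_join(bnf_lookup, by = "BNF_CODE")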
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
Case Study 1- Bike Sharing
Introduction: In 2016, Cyclistic launched a successful bike-share offering. Since then, the program has grown to a fleet of 5,824 bicycles that are geotracked and locked into a network of 692 stations across Chicago. The bikes can be unlocked from one station and returned to any other station in the system anytime. Two types of riders use the bikes differently: 1) annual members, who have bought an annual membership, and 2) casual riders, who buy single-ride or full-day passes.
Phase_1- Ask-
1. Identify the business task:
• How do annual members and casual riders use Cyclistic bikes differently?
• Why would casual riders buy Cyclistic annual memberships?
• How can Cyclistic use digital media to influence casual riders to become members?
2. Consider key stakeholders: Lily Moreno (the director of marketing), the Cyclistic marketing analytics team, and the Cyclistic executive team.
Phase_2- Prepare—
I downloaded the data and stored it in an Excel sheet. I am using only one month of data (April 2020) and Excel for the task; I also sorted and filtered the data according to the requirements.
The data comes from a public source and is reliable and unbiased; it is also complete, consistent and accurate.
Phase_3- Process—
• I downloaded 202004-divvy-tripdata.csv, unzipped the file and converted it into an .xls file. I am using only April's data because this is my first case study and it is only for my learning, so I want to keep it simple. I am using Excel this time because I am more comfortable with Excel than with other tools, and at this initial stage I want to perform a good analysis without getting lost in multiple sheets and a large dataset.
• I checked the data for errors and corrected some, did some calculations in my sheet, and cleaned the data so I could use the sheet appropriately.
Phase_4- Analyze— I organized my data, sorted and filtered it multiple times as needed, did some calculations, added a few pivot tables, and tried to analyze the data properly and identify trends and relationships.
Phase_5- Share—
• After completing my analysis, I used some charts to present my findings. First, the total ride count is 16,383; annual members took 11,552 rides (71% of the total) and casual riders took only 4,831 (29%). (A short R sketch reproducing these counts appears after Phase 6.)
• I also found that casual riders use the bikes only some of the time, whereas members ride anytime, whether they need a bike for a long or a short trip; they ride without a second thought because, after buying the annual pass, they pay nothing extra per ride.
• Clark St & Elm St is the most popular pick-up point: 180 rides started from this station, 132 of them by annual members. I also found other stations where more bikes are needed. Likewise, we can find the stations where most people end their rides, so those stations have enough space for returned bikes.
Phase_6- Act— I am happy to share my findings, and I feel a little more confident after completing my first case study.
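For reference, the member/casual split reported in Phase 5 could be reproduced outside Excel with a few lines of R. This is a sketch under the assumption that the extract uses the member_casual column found in Divvy files from April 2020 onward.

# sketch: rides and percentage share by rider type
library(readr)
library(dplyr)

trips <- read_csv("202004-divvy-tripdata.csv")

trips %>%
  count(member_casual, name = "rides") %>%
  mutate(share_pct = round(100 * rides / sum(rides)))
# per the findings above: member 11,552 (71%), casual 4,831 (29%), total 16,383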
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset has one Excel .xlsx file with two parts (1: litter decomposition rate, 2: soil fauna), as in the article, specifically as follows: For the litter decomposition rate data, the variables include the litter decomposition constant (k), hemicellulose remaining (%), lignin remaining (%), total N remaining (%), total S remaining (%) and total P remaining (%). Each microsite has four replicates. Sampling includes the initial substrate data in October 2017 and subsequent samplings at 60 days (December 2017), 180 days (April 2018), 300 days (August 2018), 420 days (December 2018), 540 days (April 2019) and 660 days (August 2019). Soil fauna treatment: soil fauna present, and soil fauna absent. Yak excrement addition: no addition (CK), addition of dung only (Dung), addition of urine only (Urine), addition of both dung and urine (Dung+Urine). For the soil fauna data, the dataset shows the density of the soil fauna community in the soil underneath the excrement spot under different treatments at incubation times of 300 days (August 2018) and 660 days (August 2019).
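As context for the litter decomposition constants (k) in part 1, a common way to estimate k is the single negative exponential model M_t/M_0 = exp(-k t). The R sketch below illustrates that general method; it is an assumption, not the authors' code, and the mass-remaining values are illustrative only.

# sketch: estimate k from the proportion of litter mass remaining over time
days      <- c(60, 180, 300, 420, 540, 660)         # sampling days as in the dataset
remaining <- c(0.92, 0.78, 0.70, 0.61, 0.55, 0.50)  # illustrative values only

fit <- lm(log(remaining) ~ 0 + days)  # ln(M_t/M_0) = -k * t (no intercept)
k_per_day  <- -coef(fit)[[1]]
k_per_year <- k_per_day * 365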