CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

TSMx is an R script that was developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test.

The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds with the temporal range 2010–2019 and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart; that cell represents the slope for the temporal range 2004–2014.

This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will need to be made to accommodate year ranges other than what is provided.

TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

TSMx sample chart from the supplied Excel template.
Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).

TSMx R script:

# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
  batch <- data[keep, ]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[['Value']])
    pval <- mk[['sl']]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
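The matrix logic of the script can also be sketched outside R. The following minimal pure-Python version (illustrative only, using a small invented series rather than the EVI data) computes the slope direction for every start/end year pair, with rows as start years and columns as end years, matching the chart layout described above:

```python
def slope(xs, ys):
    # ordinary least-squares slope of ys regressed on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def sign_matrix(years, values):
    # upper-triangular matrix: row i = start year, column j = end year
    n = len(years)
    m = [[None] * n for _ in range(n)]
    for i in range(n - 1):            # start year index
        for j in range(i + 1, n):     # end year index (at least two points)
            s = slope(years[i:j + 1], values[i:j + 1])
            m[i][j] = (s > 0) - (s < 0)  # sign: -1, 0, or +1
    return m

# invented example series, not the EVI data supplied with TSMx
years = [2001, 2002, 2003, 2004, 2005]
values = [1.0, 1.2, 1.1, 1.4, 1.5]
m = sign_matrix(years, values)
print(m[0][4])  # slope direction over the full 2001-2005 range
```

The nested loop mirrors the `getSign`/`lapply` structure of the R script: each start year is paired with every later end year, and only the sign of the fitted slope is stored.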
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The LDMI experiment (Low-Disturbance Manure Incorporation) was designed to evaluate nutrient losses with conventional and improved liquid dairy manure management practices in a corn silage (Zea mays) / rye cover-crop (Secale cereale) system. The improved manure management treatments were designed to incorporate manure while maintaining crop residue for erosion control. Field observations included greenhouse gas (GHG) fluxes from soil, soil nutrient concentrations, crop growth and harvest biomass and nutrient content, as well as monitoring of soil physical and chemical properties. Observations from LDMI have been used for parameterization and validation of computer simulation models of GHG emissions from dairy farms (Gaillard et al., submitted). The LDMI experiment was performed as part of the Dairy CAP, described below. The experiment included ten different treatments: (1) broadcast manure with disk-harrow incorporation, (2) broadcast manure with no tillage incorporation, (3) manure application with “strip-tillage” which was sweep injection ridged with paired disks, (4) aerator band manure application, (5) low-disturbance sweep injection of manure, (6) Coulter injection of manure with sweep tillage, (7) no manure with urea to supply 60 lb N/acre (67 kg N/ha), (8) no manure with urea to supply 120 lb N/acre (135 kg N/ha), (9) no manure with urea to supply 180 lb N/acre (202 kg N/ha), (10) no manure / no fertilizer control. Manure was applied in the fall; fertilizer was applied in the spring. These ten treatments were replicated four times in a randomized complete block design. The LDMI experiment was conducted at the Marshfield Research Station of the University of Wisconsin and the USDA Agricultural Research Service (ARS) in Stratford, WI (Marathon County, Latitude 44.7627, Longitude -90.0938). Soils at the research station are from the Withee soil series, fine-loamy, mixed, superactive, frigid Aquic Glossudalf. 
Each experimental plot was approximately 70 square meters. A weather station was located at the south edge of the field site. A secondary weather station (MARS South), for snow and snow water equivalence data and for backup of the main weather station, was located at Latitude 44.641445 and Longitude -90.133526 (16,093 meters southwest of the field site). The experiment was initiated on November 28, 2011 with fall tillage and manure application in each plot according to its treatment type. Each spring, corn silage was planted in rows at a rate of 87,500 plants per hectare. The cultivar was Pioneer P8906HR. The LDMI experiment ended on November 30, 2015. The manure applied in this experiment was from the dairy herd at the Marshfield Research Station. Cows were fed a diet of 48% dry matter, 17.45% protein, and 72.8% total digestible nutrients. Liquid slurry manure, including feces, urine, and rain, was collected and stored in a lagoon on the site. Manure was withdrawn from the lagoon, spread on the plots, and sampled for analysis, all on the same day, once per year. Manure samples were analyzed at the University of Wisconsin Soil and Forage Lab in Marshfield (NH4-N, total P and total K) and at the Marshfield ARS (pH, dry matter, volatile solids, total N and total C). GHG fluxes from soil (CO2, CH4, N2O) were measured using static chambers as described in Parkin and Venterea (2010). Measurements were made with the chambers placed across the rows of corn. Additional soil chemical and physical characteristics were measured as noted in the data dictionary and other metadata of the LDMI data set, included here. This experiment was part of “Climate Change Mitigation and Adaptation in Dairy Production Systems of the Great Lakes Region,” also known as the Dairy Coordinated Agricultural Project (Dairy CAP), funded by the United States Department of Agriculture - National Institute of Food and Agriculture (award number 2013-68002-20525).
The main goal of the Dairy CAP was to improve understanding of the magnitudes of and controlling factors over GHG emissions from dairy production in the Great Lakes region. Using this knowledge, the Dairy CAP has improved life cycle analysis (LCA) of GHG production by Great Lakes dairy farms, developed farm management tools, and conducted extension, education and outreach activities.

Resources in this dataset:
Resource Title: Data_dictionary_DairyCAP_LDMI. File Name: Data_dictionary_DairyCAP_LDMI.xlsx
Resource Description: This is the data dictionary for the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI. (Separate spreadsheet tabs)
Resource Software Recommended: Microsoft Excel 2016, url: https://products.office.com/en-us/excel
Resource Title: DairyCAP_LDMI. File Name: DairyCAP_LDMI.xlsx
Resource Description: This is the data from the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI.
Resource Software Recommended: Microsoft Excel 2016, url: https://products.office.com/en-us/excel
Resource Title: Data Dictionary DairyCAP LDMI. File Name: Data_dictionary_DairyCAP_LDMI.csv
Resource Description: This is the data dictionary for the Low-Disturbance Manure Incorporation (LDMI) experiment, conducted at the USDA-ARS research station in Marshfield, WI.
Resource Title: Biomass Data. File Name: LDMI_Biomass.csv
Resource Title: Experimental Set-up Data. File Name: LDMI_Exp_setup.csv
Resource Title: Gas Flux Data. File Name: LDMI_Gas_Fluxes.csv
Resource Title: Management History Data. File Name: LDMI_Management_History.csv
Resource Title: Manure Analysis Data. File Name: LDMI_Manure_Analysis.csv
Resource Title: Soil Chemical Data. File Name: LDMI_Soil_Chem.csv
Resource Title: Soil Physical Data. File Name: LDMI_Soil_Phys.csv
Resource Title: Weather Data. File Name: LDMI_Weather.csv
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background
Confusion between look-alike and sound-alike (LASA) medication names (such as mercaptamine and mercaptopurine) accounts for up to one in four medication errors, threatening patient safety. Error reduction strategies include computerized physician order entry interventions and ‘Tall Man’ lettering. The purpose of this study is to explore the medication name designation process, to elucidate properties that may prime the risk of confusion.

Methods and Findings
We analysed the formal and semantic properties of 7,987 International Non-proprietary Names (INNs), in relation to naming guidelines of the World Health Organization (WHO) INN programme, and have identified potential for errors. We explored their linguistic properties, the underlying taxonomy of stems used to indicate pharmacological interrelationships, and similarities between INNs. We used Microsoft Excel for analysis, including calculation of Levenshtein edit distance (LED). Compliance with WHO naming guidelines was inconsistent. Since the 1970s there has been a trend towards compliance in formal properties, such as word length, but longer names published in the 1950s and 1960s are still in use. The stems used to show pharmacological interrelationships are not spelled consistently and the guidelines do not impose an unequivocal order on them, making the meanings of INNs difficult to understand. Pairs of INNs sharing a stem (appropriately or not) often have high levels of similarity (
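The Levenshtein edit distance (LED) used in the study counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one name into another. The authors computed it in Excel; a standard dynamic-programming implementation (not their spreadsheet formulas) looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    # dynamic-programming edit distance over insertions, deletions, substitutions;
    # keeps only the previous row of the DP table to save memory
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (0 if equal)
        prev = cur
    return prev[-1]

# the LASA pair cited above: only a few edits apart relative to name length
print(levenshtein("mercaptamine", "mercaptopurine"))
```

Low LED values between two names that share a long common stem (as in the mercaptamine/mercaptopurine pair) are exactly the similarity pattern the study flags as a confusion risk.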
With this add-in it is possible to create map templates from GIS files in KML format, and to create choropleth maps with them.
Provided you have access to KML format map boundary files, it is possible to create your own quick and easy choropleth maps in Excel. The KML format files can be converted from 'shape' files. Many shape files are available to download for free from the web, including from Ordnance Survey and the London Datastore. Standard mapping packages such as QGIS (free to download) and ArcGIS can convert the files to KML format.
A sample of a KML file (London wards) can be downloaded from this page, so that users can easily test the tool out.
Macros must be enabled for the tool to function.
When creating the map using the Excel tool, the 'unique ID' should normally be the area code and the 'Name' should be the area name; if required and there is additional data in the KML file, further 'data' fields can be added. These columns will appear below and to the right of the map. Otherwise, data can be added later next to the codes and names.
In the add-in version of the tool the final control, 'Scale (% window)' should not normally be changed. With the default value 0.5, the height of the map is set to be half the total size of the user's Excel window.
To run a choropleth, select the menu option 'Run Choropleth' to get this form.
To specify the colour ramp for the choropleth, the user needs to enter the number of boxes into which the range is to be divided, and the colours for the high and low ends of the range, which is done by selecting coloured option boxes as appropriate. If desired, hit the 'Swap' button to swap the colours between the two ends of the range. Then hit the 'Choropleth' button.
The default colours for the ends of the choropleth colour range are saved in the add-in, but different values can be selected by setting up a column range of up to twelve cells, anywhere in Excel, filled with the desired option colours. Then use the 'Colour range' control to select this range and hit 'Apply', having selected high or low values as wished. The 'Copy' button sets up a sheet 'ColourRamp' in the active workbook with the default colours, which can be extended or trimmed with just a few cells, saving the user time.
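The add-in's VBA is not reproduced here, but the colour-ramp step it describes, dividing the value range into a chosen number of boxes and blending between a low-end and high-end colour, can be sketched as follows (the colours and scale below are invented for illustration):

```python
def colour_ramp(low_rgb, high_rgb, n_boxes):
    # linearly interpolate between two RGB colours, one colour per box
    ramp = []
    for k in range(n_boxes):
        t = k / (n_boxes - 1) if n_boxes > 1 else 0.0
        ramp.append(tuple(round(lo + t * (hi - lo))
                          for lo, hi in zip(low_rgb, high_rgb)))
    return ramp

def classify(value, vmin, vmax, n_boxes):
    # map a value to a box index 0..n_boxes-1 on an equal-interval scale
    if value >= vmax:
        return n_boxes - 1
    return int((value - vmin) / (vmax - vmin) * n_boxes)

ramp = colour_ramp((255, 255, 204), (0, 69, 41), 5)  # pale yellow to dark green
box = classify(37.0, 0.0, 100.0, 5)
print(ramp[box])  # colour assigned to a value of 37 on a 0-100 scale
```

Swapping the two end colours, as the 'Swap' button does, simply reverses which end of the interpolation corresponds to high values.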
The add-in was developed entirely within the Excel VBA IDE by Tim Lund. He is kindly distributing the tool for free on the Datastore but suggests that users who find it useful make a donation to the Shelter charity. The tool is not intended to be actively maintained, but if any users or developers would like to add more features, email the author.
Acknowledgments
Calculation of Excel freeform shapes from latitudes and longitudes is done using calculations from the Ordnance Survey.
The Agricultural Research Service of the US Department of Agriculture (USDA), in collaboration with other government agencies, has a program to track changes in the sodium content of commercially processed and restaurant foods. This monitoring program includes these activities: tracking sodium levels of ~125 popular foods, called "Sentinel Foods," by periodically sampling them at stores and restaurants around the country, followed by laboratory analyses; tracking levels of "related" nutrients that could change when manufacturers reformulate their foods to reduce sodium (potassium, total and saturated fat, total dietary fiber, and total sugar); and sharing the results of these monitoring activities with the public periodically in the Sodium Monitoring Dataset and the USDA National Nutrient Database for Standard Reference, and once every two years in the Food and Nutrient Database for Dietary Studies. The Sodium Monitoring Dataset is downloadable in Excel spreadsheet format.

Resources in this dataset:
Resource Title: Data Dictionary. File Name: SodiumMonitoringDataset_datadictionary.csv
Resource Description: Defines variables, descriptions, data types, character length, etc. for each of the spreadsheets in this Excel data file: Sentinel Foods - Baseline; Priority-2 Foods - Baseline; Sentinel Foods - Monitoring; Priority-2 Foods - Monitoring.
Resource Title: Sodium Monitoring Dataset (MS Excel download). File Name: SodiumMonitoringDatasetUpdatedJuly2616.xlsx
Resource Description: Microsoft Excel: Sentinel Foods - Baseline; Priority-2 Foods - Baseline; Sentinel Foods - Monitoring; Priority Foods - Monitoring.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the framework of Articles 23 and 33 of Regulation (EC) No 178/2002 EFSA has received from the European Commission a mandate (M-2010-0374) to collect all available data on the occurrence of chemical contaminants in food and feed. These data are used in EFSA’s scientific opinions and reports on contaminants in food and feed.
This package for data providers contains the data collection configuration and supporting materials for reporting Chemical Contaminants in SSD1. These are to be used for the official data reporting phase.
The package includes:
The Standard Sample Description Version 2 XSD schema definition for CONTAMINANTS reporting.
The general and CONTAMINANTS SSD1 specific business rules applied for the automatic validation of the submitted datasets.
An Excel mapping tool to convert Excel files, after mapping, into an XML document.
Please follow the instructions below for the correct use of the mapping tool to avoid compromising its functionalities:
Download and save the MS Excel® Standard Sample Description file to your computer (do not open the file before saving and do not change the file name)
Download and save the file MS Excel® Simplified Reporting Format (do not open the file before saving)
Keep both Excel files in the same folder
Open both Excel files and enable the macros
Keep both files open in the same Excel instance when filling in the data
Guidance on how to run the validation report after submitting data to the DCF.
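The actual SSD1 business rules are defined by EFSA and applied during automatic validation of submitted datasets. Purely as an illustration of how such rule-based validation works, a sketch over tabular records might look like this (the field names and rules below are hypothetical, not the real SSD1 elements):

```python
# Hypothetical validation pass: each rule is a (message, predicate) pair
# applied to every record. Real SSD1 business rules are defined by EFSA;
# these field names (sampleId, resVal, resUnit) are invented for illustration.
RULES = [
    ("sampleId is mandatory", lambda r: bool(r.get("sampleId"))),
    ("resVal must be non-negative", lambda r: r.get("resVal", 0) >= 0),
    ("resUnit is mandatory when resVal is reported",
     lambda r: "resVal" not in r or bool(r.get("resUnit"))),
]

def validate(records):
    # return (record index, message) for every rule violation found
    errors = []
    for i, rec in enumerate(records):
        for msg, ok in RULES:
            if not ok(rec):
                errors.append((i, msg))
    return errors

records = [
    {"sampleId": "S1", "resVal": 0.02, "resUnit": "mg/kg"},  # valid
    {"sampleId": "", "resVal": -1.0},                        # three violations
]
for row, msg in validate(records):
    print(f"record {row}: {msg}")
```

A validation report of this shape (record reference plus rule message) is what lets data providers correct and resubmit a dataset.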
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional file 5. An Excel sheet listing the genes forming a common transcriptional response in DC and Mø in response to live and inactivated S. Typhimurium (FDR ≤ 5%, fold change ≥ 1.8). The average fold changes compared to unstimulated cells are listed for each sample set; DC infected with live S. Typhimurium (DC_L), DC stimulated with inactivated S. Typhimurium (DC_D), Mø infected with live S. Typhimurium (Mø_L) and Mø stimulated with inactivated S. Typhimurium (Mø_D). Genes shown in bold are represented by more than one probe-set and the average fold change across the probe-sets is listed. Where possible the HGNC gene names and symbols have been used. The list includes five unannotated transcripts and their position on the bovine genome relative to neighbouring genes is described. Fold changes in red highlight where the fold change in DC and/or Mø is more than 1.5-fold different in response to live and inactivated S. Typhimurium.
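The inclusion criteria stated above (FDR ≤ 5% and fold change ≥ 1.8, in either direction) amount to a simple two-threshold filter. A sketch with invented gene records (not the actual supplementary data) shows the selection logic:

```python
# Illustrative filter matching the stated thresholds: FDR <= 5% and
# absolute fold change >= 1.8. These gene records are invented examples.
genes = [
    {"symbol": "IL1B",  "fdr": 0.001, "fold_change": 6.2},
    {"symbol": "TNF",   "fdr": 0.030, "fold_change": 1.9},
    {"symbol": "ACTB",  "fdr": 0.400, "fold_change": 1.1},   # fails both
    {"symbol": "SOCS3", "fdr": 0.010, "fold_change": -2.4},  # down-regulated
]

significant = [g["symbol"] for g in genes
               if g["fdr"] <= 0.05 and abs(g["fold_change"]) >= 1.8]
print(significant)  # genes passing both thresholds
```

Taking the absolute value of the fold change keeps down-regulated genes (negative fold change) in the list alongside up-regulated ones.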