CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends
TSMx is an R script developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations within the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year on the x-axis. In the example below, the cell in the top-right corner gives the direction of the slope for the temporal range 2001–2019. The red line corresponds to the temporal range 2010–2019, and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart; that cell represents the slope for the temporal range 2004–2014. This publication entry also includes an Excel template that produces the same visualizations without the need to interact with any code, though minor modifications will be needed to accommodate year ranges other than those provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624
TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).
TSMx R script:

# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
  batch <- data[keep,]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[['Value']])
    pval <- mk[['sl']]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
Introduction
Preservation and management of semi-arid ecosystems requires understanding of the processes involved in soil erosion and their interaction with plant communities. Rainfall simulations on natural plots provide an effective way of obtaining a large amount of erosion data under controlled conditions in a short period of time. This dataset contains hydrological (rainfall, runoff, flow velocity), erosion (sediment concentration and rate), vegetation (plant cover), and other supplementary information from 272 rainfall simulation experiments conducted at 23 rangeland locations in Arizona and Nevada between 2002 and 2013. The dataset advances our understanding of the basic hydrological and biological processes that drive soil erosion on arid rangelands. It can be used to quantify runoff, infiltration, and erosion rates on a variety of ecological sites in the Southwestern USA. The inclusion of wildfire and brush-treatment locations, combined with long-term observations, makes it important for studying vegetation recovery, ecological transitions, and the effects of management. It is also a valuable resource for erosion model parameterization and validation.
Instrumentation
Rainfall was generated by a portable, computer-controlled, variable-intensity simulator (the Walnut Gulch Rainfall Simulator, WGRS). The WGRS can deliver rainfall rates between 13 and 178 mm/h, with a coefficient of variability of 11% across a 2 m by 6.1 m area. The estimated kinetic energy of the simulated rainfall was 204 kJ/ha/mm, and drop sizes ranged from 0.288 to 7.2 mm. A detailed description and the design of the simulator are available in Stone and Paige (2003). Prior to each field season the simulator was calibrated over a range of intensities using a set of 56 rain gages. During the experiments, windbreaks were set up around the simulator to minimize the effect of wind on rain distribution. On some of the plots, in addition to the rainfall-only treatment, run-on flow was applied at the top edge of the plot. The purpose of the run-on water application was to simulate hydrological processes that occur on longer slopes (>6 m), where the upper portion of the slope contributes runoff onto the lower portion. Runoff rate from the plot was measured using a calibrated V-shaped supercritical flume equipped with a depth gage. Overland flow velocity on the plots was measured using electrolyte and fluorescent dye solutions. Dye moving from the application point, at a distance of 3.2 m from the outlet, was timed with a stopwatch. Electrolyte transport in the flow was measured by resistivity sensors embedded in the edge of the outlet flume. Maximum flow velocity was defined as the velocity of the leading edge of the solution; it was determined from the beginning of the electrolyte breakthrough curve and verified by visual observation of the dye. Mean flow velocity was calculated from the mean travel time obtained from the electrolyte breakthrough curve using the moment equation. Soil loss from the plots was determined from runoff samples collected during each run. The sampling interval was variable and aimed to represent the rising and falling limbs of the hydrograph, any changes in runoff rate, and steady-state conditions, resulting in approximately 30 to 50 samples per simulation. Shortly before every simulation, plot surface and vegetative cover were measured on a 400-point grid using a laser and the line-point intercept procedure (Herrick et al., 2005). Vegetative cover was classified as forbs, grass, and shrub. Surface cover was characterized as rock, litter, plant basal area, and bare soil.
These four metrics were further classified as protected (located under plant canopy) and unprotected (not covered by the canopy). In addition, plant canopy and basal area gaps were measured on the plots over three lengthwise and six crosswise transects.
Experimental procedure
Four to eight replicated 6.1 m by 2 m rainfall simulation plots were established at each site. The plots were bounded by sheet-metal borders hammered into the ground on three sides. On the downslope side a collection trough was installed to channel runoff into the measuring flume. If a site was revisited, repeat simulations were always conducted on the same long-term plots. The experimental procedure was as follows. First, the plot was subjected to 45 minutes of 65 mm/h simulated rainfall (dry run), intended to create an initial saturated condition that could be replicated across all sites. This was followed by a 45-minute pause and a second simulation with varying intensity (wet run). During wet runs, two modes of water application were used: rainfall or run-on. Rainfall wet runs typically consisted of a series of application rates (65, 100, 125, 150, and 180 mm/h) that were increased after runoff had reached steady state for at least five minutes. Runoff samples were collected on the rising and falling limbs of the hydrograph and during each steady state (a minimum of three samples). Overland flow velocities were measured during each steady state as previously described. When used, run-on wet runs followed the same procedure as rainfall runs, except that water application rates varied between 100 and 300 mm/h. In approximately 20% of the simulation experiments the wet run was followed by another simulation (wet2 run) after a 45-minute pause. Wet2 runs were similar to wet runs and also consisted of a series of varying-intensity rainfall and/or run-on inputs.
Resulting Data
The dataset contains hydrological, erosion, vegetation, and ecological data from 272 rainfall simulation experiments conducted on 12 sq. m plots at 23 rangeland locations in Arizona and Nevada. The experiments were conducted between 2002 and 2013, with some locations revisited multiple times.
Resources in this dataset:
Resource Title: Appendix B. Lists of sites and general information. File Name: Rainfall Simulation Sites Summary.xlsx. Description: The table contains a list of rainfall simulation sites and individual plots, their coordinates, topographic, soil, ecological, and vegetation characteristics, and the dates of the simulation experiments. The sites are grouped by common geographic area. Recommended software: Microsoft Excel (https://products.office.com/en-us/excel).
Resource Title: Appendix F. Site pictures. File Name: Site photos.zip. Description: Pictures of rainfall simulation sites and plots.
Resource Title: Appendix C. Rainfall simulations. File Name: Rainfall simulation.csv. Description: Please see Appendix C. Rainfall simulations (revised) for data with errors corrected (11/27/2017). The table contains rainfall, runoff, sediment, and flow velocity data from the rainfall simulation experiments. Recommended software: Microsoft Access (https://products.office.com/en-us/access) or Microsoft Excel (https://products.office.com/en-us/excel).
Resource Title: Appendix E. Simulation sites map. File Name: Rainfall Simulator Sites Map.zip. Description: Map of rainfall simulation sites with embedded images in Google Earth. Recommended software: Google Earth (https://www.google.com/earth/).
Resource Title: Appendix D. Ground and vegetation cover. File Name: Plot Ground and Vegetation Cover.csv. Description: The table contains ground (rock, litter, basal, bare soil) cover, foliar cover, and basal gap on the plots immediately prior to the simulation experiments. Recommended software: Microsoft Access or Microsoft Excel.
Resource Title: Appendix A. Data dictionary. File Name: Data dictionary.csv. Description: Explanation of terms and units. Recommended software: Microsoft Excel or Microsoft Access.
Resource Title: Appendix C. Rainfall simulations (revised). File Name: Rainfall simulation (R11272017).csv. Description: The table contains rainfall, runoff, sediment, and flow velocity data from the rainfall simulation experiments (updated 11/27/2017). Recommended software: Microsoft Access.
https://dataintelo.com/privacy-and-policy
The global graph database market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 8.5 billion by 2032, growing at a CAGR of 21.2% from 2024 to 2032. The substantial growth of this market is driven primarily by increasing data complexity, advancements in data analytics technologies, and the rising need for more efficient database management systems.
One of the primary growth factors for the graph database market is the exponential increase in data generation. As organizations generate vast amounts of data from various sources such as social media, e-commerce platforms, and IoT devices, the need for sophisticated data management and analysis tools becomes paramount. Traditional relational databases struggle to handle the complexity and interconnectivity of this data, leading to a shift towards graph databases which excel in managing such intricate relationships.
Another significant driver is the growing adoption of artificial intelligence (AI) and machine learning (ML) technologies. These technologies rely heavily on connected data for predictive analytics and decision-making processes. Graph databases, with their inherent ability to model relationships between data points effectively, provide a robust foundation for AI and ML applications. This synergy between AI/ML and graph databases further accelerates market growth.
Additionally, the increasing prevalence of personalized customer experiences across industries like retail, finance, and healthcare is fueling demand for graph databases. Businesses are leveraging graph databases to analyze customer behaviors, preferences, and interactions in real-time, enabling them to offer tailored recommendations and services. This enhanced customer experience translates to higher customer satisfaction and retention, driving further adoption of graph databases.
From a regional perspective, North America currently holds the largest market share due to early adoption of advanced technologies and the presence of key market players. However, significant growth is also anticipated in the Asia-Pacific region, driven by rapid digital transformation, increasing investments in IT infrastructure, and growing awareness of the benefits of graph databases. Europe is also expected to witness steady growth, supported by stringent data management regulations and a strong focus on data privacy and security.
The graph database market can be segmented into two primary components: software and services. The software segment holds the largest market share, driven by extensive adoption across various industries. Graph database software is designed to create, manage, and query graph databases, offering features such as scalability, high performance, and efficient handling of complex data relationships. The growth in this segment is propelled by continuous advancements and innovations in graph database technologies. Companies are increasingly investing in research and development to enhance the capabilities of their graph database software products, catering to the evolving needs of their customers.
On the other hand, the services segment is also witnessing substantial growth. This segment includes consulting, implementation, and support services provided by vendors to help organizations effectively deploy and manage graph databases. As businesses recognize the benefits of graph databases, the demand for expert services to ensure successful implementation and integration into existing systems is rising. Additionally, ongoing support and maintenance services are crucial for the smooth operation of graph databases, driving further growth in this segment.
The increasing complexity of data and the need for specialized expertise to manage and analyze it effectively are key factors contributing to the growth of the services segment. Organizations often lack the in-house skills required to harness the full potential of graph databases, prompting them to seek external assistance. This trend is particularly evident in large enterprises, where the scale and complexity of data necessitate robust support services.
Moreover, the services segment is benefiting from the growing trend of outsourcing IT functions. Many organizations are opting to outsource their database management needs to specialized service providers, allowing them to focus on their core business activities. This shift towards outsourcing is further bolstering the demand for graph database services, driving market growth.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Complete annotations for the tabular data are presented below.
Tab Fig 1: (A) The heatmap data of G protein family members in the hippocampal tissue of 6-month-old Wildtype (n = 6) and 5xFAD (n = 6) mice; (B) The heatmap data of G protein family members in the cortical tissue of 6-month-old Wildtype (n = 6) and 5xFAD (n = 6) mice; (C) The data in the overlapping part of the Venn diagram (132 elements); (D) The data used to create the volcano plot; (E) The data used to create the heatmap of GPCR-related DEGs; (F) Expression of Gnb5 in the large-sample dataset GSE44772; Control, n = 303; AD, n = 387; (H) Statistical analysis of Gnb5 protein levels from panel G; Wildtype, n = 4; 5xFAD, n = 4; (J) Statistical analysis of Gnb5 protein levels from panel I; Wildtype, n = 4; 5xFAD, n = 4; (L) Quantitative analysis of Gnb5 fluorescence intensity in the 5xFAD and Wildtype groups; Wildtype, n = 4; 5xFAD, n = 4.
Tab Fig 2: (D) qPCR data of Gnb5 knockout in hippocampal tissue; Gnb5F/F, n = 6; Gnb5-CCKO, n = 6; (E–I, L–N) Animal behavioral tests in mice; Gnb5F/F, n = 22; Gnb5-CCKO, n = 16; (E) Total distance traveled in the open field experiment; (F) Training curve in the Morris water maze (MWM); (F-day6) Data from the sixth day of MWM training; (G) Percentage of time spent by the mouse in the target quadrant in the MWM; (H) Statistical analysis of the number of times the mouse traverses the target quadrant in the MWM; (I) Latency to first reach the target quadrant in the MWM; (L) Baseline freezing percentage of mice in an identical testing context; (M) Percentage of freezing time of mice during the Context phase; (N) Percentage of freezing time of mice during the Cue phase.
Tab Fig 3: (D–F, H) MWM tests in mice; Wildtype+AAV-GFP, n = 20; Wildtype+AAV-Gnb5-GFP, n = 23; 5xFAD + AAV-GFP, n = 23; 5xFAD + AAV-Gnb5-GFP, n = 26; (D) Training curve in the MWM; (D-day6) Data from the sixth day of MWM training; (E) Percentage of time spent in the target quadrant in the MWM; (F) Statistical analysis of the number of entries into the target quadrant in the MWM; (H) Movement speed of mice in the MWM; (I–K) The contextual fear conditioning test in mice; 5xFAD + AAV-GFP, n = 23; 5xFAD + AAV-Gnb5-GFP, n = 26; (I) Baseline freezing percentage of mice in an identical testing context; (J) Percentage of freezing time of mice during the Context phase; (K) Percentage of freezing time of mice during the Cue phase; (L) Total distance traveled in the open field test; (M) Percentage of time spent in the center area during the open field test.
Tab Fig 4: (B, C) Quantification of Aβ plaques in hippocampus sections from Wildtype and 5xFAD mice injected with either AAV-Gnb5 or AAV-GFP; Wildtype+AAV-GFP, n = 4; Wildtype+AAV-Gnb5-GFP, n = 4; 5xFAD + AAV-GFP, n = 4; 5xFAD + AAV-Gnb5-GFP, n = 4; (B) Quantification of Aβ plaque number; (C) Quantification of Aβ plaque size; (F, G) Quantification of Aβ plaques from the indicated mouse lines; WT&Gnb5F/F&CamKIIa-CreERT+Vehicle, n = 4; 5xFAD&Gnb5F/F&CamKIIa-CreERT+Vehicle, n = 4; 5xFAD&Gnb5F/F&CamKIIa-CreERT+Tamoxifen, n = 4; (F) Quantification of Aβ plaque size; (G) Quantification of Aβ plaque number.
Tab Fig 5: (B) Overexpression of Gnb5-AAV in 5xFAD mice affects the expression of proteins related to APP cleavage (BACE1, β-CTF, Nicastrin, and APP); statistical analysis of protein levels; n = 4, respectively; (D) Tamoxifen-induced Gnb5 knockdown in 5xFAD mice affects APP-cleaving proteins; statistical analysis of protein levels; n = 4, respectively; (F) Gnb5-CCKO mice show altered expression of APP-cleaving proteins; statistical analysis of protein levels; n = 6, respectively.
Tab Fig 7: (C, D) Quantification of Aβ plaques in 5xFAD mice overexpressing full-length Gnb5, truncated fragments, or the mutant truncated fragment AAV; n = 4, respectively; (C) Quantification of Aβ plaque size; (D) Quantification of Aβ plaque number; (F) Effect of overexpressing full-length Gnb5, truncated fragments, or mutant truncated fragment viruses on the expression of proteins related to the APP cleavage process in 5xFAD mice; statistical analysis of protein levels; n = 3, respectively. (XLSX)
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This Kaggle dataset comes from an output dataset that powers my March Madness Data Analysis dashboard in Domo.
- Click here to view this dashboard: Dashboard Link
- Click here to view this dashboard's features in a Domo blog post: Hoops, Data, and Madness: Unveiling the Ultimate NCAA Dashboard
This dataset offers one of the most robust resources you will find for discovering key insights through data science and data analytics using historical NCAA Division 1 men's basketball data. The data, sourced from KenPom, goes as far back as 2002 and is updated with the latest 2025 data. The dataset is meticulously structured to provide every piece of information I could pull from the site as an open-source tool for March Madness analysis.
Key features of the dataset include:
- Historical Data: Provides all historical KenPom data from 2002 to 2025 from the Efficiency, Four Factors (Offense & Defense), Point Distribution, Height/Experience, and Misc. Team Stats endpoints on KenPom's website. Please note that the Height/Experience data only goes back to 2007, but every other source contains data from 2002 onward.
- Data Granularity: This dataset features an individual line item for every NCAA Division 1 men's basketball team in every season, containing every KenPom metric you can think of. It can serve as a single source of truth for your March Madness analysis and provides the granularity necessary to perform any type of analysis.
- 2025 Tournament Insights: Contains all seed and region information for the 2025 NCAA March Madness tournament. Please note that I will continue to update this dataset with the seed and region information for previous tournaments as I continue to work on it.
These datasets were created by downloading the raw CSV files for each season from the various sections of KenPom's website (Efficiency, Offense, Defense, Point Distribution, Summary, Miscellaneous Team Stats, and Height). All of these raw files were uploaded to Domo and imported into a dataflow using Domo's Magic ETL. In these dataflows, the column headers for each of the previous seasons are standardized to the current 2025 naming structure so that all of the historical data can be viewed under the same field names. The cleaned datasets are then appended together, and some additional clean-up takes place before creating the intermediate (INT) datasets that are uploaded to this Kaggle dataset. Once all of the INT datasets were created, I joined all of the tables together on team name and season so that all of these different metrics can be viewed in a single view. From there, I joined an NCAAM Conference & ESPN Team Name Mapping table to add a conference field, with both its full name and the acronym it is known by, as well as the team name that ESPN currently uses. Please note that this reference table is an aggregated view of all of the different conferences a team has been a part of since 2002 and the different team names that KenPom has used historically, so this mapping table is necessary to map all of the teams properly and to differentiate historical conferences from current conferences. From there, I join a reference table that includes all of the current NCAAM coaches and their active coaching lengths, because current coaching length typically correlates with a team's success in the March Madness tournament. I also join another reference table to include the historical post-season tournament teams in the March Madness, NIT, CBI, and CIT tournaments, and another reference table to flag the teams that were ranked in the top 12 of the AP Top 25 during week 6 of the respective NCAA season. After some additional data clean-up, all of this cleaned data is exported into the "DEV _ March Madness" file that contains the consolidated view of all of this data.
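For users who prefer to reproduce the join logic described above outside of Domo, the same idea can be sketched with pandas. This is only an illustration of the approach; the file names and the "Team"/"Season" column names are assumptions, not the actual field names in the Kaggle files:

import pandas as pd

# Hypothetical INT dataset extracts -- actual file and column names on Kaggle may differ.
efficiency = pd.read_csv("INT_efficiency.csv")
four_factors = pd.read_csv("INT_four_factors.csv")
conference_map = pd.read_csv("ncaam_conference_espn_mapping.csv")

# Join the KenPom metric tables on team name and season, as described above.
merged = efficiency.merge(four_factors, on=["Team", "Season"], how="left")

# Attach the conference and ESPN team-name fields from the mapping table.
merged = merged.merge(conference_map, on=["Team", "Season"], how="left")

merged.to_csv("consolidated_sample.csv", index=False)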
This dataset provides users with the flexibility to export data for further analysis in platforms such as Domo, Power BI, Tableau, Excel, and more. It is designed for users who wish to conduct their own analysis, develop predictive models, or simply gain a deeper understanding of the intricacies that produce the excitement Division 1 men's college basketball delivers every March. Whether you are using this dataset out of academic, personal, or professional interest, I hope it serves as a foundational tool for exploring the vast landscape of college basketball's most riveting and anticipated event of the season.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A dataset of synchrotron X-ray diffraction (SXRD) analysis files, recording the refinement of crystallographic texture from a number of Ti-6Al-4V (Ti-64) sample matrices, containing a total of 93 hot-rolled samples, from three different orthogonal sample directions. The aim of the work was to accurately quantify bulk macro-texture for both the α (hexagonal close packed, hcp) and β (body-centred cubic, bcc) phases across a range of different processing conditions.
Material
Prior to the experiment, the Ti-64 materials had been hot-rolled at a range of different temperatures and to different reductions, followed by air-cooling, using a rolling mill at The University of Manchester. Rectangular specimens (6 mm x 5 mm x 2 mm) were then machined from the centre of these rolled blocks, and from the starting material. The samples were cut along different orthogonal rolling directions and are referenced according to the alignment of the rolling directions (RD – rolling direction, TD – transverse direction, ND – normal direction) with the long horizontal (X) axis and short vertical (Y) axis of the rectangular specimens. Samples with the same orientation were glued together to form matrices for the synchrotron analysis. The material, rolling conditions, sample orientations, and experiment reference numbers used for the synchrotron diffraction analysis are included in the data as an Excel spreadsheet.
SXRD Data Collection
Data was recorded using a high energy 90 keV synchrotron X-ray beam and a 5 second exposure at the detector for each measurement point. The slits were adjusted to give a 0.5 x 0.5 mm beam area, chosen to optimally resolve both the α and β phase peaks. The SXRD data was recorded by stage-scanning the beam in sequential X-Y positions at 0.5 mm increments across the rectangular sample matrices, containing a number of samples glued together, to analyse a total of 93 samples from the different processing conditions and orientations. Post-processing of the data was then used to sort the data into a rectangular grid of measurement points from each individual sample.
Diffraction Pattern Averaging
The stage-scan diffraction pattern images from each matrix were sorted into individual samples, and the images were averaged together for each specimen using a Python notebook, [sxrd-tiff-summer](https://github.com/LightForm-group/sxrd-tiff-summer). The averaged .tiff images each capture average diffraction peak intensities from an area of about 30 mm² (equivalent to a total volume of ~60 mm³), with three different sample orientations then used to calculate the bulk crystallographic texture from each rolling condition.
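As a minimal illustration of this averaging step (not the sxrd-tiff-summer notebook itself, and with hypothetical file paths), the stack-and-average operation can be written as:

import glob
import numpy as np
import tifffile

# Hypothetical paths -- the real stage-scan images are held on the Manchester RDS store.
pattern_files = sorted(glob.glob("stage_scan/sample_01/*.tiff"))

# Stack the diffraction pattern images belonging to one specimen and average them.
stack = np.stack([tifffile.imread(f).astype(np.float64) for f in pattern_files])
averaged = stack.mean(axis=0)

# Save the averaged pattern for subsequent calibration and ring fitting.
tifffile.imwrite("sample_01_averaged.tiff", averaged.astype(np.float32))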
SXRD Data Analysis
A new Fourier-based peak fitting method from the Continuous-Peak-Fit Python package was used to fit full diffraction pattern ring intensities, using a range of different lattice plane peaks for determining crystallographic texture in both the α and β phases. Bulk texture was calculated by combining the ring intensities from three different sample orientations.
A .poni calibration file was created using Dioptas, through a refinement matching peak intensities from a LaB6 or CeO2 standard diffraction pattern image. Two calibrations were needed as some of the data was collected in July 2022 and some of the data was collected in August 2022. Dioptas was then used to determine peak bounds in 2θ for characterising a total of 22 α and 4 β lattice plane rings from the averaged Ti-64 diffraction pattern images, which were recorded in a .py input script. Using these two inputs, Continuous-Peak-Fit automatically converts full diffraction pattern rings into profiles of intensity versus azimuthal angle, for each 2θ section, which can also include multiple overlapping α and β peaks.
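The ring-to-azimuthal-profile ("caking") step described above can be illustrated conceptually with the general-purpose pyFAI package; this sketch is not part of the actual Continuous-Peak-Fit workflow, and the file names are hypothetical:

import pyFAI
import tifffile

# Detector geometry refined in Dioptas (hypothetical file name).
ai = pyFAI.load("LaB6_calibration.poni")

# Averaged Ti-64 diffraction pattern for one specimen (hypothetical file name).
image = tifffile.imread("sample_01_averaged.tiff")

# Re-bin the pattern onto a (2-theta, azimuth) grid so that each lattice plane
# ring becomes a profile of intensity versus azimuthal angle.
intensity, two_theta, azimuth = ai.integrate2d(image, 2000, 360, unit="2th_deg")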
The Continuous-Peak-Fit refinement can be launched in a notebook or from the terminal, to automatically calculate a full mathematical description, in the form of Fourier expansion terms, to match the intensity variation of each individual lattice plane ring. The results for peak position, intensity and half-width for all 22 α and 4 β lattice plane peaks were recorded at an azimuthal resolution of 1° and stored in a .fit output file. Details for setting up and running this analysis can be found in the continuous-peak-fit-analysis package. This package also includes a Python script for extracting lattice plane ring intensity distributions from the .fit files, matching the intensity values with spherical polar coordinates to parametrise the intensity distributions from each of the three different sample orientations, in the form of pole figures. The script can also be used to combine intensity distributions from different sample orientations. The final intensity variations are recorded for each of the lattice plane peaks as text files, which can be loaded into MTEX to plot and analyse both the α and β phase crystallographic texture.
Metadata
An accompanying YAML text file contains associated SXRD beamline metadata for each measurement. The raw data is in the form of synchrotron diffraction pattern .tiff images which were too large to upload to Zenodo and are instead stored on The University of Manchester's Research Database Storage (RDS) repository. The raw data can therefore be obtained by emailing the authors.
The material data folder documents the machining of the samples and the sample orientations.
The associated processing metadata for the Continuous-Peak-Fit analyses records information about the different packages used to process the data, along with details about the different files contained within this analysis dataset.
This is the dataset generated from the analysis of four Thai manuscripts at CSMC. The results have been published in the research article: Sathiyamani S, Ngiam S, Bonnerot O, Jaengsawang S, Panarut P, Helman-Wazny A, Colini C. Material Characterisation of 19–20th Century Manuscripts from Northern Thailand. Restaurator. International Journal for the Preservation of Library and Archival Material. 2024;45(2-3): 117-140. https://doi.org/10.1515/res-2023-0028
The dataset includes:
01_protocol: Protocol of the analysis, listing the spots measured using the various techniques.
02_Dino: Digital microscopy images of the four manuscripts, present in the respective sub-folders as .bmp files.
03_Elio: Point analysis carried out using a Bruker / XG Lab Elio with a Rh X-ray tube, a 25 mm² large-area silicon drift detector (SDD), and an interaction spot of 1 mm. The measurements were conducted at 40 kV voltage and 80 μA current, with a measurement time of 120 s per spot. The dataset includes sub-folders featuring individual spectra as .spx and .txt files. The integrated intensities for the evaluated XRF data are given in the corresponding Excel file.
04_ARTAX: Acquired using a Bruker ARTAX with a Mo X-ray tube, polycapillary X-ray optics, an electrothermally cooled Xflash SDD detector, and an interaction spot of 100 μm. XRF data from line scans are present in sub-folders as .RTX files. The integrated intensities and the results from the line scans were exported to the corresponding Excel file.
05_JET: Spatial maps obtained using a Bruker M6 Jetstream with a Rh X-ray tube, polycapillary X-ray optics, a 50 mm² Xflash SDD detector, and an adjustable measuring spot ranging from 50 to 650 µm. The measurements were conducted at 50 kV voltage and 600 μA current, with a spot size of 50 µm, an acquisition time of 30 ms per spot, and a step size of 50 μm. The results from the M6 Jetstream, corresponding to spatial maps, are present in the respective sub-folders as raw .bcf, processed .bcf, and processed .rpl files; the corresponding results have been exported as .jpg images under relative and absolute intensities.
06_FTIR: FTIR measurements carried out using a portable Agilent 4100 ExoScan FT-IR spectrometer in diffuse reflectance (DRIFTS) mode with a measuring spot of 1 cm, a spectral range of 650–4000 cm⁻¹, 256 scans per measurement, and a resolution of 4 cm⁻¹. The raw data are uploaded as individual .spc files, and the combined data are present in the Excel file.
07_NIR: NIR measurements carried out using a Malvern Panalytical Analytical Spectral Devices (ASD) LabSpec 4 Hi-Res spectrophotometer, measuring the reflectance spectrum in the range 350–2500 nm with a scanning time of 100 ms. The first derivative of the spectrum was used for the characterisation of the pigments (a minimal sketch of this derivative step is given after this list). The raw data are present as .txt files.
08_Raman: Raman measurements carried out using a Renishaw inVia Raman spectrometer equipped with a 100 mW 532 nm laser and a 300 mW 785 nm laser, with a 100x objective. The raw spectra are present in the respective sub-folders as .txt files.
09_IRR: An APOLLO Infrared Reflectography Imaging System (IRR) from OPUS Instruments, with a Long Wave Pass Filter (LWP1510, range 1510–1700 nm) and two 20 W halogen lamps as light sources, was used to image a section of manuscript Thai Ms 7; the image is provided as a .jpg.
10_results: This file provides information about all the datasets, including a summary of the results obtained using the different techniques.
11_plots: The Origin file corresponding to the FTIR, NIR, and Raman plots, as .opj.
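As a minimal sketch of the NIR first-derivative step mentioned under 07_NIR (the file name and the two-column layout are assumptions about the exported .txt format):

import numpy as np

# Hypothetical export: wavelength (nm) and reflectance in two columns.
wavelength, reflectance = np.loadtxt("ThaiMs_NIR_spot01.txt", unpack=True)

# First derivative of the reflectance spectrum with respect to wavelength,
# the quantity used for pigment characterisation.
first_derivative = np.gradient(reflectance, wavelength)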
The research for this project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2176 'Understanding Written Artefacts: Material, Interaction and Transmission in Manuscript Cultures', project no. 390893796. The research was conducted within the scope of the Centre for the Study of Manuscript Cultures (CSMC) at Universität Hamburg.
https://spdx.org/licenses/CC0-1.0.html
Small-angle neutron scattering (SANS) and Ultra-SANS (USANS) were employed to understand the agglomeration behavior of nanoplastics (NPs) formed from a biodegradable mulch film, and microparticles of vermiculite (V), an artificial soil, suspended in water in the presence of low convective shear (ex-situ stirring) prior to measurements. Neutron contrast matching was employed to minimize the signal of V (by 100-fold) and thereby isolate the signal due to NPs in the neutron beam, as the contrast match point (CMP) for V (67 vol% deuteration in water) differed from that of NPs by more than 20%. The original NPs' size distribution was bimodal: < 200 nm and 500-1200 nm, referred to as small and large NPs, i.e., SNPs and LNPs, respectively. In the absence of V, SNPs formed agglomerates at higher concentrations, with size decreasing slightly with stirring time to 40-50 nm, while the size of LNPs remained unchanged. The presence of V at 2-fold lower concentration than NPs did not change the size of SNPs but reduced the size of LNPs by nearly 2-fold as stirring time increased. Because the size of SNPs and LNPs did not differ substantially between solvents, both at the CMP and in 100% D2O, even with nanosized V particles contributing toward scattered intensity in the latter solvent, it is evident that SNPs and LNPs are mainly composed of NPs and not V. The results suggest that LNPs are susceptible to size reduction through collisions with soil microparticles via convection, yielding SNPs near soil-water interfaces within vadose zones.
Methods
Data for Fig 1 (nanoplastic recovery suspended in water and settling out) were collected in the laboratory and the results were recorded in a Microsoft Excel file. Other data were collected on the small-angle neutron scattering (SANS) and ultra-SANS instruments at Oak Ridge National Laboratory, specifically the Bio-SANS (High Flux Isotope Reactor) and Beamline 1A (Spallation Neutron Source), respectively (downloaded into Microsoft Excel files and displayed in Figs 2 and S1-S5). Data for Figs 3 and S6 include form factor-structure factor modeling of the merged SANS and USANS data, after subtraction of a power-law relationship. Modeling was done using Igor Pro-based software written by National Institute of Standards and Technology scientists, and the model fits to the data and the resultant parameters were downloaded to Microsoft Excel files. The model parameters allowed for determination of box plots and histograms of nanoplastic size and size distribution under several different conditions (Figs 4 and S7, respectively). The latter two figures were generated using JMP software and were subsequently downloaded to Microsoft Excel files.
This layer visualizes over 60,000 commercial flight paths. The data was obtained from openflights.org and was last updated in June 2014. The site states, "The third-party that OpenFlights uses for route data ceased providing updates in June 2014. The current data is of historical value only. As of June 2014, the OpenFlights/Airline Route Mapper Route Database contains 67,663 routes between 3,321 airports on 548 airlines spanning the globe. Creating and maintaining this database has required and continues to require an immense amount of work. We need your support to keep this database up-to-date." To donate, visit the site and click the PayPal link.
Routes were created using the XY-to-line tool in ArcGIS Pro, inspired by Kenneth Field's work, and following a modified methodology from Michael Markieta (www.spatialanalysis.ca/2011/global-connectivity-mapping-out-flight-routes). Some cleanup was required in the original data, including adding missing location data for several airports and some missing IATA codes. Before performing the point-to-line conversion, the key to preserving attributes in the original data is a combination of the INDEX and MATCH functions in Microsoft Excel. Example function: =INDEX(Airlines!$B$2:$B$6200,MATCH(Routes!$A2,Airlines!$D$2:Airlines!$D$6200,0))
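For users working outside Excel, the same attribute lookup can be approximated with a pandas merge. This is only an illustrative sketch; the file names and column names are assumptions based on the openflights.org schema rather than fields taken from this layer:

import pandas as pd

# Hypothetical extracts of the OpenFlights tables.
airlines = pd.read_csv("airlines.csv")   # includes an airline identifier and "Name"
routes = pd.read_csv("routes.csv")       # includes the same airline identifier per route

# Equivalent of the Excel INDEX/MATCH lookup: attach the airline name to every
# route by matching on the shared identifier column.
routes_with_names = routes.merge(
    airlines[["Airline ID", "Name"]], on="Airline ID", how="left"
)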
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
These ward-level well-being scores present a combined measure of well-being indicators for the resident population, based on 12 different indicators. Where possible, each indicator score is compared with the England and Wales average, which is zero. Scores over 0 indicate a higher probability that the population on average will experience better well-being according to these measures. Users can adjust the weight of each indicator depending on what they consider to be more or less important, thus generating bespoke scores; this is done by entering a number between 0 and 10 for each indicator. The scores throughout the spreadsheet update automatically. The tool combines data across a range of themes for the last five years of available data (2009-2013). Either view the results in the online interactive tool here, or download the interactive spreadsheet here.
The well-being scores are presented in a ranked bar chart for each borough and a ward map of London. The spreadsheet also highlights wards in the top and bottom 25 per cent in London. Wards that have shown significant improvement or reduction in their scores relative to the average over the five-year period are also highlighted. Borough figures are provided to assist with comparisons. Rankings and summary tables are included. The source data that the tool is based on is included in the spreadsheet. The Excel file is 8.1MB. IMPORTANT NOTE: users must enable macros when prompted upon opening the Excel spreadsheet (or reset security to medium/low) for the map to function. The rest of the tool will function without macros. If you cannot download the Excel file directly, try this zip file (2.6MB). If you experience any difficulties with downloading this spreadsheet, please contact the London Datastore in the Intelligence Unit. Detailed information about definitions and sources is contained within the spreadsheet.
The 12 measures included are:
Health - Life Expectancy; Childhood Obesity; Incapacity Benefits claimant rate
Economic security - Unemployment rate
Safety - Crime rate; Deliberate Fires
Education - GCSE point scores
Children - Unauthorised Pupil Absence
Families - Children in out-of-work households
Transport - Public Transport Accessibility Scores (PTALs)
Environment - Access to public open space & nature
Happiness - Composite Subjective Well-being Score (Life Satisfaction, Worthwhileness, Anxiety, and Happiness) (new data, only available since 2011/12)
With some measures a high figure indicates better well-being, while with others a low figure indicates better well-being. Therefore the scores for Life Expectancy, GCSE scores, PTALs, and Access to Public Open Space/Nature have been reversed so that, for all measures, low scores indicate probable lower well-being. The data has been turned into scores where each indicator in each year has a standard deviation of 10. This means that each indicator will have an equal effect on the final score when the weightings are set to equal.
Why should measuring well-being be important to policy makers?
Following research by the Cabinet Office and Office for National Statistics, the government is aiming to develop policy that is more focused on 'all those things that make life worthwhile' (David Cameron, November 2010). They are interested in developing new and better ways to understand how policy and public services affect well-being.
Why measure well-being for local areas?
It is important for London policy makers to consider well-being at a local level (smaller than borough level) because of the often huge differences within boroughs. Local authorities rely on small-area data in order to target resources, and with local authorities currently gaining more responsibilities from government, this is of increasing importance. But small-area data is also of interest to academics, independent analysts, and members of the public with an interest in the subject of well-being.
How can well-being be measured within small areas?
The Office for National Statistics has been developing new measures of national well-being, and as part of this, at a national and regional level, the ONS has published some subjective data to measure happiness. The ONS has not measured well-being for small areas, so this tool has been designed to fill this gap. However, DCLG has published a tool that models life satisfaction data for LSOAs based on a combination of national-level happiness data and 'ACORN' data. Happiness data is not available for small areas because there are no surveys large enough for this level of detail, so at this geography the focus is on objective indicators. Data availability for small areas is far more limited than for districts, which means the indicators that the scores are based on are not all perfect measures of well-being, though they are the best available. However, using a relatively high number of measures across a number of years increases the reliability of the well-being scores.
How can this tool be used to help policy makers?
Each neighbourhood will have its own priorities, but the data in this tool could help provide a solid evidence base for informed local policy-making and the distribution of regeneration funds. In addition, it could assist users in identifying the causes behind an improvement in well-being in certain wards, where examples of good practice could be applied elsewhere.
Differences from the previous report
This is the 2013 edition of this publication, and there is one change from 2012: the election turnout indicator has been replaced with a composite score of subjective well-being indicators. Past versions are still available for 2011 and 2012. The rationale/methodology paper from 2011 is here. The scores from the 2012 spreadsheet are also available in PDF format. The scores in Intelligence Update 21-2012 are based on equal weightings across each measure. This tool was created by the GLA Intelligence Unit. Please contact datastore@london.gov.uk for more information.
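As a rough illustration of the scoring approach described above (each indicator scaled to a standard deviation of 10 and combined with user-chosen weights between 0 and 10), the calculation could be sketched in Python; the input file and column layout are hypothetical, not the GLA spreadsheet itself:

import pandas as pd

# Hypothetical ward-level indicator values (rows = wards, columns = indicators).
indicators = pd.read_csv("ward_indicators.csv", index_col="Ward")

# Centre each indicator (here on the column mean, standing in for the England and
# Wales average of zero) and scale it to a standard deviation of 10.
scores = (indicators - indicators.mean()) / indicators.std() * 10

# User-chosen weights between 0 and 10; equal weighting shown here.
weights = pd.Series(5.0, index=scores.columns)

# Combined well-being score: weighted average of the standardised indicator scores.
wellbeing = scores.mul(weights, axis=1).sum(axis=1) / weights.sum()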