CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

TSMx is an R script that was developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds to the temporal range 2010–2019, and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart: that cell represents the slope for the temporal range 2004–2014.

This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will need to be made to accommodate year ranges other than what is provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel.

Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).
TSMx R script:

# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
  batch <- data[keep, ]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[['Value']])
    pval <- mk[['sl']]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The various performance criteria applied in this analysis include the probability of reaching the ultimate target, as well as the costs, elapsed times, and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic, and statistical data entered by a user and required to evaluate these criteria. It also reports the results of all the computations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
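Since the templates themselves are Excel files, here is a minimal R/ggplot2 sketch, not the authors' template, showing the kind of univariate scatterplot the paper recommends in place of a bar graph (the data frame, group labels, and values are invented):

# univariate scatterplot: show every observation, not just a summary bar
library(ggplot2)

set.seed(1)
df <- data.frame(
  group = rep(c("Control", "Treatment"), each = 8),      # hypothetical groups
  value = c(rnorm(8, mean = 10), rnorm(8, mean = 12))    # hypothetical measurements
)

ggplot(df, aes(x = group, y = value)) +
  geom_jitter(width = 0.05, size = 2) +   # every data point remains visible
  stat_summary(fun = median, fun.min = median, fun.max = median,
               geom = "crossbar", width = 0.3) +   # median drawn as a summary line
  theme_classic()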
With a step-by-step approach, learn to prepare Excel files, data worksheets, and individual data columns for data analysis; practice conditional formatting and creating pivot tables/charts; and go over basic principles of Research Data Management as they might apply to an Excel project.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data supports the assertions made in a paper (with the same name as this project) which surveyed three Second Language Acquisition journals (Modern Language Journal, Language Learning, and Studies in Second Language Acquisition) from the time of their inception to 2011/2012. The raw data used for calculations of the number and types of graphics published are included in the attached Excel file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Using the User Manual as a guide and the Excel Graph Input Data Example file as a reference, the user enters the semantics of the graph model in this file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article describes a free, open-source collection of templates for the popular Excel (2013 and later versions) spreadsheet program. These templates are spreadsheet files that allow easy and intuitive learning and the implementation of practical examples concerning descriptive statistics, random variables, confidence intervals, and hypothesis testing. Although they are designed to be used with Excel, they can also be employed with other free spreadsheet programs (by changing some particular formulas). Moreover, we exploit some possibilities of the ActiveX controls of the Excel Developer Menu to build interactive Gaussian density charts. Finally, they can often be embedded in a web page, so it is not necessary to employ the Excel software to use them. These templates have been designed as a useful tool for teaching basic statistics and for carrying out data analysis even when the students are not familiar with Excel. Additionally, they can be used as a complement to other analytical software packages. They aim to assist students in learning statistics within an intuitive working environment. Supplementary materials with the Excel templates are available online.
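As a rough R analogue of two exercises the templates cover (this sketch is not part of the article's materials; the sample values are invented), here is a 95% confidence interval for a mean together with a one-sample t-test:

# 95% confidence interval for a mean, computed by hand and checked with t.test()
x <- c(4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 5.2)   # hypothetical sample

n    <- length(x)
xbar <- mean(x)
s    <- sd(x)
err  <- qt(0.975, df = n - 1) * s / sqrt(n)      # t-based margin of error
c(lower = xbar - err, upper = xbar + err)        # 95% CI for the mean

t.test(x, mu = 5)   # built-in equivalent: CI plus a hypothesis test of mu = 5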
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains a collection of data about 454 value chains from 23 rural European areas of 16 countries. This data is obtained through a semi-automatic workflow that transforms raw textual data from an unstructured MS Excel sheet into semantic knowledge graphs. In particular, the repository contains:
- an MS Excel sheet containing the details of different value chains, provided by the MOuntain Valorisation through INterconnectedness and Green growth (MOVING) European project;
- 454 CSV files containing the events, titles, entities and coordinates of the narratives of each value chain, obtained by pre-processing the MS Excel sheet;
- 454 Web Ontology Language (OWL) files; this collection of files is the result of the semi-automatic workflow and is organized as a semantic knowledge graph of narratives, where each narrative is a sub-graph explaining one of the 454 value chains and its territory aspects. The knowledge graph is based on the Narrative Ontology, an ontology developed by the Institute of Information Science and Technologies (ISTI-CNR) as an extension of CIDOC CRM, FRBRoo, and OWL Time;
- two CSV files that compile all the available information extracted from the 454 OWL files;
- GeoPackage files with the geographic coordinates related to the narratives;
- HTML files that show all the different SPARQL and GeoSPARQL queries;
- HTML files that show the story maps about the 454 value chains;
- an image showing how the various components of the dataset interact with each other.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel spreadsheets by species (the four-letter code is an abbreviation for the genus and species used in the study; the year, 2010 or 2011, is the year the data were collected; SH indicates data for Science Hub; the date is the date of file preparation). The data in each file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in that file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. Environmental Toxicology and Chemistry, Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813 (2017).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Excel spreadsheet contains the quantitative questions (Questions 1, 3 and 4). Each question is analysed in the form of a frequency distribution table and a pie chart.
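A minimal R sketch of that analysis, with hypothetical responses standing in for the actual questionnaire data:

# frequency distribution table and pie chart for one quantitative question
responses <- c("Yes", "No", "Yes", "Unsure", "Yes", "No", "Yes")  # hypothetical

freq <- table(responses)       # frequency distribution table
prop.table(freq)               # relative frequencies
pie(freq, main = "Question 1: response distribution")  # pie chart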
https://dataverse-staging.rdmc.unc.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=hdl:1902.29/CD-10849
"The Statistical Abstract of the United States, published since 1878, is the standard summary of statistics on the social, political, and economic organization of the United States. It is designed to serve as a convenient volume for statistical reference and as a guide to other statistical publications and sources. The latter function is served by the introductory text to each section, the source note appearing below each table, and Appendix I, which comprises the Guide to Sources of Statisti cs, the Guide to State Statistical Abstracts, and the Guide to Foreign Statistical Abstracts. The Statistical Abstract sections and tables are compiled into one Adobe PDF named StatAbstract2009.pdf. This PDF is bookmarked by section and by table and can be searched using the Acrobat Search feature. The Statistical Abstract on CD-ROM is best viewed using Adobe Acrobat 5, or any subsequent version of Acrobat or Acrobat Reader. The Statistical Abstract tables and the metropolitan areas tables from Appendix II are available as Excel(.xls or .xlw) spreadsheets. In most cases, these spreadsheet files offer the user direct access to more data than are shown either in the publication or Adobe Acrobat. These files usually contain more years of data, more geographic areas, and/or more categories of subjects than those shown in the Acrobat version. The extensive selection of statistics is provided for the United States, with selected data for regions, divisions, states, metropolitan areas, cities, and foreign countries from reports and records of government and private agencies. Software on the disc can be used to perform full-text searches, view official statistics, open tables as Lotus worksheets or Excel workbooks, and link directly to source agencies and organizations for supporting information. Except as indicated, figures are for the United States as presently constituted. Although emphasis in the Statistical Abstract is primarily given to national data, many tables present data for regions and individual states and a smaller number for metropolitan areas and cities.Statistics for the Commonwealth of Puerto Rico and for island areas of the United States are included in many state tables and are supplemented by information in Section 29. Additional information for states, cities, counties, metropolitan areas, and other small units, as well as more historical data are available in various supplements to the Abstract. Statistics in this edition are generally for the most recent year or period available by summer 2006. Each year over 1,400 tables and charts are reviewed and evaluated; new tables and charts of current interest are added, continuing series are updated, and less timely data are condensed or eliminated. Text notes and appendices are revised as appropriate. This year we have introduced 72 new tables covering a wide range of subject areas. These cover a variety of topics including: learning disability for children, people impacted by the hurricanes in the Gulf Coast area, employees with alternative work arrangements, adult computer and Internet users by selected characteristics, North America cruise industry, women- and minority-owned businesses, and the percentage of the adult population considered to be obese. 
Some of the annually surveyed topics are population; vital statistics; health and nutrition; education; law enforcement, courts, and prisons; geography and environment; elections; state and local government; federal government finances and employment; national defense and veterans affairs; social insurance and human services; labor force, employment, and earnings; income, expenditures, and wealth; prices; business enterprise; science and technology; agriculture; natural resources; energy; construction and housing; manufactures; domestic trade and services; transportation; information and communication; banking, finance, and insurance; arts, entertainment, and recreation; accommodation, food services, and other services; foreign commerce and aid; outlying areas; and comparative international statistics." Note to Users: This CD is part of a collection located in the Data Archive of the Odum Institute for Research in Social Science, at the University of North Carolina at Chapel Hill. The collection is located in Room 10, Manning Hall. Users may check out the CDs on the honor system. Items can be checked out for a period of two weeks. Loan forms are located adjacent to the collection.
https://dataintelo.com/privacy-and-policy
The global graph database market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 8.5 billion by 2032, growing at a CAGR of 21.2% from 2024 to 2032. The substantial growth of this market is driven primarily by increasing data complexity, advancements in data analytics technologies, and the rising need for more efficient database management systems.
One of the primary growth factors for the graph database market is the exponential increase in data generation. As organizations generate vast amounts of data from various sources such as social media, e-commerce platforms, and IoT devices, the need for sophisticated data management and analysis tools becomes paramount. Traditional relational databases struggle to handle the complexity and interconnectivity of this data, leading to a shift towards graph databases which excel in managing such intricate relationships.
Another significant driver is the growing adoption of artificial intelligence (AI) and machine learning (ML) technologies. These technologies rely heavily on connected data for predictive analytics and decision-making processes. Graph databases, with their inherent ability to model relationships between data points effectively, provide a robust foundation for AI and ML applications. This synergy between AI/ML and graph databases further accelerates market growth.
Additionally, the increasing prevalence of personalized customer experiences across industries like retail, finance, and healthcare is fueling demand for graph databases. Businesses are leveraging graph databases to analyze customer behaviors, preferences, and interactions in real-time, enabling them to offer tailored recommendations and services. This enhanced customer experience translates to higher customer satisfaction and retention, driving further adoption of graph databases.
From a regional perspective, North America currently holds the largest market share due to early adoption of advanced technologies and the presence of key market players. However, significant growth is also anticipated in the Asia-Pacific region, driven by rapid digital transformation, increasing investments in IT infrastructure, and growing awareness of the benefits of graph databases. Europe is also expected to witness steady growth, supported by stringent data management regulations and a strong focus on data privacy and security.
The graph database market can be segmented into two primary components: software and services. The software segment holds the largest market share, driven by extensive adoption across various industries. Graph database software is designed to create, manage, and query graph databases, offering features such as scalability, high performance, and efficient handling of complex data relationships. The growth in this segment is propelled by continuous advancements and innovations in graph database technologies. Companies are increasingly investing in research and development to enhance the capabilities of their graph database software products, catering to the evolving needs of their customers.
On the other hand, the services segment is also witnessing substantial growth. This segment includes consulting, implementation, and support services provided by vendors to help organizations effectively deploy and manage graph databases. As businesses recognize the benefits of graph databases, the demand for expert services to ensure successful implementation and integration into existing systems is rising. Additionally, ongoing support and maintenance services are crucial for the smooth operation of graph databases, driving further growth in this segment.
The increasing complexity of data and the need for specialized expertise to manage and analyze it effectively are key factors contributing to the growth of the services segment. Organizations often lack the in-house skills required to harness the full potential of graph databases, prompting them to seek external assistance. This trend is particularly evident in large enterprises, where the scale and complexity of data necessitate robust support services.
Moreover, the services segment is benefiting from the growing trend of outsourcing IT functions. Many organizations are opting to outsource their database management needs to specialized service providers, allowing them to focus on their core business activities. This shift towards outsourcing is further bolstering the demand for graph database services, driving market growth.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results of the heuristic evaluation performed on four different versions of the same chart created with Microsoft Excel (XLSX, DOCX, HTML and SVG), related to the paper published in: Alcaraz Martínez, Rubén; Ribera, Mireia; Roig, Jordi; Pascual, Afra. Can we create accessible charts with Microsoft MS Excel?: a review of possibilities and limits, with a special focus on users with low vision. In Interacción '24: Proceedings of the XXIV International Conference on Human Computer Interaction.
The data originates from Promusicae's official website, which provides weekly and yearly charts for albums, singles, and additional music-related content categories. The paper associated with this sample is available here:
Sánchez-Olmos, C. (2025). Gender Inequality in Spain’s Official Music Charts: Neither Representation nor Success for Female Artists (2008–2020). Journalism and Media, 6(1), 10. https://doi.org/10.3390/journalmedia6010010
This dataset features the top 50 from 2008 to 2020, comprising 1300 recording units with an equal split between albums (650) and singles (650) (Figure 1). Promusicae represents Spanish record labels affiliated with the International Federation of the Phonographic Industry (IFPI) and is responsible for publishing these official charts. The analysis period starts in 2008, when Promusicae published its first top 50 singles chart (later expanded to a top 100 format in 2015), and ends in 2020.
Both charts were downloaded in Excel format from the Promusicae website. All albums and singles are coded with the following variables: artist, title, year of chart appearance, gender (soloist or band), position on the chart, and success achieved. The gender of the featured artist is also coded in the singles chart.
This coding scheme has its limitations. First, the use of binary gender coding fails to capture the diversity of sexual identities (de Boise, 2019). Furthermore, several methods for categorising mixed bands were identified, based on the roles of men and women (including composers, singers, or instrumentalists). However, to facilitate discussion, we chose the categories proposed by Lafrance et al. (2011). Consequently, the final coding includes five distinct categories: male artists, male bands (entirely composed of men), female artists, female bands (consisting solely of women), and male–female groups (mixed duos, trios, or bands featuring both women and men).
This dataset contains Weekly U.S. Ending Stocks of Distillate Fuel Oil, 2015-2022. Data from the U.S. Energy Information Administration. This series is available through the EIA open data API and can be downloaded to Excel or embedded as an interactive chart or map on your website.
Purpose – The conversion between calibrated airspeed (CAS) and equivalent airspeed (EAS) is relatively cumbersome, because it involves the calculation of compressible flow, for which the equations are quite long. If calculations on a computer are required, conversions via these equations are necessary. In contrast, this project calculates a CAS to EAS Compressibility Correction Chart, which allows the user to convert CAS to EAS very quickly by reading the correction from a graph. --- Methodology – In Excel, the compressibility correction is computed from flight-mechanics formulas. The correction is calculated with two distinct functions, one based on Mach number and the other on pressure altitude. These functions are graphed individually and then combined to produce the Compressibility Correction Chart. --- Findings – The Compressibility Correction Chart was successfully recreated as a 2-D graph. Upon comparison with other correction charts, the EAS-CAS results show essentially zero deviation, validating the accuracy of the findings. --- Research Limitations – Due to a limitation in Excel, which allows at most 255 series per chart, the range of input parameters had to be adjusted accordingly. The iterations of altitude span 1000 ft intervals, while those for Mach number span 0.05 intervals. --- Practical Implications – Pilots can easily use the Compressibility Correction Chart for quick and highly accurate calculations when needed. --- Originality – CAS-EAS compressibility correction charts are available in other sources. This paper represents a recreation of the 2-D correction chart by combining two plots, one as a function of Mach number and the other of pressure altitude, using the Excel software.
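The chart itself is built in Excel, but the underlying relations are standard subsonic flight-mechanics formulas. A sketch in R (not the paper's workbook; ISA sea-level constants, speeds in knots, pressure altitude in feet):

# compressibility correction: convert CAS to EAS via impact and static pressure
a0 <- 661.4788   # speed of sound at sea level [kt]
P0 <- 101325     # sea-level static pressure [Pa]

cas_to_eas <- function(cas_kt, alt_ft) {
  qc <- P0 * ((1 + 0.2 * (cas_kt / a0)^2)^3.5 - 1)  # impact pressure from CAS
  p  <- P0 * (1 - 6.8756e-6 * alt_ft)^5.2559        # ISA tropospheric static pressure
  M  <- sqrt(5 * ((qc / p + 1)^(2/7) - 1))          # Mach number (subsonic)
  a0 * M * sqrt(p / P0)                             # EAS
}

eas <- cas_to_eas(cas_kt = 250, alt_ft = 30000)
250 - eas   # the compressibility correction (CAS minus EAS) read off the chart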
Analyzing sales data is essential for any business looking to make informed decisions and optimize its operations. In this project, we will utilize Microsoft Excel and Power Query to conduct a comprehensive analysis of Superstore sales data. Our primary objectives will be to establish meaningful connections between various data sheets, ensure data quality, and calculate critical metrics such as the Cost of Goods Sold (COGS) and discount values. Below are the key steps and elements of this analysis:
1- Data Import and Transformation:
2- Data Quality Assessment:
3- Calculating COGS:
4- Discount Analysis:
5- Sales Metrics:
6- Visualization:
7- Report Generation:
Throughout this analysis, the goal is to provide a clear and comprehensive understanding of the Superstore's sales performance. By using Excel and Power Query, we can efficiently manage and analyze the data, ensuring that the insights gained contribute to the store's growth and success.
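As an illustration of steps 3 and 4 above, the following R sketch mimics the COGS and discount calculations; the column names and sample rows are hypothetical, since the Superstore schema is not reproduced here:

# derive COGS, discount value, and profit from order-level fields
library(dplyr)

orders <- data.frame(
  Sales    = c(120, 300, 80),     # revenue after discount (hypothetical)
  Quantity = c(2, 5, 1),
  UnitCost = c(40, 45, 60),
  Discount = c(0.10, 0.00, 0.20)  # discount rate applied to list price
)

orders %>%
  mutate(
    COGS          = Quantity * UnitCost,     # cost of goods sold
    ListPrice     = Sales / (1 - Discount),  # back out pre-discount revenue
    DiscountValue = ListPrice - Sales,       # discount in currency terms
    Profit        = Sales - COGS
  ) %>%
  summarise(TotalSales    = sum(Sales),
            TotalCOGS     = sum(COGS),
            TotalDiscount = sum(DiscountValue),
            TotalProfit   = sum(Profit))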
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes multiple columns:
A. a sequential student ID
B. an ID that defines a random group label and the notation
C. the notation used: User Story or Use Cases
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade as L/M/H, where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L is otherwise
G. the total number of classes in the student's conceptual model
H. the total number of relationships in the student's conceptual model
I. the total number of classes in the expert's conceptual model
J. the total number of relationships in the expert's conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see the tagging scheme below)
P. the researchers' judgement of how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.
All the calculations and information provided in the following sheets originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection, including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes within the student model divided by the number of classes within the expert model is calculated (describing the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
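A minimal R sketch of the size-ratio computation and box plot (the counts and notation labels are invented stand-ins for columns B, G, and I of Sheet 1):

# size ratio = classes in student model / classes in expert model, per subject
classes_student <- c(8, 12, 10, 7, 15, 9)                 # column G (hypothetical)
classes_expert  <- c(11, 11, 11, 13, 13, 13)              # column I (hypothetical)
notation        <- c("US", "UC", "US", "UC", "US", "UC")  # grouping variable

size_ratio <- classes_student / classes_expert
boxplot(size_ratio ~ notation, ylab = "student/expert class ratio")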
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness, respectively. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the number of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. Completeness is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the number of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
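Written out in R, the two ratios defined above look as follows (the AL/WR/SO/OM counts are hypothetical values for a single student, i.e., columns K-O of Sheet 1):

# correctness and completeness per student, from the tagging counts
AL <- 9; WR <- 2; SO <- 1; OM <- 3

correctness  <- AL / (AL + OM + SO + WR)    # share of fully aligned classes
completeness <- (AL + WR) / (AL + WR + OM)  # coverage of the expert model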
For Sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided; Hedges' g was calculated with an online tool (https://www.psychometrica.de/effect_size.html), and a small R sketch of the same computation appears after the sheet list below. The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation - UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case - SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
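For reference, the significance test and effect size reported at the bottom of each sheet can be sketched in R as follows (the authors used the online tool cited above; the two vectors here are hypothetical per-student correctness scores for two notations):

# Hedges' g: Cohen's d with a small-sample correction
g_hedges <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  sp <- sqrt(((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2))  # pooled SD
  d  <- (mean(x1) - mean(x2)) / sp
  d * (1 - 3 / (4 * (n1 + n2) - 9))   # correction factor for small samples
}

x1 <- c(0.80, 0.70, 0.90, 0.60, 0.75)   # e.g., correctness under US notation
x2 <- c(0.60, 0.65, 0.70, 0.50, 0.55)   # e.g., correctness under UC notation
g_hedges(x1, x2)
t.test(x1, x2)   # the t-test for significance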
According to our latest research, the global graph database market size in 2024 stands at USD 2.92 billion, with a robust compound annual growth rate (CAGR) of 21.6% projected from 2025 to 2033. By the end of 2033, the market is expected to reach approximately USD 21.1 billion. The rapid expansion of this market is primarily driven by the rising need for advanced data analytics, real-time big data processing, and the growing adoption of artificial intelligence and machine learning across various industry verticals. As organizations continue to seek innovative solutions to manage complex and interconnected data, the demand for graph database technologies is accelerating at an unprecedented pace.
One of the most significant growth factors for the graph database market is the exponential increase in data complexity and volume. Traditional relational databases often struggle to efficiently handle highly connected data, which is becoming more prevalent in modern business environments. Graph databases excel at managing relationships between data points, making them ideal for applications such as fraud detection, social network analysis, and recommendation engines. The ability to visualize and query data relationships in real-time provides organizations with actionable insights, enabling faster and more informed decision-making. This capability is particularly valuable in sectors like BFSI, healthcare, and e-commerce, where understanding intricate data connections can lead to substantial competitive advantages.
Another key driver fueling market growth is the widespread digital transformation initiatives undertaken by enterprises worldwide. As businesses increasingly migrate to cloud-based infrastructures and adopt advanced analytics tools, the need for scalable and flexible database solutions becomes paramount. Graph databases offer seamless integration with cloud platforms, supporting both on-premises and cloud deployment models. This flexibility allows organizations to efficiently manage growing data workloads while ensuring security and compliance. Additionally, the proliferation of IoT devices and the surge in unstructured data generation further amplify the demand for graph database solutions, as they are uniquely equipped to handle dynamic and heterogeneous data sources.
The integration of artificial intelligence and machine learning with graph databases is also a pivotal growth factor. AI-driven analytics require robust data models capable of uncovering hidden patterns and relationships within vast datasets. Graph databases provide the foundational infrastructure for such applications, enabling advanced features like predictive analytics, anomaly detection, and personalized recommendations. As more organizations invest in AI-powered solutions to enhance customer experiences and operational efficiency, the adoption of graph database technologies is expected to surge. Furthermore, continuous advancements in graph processing algorithms and the emergence of open-source graph database platforms are lowering entry barriers, fostering innovation, and expanding the market’s reach.
From a regional perspective, North America currently dominates the graph database market, owing to the early adoption of advanced technologies and the presence of major industry players. However, the Asia Pacific region is anticipated to witness the highest growth rate over the forecast period, driven by rapid digitalization, increasing investments in IT infrastructure, and the rising demand for data-driven decision-making across emerging economies. Europe also holds a significant share, supported by stringent data privacy regulations and the growing emphasis on innovation across sectors such as finance, healthcare, and manufacturing. As organizations across all regions recognize the value of graph databases in unlocking business insights, the global market is poised for sustained growth.
The graph database market is broadly segmented by component into software and services.