Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The various performance criteria applied in this analysis include the probability of reaching the ultimate target, the costs, elapsed times and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic and statistical data entered by a user, and required for the evaluation of the criteria. It also reports the results of all the computations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
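For readers working outside Excel, a univariate scatterplot of the kind recommended here takes only a few lines of matplotlib. This is a minimal sketch with hypothetical small-sample data, not the paper's actual templates:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical small-sample measurements for two groups (n = 8 each)
control = np.array([4.1, 5.3, 4.8, 5.9, 4.4, 5.1, 4.7, 5.5])
treated = np.array([6.0, 5.2, 6.8, 5.7, 6.3, 7.1, 5.9, 6.5])

fig, ax = plt.subplots()
for i, values in enumerate([control, treated]):
    # Jitter x-positions slightly so overlapping points stay visible
    x = np.random.normal(i, 0.04, size=values.size)
    ax.scatter(x, values, alpha=0.7)
    # Overlay the group median as a horizontal bar
    ax.hlines(np.median(values), i - 0.2, i + 0.2, color="black")

ax.set_xticks([0, 1])
ax.set_xticklabels(["Control", "Treated"])
ax.set_ylabel("Measured value (units)")
plt.show()
```

Unlike a bar graph, every observation remains visible, so readers can judge the underlying distribution directly.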
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
Vrinda Store: Interactive MS Excel dashboard (Feb 2024 - Mar 2024). The owner of Vrinda Store wants an annual sales report for 2022 so that employees can understand their customers and grow sales further. The questions asked by the owner are as follows: 1) Compare the sales and orders using a single chart. 2) Which month had the highest sales and orders? 3) Who purchased more in 2022: women or men? 4) What were the different order statuses in 2022?
The owner also asked other business-related questions and wanted a visual story of the data, one that depicts real-time progress and sales insights for the store. This project is an MS Excel dashboard that presents an interactive visual story to help the owner and employees increase sales. Tasks performed: data cleaning, data processing, data analysis, data visualization, reporting. Tool used: MS Excel. Skills: Data Analysis · Data Analytics · MS Excel · Pivot Tables
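The owner's first few questions reduce to simple group-by aggregations over the order table. A minimal pandas sketch with hypothetical column names (the actual workbook layout may differ):

```python
import pandas as pd

# Hypothetical columns mirroring the store's 2022 order data
df = pd.DataFrame({
    "Month": ["Jan", "Jan", "Feb", "Mar", "Mar"],
    "Amount": [1200, 800, 1500, 900, 1100],
    "Gender": ["Women", "Men", "Women", "Women", "Men"],
    "Status": ["Delivered", "Returned", "Delivered", "Delivered", "Cancelled"],
})

# Q1/Q2: sales (sum of Amount) and orders (row count) per month
monthly = df.groupby("Month").agg(Sales=("Amount", "sum"), Orders=("Amount", "size"))
print(monthly.sort_values("Sales", ascending=False))

# Q3: who purchased more, women or men?
print(df.groupby("Gender")["Amount"].sum())

# Q4: the distinct order statuses and their counts
print(df["Status"].value_counts())
```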
Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study; the year, 2010 or 2011, is the year the data were collected; SH indicates data for Science Hub; the date is the date of file preparation). The data in a file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in this file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. Environmental Toxicology and Chemistry, Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813, (2017).
These datasets contain all the data used to make the figures in the associated paper. The Excel files are self-explanatory and can be used directly, while the other files, in NetCDF format, need a visualization tool (such as VERDI) or statistical software (such as R) to produce statistical summaries or plots. Portions of this dataset are inaccessible because the data will be uploaded when the paper is accepted by the journal. They can be accessed through the following means: for the Excel files, the data can be used directly to make summaries or plots; for the NetCDF files, a visualization tool or statistical package (such as R) can be used, and all NetCDF files can be visualized with VERDI. Format: two types of data formats, the self-explanatory Excel files and the NetCDF files used to make the spatial plots in the paper. This dataset is associated with the following publication: Kang, D., J. Willison, G. Sarwar, M. Madden, C. Hogrefe, R. Mathur, B. Gantt, and S. Alfonso. Improving the Characterization of the Natural Emissions in CMAQ. EM Magazine, Air and Waste Management Association, Pittsburgh, PA, USA, (10): 1-7, (2021).
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
This project involves the creation of an interactive Excel dashboard for SwiftAuto Traders to analyze and visualize car sales data. The dashboard includes several visualizations to provide insights into car sales, profits, and performance across different models and manufacturers. The project makes use of various charts and slicers in Excel for the analysis.
Objective: The primary goal of this project is to showcase the ability to manipulate and visualize car sales data effectively using Excel. The dashboard aims to provide:
- Profit and Sales Analysis for each dealer.
- Sales Performance across various car models and manufacturers.
- Resale Value Analysis comparing prices and resale values.
- Insights into Retention Percentage by car model.

Files in this Project:
- Car_Sales_Kaggle_DV0130EN_Lab3_Start.xlsx: the original dataset used to create the dashboard.
- dashboards.xlsx: the final Excel file that contains the complete dashboard with interactive charts and slicers.

Key Visualizations:
- Average Price and Year Resale Value: a bar chart comparing the average price and resale value of various car models.
- Power Performance Factor: a column chart displaying performance across different car models.
- Unit Sales by Model: a donut chart showcasing unit sales by car model.
- Retention Percentage: a pie chart illustrating customer retention by car model.

Tools Used:
- Microsoft Excel for creating and organizing the visualizations and dashboard.
- Excel slicers for interactive filtering.
- Charts: bar charts, pie charts, column charts, and sunburst charts.

How to Use:
- Download the dataset: download the Car_Sales_Kaggle_DV0130EN_Lab3_Start.xlsx file from Kaggle and follow the steps to create a similar dashboard in Excel.
- Open the dashboard: the dashboards.xlsx file contains the final version of the dashboard. Simply open it in Excel and start exploring the interactive charts and slicers.
Students typically find linear regression analysis of data sets in a biology classroom challenging. These activities could be used in a biology, chemistry, mathematics, or statistics course. The collection provides student activity files with Excel instructions, and instructor activity files with Excel instructions and solutions to the problems.
Students will be able to perform linear regression analysis, find the correlation coefficient, create a scatter plot, and find the r-squared value using MS Excel 365. Students will be able to interpret data sets, describe the relationship between biological variables, and predict the value of an output variable based on the input of a predictor variable.
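The same fit, correlation coefficient, r-squared, and prediction can be reproduced outside Excel. A minimal SciPy sketch with made-up data (not the activity files' actual data sets):

```python
from scipy import stats

# Hypothetical biology data: substrate concentration vs. reaction rate
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

result = stats.linregress(x, y)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
print(f"correlation coefficient r = {result.rvalue:.3f}")
print(f"r-squared = {result.rvalue**2:.3f}")

# Predict the output for a new input, as students would with the fitted line
x_new = 7.0
print(f"predicted y at x = {x_new}: {result.intercept + result.slope * x_new:.2f}")
```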
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Civil and geological engineers have used field variable-head permeability tests (VH tests or slug tests) for over a century to assess the local hydraulic conductivity of tested soils and rocks. The water level in the pipe or riser casing reaches, after some rest time, a static position or elevation, z2. Then, the water level position is changed rapidly, by adding or removing some water volume, or by inserting or removing a solid slug. Afterward, the water level position or elevation z1(t) is recorded vs. time t, yielding a difference in hydraulic head or water column defined as Z(t) = z1(t) - z2. The water level at rest is assumed to be the piezometric level or PL for the tested zone, before drilling a hole and installing test equipment. All equations use Z(t) or Z*(t) = Z(t) / Z(t=0). The water-level response vs. time may be a slow return to equilibrium (overdamped test) or an oscillation back to equilibrium (underdamped test). This document deals exclusively with overdamped tests. Their data may be analyzed using several methods, known to yield different results for the hydraulic conductivity. The methods fit in three groups: group 1 neglects the influence of the solid matrix strain, group 2 is for tests in aquitards with delayed strain caused by consolidation, and group 3 takes into account some elastic and instant solid matrix strain. This document briefly explains what is wrong with certain theories and why. It shows three ways to plot the data, which are the three diagnostic graphs. According to experience with thousands of tests, most test data are biased by an incorrect estimate z2 of the piezometric level at rest. The derivative or velocity plot does not depend upon this assumed piezometric level, but can verify its correctness. The document presents experimental results and explains the three-diagnostic-graphs approach, which unifies the theories and, most important, yields a user-independent result. Two free spreadsheet files are provided. The spreadsheet "Lefranc-Test-English-Model" follows the Canadian standards and is used to explain how to treat the test data correctly to reach a user-independent result. The user does not modify this model spreadsheet but can make as many copies as needed, with different names; the user can treat any other data set in a copy, and can also modify any copy if needed. The second Excel spreadsheet contains several sets of data that can be used to practice with copies of the model spreadsheet.
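The normalization Z*(t) = Z(t) / Z(t=0) and the velocity (derivative) diagnostic described above are straightforward to compute. A minimal Python sketch with a hypothetical overdamped record, where z2 is the user's assumed rest level:

```python
import numpy as np

# Hypothetical overdamped slug-test record: water level z1(t) in metres
t = np.array([0.0, 30.0, 60.0, 120.0, 240.0, 480.0])   # seconds
z1 = np.array([2.50, 2.31, 2.15, 1.90, 1.58, 1.27])    # level readings
z2 = 1.05                                               # assumed static (rest) level

Z = z1 - z2        # head difference Z(t) = z1(t) - z2
Zstar = Z / Z[0]   # normalized head Z*(t) = Z(t) / Z(t=0)

# Velocity plot: dZ/dt against Z. Because the derivative does not depend on
# the assumed z2, a systematic bias in z2 reveals itself on this graph.
dZdt = np.gradient(Z, t)
for Zi, vi in zip(Z, dZdt):
    print(f"Z = {Zi:.3f} m, dZ/dt = {vi:.2e} m/s")
```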
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sheet 1 (Raw-Data): The raw data of the study are provided, presenting the tagging results for the measures described in the paper. For each subject, the sheet includes multiple columns:
A. a sequential student ID
B. an ID that defines a random group label and the notation
C. the notation used: user stories or use cases
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade, L/M/H, where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L is otherwise
G. the total number of classes in the student's conceptual model
H. the total number of relationships in the student's conceptual model
I. the total number of classes in the expert's conceptual model
J. the total number of relationships in the expert's conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see the tagging scheme below)
P. the researchers' judgement of how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (a vague indication of the mapping), or not present.
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.
All the calculations and information provided in the following sheets
originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection,
including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes within the student model divided by the number of classes within the expert model is calculated (describing the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness, respectively. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the number of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. Completeness is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the number of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
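Both ratios are simple functions of the four tag counts per subject. A minimal Python sketch with hypothetical counts:

```python
def correctness(al: int, om: int, so: int, wr: int) -> float:
    # Aligned classes over all classified classes (AL + OM + SO + WR)
    return al / (al + om + so + wr)

def completeness(al: int, wr: int, om: int) -> float:
    # Represented classes (aligned or wrong) over expert-model classes (AL + WR + OM)
    return (al + wr) / (al + wr + om)

# Hypothetical subject: 12 aligned, 3 omitted, 2 system-oriented, 3 wrong
print(f"correctness  = {correctness(12, 3, 2, 3):.2f}")   # 12 / 20 = 0.60
print(f"completeness = {completeness(12, 3, 3):.2f}")     # 15 / 18 = 0.83
```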
For sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and moderating variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided. Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html (a Python equivalent is sketched after the sheet list below). The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation: UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case: SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process is explained: well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
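The sheet-level statistics can be reproduced without the online tool. A minimal NumPy/SciPy sketch of the t-test and Hedges' g (small-sample-corrected Cohen's d), using hypothetical group scores rather than the study's actual data:

```python
import numpy as np
from scipy import stats

def hedges_g(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = a.size, b.size
    # Pooled standard deviation (the denominator of Cohen's d)
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / sp
    # Small-sample bias correction turning d into Hedges' g
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical correctness scores for two notation groups (UC vs. US)
uc = [0.62, 0.71, 0.55, 0.68, 0.60]
us = [0.58, 0.52, 0.49, 0.61, 0.54]
t_stat, p_value = stats.ttest_ind(uc, us)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, Hedges' g = {hedges_g(uc, us):.3f}")
```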
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset “Dataset: Flow manipulation in a Hele-Shaw cell with an electrically-controlled viscous obstruction” consists of raw time-averaged images, generated from sequences of 100 frames extracted from experimental videos captured at various voltages (5 V, 10 V, 15 V, 20 V, and 50 V) and saved as .tif files. These images were analysed to produce the data used in figures 2 and 3 of the article. The dataset also includes two Excel files, “Figure 2_Experimental data.xlsx” and “Figure 3_Experimental data.xlsx”, which contain the data used to create the experimental plots shown in Figure 2C and Figure 3 of the research article, respectively.
In the “Figure 2C_Experimental Data.xlsx” file, each sheet corresponds to a different voltage value shown in the figure and contains three columns, A, B, and C, which represent the X-location, Y-location, and orientation angle (in degrees) of the experimental plot (red rods in the figure), respectively. This plot is overlaid on the model data (black rods in the figure) and displayed in Figure 2C of the article.
The “Figure 3_Experimental data.xlsx” file contains three sheets for each voltage (5 V, 10 V, 15 V, 20 V, and 50 V); these sheets provide data at three different X-locations (X = 579, X = 1079, and X = 1779) as a function of Y-location, as shown in Figure 3 of the article. Each sheet has five columns, A through E, which represent the X-location, Y-location, orientation angle (in degrees), coherency, and error in the orientation angle (in degrees), respectively. These data points are used to create the experimental scatter plot shown in Figure 3 of the article.
Dr. Kevin Bronson provides a unique nitrogen and water management dataset from cotton agricultural research, including notation of field events and operations, an intermediate-analysis mega-table of correlated and calculated parameters, laboratory analysis results generated during the experimentation, high-resolution plot-level intermediate data analysis tables of SAS process output, and the complete raw sensor-recorded logger outputs. The data were collected using a Hamby rig as a high-throughput proximal plant phenotyping platform.

The Hamby 6000 rig: Ellis W. Chenault & Allen F. Wiese. (1989). Construction of a High-Clearance Plot Sprayer. Weed Technology, 3(4), 659-662. http://www.jstor.org/stable/3987560

Dr. Bronson modified an old high-clearance Hamby 6000 rig, adding a tank and pump with a rear boom, to perform precision liquid N applications. A Raven control unit with GPS supplied variable-rate delivery options. The 12-volt Holland Scientific GeoScoutX data recorder and associated CropCircle ACS-470 sensors with GPS signal were easy to mount and run on the vehicle as an attached rugged data-acquisition module, and allowed measuring plants using custom proximal active optical reflectance sensing. The HS data logger was positioned near the operator, and the sensors were positioned in front of the rig, on forward-protruding armature attached to a hydraulic front boom assembly, facing downward in nadir view 1 m above the average canopy height. A 34-size-class AGM battery sat under the operator and provided the data system's electrical power supply.

Data suffered reduced input from Conley. Although every effort was made to capture adequate quality across all metrics, external experimental considerations were such that canopy temperature data are absent, and canopy height data are weak due to technical underperformance. Reflectance data quality was maintained or improved through the implementation of new hardware by Bronson. See the included README file for operational details and further description of the measured data signals.

Summary: Active optical proximal cotton canopy sensing spatial data are presented, including a few additional related metrics and a weak low-frequency ultrasonic-derived height. Agronomic nitrogen and irrigation management related field operations are listed. A unique research experimentation intermediate-analysis table is made available, along with the raw data. The raw data recordings and annotated table outputs with calculated vegetation indices (VIs) are provided. Plot polygon coordinate designations allow a re-intersection spatial analysis. Data were collected in the 2014 season at Maricopa Agricultural Center, Arizona, USA. A high-throughput proximal plant phenotyping approach, via electronic sampling and data processing, is exemplified using a modified high-clearance Hamby spray rig. The acquired data conform to standard plant-phenotyping methodologies for the location. SAS and GIS processing output tables, including Excel-formatted examples, are presented, where data tabulation and analysis are available. Additional explanation of the ultrasonic data signal is offered as annotated time-series charts. The weekly proximal sensing data collected include the primary canopy reflectance at six wavelengths. Lint and seed yields, first open boll biomass, and nitrogen uptake were also determined. Soil profile nitrate to 1.8 m depth was determined in 30-cm increments, before planting and after harvest.
Nitrous oxide emissions were determined with 1-L vented chambers (samples taken at 0, 12, and 24 minutes). Nitrous oxide was determined by gas chromatography (electron capture detector).
Overview: The SUMR-D CART2 turbine data are recorded by the CART2 wind turbine's supervisory control and data acquisition (SCADA) system for the Advanced Research Projects Agency-Energy (ARPA-E) SUMR-D project located at the National Renewable Energy Laboratory (NREL) Flatirons Campus. For the project, the CART2 wind turbine was outfitted with a highly flexible rotor specifically designed and constructed for the project. More details about the project can be found here: https://sumrwind.com/. The data include power, loads, and meteorological information from the turbine during startup, operation, and shutdown, and when it was parked and idle.

Data Details: The following additional files are attached:
- sumr_d_5-Min_Database.mat: a database file in MATLAB format for this dataset, which can be used to search for desired data files;
- sumr_d_5-Min_Database.xlsx: a database file in Microsoft Excel format for this dataset, which can be used to search for desired data files;
- loadcartU.m: loads a CART data file into your workspace as a MATLAB matrix (you can call this script from your own MATLAB scripts to do your own analysis);
- charts.mat: a dependency file needed by the other scripts (it allows you to make custom preselections for cartPlotU.m);
- cartLoadHdrU.m: loads the header information for a data file (the header is embedded at the beginning of each data file);
- cartPlotU.m: a graphical user interface (GUI) that allows you to interactively look at different channels. To use it, run the script in MATLAB and load the data file(s) of interest; from there, you can select different channels and plot them against each other. Note that this script has issues with later versions of MATLAB; the preferred version is R2011b.

Data Quality: Wind turbine blade loading data were calibrated using blade gravity calibrations prior to data collection and throughout the data collection period. Blade loading was also checked for data quality following data collection, as strain gauge measurements drifted throughout the collection; these drifts were removed in post-processing.
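For users working outside MATLAB, the .mat database can also be inspected from Python. A minimal sketch, assuming the file is a pre-v7.3 MAT file (which scipy.io.loadmat supports; a v7.3 file would need h5py instead):

```python
from scipy.io import loadmat

# Load the 5-minute database file distributed with the dataset.
# The variable names inside the file are not documented here, so list
# the non-metadata keys first before working with the contents.
db = loadmat("sumr_d_5-Min_Database.mat")
print([key for key in db if not key.startswith("__")])
```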
https://www.technavio.com/content/privacy-notice
Graph Database Market Size 2025-2029
The graph database market is projected to grow by USD 11.24 billion at a CAGR of 29% from 2024 to 2029. The growing popularity of open knowledge networks will drive the graph database market.
Market Insights
North America dominated the market and is expected to account for 46% of the market's growth during 2025-2029.
By End-user - Large enterprises segment was valued at USD 1.51 billion in 2023
By Type - RDF segment accounted for the largest market revenue share in 2023
Market Size & Forecast
Market Opportunities: USD 670.01 million
Market Future Opportunities 2024: USD 11,235.10 million
CAGR from 2024 to 2029: 29%
Market Summary
The market is experiencing significant growth due to the increasing demand for low-latency query capabilities and the ability to handle complex, interconnected data. Graph databases are deployed in both on-premises data centers and cloud regions, providing flexibility for businesses with varying IT infrastructures. One real-world business scenario where graph databases excel is in supply chain optimization. In this context, graph databases can help identify the shortest path between suppliers and consumers, taking into account various factors such as inventory levels, transportation routes, and demand patterns. This can lead to increased operational efficiency and reduced costs.
However, the market faces challenges such as the lack of standardization and programming flexibility. Graph databases, while powerful, require specialized skills to implement and manage effectively. Additionally, the market is still evolving, with new players and technologies emerging regularly. Despite these challenges, the potential benefits of graph databases make them an attractive option for businesses seeking to gain a competitive edge through improved data management and analysis.
What will be the size of the Graph Database Market during the forecast period?
The market is an evolving landscape, with businesses increasingly recognizing the value of graph technology for managing complex and interconnected data. According to recent research, the adoption of graph databases is projected to grow by over 20% annually, surpassing traditional relational databases in certain use cases. This trend is particularly significant for industries requiring advanced data analysis, such as finance, healthcare, and telecommunications. Compliance is a key decision area where graph databases offer a competitive edge. By modeling data as nodes and relationships, organizations can easily trace and analyze interconnected data, ensuring regulatory requirements are met. Moreover, graph databases enable real-time insights, which is crucial for budgeting and product strategy in today's fast-paced business environment.
Graph databases also provide superior performance compared to traditional databases, especially in handling complex queries involving relationships and connections. This translates to significant time and cost savings, making it an attractive option for businesses seeking to optimize their data management infrastructure. In conclusion, the market is experiencing robust growth, driven by its ability to handle complex data relationships and offer real-time insights. This trend is particularly relevant for industries dealing with regulatory compliance and seeking to optimize their data management infrastructure.
Unpacking the Graph Database Market Landscape
In today's data-driven business landscape, the adoption of graph databases has surged due to their unique capabilities in handling complex network data modeling. Compared to traditional relational databases, graph databases offer a significant improvement in query performance for intricate relationship queries, with some reports suggesting up to a 500% improvement in query response times. Furthermore, graph databases enable efficient data lineage tracking, ensuring regulatory compliance and enhancing data version control. Graph databases, such as property graph models and RDF databases, facilitate node relationship management and real-time graph processing, making them indispensable for industries like finance, healthcare, and social media. With the rise of distributed and knowledge graph databases, organizations can achieve scalability and performance improvements, handling massive datasets with ease. Security, indexing, and deployment are essential aspects of graph databases, ensuring data integrity and availability. Query performance tuning and graph analytics libraries further enhance the value of graph databases in data integration and business intelligence applications. Ultimately, graph databases offer a powerful alternative to NoSQL databases, providing a more flexible and efficient approach to managing complex data relationships.
Key Market Drivers Fueling Growth
The growing popularity of open knowledge networks is a key driver of market growth.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Introduction: Preservation and management of semi-arid ecosystems requires understanding of the processes involved in soil erosion and their interaction with the plant community. Rainfall simulations on natural plots provide an effective way of obtaining a large amount of erosion data under controlled conditions in a short period of time. This dataset contains hydrological (rainfall, runoff, flow velocity), erosion (sediment concentration and rate), vegetation (plant cover), and other supplementary information from 272 rainfall simulation experiments conducted at 23 rangeland locations in Arizona and Nevada between 2002 and 2013. The dataset advances our understanding of the basic hydrological and biological processes that drive soil erosion on arid rangelands. It can be used to quantify runoff, infiltration, and erosion rates on a variety of ecological sites in the Southwestern USA. Inclusion of wildfire and brush treatment locations, combined with long-term observations, makes it important for studying vegetation recovery, ecological transitions, and the effect of management. It is also a valuable resource for erosion model parameterization and validation.

Instrumentation: Rainfall was generated by a portable, computer-controlled, variable-intensity simulator (Walnut Gulch Rainfall Simulator). The WGRS can deliver rainfall rates between 13 and 178 mm/h with a variability coefficient of 11% across a 2 by 6.1 m area. The estimated kinetic energy of the simulated rainfall was 204 kJ/ha/mm, and drop size ranged from 0.288 to 7.2 mm. A detailed description and design of the simulator is available in Stone and Paige (2003). Prior to each field season the simulator was calibrated over a range of intensities using a set of 56 rain gauges. During the experiments, windbreaks were set up around the simulator to minimize the effect of wind on rain distribution. On some of the plots, in addition to the rainfall-only treatment, run-on flow was applied at the top edge of the plot; the purpose of the run-on water application was to simulate hydrological processes that occur on longer slopes (>6 m), where the upper portion of the slope contributes runoff onto the lower portion. Runoff rate from the plot was measured using a calibrated V-shaped supercritical flume equipped with a depth gauge. Overland flow velocity on the plots was measured using electrolyte and fluorescent dye solutions. Dye moving from the application point at 3.2 m distance to the outlet was timed with a stopwatch. Electrolyte transport in the flow was measured by resistivity sensors embedded in the edge of the outlet flume. Maximum flow velocity was defined as the velocity of the leading edge of the solution; it was determined from the beginning of the electrolyte breakthrough curve and verified by visual observation (dye). Mean flow velocity was calculated using the mean travel time obtained from the electrolyte breakthrough curve via the moment equation (a numerical sketch of this calculation follows the resource list below). Soil loss from the plots was determined from runoff samples collected during each run. The sampling interval was variable and aimed to represent the rising and falling limbs of the hydrograph, any changes in runoff rate, and steady-state conditions; this resulted in approximately 30 to 50 samples per simulation. Shortly before every simulation, plot surface and vegetative cover were measured on a 400-point grid using a laser and the line-point intercept procedure (Herrick et al., 2005). Vegetative cover was classified as forbs, grass, and shrub. Surface cover was characterized as rock, litter, plant basal area, and bare soil. These four metrics were further classified as protected (located under plant canopy) and unprotected (not covered by the canopy). In addition, plant canopy and basal-area gaps were measured on the plots over three lengthwise and six crosswise transects.

Experimental procedure: Four to eight replicated 6.1 m by 2 m rainfall simulation plots were established on each site. The plots were bounded by sheet-metal borders hammered into the ground on three sides; on the downslope side, a collection trough was installed to channel runoff into the measuring flume. If a site was revisited, repeat simulations were always conducted on the same long-term plots. The experimental procedure was as follows. First, the plot was subjected to 45 min of 65 mm/h simulated rainfall (dry run), intended to create an initial saturated condition that could be replicated across all sites. This was followed by a 45-minute pause and a second simulation with varying intensity (wet run). During wet runs, two modes of water application were used: rainfall or run-on. Rainfall wet runs typically consisted of a series of application rates (65, 100, 125, 150, and 180 mm/h) that were increased after runoff had reached steady state for at least five minutes. Runoff samples were collected on the rising and falling limbs of the hydrograph and during each steady state (a minimum of 3 samples). Overland flow velocities were measured during each steady state as previously described. When used, run-on wet runs followed the same procedure as rainfall runs, except that water application rates varied between 100 and 300 mm/h. In approximately 20% of the simulation experiments, the wet run was followed by another simulation (wet2 run) after a 45-minute pause. Wet2 runs were similar to wet runs and also consisted of a series of varying-intensity rainfalls and/or run-on inputs.

Resulting Data: The dataset contains hydrological, erosion, vegetation, and ecological data from 272 rainfall simulation experiments conducted on 12 sq. m plots at 23 rangeland locations in Arizona and Nevada. The experiments were conducted between 2002 and 2013, with some locations revisited multiple times. Resources in this dataset:
- Appendix A. Data dictionary (Data dictionary.csv): explanation of terms and units. Recommended software: Microsoft Excel (https://products.office.com/en-us/excel) or Microsoft Access (https://products.office.com/en-us/access).
- Appendix B. Lists of sites and general information (Rainfall Simulation Sites Summary.xlsx): a table listing the rainfall simulation sites and individual plots, their coordinates, and their topographic, soil, ecological, and vegetation characteristics, plus the dates of the simulation experiments; the sites are grouped by common geographic area. Recommended software: Microsoft Excel.
- Appendix C. Rainfall simulations (Rainfall simulation.csv): rainfall, runoff, sediment, and flow velocity data from the rainfall simulation experiments; please see Appendix C (revised) for data with errors corrected (11/27/2017). Recommended software: Microsoft Excel or Microsoft Access.
- Appendix C. Rainfall simulations (revised) (Rainfall simulation (R11272017).csv): rainfall, runoff, sediment, and flow velocity data from the rainfall simulation experiments (updated 11/27/2017). Recommended software: Microsoft Access.
- Appendix D. Ground and vegetation cover (Plot Ground and Vegetation Cover.csv): ground (rock, litter, basal, bare soil) cover, foliar cover, and basal gap on the plots immediately prior to the simulation experiments. Recommended software: Microsoft Excel or Microsoft Access.
- Appendix E. Simulation sites map (Rainfall Simulator Sites Map.zip): map of the rainfall simulation sites with embedded images in Google Earth. Recommended software: Google Earth (https://www.google.com/earth/).
- Appendix F. Site pictures (Site photos.zip): pictures of the rainfall simulation sites and plots.
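The moment-equation step mentioned under Instrumentation can be sketched numerically: the mean travel time is the first temporal moment of the breakthrough curve, and the mean velocity is the travel distance divided by that time. The curve below is hypothetical; the 3.2 m distance mirrors the dye-timing setup:

```python
import numpy as np

# Hypothetical electrolyte breakthrough curve at the outlet flume:
# resistivity-derived concentration C(t) sampled at times t (seconds)
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
C = np.array([0.0, 0.1, 0.8, 1.0, 0.6, 0.2, 0.0])

distance = 3.2  # m, application point to outlet (as in the dye timing)

# First temporal moment of the curve gives the mean travel time
mean_time = np.trapz(t * C, t) / np.trapz(C, t)
mean_velocity = distance / mean_time
print(f"mean travel time = {mean_time:.2f} s, mean velocity = {mean_velocity:.3f} m/s")
```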
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
Analyzing Coffee Shop Sales: Excel Insights 📈
In my first data analytics project, I discover the secrets of a fictional coffee shop's success through data-driven analysis. By analyzing a 5-sheet Excel dataset, I uncovered valuable sales trends, customer preferences, and insights that can guide future business decisions. 📊☕
DATA CLEANING 🧹
• REMOVED DUPLICATES OR IRRELEVANT ENTRIES: Thoroughly eliminated duplicate records and irrelevant data to refine the dataset for analysis.
• FIXED STRUCTURAL ERRORS: Rectified any inconsistencies or structural issues within the data to ensure uniformity and accuracy.
• CHECKED FOR DATA CONSISTENCY: Verified the integrity and coherence of the dataset by identifying and resolving any inconsistencies or discrepancies.
DATA MANIPULATION 🛠️
• UTILIZED LOOKUPS: Used Excel's lookup functions for efficient data retrieval and analysis.
• IMPLEMENTED INDEX MATCH: Leveraged the Index Match function to perform advanced data searches and matches.
• APPLIED SUMIFS FUNCTIONS: Utilized SumIFs to calculate totals based on specified criteria.
• CALCULATED PROFITS: Used relevant formulas and techniques to determine profit margins and insights from the data (a pandas sketch of these operations follows this list).
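For readers outside Excel, the lookup, SUMIFS, and profit steps above map directly onto pandas operations. A minimal sketch with hypothetical columns (the real workbook's sheet layout may differ):

```python
import pandas as pd

# Hypothetical transactions sheet mirroring the dashboard's source data
sales = pd.DataFrame({
    "product": ["Latte", "Espresso", "Mocha", "Latte"],
    "hour": [8, 8, 15, 9],
    "quantity": [2, 1, 1, 3],
    "unit_price": [4.5, 3.0, 5.0, 4.5],
})
costs = pd.DataFrame({"product": ["Latte", "Espresso", "Mocha"],
                      "unit_cost": [1.5, 1.0, 2.0]})

# VLOOKUP / INDEX-MATCH equivalent: join unit costs onto each sale
sales = sales.merge(costs, on="product", how="left")

# SUMIFS equivalent: revenue summed per product, subject to a criterion
sales["revenue"] = sales["quantity"] * sales["unit_price"]
morning = sales.loc[sales["hour"] < 12].groupby("product")["revenue"].sum()
print(morning)

# Profit calculation per line item
sales["profit"] = (sales["unit_price"] - sales["unit_cost"]) * sales["quantity"]
print(sales[["product", "profit"]])
```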
PIVOTING THE DATA 𝄜
• CREATED PIVOT TABLES: Utilized Excel's PivotTable feature to pivot the data for in-depth analysis.
• FILTERED DATA: Utilized pivot tables to filter and analyze specific subsets of data, enabling focused insights; this was used especially for the “PEAK HOURS” and “TOP 3 PRODUCTS” charts.
VISUALIZATION 📊
• KEY INSIGHTS: Unveiled the grand total sales revenue while also analyzing the average bill per person, offering comprehensive insights into the coffee shop's performance and customer spending habits.
• SALES TREND ANALYSIS: Used a line chart to plot total sales across various time intervals, revealing valuable insights into evolving sales trends.
• PEAK HOUR ANALYSIS: Leveraged a clustered column chart to identify peak sales hours, shedding light on optimal operating times and potential staffing needs.
• TOP 3 PRODUCTS IDENTIFICATION: Utilized a clustered bar chart to determine the top three coffee types, facilitating strategic decisions regarding inventory management and marketing focus.
*I also used a Timeline to visualize chronological data trends and identify key patterns over specific times.
While it's a significant milestone for me, I recognize that there's always room for growth and improvement. Your feedback and insights are invaluable to me as I continue to refine my skills and tackle future projects. I'm eager to hear your thoughts and suggestions on how I can make my next endeavor even more impactful and insightful.
THANKS TO: WsCube Tech, Mo Chen, Alex Freberg
TOOLS USED: Microsoft Excel
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?:
This data set contains all the experimental raw data, analysis, and source files for the final figures reported in the manuscript: "Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?". It is divided into numbered zipped folders, named after the technique used to obtain the data. Each of them, where applicable, consists of three different subfolders (raw data, analysed data, final graph). Read below for more details.
1) ConfocalMicroscopy
1a) Raw_Data: the raw images are reported as .dat and .tif formats, divided into folders (according to date first yymmdd, and within the same day according to composition). Each folder contains a .txt file reporting the experimental details
1b) GUVs_Statistics - GUVs_Statistics.txt explains how we generated the bar plot shown in Fig. 1E
1c) Final_Graph - Figure_1B_1D.png shows figures 1B and 1D - Figure1E_%ofGUVswithCaMAdsorbptions.csv is the x-y source file of the bar plot shown in figure 1E (% of GUVs which showed adsorption of CaM over the total number of measured GUVs) - Where_To_Find_Representative_Images.txt states the folders where the raw images chosen for figure 1 can be found
2) FCS 2a) Raw_Data: - 1_points: .ptu files - 2_points: .ht3 files - Raw_Data_Description.docx which compositions and conditions correspond to which point in the two data sets 2b) Final_Graphs: - Figure_2A.xlsx contains the x-y source file for figure 2A
2c) Analysis: - FCS_Fits.xlsx outcome of the global fitting procedure described in the .docx below (each group of points represents a certain composition and calcium concentration, read the Raw_Data_Description.docx in the FCS > Raw_Data) - Notes_for_FCS_Analysis.docx contains a brief description of the analysis of the autocorrelation curves
3) GPLaurdan 3a) Raw Data: all the spectra are stored in folders named by date (yymmdd_lipidcomposition_Laurdan) and are in both .FS and .txt formats
3b) GP calculations: contains all the .xlsx files calculating the GP values from the raw emission and excitation spectra
3c) Final_Graphs - Data_Processing_For_Fig_2D.csv contains the data processing from the GP values calculated from the spectra to the DeltaGP (GP with- GP without CaM) reported in fig. 2D - Figure_2C_2D.xlsx contains the x-y source file for the figure 2C and 2D
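As context for the GP workbooks: Laurdan generalized polarization is conventionally computed from the emission intensities near 440 and 490 nm, and the ΔGP of Fig. 2D is the difference between the with-CaM and without-CaM values. A minimal Python sketch with hypothetical intensities (the workbook's exact wavelength windows may differ):

```python
def laurdan_gp(i_440: float, i_490: float) -> float:
    # Generalized polarization from emission intensities at ~440 and ~490 nm
    return (i_440 - i_490) / (i_440 + i_490)

# Hypothetical intensities with and without CaM for one lipid composition
gp_without = laurdan_gp(i_440=1250.0, i_490=980.0)
gp_with = laurdan_gp(i_440=1310.0, i_490=940.0)
delta_gp = gp_with - gp_without   # the DeltaGP reported in Fig. 2D
print(f"GP- = {gp_without:.3f}, GP+ = {gp_with:.3f}, deltaGP = {delta_gp:+.3f}")
```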
4) LiveCellsImaging
4a) Intensity_Protrusions_vs_Cell_Body: - contains all the .xlsx files calculating the intensities of the various images, with files named by date (yymmdd) - the data from all Excel sheets are gathered in another Excel file to create the final graph
4b) Final_Graphs - Figure_S2B.xlsx contains the x-y source file for figure S2B
5) LiveCellImaging_Raw_Data: it contains some of the images, which are given in .tif. They are divided by date (yymmdd) and each contains subfolders renamed by sample name, concentration of ionomycin. Within the subfolders, the images are divided into folders distinguishing the data acquired before and after the ionomycin treatment and the incubation time.
6) The 211124_BioCev_Imaging_1 folder has the .jpg files of the time lapses; these are shown in figures 1A and S2.
7) 211124_BioCev_Imaging_2 and 8) 211124_BioCev_Imaging_3 contain the images of HeLa cells expressing EGFP-CaM after treatment with ionomycin 200 nM (A1) and 1 uM (A2), respectively.
9) SPR
9a) Raw Data: - SPR_Raw_Data.xlsx: x-y exported sensorgrams - the .jpg files from the software are also included, named by lipid composition
9b) Final_Graph: - Fig.2B.xlsx contains the x-y source file for the figure 2B
9c) Analysis - SPR_Analysis.xlsx: an Excel file describing step by step (sheet by sheet) how we processed the raw data to obtain the final figure (details explained in the .docx below) - Analysis of SPR data_notes.docx: a read-me with the detailed explanation
https://dataintelo.com/privacy-and-policy
According to our latest research, the global market size for Graph Database Platforms for Supply Chain reached USD 1.54 billion in 2024, driven by the increasing complexity of supply chain operations and the need for advanced data analytics. The market is expected to grow at a robust CAGR of 22.1% from 2025 to 2033, reaching USD 8.03 billion by 2033. This significant growth is primarily fueled by the rising adoption of digital transformation initiatives across industries, coupled with the demand for real-time supply chain visibility and risk management capabilities.
One of the primary growth drivers for the Graph Database Platforms for Supply Chain market is the rapidly increasing complexity of global supply chains. As organizations expand their operations across borders and deal with a multitude of suppliers, logistics partners, and regulatory environments, traditional relational databases often fall short in capturing the intricate relationships and dependencies inherent in modern supply chains. Graph database platforms excel in visualizing and analyzing these connections, enabling companies to identify bottlenecks, mitigate risks, and optimize workflows. The ability to map out entire supply chain networks in real time allows businesses to make faster, more informed decisions, which is crucial in today’s volatile market environment where disruptions are frequent and costly.
Another significant factor propelling market growth is the surging demand for enhanced supply chain transparency and traceability. With increasing consumer expectations, stricter regulatory requirements, and the ongoing need to combat fraud and counterfeiting, companies are investing heavily in technologies that provide end-to-end visibility. Graph database platforms allow organizations to track the journey of goods and materials from origin to destination, facilitating compliance with industry standards and improving accountability. This capability is especially vital in industries such as food & beverage, pharmaceuticals, and automotive, where product recalls and quality issues can have severe financial and reputational consequences. The integration of graph database platforms with IoT devices and blockchain further amplifies their value, offering real-time insights and immutable records for every transaction and movement within the supply chain.
The growing emphasis on supply chain resilience and agility in response to global disruptions, such as the COVID-19 pandemic and geopolitical tensions, has also accelerated the adoption of graph database platforms. Organizations are increasingly recognizing the need to proactively identify vulnerabilities and simulate various scenarios to ensure business continuity. Graph databases facilitate advanced risk modeling and predictive analytics, empowering supply chain leaders to anticipate disruptions, evaluate alternative sourcing strategies, and maintain optimal inventory levels. As the frequency and impact of supply chain shocks continue to rise, the demand for intelligent platforms that can quickly adapt to changing conditions is expected to sustain the market’s momentum over the next decade.
From a regional perspective, North America currently dominates the Graph Database Platforms for Supply Chain market, accounting for the largest share in 2024. This dominance is attributed to the early adoption of advanced technologies, strong presence of key market players, and a mature supply chain ecosystem. However, Asia Pacific is projected to exhibit the fastest growth rate during the forecast period, driven by the rapid expansion of manufacturing and e-commerce sectors, increasing investments in digital infrastructure, and a growing focus on supply chain optimization. Europe also remains a significant market, supported by stringent regulatory standards and a strong emphasis on sustainability and risk management. The Middle East & Africa and Latin America are gradually emerging as promising markets, buoyed by rising industrialization and efforts to modernize supply chain operations.
The Graph Database Platforms for Supply Chain market is segmented by component into software and services. The software segment currently holds the largest share of the market, as organizations increasingly invest in robust platforms that can handle vast and complex datasets. These software solutions are designed to provide advanced analytics and visualization capabilities.
Complete annotations for the tabular data are presented below. Tab Fig 1: (A) The heatmap data of G protein family members in the hippocampal tissue of 6-month-old Wildtype (n = 6) and 5xFAD (n = 6) mice; (B) The heatmap data of G protein family members in the cortical tissue of 6-month-old Wildtype (n = 6) and 5xFAD (n = 6) mice; (C) The data in the overlapping part of the Venn diagram (132 elements); (D) The data information for creating the volcano plot; (E) The data information for creating the heatmap of GPCR-related DEGs; (F) Expression of Gnb5 in the large sample dataset GSE44772; Control, n = 303; AD, n = 387; (H) Statistical analysis of Gnb5 protein levels from panel G; Wildtype, n = 4; 5xFAD, n = 4; (J) Statistical analysis of Gnb5 protein levels from panel I; Wildtype, n = 4; 5xFAD, n = 4; (L) Quantitative analysis of Gnb5 fluorescence intensity in 5xFAD and Wildtype groups; Wildtype, n = 4; 5xFAD, n = 4. Tab Fig 2: (D) qPCR data of Gnb5 knockout in hippocampal tissue; Gnb5F/F, n = 6; Gnb5-CCKO, n = 6; (E–I, L–N) Animal behavioral tests in mice, Gnb5F/F, n = 22; Gnb5-CCKO, n = 16; (E) Total distance traveled in the open field experiment; (F) Training curve in the Morris water maze (MWM); (F-day6) Data from the sixth day of MWM training; (G) Percentage of time spent by the mouse in the target quadrant in the MWM; (H) Statistical analysis of the number of times the mouse traverses the target quadrant in the MWM; (I) Latency to first reach the target quadrant in the MWM; (L) Baseline freezing percentage of mice in an identical testing context; (M) Percentage of freezing time of mice during the Context phase; (N) Percentage of freezing time of mice during the Cue phase. Tab Fig 3: (D–F, H) MWM tests in mice; Wildtype+AAV-GFP, n = 20; Wildtype+AAV-Gnb5-GFP, n = 23; 5xFAD + AAV-GFP, n = 23; 5xFAD + AAV-Gnb5-GFP, n = 26; (D) Training curve in the MWM; (D-day6) Data from the sixth day of MWM training; (E) Percentage of time spent in the target quadrant in the MWM; (F) Statistical analysis of the number of entries in the target quadrant in the MWM; (H) Movement speed of mice in the MWM; (I–K) The contextual fear conditioning test in mice; 5xFAD + AAV-GFP, n = 23; 5xFAD + AAV-Gnb5-GFP, n = 26; (I) Baseline freezing percentage of mice in an identical testing context; (J) Percentage of freezing time of mice during the Context phase; (K) Percentage of freezing time of mice during the Cue phase; (L) Total distance traveled in the open field test; (M) Percentage of time spent in the center area during the open field test. Tab Fig 4: (B, C) Quantification of Aβ plaques in the hippocampus sections from Wildtype and 5xFAD mice injected with either AAV-Gnb5 or AAV-GFP; Wildtype+AAV-GFP, n = 4; Wildtype+AAV-Gnb5-GFP, n = 4; 5xFAD + AAV-GFP, n = 4; 5xFAD + AAV-Gnb5-GFP, n = 4; (B) Quantification of Aβ plaque number; (C) Quantification of Aβ plaque size; (F, G) Quantification of Aβ plaques from the indicated mouse lines; WT&Gnb5F/F&CamKIIa-CreERT+Vehicle, n = 4; 5xFAD&Gnb5F/F&CamKIIa-CreERT+Vehicle, n = 4; 5xFAD&Gnb5F/F&CamKIIa-CreERT+Tamoxifen, n = 4; (F) Quantification of Aβ plaque size; (G) Quantification of Aβ plaque number.
Tab Fig 5: (B) Overexpression of Gnb5-AAV in 5xFAD mice affects the expression of proteins related to APP cleavage (BACE1, β-CTF, Nicastrin and APP); Statistical analysis of protein levels; n = 4, respectively; (D) Tamoxifen-induced Gnb5 knockdown in 5xFAD mice affects APP-cleaving proteins; Statistical analysis of protein levels; n = 4, respectively; (F) Gnb5-CCKO mice show altered expression of APP-cleaving proteins; Statistical analysis of protein levels; n = 6, respectively. Tab Fig 7: (C, D) Quantification of Aβ plaques in the overexpressed full-length Gnb5, truncated fragments, and mutant truncated fragment AAV in 5xFAD mice; n = 4, respectively; (C) Quantification of Aβ plaques size; (D) Quantification of Aβ plaques number; (F) Effect of overexpressing full-length Gnb5, truncated fragments, and mutant truncated fragment viruses on the expression of proteins related to APP cleavage process in 5xFAD; Statistical analysis of protein levels; n = 3, respectively. (XLSX)
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
This dataset provides a dynamic Excel model for prioritizing projects based on Feasibility, Impact, and Size.
It visualizes project data on a Bubble Chart that updates automatically when new projects are added.
Use this tool to make data-driven prioritization decisions by identifying which projects are most feasible and high-impact.
Organizations often struggle to compare multiple initiatives objectively.
This matrix helps teams quickly determine which projects to pursue first by visualizing:
Example (partial data):
| Criteria | Project 1 | Project 2 | Project 3 | Project 4 | Project 5 | Project 6 | Project 7 | Project 8 |
|---|---|---|---|---|---|---|---|---|
| Feasibility | 7 | 9 | 5 | 2 | 7 | 2 | 6 | 8 |
| Impact | 8 | 4 | 4 | 6 | 6 | 7 | 7 | 7 |
| Size | 10 | 2 | 3 | 7 | 4 | 4 | 3 | 1 |
| Quadrant | Description | Action |
|---|---|---|
| High Feasibility / High Impact | Quick wins | Top Priority |
| High Impact / Low Feasibility | Valuable but risky | Plan carefully |
| Low Impact / High Feasibility | Easy but minor value | Optional |
| Low Impact / Low Feasibility | Low return | Defer or drop |
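The quadrant assignment in the table above can be reproduced programmatically. A minimal pandas sketch using the example data; the midpoint cutoff of 5 on the 1-10 scale is an assumption, and the workbook may use different thresholds:

```python
import pandas as pd

# The example data from the table above
projects = pd.DataFrame({
    "Project": [f"Project {i}" for i in range(1, 9)],
    "Feasibility": [7, 9, 5, 2, 7, 2, 6, 8],
    "Impact": [8, 4, 4, 6, 6, 7, 7, 7],
    "Size": [10, 2, 3, 7, 4, 4, 3, 1],   # bubble size on the chart
})

def quadrant(row, cutoff=5):
    # Scores above the midpoint of the 1-10 scale count as "High"
    feas = "High Feasibility" if row["Feasibility"] > cutoff else "Low Feasibility"
    impact = "High Impact" if row["Impact"] > cutoff else "Low Impact"
    return f"{feas} / {impact}"

projects["Quadrant"] = projects.apply(quadrant, axis=1)
print(projects.sort_values(["Impact", "Feasibility"], ascending=False))
```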
The Project_Priority_Matrix.xlsx workbook can be used for:
- Portfolio management
- Product or feature prioritization
- Strategy planning workshops
Project_Priority_Matrix.xlsx is free for personal and organizational use.
Attribution is appreciated if you share or adapt this file.
Author: [Asjad]
Contact: [m.asjad2000@gmail.com]
Compatible With: Microsoft Excel 2019+ / Office 365