Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The various performance criteria applied in this analysis include the probability of reaching the ultimate target, the costs, elapsed times and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic and statistical data entered by a user, and required for the evaluation of the criteria. It also reports the results of all the computations.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorical scatterplots with R for biologists: a step-by-step guide
Benjamin Petre1, Aurore Coince2, Sophien Kamoun1
1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK
Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.
Protocol
• Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column ‘Replicate’ indicates the biological replicates. In the example, the month and year during which the replicate was performed are indicated. The second column ‘Condition’ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column ‘Value’ contains continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import into R.
• Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console (a sketch of such a script is given after these steps). Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.
• Step 3: save the graph as a .pdf file. Resize the window at your convenience and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.
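The script itself is only reproduced on the PowerPoint slide, so the lines below are a rough sketch of an equivalent ggplot2 script, not the authors' verbatim code; the final plotting call plays the role of the 'command line #7' mentioned in Note 2, and the column names Condition, Value and Replicate follow Step 1.
# install.packages("ggplot2", dependencies = TRUE)   # run once, see Note 1
library(ggplot2)
replicates <- read.csv(file.choose())   # opens the dialog box to select the .csv file from step 1
graph <- ggplot(replicates, aes(x = Condition, y = Value))
graph + geom_boxplot(outlier.colour = 'black', colour = 'black') + geom_jitter(aes(col = Replicate)) + theme_bw()
Replacing the last line with the command given in Note 2 switches the y-axis to a log scale.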
Notes
• Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, go to Packages & Data -> Package Installer, enter ‘ggplot2’ in the Package Search field and click ‘Get List’. Select ‘ggplot2’ in the Package column and click ‘Install Selected’. Install all dependencies as well.
• Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.
graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()
References
Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.
Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035.
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Original Excel files of the tabular data that were used to generate the visual presentations (graphs and charts) of the techniques in current research trends over a 6-year period (2013 to 2018).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To create the dataset, the top 10 countries leading in the incidence of COVID-19 in the world were selected as of October 22, 2020 (on the eve of the second wave of the pandemic) that are represented in the Global 500 ranking for 2020: USA, India, Brazil, Russia, Spain, France and Mexico. For each of these countries, up to 10 of the largest transnational corporations included in the Global 500 rating for 2020 and 2019 were selected separately. Arithmetic averages were calculated, along with the change (increase) in indicators such as the profitability of enterprises, their position in the ranking (competitiveness), asset value and number of employees. The arithmetic mean values of these indicators across all countries of the sample were found, characterizing the situation in international entrepreneurship as a whole in the context of the COVID-19 crisis in 2020, on the eve of the second wave of the pandemic. The data are collected in a single Microsoft Excel workbook.
The dataset is a unique database that combines COVID-19 statistics and entrepreneurship statistics. It is flexible and can be supplemented with data from other countries and newer statistics on the COVID-19 pandemic. Because the cells in the dataset hold formulas rather than ready-made numbers, adding or changing values in the original table at the beginning of the dataset automatically recalculates most of the subsequent tables and updates the graphs. This allows the dataset to be used not just as an array of data, but as an analytical tool for automating scientific research on the impact of the COVID-19 pandemic and crisis on international entrepreneurship. The dataset includes not only tabular data but also charts that provide data visualization.
The dataset contains not only actual but also forecast data on morbidity and mortality from COVID-19 for the period of the second wave of the pandemic in 2020. The forecasts are presented in the form of a normal distribution of predicted values and the probability of their occurrence in practice. This allows for a broad scenario analysis of the impact of the COVID-19 pandemic and crisis on international entrepreneurship: various predicted morbidity and mortality rates can be substituted into the risk assessment tables to obtain automatically calculated consequences (changes) for the characteristics of international entrepreneurship. It is also possible to substitute the actual values identified during and after the second wave of the pandemic, to check the reliability of the forecasts and conduct a plan-fact analysis. The dataset contains not only the numerical values of the initial and predicted indicators but also their qualitative interpretation, reflecting the presence and level of risks of the pandemic and the COVID-19 crisis for international entrepreneurship.
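The dataset's own formulas live in the Excel tables; purely as an illustration of the forecasting logic described above (a predicted indicator modeled as normally distributed, with scenario probabilities read off the distribution), a toy sketch in R with made-up numbers:
mu <- 100000     # hypothetical mean of a predicted morbidity indicator (not from the dataset)
sigma <- 15000   # hypothetical standard deviation
pnorm(120000, mean = mu, sd = sigma)   # probability the indicator stays below a scenario threshold
qnorm(0.95, mean = mu, sd = sigma)     # predicted value at the 95th percentile (pessimistic scenario)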
*** Fake News on Twitter ***
These 5 datasets are the results of an empirical study on the spreading process of newly emerged fake news on Twitter. In particular, we focused on fake news stories that gave rise to a truth spreading simultaneously against them. The story of each fake news item is as follows:
1- FN1: A Muslim waitress refused to seat a church group at a restaurant, claiming "religious freedom" allowed her to do so.
2- FN2: Actor Denzel Washington said electing President Trump saved the U.S. from becoming an "Orwellian police state."
3- FN3: Joy Behar of "The View" sent a crass tweet about a fatal fire in Trump Tower.
4- FN4: The animated children's program 'VeggieTales' introduced a cannabis character in August 2018.
5- FN5: In September 2018, the University of Alabama football program ended its uniform contract with Nike, in response to Nike's endorsement deal with Colin Kaepernick.
The data collection was done in two stages, each of which provided a new dataset: 1) attaining the Dataset of Diffusion (DD), which includes information on fake news/truth tweets and retweets; 2) querying the neighbors of the spreaders of tweets, which provides us with the Dataset of Graph (DG).
DD
DD for each fake news story is an Excel file, named FNx_DD where x is the number of the fake news story, with the following structure:
Each row belongs to one captured tweet/retweet related to the rumor, and each column presents a specific piece of information about that tweet/retweet. From left to right, the columns give the following information (a loading sketch in R follows the list):
User ID (user who has posted the current tweet/retweet)
The description sentence in the profile of the user who has published the tweet/retweet
The number of tweets/retweets published by the user at the time of posting the current tweet/retweet
Date and time of creation of the account by which the current tweet/retweet has been posted
Language of the tweet/retweet
Number of followers
Number of followings (friends)
Date and time of posting the current tweet/retweet
Number of likes (favorites) the current tweet had acquired before crawling it
Number of times the current tweet had been retweeted before crawling it
Whether another tweet is embedded in the current tweet/retweet (this happens, for example, when the current tweet is a quote, reply or retweet)
The source (OS) of the device from which the current tweet/retweet was posted
Tweet/Retweet ID
Retweet ID (if the post is a retweet then this feature gives the ID of the tweet that is retweeted by the current post)
Quote ID (if the post is a quote then this feature gives the ID of the tweet that is quoted by the current post)
Reply ID (if the post is a reply then this feature gives the ID of the tweet that is replied by the current post)
Frequency of tweet occurrences, i.e., the number of times the current tweet is repeated in the dataset (for example, the number of times a tweet appears in the dataset in the form of retweets posted by others)
State of the tweet which can be one of the following forms (achieved by an agreement between the annotators):
r : The tweet/retweet is a fake news post
a : The tweet/retweet is a truth post
q : The tweet/retweet questions the fake news, neither confirming nor denying it
n : The tweet/retweet is not related to the fake news (it contains the query terms related to the rumor but does not refer to the given fake news)
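The description above fixes the column order but not header names, so the names below are hypothetical labels assigned in listing order; a sketch for loading one DD file in R, assuming the readxl package and an .xlsx extension:
library(readxl)
dd <- read_excel("FN1_DD.xlsx", col_names = FALSE)   # FNx_DD naming convention, here x = 1
names(dd) <- c("user_id", "user_description", "statuses_count", "account_created_at",
               "language", "followers_count", "friends_count", "posted_at",
               "favorite_count", "retweet_count", "has_embedded_tweet", "source",
               "tweet_id", "retweet_id", "quote_id", "reply_id",
               "frequency", "state")   # hypothetical labels, one per column listed above
table(dd$state)   # counts of the r / a / q / n annotations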
DG
DG for each fake news contains two files:
A file in graph format (.graph), which includes the graph information, such as who is linked to whom (this file is named FNx_DG.graph, where x is the number of the fake news story)
A file in JSONL format (.jsonl), which includes the real user IDs of the nodes in the graph file (this file is named FNx_Labels.jsonl, where x is the number of the fake news story)
In the graph file, the label of each node is the order in which it entered the graph. For example, if the node with user ID 12345637 is the first node entered into the graph file, then its label in the graph is 0 and its real ID (12345637) is at row number 1 of the jsonl file (row number 0 belongs to the column labels); the other node IDs follow in the subsequent rows, each row corresponding to one user ID. Therefore, to find, for example, the user ID of node 200 (labeled 200 in the graph), look at row number 201 of the jsonl file.
The user IDs of spreaders in DG (those who have a post in DD) are available in DD, where extra information about them and their tweets/retweets can be found. The other user IDs in DG are the neighbors of these spreaders and might not exist in DD.
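A sketch of the label-to-ID lookup in R, assuming each line of the .jsonl file carries one user ID and the first line carries the column labels (any per-line JSON unwrapping is left aside):
ids <- readLines("FN1_Labels.jsonl")   # line 1 = column labels, line 2 = node 0, line 3 = node 1, ...
user_id_of_node <- function(label) {
  # node labeled n sits at row n + 1 counting from 0, i.e. at line n + 2 in R's 1-based indexing
  ids[label + 2]
}
user_id_of_node(200)   # real user ID of the node labeled 200 in FN1_DG.graph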
According to our latest research, the global graph data integration platform market size reached USD 2.1 billion in 2024, reflecting robust adoption across industries. The market is projected to grow at a CAGR of 18.4% from 2025 to 2033, reaching approximately USD 10.7 billion by 2033. This significant growth is fueled by the increasing need for advanced data management and analytics solutions that can handle complex, interconnected data across diverse organizational ecosystems. The rapid digital transformation and the proliferation of big data have further accelerated the demand for graph-based data integration platforms.
The primary growth factor driving the graph data integration platform market is the exponential increase in data complexity and volume within enterprises. As organizations collect vast amounts of structured and unstructured data from multiple sources, traditional relational databases often struggle to efficiently process and analyze these data sets. Graph data integration platforms, with their ability to map, connect, and analyze relationships between data points, offer a more intuitive and scalable solution. This capability is particularly valuable in sectors such as BFSI, healthcare, and telecommunications, where real-time data insights and dynamic relationship mapping are crucial for decision-making and operational efficiency.
Another significant driver is the growing emphasis on advanced analytics and artificial intelligence. Modern enterprises are increasingly leveraging AI and machine learning to extract actionable insights from their data. Graph data integration platforms enable the creation of knowledge graphs and support complex analytics, such as fraud detection, recommendation engines, and risk assessment. These platforms facilitate seamless integration of disparate data sources, enabling organizations to gain a holistic view of their operations and customers. As a result, investment in graph data integration solutions is rising, particularly among large enterprises seeking to enhance their analytics capabilities and maintain a competitive edge.
The surge in regulatory requirements and compliance mandates across various industries also contributes to the expansion of the graph data integration platform market. Organizations are under increasing pressure to ensure data accuracy, lineage, and transparency, especially in highly regulated sectors like finance and healthcare. Graph-based platforms excel in tracking data provenance and relationships, making it easier for companies to comply with regulations such as GDPR, HIPAA, and others. Additionally, the shift towards hybrid and multi-cloud environments further underscores the need for robust data integration tools capable of operating seamlessly across different infrastructures, further boosting market growth.
From a regional perspective, North America currently dominates the graph data integration platform market, accounting for the largest share due to early adoption of advanced data technologies, a strong presence of key market players, and significant investments in digital transformation initiatives. However, Asia Pacific is expected to witness the fastest growth over the forecast period, driven by rapid industrialization, expanding IT infrastructure, and increasing adoption of cloud-based solutions among enterprises in countries like China, India, and Japan. Europe also remains a significant contributor, supported by stringent data privacy regulations and a mature digital economy.
The component segment of the graph data integration platform market is bifurcated into software and services. The software segment currently commands the largest market share, reflecting the critical role of robust graph database engines, visualization tools, and integration frameworks in managing and analyzing complex data relationships. These software solutions are designed to deliver high scalability, flexibility, and real-time processing.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains a collection of data about 454 value chains from 23 rural European areas of 16 countries. This data is obtained through a semi-automatic workflow that transforms raw textual data from an unstructured MS Excel sheet into semantic knowledge graphs. In particular, the repository contains:
- an MS Excel sheet containing the details of the different value chains, provided by the MOuntain Valorisation through INterconnectedness and Green growth (MOVING) European project;
- 454 CSV files containing events, titles, entities and coordinates of the narratives of each value chain, obtained by pre-processing the MS Excel sheet;
- 454 Web Ontology Language (OWL) files. This collection of files is the result of the semi-automatic workflow and is organized as a semantic knowledge graph of narratives, where each narrative is a sub-graph explaining one of the 454 value chains and its territorial aspects. The knowledge graph is based on the Narrative Ontology, an ontology developed by the Institute of Information Science and Technologies (ISTI-CNR) as an extension of CIDOC CRM, FRBRoo, and OWL Time;
- two CSV files that compile all the available information extracted from the 454 OWL files;
- GeoPackage files with the geographic coordinates related to the narratives;
- HTML files that show all the different SPARQL and GeoSPARQL queries;
- HTML files that show the story maps about the 454 value chains;
- an image showing how the various components of the dataset interact with each other.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Civil and geological engineers have used field variable-head permeability tests (VH tests or slug tests) for over a century to assess the local hydraulic conductivity of tested soils and rocks. The water level in the pipe or riser casing reaches, after some rest time, a static position or elevation, z2. Then, the water level position is changed rapidly, by adding or removing some water volume, or by inserting or removing a solid slug. Afterward, the water level position or elevation z1(t) is recorded vs. time t, yielding a difference in hydraulic head or water column defined as Z(t) = z1(t) - z2. The water level at rest is assumed to be the piezometric level or PL for the tested zone, before drilling a hole and installing test equipment. All equations use Z(t) or Z*(t) = Z(t) / Z(t=0). The water-level response vs. time may be a slow return to equilibrium (overdamped test) or an oscillation back to equilibrium (underdamped test). This document deals exclusively with overdamped tests. Their data may be analyzed using several methods, known to yield different results for the hydraulic conductivity. The methods fall into three groups: group 1 neglects the influence of the solid matrix strain, group 2 is for tests in aquitards with delayed strain caused by consolidation, and group 3 takes into account some elastic and instant solid matrix strain. This document briefly explains what is wrong with certain theories and why. It shows three ways to plot the data, which are the three diagnostic graphs. According to experience with thousands of tests, most test data are biased by an incorrect estimate z2 of the piezometric level at rest. The derivative or velocity plot does not depend upon this assumed piezometric level, but can verify its correctness. The document presents experimental results and explains the three-diagnostic-graphs approach, which unifies the theories and, most importantly, yields a user-independent result. Two free spreadsheet files are provided. The spreadsheet "Lefranc-Test-English-Model" follows the Canadian standards and is used to explain how to treat the test data correctly to reach a user-independent result. The user does not modify this model spreadsheet but can make as many copies as needed, with different names. The user can treat any other data set in a copy, and can also modify any copy if needed. The second Excel spreadsheet contains several sets of data that can be used to practice with the copies of the model spreadsheet.
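The provided spreadsheets implement the full treatment; the sketch below only illustrates the core quantities with a made-up test record: the head difference Z(t), its normalization Z*(t), and the derivative (velocity) graph that, per the document, can be used to check the assumed rest level z2.
t  <- c(0, 30, 60, 120, 240, 480, 960)             # time in seconds (made-up readings)
z1 <- c(2.00, 1.62, 1.33, 0.92, 0.48, 0.14, 0.02)  # recorded water levels (m), made up
z2 <- 0.0                                          # assumed piezometric level at rest
Z  <- z1 - z2                                      # Z(t) = z1(t) - z2
Zstar <- Z / Z[1]                                  # Z*(t) = Z(t) / Z(t = 0)
dZdt <- diff(Z) / diff(t)                          # finite-difference velocity; unaffected by a constant offset in z2
Zmid <- (head(Z, -1) + tail(Z, -1)) / 2            # midpoint head for each time interval
plot(Zmid, dZdt, xlab = "Z (m)", ylab = "dZ/dt (m/s)")   # velocity diagnostic graph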
Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study; the year, 2010 or 2011, is the year the data were collected; SH indicates data for Science Hub; the date is the date of file preparation). The data in a file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in this file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk D, Pfleeger T, Shiroyama T, Blakely-Smith M, Lee E, Plocher M (2017) Plant reproduction is altered by simulated herbicide drift to constructed plant communities. Environmental Toxicology and Chemistry 36(10): 2799-2813.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
About Datasets:
Domain: Sales - Project: Coca Cola Sales Analysis - Datasets: Power BI Dataset vF - Dataset Type: Excel Data - Dataset Size: 52k+ records
KPIs: 1. Analyze Profit Margins per Brand 2. Sales by Region 3. Price per unit 4. Operating Profit 5. Additional Analysis
Process: 1. Understanding the problem 2. Data Collection 3. Exploring and analyzing the data 4. Interpreting the results
This project makes use of Power Query, a Q&A visual, a Key Influencers visual, a map chart, a matrix, a dynamic timeline, a dashboard, formatting, and text boxes.
According to our latest research, the global graph database platform market size reached USD 2.5 billion in 2024, demonstrating robust demand across various sectors. The market is projected to expand at a CAGR of 22.7% from 2025 to 2033, reaching an estimated value of USD 19.1 billion by 2033. This impressive growth is primarily attributed to the increasing need for advanced data analytics, real-time intelligence, and the proliferation of connected data across enterprises worldwide.
A key factor propelling the growth of the graph database platform market is the surging adoption of big data analytics and artificial intelligence in business operations. As organizations manage ever-growing volumes of complex and connected data, traditional relational databases often fall short in terms of efficiency and scalability. Graph database platforms offer a more intuitive and efficient way to model, store, and query highly connected data, enabling faster insights and supporting sophisticated applications such as fraud detection, recommendation engines, and social network analysis. The need for real-time analytics and decision-making is driving enterprises to invest heavily in graph database technologies, further accelerating market expansion.
Another significant driver for the graph database platform market is the increasing incidence of cyber threats and fraudulent activities, especially within the BFSI and e-commerce sectors. Graph databases excel at uncovering hidden patterns, relationships, and anomalies within vast datasets, making them invaluable for fraud detection and risk management. Financial institutions are leveraging these platforms to identify suspicious transactions and prevent financial crimes, while retailers use them to optimize customer experience and personalize recommendations. The versatility of graph databases in supporting diverse use cases across multiple industry verticals is a major contributor to their rising adoption and market growth.
The rapid digital transformation of enterprises, coupled with the shift towards cloud-based solutions, is also fueling the graph database platform market. Cloud deployment offers scalability, flexibility, and cost-effectiveness, allowing organizations to seamlessly integrate graph databases into their existing IT infrastructure. The growing prevalence of Internet of Things (IoT) devices and the emergence of Industry 4.0 have further increased the demand for platforms capable of handling complex, interconnected data. As businesses strive for agility and innovation, graph database platforms are becoming a strategic asset for gaining competitive advantage.
From a regional perspective, North America currently dominates the graph database platform market, driven by the presence of leading technology providers, early adoption of advanced analytics, and substantial investments in digital infrastructure. However, Asia Pacific is emerging as the fastest-growing region, fueled by rapid economic development, expanding IT sectors, and increasing awareness of data-driven decision-making. Europe also holds a significant market share, supported by strong regulatory frameworks and widespread digital transformation initiatives. The market landscape is highly dynamic, with regional trends influenced by technological advancements, regulatory policies, and industry-specific demands.
The graph database platform market is segmented by component into software and services. The software segment holds the largest share, as organizations increasingly deploy advanced graph database solutions to manage and analyze complex data relationships. These software platforms provide robust features such as data modeling, visualization, and high-performance querying, enabling users to derive actionable insights from connected data. Vendors are continuously enhancing their offerings with AI and machine learning capabilities, making graph database software indispensable for modern data-driven enterprises.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?:
This data set contains all the experimental raw data, analysis and source files for the final figures reported in the manuscript: "Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?". It is divided into numbered zipped folders, named after the technique used to obtain the data. Each of them, where applicable, consists of three different subfolders (raw data, analysed data, final graph). Read below for more details.
1) ConfocalMicroscopy
1a) Raw_Data: the raw images are reported as .dat and .tif formats, divided into folders (according to date first yymmdd, and within the same day according to composition). Each folder contains a .txt file reporting the experimental details
1b) GUVs_Statistics - GUVs_Statistics.txt explains how we generated the bar plot shown in Fig. 1E
1c) Final_Graph - Figure_1B_1D.png is the image for figures 1B and 1D - Figure1E_%ofGUVswithCaMAdsorbptions.csv is the x-y source file of the bar plot shown in figure 1E (% of GUVs which showed adsorption of CaM over the total number of measured GUVs) - Where_To_Find_Representative_Images.txt states the folders where the raw images chosen for figure 1 can be found
2) FCS
2a) Raw_Data: - 1_points: .ptu files - 2_points: .ht3 files - Raw_Data_Description.docx states which compositions and conditions correspond to which point in the two data sets
2b) Final_Graphs: - Figure_2A.xlsx contains the x-y source file for figure 2A
2c) Analysis: - FCS_Fits.xlsx is the outcome of the global fitting procedure described in the .docx below (each group of points represents a certain composition and calcium concentration; see the Raw_Data_Description.docx in FCS > Raw_Data) - Notes_for_FCS_Analysis.docx contains a brief description of the analysis of the autocorrelation curves
3) GPLaurdan
3a) Raw Data: all the spectra are stored in folders named by date (yymmdd_lipidcomposition_Laurdan) and are in both .FS and .txt formats
3b) GP calculations: contains all the .xlsx files calculating the GP values from the raw emission and excitation spectra
3c) Final_Graphs - Data_Processing_For_Fig_2D.csv contains the data processing from the GP values calculated from the spectra to the DeltaGP (GP with - GP without CaM) reported in fig. 2D - Figure_2C_2D.xlsx contains the x-y source file for figures 2C and 2D
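The .xlsx files above carry the actual GP calculations; for orientation only, Laurdan generalized polarization is conventionally computed from two emission intensities as GP = (I440 - I490) / (I440 + I490), which the toy R sketch below applies to made-up values:
gp <- function(i440, i490) (i440 - i490) / (i440 + i490)   # conventional Laurdan GP relation
gp_without_cam <- gp(1200, 800)            # made-up intensities without CaM
gp_with_cam    <- gp(1100, 900)            # made-up intensities with CaM
delta_gp <- gp_with_cam - gp_without_cam   # DeltaGP = GP with - GP without CaM, as in fig. 2D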
4) LiveCellsImaging
4a) Intensity_Protrusions_vs_Cell_Body: - contains all the .xlsx files calculating the intensity of the various images, named by date (yymmdd) - all data in all Excel sheets are gathered in another Excel file to create a final graph
4b) Final_Graphs - Figure_S2B.xlsx contains the x-y source file for the figure S2B
5) LiveCellImaging_Raw_Data: contains some of the images, given in .tif format. They are divided by date (yymmdd), and each date contains subfolders named by sample name and ionomycin concentration. Within the subfolders, the images are divided into folders distinguishing the data acquired before and after the ionomycin treatment and the incubation time.
6) 211124_BioCev_Imaging_1: this folder has the .jpg files of the time lapses shown in figs. 1A and S2.
7) 211124_BioCev_Imaging_2 and 8) 211124_BioCev_Imaging_3 contain the images of HeLa cells expressing EGFP-CaM after treatment with ionomycin 200 nM (A1) and 1 uM (A2), respectively.
9) SPR
9a) Raw Data: - SPR_Raw_Data.xlsx contains the exported x-y sensorgrams - the .jpg files from the software are also included, named by lipid composition
9b) Final_Graph: - Fig.2B.xlsx contains the x-y source file for the figure 2B
9c) Analysis - SPR_Analysis.xlsx: Excel file documenting step by step (sheet by sheet) how we processed the raw data to obtain the final figure (details explained in the .docx below) - Analysis of SPR data_notes.docx: read-me with the detailed explanation
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article describes a free, open-source collection of templates for the popular Excel (2013 and later versions) spreadsheet program. These templates are spreadsheet files that allow easy and intuitive learning and the implementation of practical examples concerning descriptive statistics, random variables, confidence intervals, and hypothesis testing. Although they are designed to be used with Excel, they can also be employed with other free spreadsheet programs (changing some particular formulas). Moreover, we exploit some possibilities of the ActiveX controls of the Excel Developer Menu to perform interactive Gaussian density charts. Finally, it is important to note that they can often be embedded in a web page, so it is not necessary to employ Excel software to use them. These templates have been designed as a useful tool to teach basic statistics and to carry out data analysis even when the students are not familiar with Excel. Additionally, they can be used as a complement to other analytical software packages. They aim to assist students in learning statistics within an intuitive working environment. Supplementary materials with the Excel templates are available online.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the in-air hand-written numbers and shapes data used in the paper: B. Alwaely and C. Abhayaratne, "Graph Spectral Domain Feature Learning With Application to in-Air Hand-Drawn Number and Shape Recognition," in IEEE Access, vol. 7, pp. 159661-159673, 2019, doi: 10.1109/ACCESS.2019.2950643.
The dataset contains the following:
- Readme.txt
- InAirNumberShapeDataset.zip containing
- Number folder (with 2 sub folders for Matlab and Excel)
- Shapes folder (with 2 sub folders for Matlab and Excel)
The datasets include the in-air drawn number and shape hand movement paths captured by a Kinect sensor. The number sub dataset includes 500 instances per each number 0 to 9, resulting in a total of 5000 number data instances. Similarly, the shape sub dataset includes 500 instances per each of 10 different arbitrary 2D shapes, resulting in a total of 5000 shape instances. The dataset provides the X, Y, Z coordinates of the hand movement path data in Matlab (M-file) and Excel formats, along with their corresponding labels. This dataset creation received The University of Sheffield ethics approval under application #023005, granted on 19/10/2018.
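Individual file names inside the folders are not listed above, so the name below is a placeholder; a sketch for loading and plotting one instance from the Excel format, assuming the readxl package and X, Y, Z column headers:
library(readxl)
path <- read_excel("Number/Excel/number3_instance001.xlsx")   # placeholder file name, not from the dataset description
plot(path$X, path$Y, type = "l", xlab = "X", ylab = "Y",
     main = "In-air drawn path (X-Y projection)")   # the Z coordinate could drive a 3-D view instead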
Open Database License (ODbL) v1.0 https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
By Health [source]
This dataset is a valuable resource for gaining insight into Inpatient Prospective Payment System (IPPS) utilization, average charges and average Medicare payments across the top 100 Diagnosis-Related Groups (DRG). With columns such as DRG Definition, Hospital Referral Region Description, Total Discharges, Average Covered Charges, Average Medicare Payments and Average Medicare Payments 2, this dataset enables researchers to discover and assess healthcare trends, such as comparing provider payments by geographic location or comparing service costs across hospitals. Visualize the data using various methods to uncover unique information and drive further hospital research.
This dataset provides a provider level summary of Inpatient Prospective Payment System (IPPS) discharges, average charges and average Medicare payments for the Top 100 Diagnosis-Related Groups (DRG). This data can be used to analyze cost and utilization trends across hospital DRGs.
To make the most use of this dataset, here are some steps to consider:
- Understand what each column means in the table: Each column provides different information from the DRG Definition to Hospital Referral Region Description and Average Medicare Payments.
- Analyze the data by looking for patterns amongst the relevant columns: Compare different aspects such as total discharges or average Medicare payments by hospital referral region or DRG Definition. This can help identify any potential trends amongst different categories within your analysis.
- Generate visualizations: Create charts, graphs, or maps that display your data in an easy-to-understand format using tools such as Microsoft Excel or Tableau. Such visuals may reveal more insights into patterns within your data than simply reading numerical values on a spreadsheet could provide alone.
Potential applications of the dataset include:
- Identifying potential areas of cost savings by drilling down to particular DRGs and hospital regions with the highest average covered charges compared to average Medicare payments.
- Establishing benchmarks for typical charges and payments across different DRGs and hospital regions to help providers set market-appropriate prices.
- Analyzing trends in total discharges, charges and Medicare payments over time, allowing healthcare organizations to measure their performance against regional peers
If you use this dataset in your research, please credit the original authors.
License: Open Database License (ODbL) v1.0 - You are free to: - Share - copy and redistribute the material in any medium or format. - Adapt - remix, transform, and build upon the material for any purpose, even commercially. - You must: - Give appropriate credit - provide a link to the license, and indicate if changes were made. - ShareAlike - distribute your contributions under the same license as the original. - Keep intact - keep all notices that refer to this license, including copyright notices. - No additional restrictions - you may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
File: 97k6-zzx3.csv

| Column name | Description |
|:------------|:------------|
| drg_definition | Diagnosis-Related Group (DRG) definition. (String) |
| average_medicare_payments | Average Medicare payments for each DRG. (Numeric) |
| hospital_referral_region_description | Description of the hospital referral region. (String) |
| total_discharges | Total number of discharges for each DRG. (Numeric) |
| average_covered_charges | Average covered charges for each DRG. (Numeric) |
| average_medicare_payments_2 | Average Medicare payments for each DRG. (Numeric) |
File: Inpatient_Prospective_Payment_System_IPPS_Provider_Summary_for_the_Top_100_Diagnosis-Related_Groups_DRG...
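A sketch in R of the drill-down suggested above, using the documented column names of 97k6-zzx3.csv:
ipps <- read.csv("97k6-zzx3.csv")
# gap between average covered charges and average Medicare payments, per row
ipps$charge_payment_gap <- ipps$average_covered_charges - ipps$average_medicare_payments
# mean gap by hospital referral region: candidate regions for a cost-savings analysis
gap_by_region <- aggregate(charge_payment_gap ~ hospital_referral_region_description,
                           data = ipps, FUN = mean)
head(gap_by_region[order(-gap_by_region$charge_payment_gap), ], 10)   # ten widest-gap regions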
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
About Datasets: - Domain : Finance - Project: Bank loan of customers - Datasets: Finance_1.xlsx & Finance_2.xlsx - Dataset Type: Excel Data - Dataset Size: Each Excel file has 39k+ records
KPIs: 1. Year wise loan amount Stats 2. Grade and sub grade wise revol_bal 3. Total Payment for Verified Status Vs Total Payment for Non Verified Status 4. State wise loan status 5. Month wise loan status 6. Get more insights based on your understanding of the data
Process: 1. Understanding the problem 2. Data Collection 3. Data Cleaning 4. Exploring and analyzing the data 5. Interpreting the results
This project makes use of Power Query, Power Pivot, merged data, a clustered bar chart, a clustered column chart, a line chart, a 3D pie chart, a dashboard, slicers, a timeline, and formatting techniques.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
About Datasets: - Domain : Finance - Project: Bank loan of customers - Datasets: Finance_1.xlsx & Finance_2.xlsx - Dataset Type: Excel Data - Dataset Size: Each Excel file has 39k+ records
KPIs: 1. Year wise loan amount Stats 2. Grade and sub grade wise revol_bal 3. Total Payment for Verified Status Vs Total Payment for Non Verified Status 4. State wise loan status 5. Month wise loan status 6. Get more insights based on your understanding of the data
Process: 1. Understanding the problem 2. Data Collection 3. Data Cleaning 4. Exploring and analyzing the data 5. Interpreting the results
This project makes use of a stacked column chart, donut chart, stacked area chart, pie chart, matrix, slicer, treemap, clustered column chart, map, dashboard, page navigator, card, and text box.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset: The dataset used for this project is a Coffee Shop Sales Dataset sourced from Kaggle. It contains detailed sales transaction data from a coffee shop, including product categories (e.g., coffee, tea, bakery items), store locations, transaction dates, and revenue generated. This dataset is ideal for analyzing various business metrics, such as product performance, sales patterns, and store efficiency.
The dataset has the following key attributes:
- Product Categories: coffee, tea, bakery, branded items, etc.
- Store Locations: Hell’s Kitchen, Astoria, Lower Manhattan.
- Sales Transactions: include revenue, product type, and date/time details.
Link to dataset: https://www.kaggle.com/datasets/ahmedmohamedibrahim1/coffee-shop-sales-dataset
Methodology Wrap-Up: In this project, the goal was to create an Interactive Sales Dashboard using Excel to derive actionable insights from the coffee shop's sales data. The process began with data collection and preparation, ensuring the dataset was ready for analysis. Various data analysis techniques were applied using Excel Pivot Tables, and multiple charts were created to visualize the data clearly and effectively.
The visualizations, including bar charts, pie charts, line charts, and treemaps, were enhanced by interactive slicers, enabling users to explore specific data segments, such as product categories, store locations, and time patterns. The analysis focused on identifying the best-performing products and stores, revenue trends, and sales distribution, providing business insights that could help the coffee shop optimize its operations.
This dashboard demonstrates the power of data visualization and business intelligence in understanding customer behavior and improving decision-making processes within a retail context.
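The dashboard itself is built with Excel Pivot Tables; the same core aggregation in script form, with file and column names guessed from the attribute list above, might look like this sketch in R:
sales <- read.csv("coffee_shop_sales.csv")   # placeholder file name; columns assumed from the attributes above
# revenue by product category and store location: the pivot-table at the heart of the dashboard
pivot <- aggregate(revenue ~ product_category + store_location, data = sales, FUN = sum)
pivot[order(-pivot$revenue), ]   # best-performing products and stores first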
Archery is my favorite sport and I enjoy it as it keeps me calm and focused. I have played archery for over 4 years: 2 years in the UK at my local club, Epping Archers, and 2.5 years in Hong Kong with Target X.
Recently I decided to take the opportunity to enter an open competition at 18 meters with a 6-ring, 80cm target face. To qualify for further competitions, you need to pass a minimum score requirement; for example, from the beginners' competition at 18 meters I would have to score a minimum of 600 points to qualify for the 30-meter tournaments.
The type of bow I use is a recurve bow; it is the most recognisable in the Olympics. The target face is 80cm and has 6 scoring rings worth 10, 9, 8, 7, 6 and 5 points, with 10 being the highest score per arrow. The lowest score is 5 points, and an arrow that hits outside the rings scores zero, recorded as 'M' for a miss. In the competition there are a total of 72 arrows, split into 2 rounds of 36 arrows. Each round has 6 ends, and one end consists of 6 arrows. At a maximum of 10 points per arrow, 72 arrows total 720 points; the minimum to qualify at the 18-meter shoot is 600 points.
I am using this opportunity to complete an analysis calculating the average, maximum and minimum points scored. This is followed by using R programming, Tableau and an Excel spreadsheet to produce graphs, charts and a dashboard.
The data collected are my personal scores, recorded on a hand-written sheet and then transferred to a spreadsheet saved in CSV file format. The time period is from 13th Dec 2020 to 28th Sep 2021, with a total of 13 games played during this time. Also, with the Covid-19 social distancing, the games played were not consistent, i.e., not one game every Sunday.
The dataset has an index column, which is my primary key, plus the day and date. A game comprises all 72 arrows; the rounds are recorded as 1 and 2, and the ends as 1 to 12.
The columns shots_1, shot_2 and so on… hold the actual scores, with 'M' recorded as the number '0'. The total column represents the actual points scored in each end. The hits column represents the number of arrows that hit the 6-ring target face; any arrow that does not land on the target face is not counted, as it represents a miss. The tens column represents the number of times I hit 10.
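A sketch in R (one of the tools named above) of the averages, maximum and minimum, with file and column names guessed from the description of the CSV:
scores <- read.csv("archery_scores.csv")   # placeholder name for the CSV described above
per_game <- aggregate(total ~ date, data = scores, FUN = sum)   # sum the 12 end totals within each game
mean(per_game$total)         # average points across the 13 games
max(per_game$total)          # personal best out of 720
min(per_game$total)          # weakest game
sum(per_game$total >= 600)   # games reaching the 600-point qualifying mark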
Thank you to Target X for maintaining social distance in order to keep the club running.