Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The various performance criteria applied in this analysis include the probability of reaching the ultimate target, and the costs, elapsed times, and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic, and statistical data entered by a user and required for the evaluation of the criteria. It also reports the results of all the computations.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

TSMx is an R script developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values, and plots the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year on the x-axis. In the example below, the cell in the top-right corner gives the direction of the slope for the temporal range 2001–2019. The red line corresponds to the temporal range 2010–2019, and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart: that cell represents the slope for the temporal range 2004–2014.

This publication entry also includes an Excel template that produces the same visualizations without any need to interact with code, though minor modifications are needed to accommodate year ranges other than the one provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).
TSMx R script:

# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[["Year"]] >= start.year & data[["Year"]] <= end.year)
  batch <- data[keep, ]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[["Value"]])
    pval <- mk[["sl"]]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
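The script as distributed is truncated above. As a rough sketch only (this is not the original TSMx plotting code; it assumes the slope.matrix, r.names, and c.names objects created by the script), the filled slope matrix could be rendered as a start-year by end-year heat map:

# Rough sketch, not the original TSMx plotting code.
# Renders slope.matrix with start year on the y-axis and end year on the x-axis.
library(ggplot2)
library(tidyr)

plot.df <- as.data.frame(slope.matrix)
colnames(plot.df) <- c.names          # end years
plot.df$start <- r.names              # start years
plot.df <- pivot_longer(plot.df, cols = -start,
                        names_to = "end", values_to = "slope")

ggplot(na.omit(plot.df), aes(x = factor(end), y = factor(start), fill = slope)) +
  geom_tile(colour = "white") +
  scale_fill_gradient2(low = "red", mid = "white", high = "blue") +
  labs(x = "End year", y = "Start year", fill = "Slope") +
  theme_bw()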
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorical scatterplots with R for biologists: a step-by-step guide
Benjamin Petre1, Aurore Coince2, Sophien Kamoun1
1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK
Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.
Protocol
• Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column, ‘Replicate’, indicates the biological replicates. In the example, the month and year during which the replicate was performed is indicated. The second column, ‘Condition’, indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column, ‘Value’, contains continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import into R.
• Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console (a rough approximation of such a script is sketched after these steps). Execute the script. In the dialog box, select the input .csv file from Step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.
• Step 3: save the graph as a .pdf file. Resize the window as desired and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.
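The published script itself is distributed in the PowerPoint slide. As a rough sketch only (this approximation may differ from the authors' script, including the line numbering referenced in Note 2), a script of this kind could look like:

# Rough sketch of the kind of script the protocol describes; the published
# script lives in the PowerPoint slide and may differ from this approximation.
#
# Expected input (Step 1), a three-column .csv file, e.g. (hypothetical rows):
#   Replicate,Condition,Value
#   2016_01,WT,4.2
#   2016_01,mutant_A,1.3
library(ggplot2)                       # see Note 1

data <- read.csv(file.choose())        # dialog box to select the input .csv file
data$Replicate <- as.factor(data$Replicate)

graph <- ggplot(data, aes(x = Condition, y = Value))
graph + geom_boxplot(outlier.colour = 'black', colour = 'black') +
  geom_jitter(aes(col = Replicate)) +
  theme_bw()                           # Note 2 swaps this command for a log-scale version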
Notes
• Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, Packages & Data -> Package Installer -> enter ‘ggplot2’ in the Package Search space and click on ‘Get List’. Select ‘ggplot2’ in the Package column and click on ‘Install Selected’. Install all dependencies as well.
• Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.
graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()
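As an alternative to the menu-based installation described in Note 1, ggplot2 can also be installed from the R console with the standard CRAN call:

install.packages("ggplot2", dependencies = TRUE)  # one-time install from CRAN
library(ggplot2)                                  # load for the current session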
References
Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.
Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035.
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128.
https://dataintelo.com/privacy-and-policy
The global graph database market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 8.5 billion by 2032, growing at a CAGR of 21.2% from 2024 to 2032. The substantial growth of this market is driven primarily by increasing data complexity, advancements in data analytics technologies, and the rising need for more efficient database management systems.
One of the primary growth factors for the graph database market is the exponential increase in data generation. As organizations generate vast amounts of data from various sources such as social media, e-commerce platforms, and IoT devices, the need for sophisticated data management and analysis tools becomes paramount. Traditional relational databases struggle to handle the complexity and interconnectivity of this data, leading to a shift towards graph databases which excel in managing such intricate relationships.
Another significant driver is the growing adoption of artificial intelligence (AI) and machine learning (ML) technologies. These technologies rely heavily on connected data for predictive analytics and decision-making processes. Graph databases, with their inherent ability to model relationships between data points effectively, provide a robust foundation for AI and ML applications. This synergy between AI/ML and graph databases further accelerates market growth.
Additionally, the increasing prevalence of personalized customer experiences across industries like retail, finance, and healthcare is fueling demand for graph databases. Businesses are leveraging graph databases to analyze customer behaviors, preferences, and interactions in real-time, enabling them to offer tailored recommendations and services. This enhanced customer experience translates to higher customer satisfaction and retention, driving further adoption of graph databases.
From a regional perspective, North America currently holds the largest market share due to early adoption of advanced technologies and the presence of key market players. However, significant growth is also anticipated in the Asia-Pacific region, driven by rapid digital transformation, increasing investments in IT infrastructure, and growing awareness of the benefits of graph databases. Europe is also expected to witness steady growth, supported by stringent data management regulations and a strong focus on data privacy and security.
The graph database market can be segmented into two primary components: software and services. The software segment holds the largest market share, driven by extensive adoption across various industries. Graph database software is designed to create, manage, and query graph databases, offering features such as scalability, high performance, and efficient handling of complex data relationships. The growth in this segment is propelled by continuous advancements and innovations in graph database technologies. Companies are increasingly investing in research and development to enhance the capabilities of their graph database software products, catering to the evolving needs of their customers.
On the other hand, the services segment is also witnessing substantial growth. This segment includes consulting, implementation, and support services provided by vendors to help organizations effectively deploy and manage graph databases. As businesses recognize the benefits of graph databases, the demand for expert services to ensure successful implementation and integration into existing systems is rising. Additionally, ongoing support and maintenance services are crucial for the smooth operation of graph databases, driving further growth in this segment.
The increasing complexity of data and the need for specialized expertise to manage and analyze it effectively are key factors contributing to the growth of the services segment. Organizations often lack the in-house skills required to harness the full potential of graph databases, prompting them to seek external assistance. This trend is particularly evident in large enterprises, where the scale and complexity of data necessitate robust support services.
Moreover, the services segment is benefiting from the growing trend of outsourcing IT functions. Many organizations are opting to outsource their database management needs to specialized service providers, allowing them to focus on their core business activities. This shift towards outsourcing is further bolstering the demand for graph database services, driving market growth.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Using the User Manual as a guide and the Excel Graph Input Data Example file as a reference, the user enters the semantics of the graph model in this file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To create the dataset, the top 10 countries leading in the incidence of COVID-19 in the world were selected as of October 22, 2020 (on the eve of the second wave of the pandemic), which are represented in the Global 500 ranking for 2020: USA, India, Brazil, Russia, Spain, France, and Mexico. For each of these countries, up to 10 of the largest transnational corporations included in the Global 500 rating for 2020 and 2019 were selected separately. Arithmetic averages were calculated for the change (increase) in indicators such as the profitability of enterprises, their ranking position (competitiveness), asset value, and number of employees. The arithmetic mean values of these indicators across all countries of the sample were then found, characterizing the situation in international entrepreneurship as a whole in the context of the COVID-19 crisis in 2020, on the eve of the second wave of the pandemic. The data are collected in a single Microsoft Excel table.

The dataset is a unique database that combines COVID-19 statistics and entrepreneurship statistics. It is flexible and can be supplemented with data from other countries and newer statistics on the COVID-19 pandemic. Because the dataset contains formulas rather than ready-made numbers, adding or changing values in the original table at the beginning of the dataset automatically recalculates most of the subsequent tables and updates the graphs. This allows the dataset to be used not just as an array of data, but as an analytical tool for automating scientific research on the impact of the COVID-19 pandemic and crisis on international entrepreneurship. The dataset includes not only tabular data but also charts that provide data visualization.

The dataset contains both actual and forecast data on morbidity and mortality from COVID-19 for the period of the second wave of the pandemic in 2020. The forecasts are presented in the form of a normal distribution of predicted values and the probability of their occurrence in practice. This allows for broad scenario analysis of the impact of the COVID-19 pandemic and crisis on international entrepreneurship: various predicted morbidity and mortality rates can be substituted into the risk assessment tables to obtain automatically calculated consequences (changes) for the characteristics of international entrepreneurship. Actual values identified during and after the second wave of the pandemic can also be substituted to check the reliability of the forecasts and conduct a plan-fact analysis. The dataset contains not only the numerical values of the initial and predicted values of the studied indicators, but also their qualitative interpretation, reflecting the presence and level of risks of the pandemic and COVID-19 crisis for international entrepreneurship.
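To illustrate the kind of scenario analysis the description mentions (a hedged sketch with invented numbers, not values or parameters from the dataset), normally distributed predicted morbidity values could be generated in R as follows:

# Illustrative only: hypothetical mean and standard deviation of a predicted
# morbidity indicator; the dataset's own forecast parameters are not shown here.
mu    <- 100000
sigma <- 15000

# Scenario values at chosen cumulative probabilities of occurrence
p         <- c(0.05, 0.25, 0.50, 0.75, 0.95)
scenarios <- qnorm(p, mean = mu, sd = sigma)
data.frame(probability = p, predicted_cases = round(scenarios))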
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In "Sample Student Data", there are 6 sheets. There are three sheets with sample datasets, one for each of the three different exercise protocols described (CrP Sample Dataset, Glycolytic Dataset, Oxidative Dataset). Additionally, there are three sheets with sample graphs created using one of the three datasets (CrP Sample Graph, Glycolytic Graph, Oxidative Graph). Each dataset and graph pairs are from different subjects. · CrP Sample Dataset and CrP Sample Graph: This is an example of a dataset and graph created from an exercise protocol designed to stress the creatine phosphate system. Here, the subject was a track and field athlete who threw the shot put for the DeSales University track team. The NIRS monitor was placed on the right triceps muscle, and the student threw the shot put six times with a minute rest in between throws. Data was collected telemetrically by the NIRS device and then downloaded after the student had completed the protocol. · Glycolytic Dataset and Glycolytic Graph: This is an example of a dataset and graph created from an exercise protocol designed to stress the glycolytic energy system. In this example, the subject performed continuous squat jumps for 30 seconds, followed by a 90 second rest period, for a total of three exercise bouts. The NIRS monitor was place on the left gastrocnemius muscle. Here again, data was collected telemetrically by the NIRS device and then downloaded after he had completed the protocol. · Oxidative Dataset and Oxidative Graph: In this example, the dataset and graph are from an exercise protocol designed to stress the oxidative system. Here, the student held a sustained, light-intensity, isometric biceps contraction (pushing against a table). The NIRS monitor was attached to the left biceps muscle belly. Here, data was collected by a student observing the SmO2 values displayed on a secondary device; specifically, a smartphone with the IPSensorMan APP displaying data. The recorder student observed and recorded the data on an Excel Spreadsheet, and marked the times that exercise began and ended on the Spreadsheet.
The Nuclear Medicine National HQ System database is a series of MS Excel spreadsheets and Access database tables organized by fiscal year. They consist of information from all Veterans Affairs Medical Centers (VAMCs) performing or contracting nuclear medicine services in Veterans Affairs medical facilities. The medical centers are required to complete questionnaires annually (RCS 10-0010-Nuclear Medicine Service Annual Report). The information is then manually entered into the Access tables, which include:
* Distribution and cost of in-house VA and contract physician services, and whether contracted services are provided via sharing agreement (with another VA medical facility or other government medical providers) or by private providers.
* Workload data for the performance and/or purchase of PET/CT studies.
* Organizational structure of services.
* Updated changes in key imaging service personnel (chiefs, chief technicians, radiation safety officers).
* Workload data on the number and type of studies (scans) performed, including Medicare Relative Value Units (RVUs), also referred to as Weighted Work Units (WWUs). WWUs are a workload measure calculated, for a study's Current Procedural Terminology (CPT) code, as the product of total work costs (the cost of physician medical expertise and time) and total practice costs (the costs of running a practice, such as equipment, supplies, salaries, and utilities). Medicare combines WWUs with one other parameter to derive RVUs, a workload measure widely used in the health care industry. WWUs allow Nuclear Medicine to account for the complexity of each study in assessing workload, reflecting that some studies are more time-consuming and require higher levels of expertise. This gives a more accurate picture of workload and productivity than 'total studies' alone would yield.
* A detailed full-time equivalent employee (FTEE) grid, and staffing distributions of FTEEs across nuclear medicine services.
* Information on Radiation Safety Committees and Radiation Safety Officers (RSOs). Beginning in 2011, this includes data collection on part-time and non-VA (contract) RSOs, other affiliations they may have and, if so, to whom they report (supervision) at their VA medical center.
* Collection of data on nuclear medicine services' progress in meeting the special needs of female veterans.
* Revolving documentation of all major VA-owned gamma cameras (by type) and computer systems, their specifications and ages.
* Revolving data collection for PET/CT cameras owned or leased by VA, and the numbers and types of PET/CT studies performed on VA patients, whether produced on-site, via mobile PET/CT contract, or by non-VA providers in the community.
* Types of educational training/certification programs available at VA sites.
* Ongoing funded research projects by Nuclear Medicine (NM) staff, identified by source of funding and research purpose.
* Data on physician-specific quality indicators at each nuclear medicine service.
* Academic achievements by NM staff, including published books/chapters, journals, and abstracts.
* Information from polling field sites regarding relevant issues and programs Headquarters needs to address.
* Results of a Congressionally mandated contracted quality assessment exercise, also known as a proficiency study. Study results are analyzed for comparison within VA facilities (for example, by mission or size) and against participating private-sector health care groups.
* Information collected on current issues in nuclear medicine as they arise; Radiation Safety Committee structures and membership, Radiation Safety Officer information, and information on nuclear medicine services provided to female Veterans are examples of current issues.
The database is now stored completely within MS Access database tables, with output still presented in the form of Excel graphs and tables.
According to our latest research, the global graph database market size in 2024 stands at USD 2.92 billion, with a robust compound annual growth rate (CAGR) of 21.6% projected from 2025 to 2033. By the end of 2033, the market is expected to reach approximately USD 21.1 billion. The rapid expansion of this market is primarily driven by the rising need for advanced data analytics, real-time big data processing, and the growing adoption of artificial intelligence and machine learning across various industry verticals. As organizations continue to seek innovative solutions to manage complex and interconnected data, the demand for graph database technologies is accelerating at an unprecedented pace.
One of the most significant growth factors for the graph database market is the exponential increase in data complexity and volume. Traditional relational databases often struggle to efficiently handle highly connected data, which is becoming more prevalent in modern business environments. Graph databases excel at managing relationships between data points, making them ideal for applications such as fraud detection, social network analysis, and recommendation engines. The ability to visualize and query data relationships in real-time provides organizations with actionable insights, enabling faster and more informed decision-making. This capability is particularly valuable in sectors like BFSI, healthcare, and e-commerce, where understanding intricate data connections can lead to substantial competitive advantages.
Another key driver fueling market growth is the widespread digital transformation initiatives undertaken by enterprises worldwide. As businesses increasingly migrate to cloud-based infrastructures and adopt advanced analytics tools, the need for scalable and flexible database solutions becomes paramount. Graph databases offer seamless integration with cloud platforms, supporting both on-premises and cloud deployment models. This flexibility allows organizations to efficiently manage growing data workloads while ensuring security and compliance. Additionally, the proliferation of IoT devices and the surge in unstructured data generation further amplify the demand for graph database solutions, as they are uniquely equipped to handle dynamic and heterogeneous data sources.
The integration of artificial intelligence and machine learning with graph databases is also a pivotal growth factor. AI-driven analytics require robust data models capable of uncovering hidden patterns and relationships within vast datasets. Graph databases provide the foundational infrastructure for such applications, enabling advanced features like predictive analytics, anomaly detection, and personalized recommendations. As more organizations invest in AI-powered solutions to enhance customer experiences and operational efficiency, the adoption of graph database technologies is expected to surge. Furthermore, continuous advancements in graph processing algorithms and the emergence of open-source graph database platforms are lowering entry barriers, fostering innovation, and expanding the market’s reach.
From a regional perspective, North America currently dominates the graph database market, owing to the early adoption of advanced technologies and the presence of major industry players. However, the Asia Pacific region is anticipated to witness the highest growth rate over the forecast period, driven by rapid digitalization, increasing investments in IT infrastructure, and the rising demand for data-driven decision-making across emerging economies. Europe also holds a significant share, supported by stringent data privacy regulations and the growing emphasis on innovation across sectors such as finance, healthcare, and manufacturing. As organizations across all regions recognize the value of graph databases in unlocking business insights, the global market is poised for sustained growth.
The graph database market is broadly segmented by component into software and services.
Analyzing sales data is essential for any business looking to make informed decisions and optimize its operations. In this project, we will utilize Microsoft Excel and Power Query to conduct a comprehensive analysis of Superstore sales data. Our primary objectives will be to establish meaningful connections between various data sheets, ensure data quality, and calculate critical metrics such as the Cost of Goods Sold (COGS) and discount values. Below are the key steps and elements of this analysis:
1- Data Import and Transformation:
2- Data Quality Assessment:
3- Calculating COGS:
4- Discount Analysis:
5- Sales Metrics:
6- Visualization:
7- Report Generation:
Throughout this analysis, the goal is to provide a clear and comprehensive understanding of the Superstore's sales performance. By using Excel and Power Query, we can efficiently manage and analyze the data, ensuring that the insights gained contribute to the store's growth and success.
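The COGS and discount steps above can be made concrete with a small sketch. This is an illustration only: the project itself uses Excel and Power Query, and the column names and figures below are hypothetical, not taken from the Superstore data.

# Hypothetical Superstore-style rows; names and values are illustrative only.
orders <- data.frame(
  sales    = c(261.96, 731.94, 14.62),   # revenue after discount
  profit   = c(41.91, 219.58, 6.87),
  discount = c(0.00, 0.00, 0.45)         # discount rate applied
)

# COGS as revenue minus profit, one common definition for retail data
orders$cogs <- orders$sales - orders$profit

# Discount value: amount forgone relative to the undiscounted list price
orders$discount_value <- orders$sales * orders$discount / (1 - orders$discount)

orders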
*** Fake News on Twitter ***
These 5 datasets are the results of an empirical study on the spreading process of newly emerged fake news on Twitter. In particular, we focused on fake news stories that gave rise to a truth-spreading process simultaneously acting against them. The story of each fake news item is as follows:
1- FN1: A Muslim waitress refused to seat a church group at a restaurant, claiming "religious freedom" allowed her to do so.
2- FN2: Actor Denzel Washington said electing President Trump saved the U.S. from becoming an "Orwellian police state."
3- FN3: Joy Behar of "The View" sent a crass tweet about a fatal fire in Trump Tower.
4- FN4: The animated children's program 'VeggieTales' introduced a cannabis character in August 2018.
5- FN5: In September 2018, the University of Alabama football program ended its uniform contract with Nike, in response to Nike's endorsement deal with Colin Kaepernick.
The data collection was done in two stages, each providing a new dataset: 1- attaining the Dataset of Diffusion (DD), which includes information on fake news/truth tweets and retweets; 2- querying the neighbors of tweet spreaders, which provides the Dataset of Graph (DG).
DD
DD for each fake news story is an Excel file, named FNx_DD, where x is the number of the fake news story. Each row belongs to one captured tweet/retweet related to the rumor, and each column presents a specific piece of information about that tweet/retweet. From left to right, the columns contain:
User ID (user who has posted the current tweet/retweet)
The description sentence in the profile of the user who has published the tweet/retweet
The number of tweets/retweets published by the user at the time of posting the current tweet/retweet
Date and time of creation of the account by which the current tweet/retweet has been posted
Language of the tweet/retweet
Number of followers
Number of followings (friends)
Date and time of posting the current tweet/retweet
Number of likes (favorites) the current tweet had acquired before it was crawled
Number of times the current tweet had been retweeted before it was crawled
Whether there is another tweet inside the current tweet/retweet (for example, when the current tweet is a quote, reply, or retweet)
The source (device/OS) from which the current tweet/retweet was posted
Tweet/Retweet ID
Retweet ID (if the post is a retweet then this feature gives the ID of the tweet that is retweeted by the current post)
Quote ID (if the post is a quote then this feature gives the ID of the tweet that is quoted by the current post)
Reply ID (if the post is a reply then this feature gives the ID of the tweet that is replied by the current post)
Frequency of tweet occurrence, i.e., the number of times the current tweet is repeated in the dataset (for example, the number of times a tweet appears in the dataset in the form of retweets posted by others)
State of the tweet, which can be one of the following forms (assigned by agreement between the annotators):
r : The tweet/retweet is a fake news post
a : The tweet/retweet is a truth post
q : The tweet/retweet questions the fake news, neither confirming nor denying it
n : The tweet/retweet is not related to the fake news (it contains query terms related to the rumor but does not refer to the given fake news)
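A minimal sketch of loading one DD file in R, assuming the readxl package, an .xlsx export, and no header row; the column names below are invented labels that follow the order described above, not names taken from the files:

# Sketch only: column names are invented, following the column order above.
library(readxl)

dd <- read_excel("FN1_DD.xlsx", col_names = FALSE)  # FNx_DD naming from the text
names(dd) <- c("user_id", "profile_description", "n_posted", "account_created",
               "language", "followers", "followings", "posted_at",
               "likes", "retweets", "embedded_tweet", "source",
               "tweet_id", "retweet_id", "quote_id", "reply_id",
               "frequency", "state")

table(dd$state)  # counts of the r / a / q / n annotations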
DG
DG for each fake news story contains two files:
A file in graph format (.graph), which includes the information of the graph, such as who is linked to whom. (This file is named FNx_DG.graph, where x is the number of the fake news story.)
A file in JSONL format (.jsonl), which includes the real user IDs of the nodes in the graph file. (This file is named FNx_Labels.jsonl, where x is the number of the fake news story.)
In the graph file, each node is labeled by the order in which it entered the graph. For example, if the node with user ID 12345637 is the first node entered into the graph file, then its label in the graph is 0 and its real ID (12345637) is at row number 1 of the .jsonl file (row number 0 belongs to the column labels); the other node IDs follow in the subsequent rows, one user ID per row. Therefore, to find, for example, the user ID of node 200 (labeled 200 in the graph), look at row number 202 of the .jsonl file.
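A minimal R sketch of this label-to-ID lookup, assuming the jsonlite package and one record per .jsonl row as described; the exact row offset should be verified against the files and the worked example above:

# Sketch only: maps a node label from FNx_DG.graph to its real user ID.
library(jsonlite)

lines <- readLines("FN1_Labels.jsonl")      # FNx_Labels.jsonl naming from the text

# Node labeled k entered the graph (k+1)-th; the +2 skips the column-label row.
# Verify this offset against the worked example above before relying on it.
node_user_id <- function(k) fromJSON(lines[k + 2])

node_user_id(0)    # real user ID of the first node entered into the graph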
The user IDs of spreaders in DG (those who have a post in DD) are available in DD, where extra information about them and their tweets/retweets can be found. The other user IDs in DG are the neighbors of these spreaders and might not exist in DD.
According to our latest research, the global Graph Database Vector Search market size reached USD 2.35 billion in 2024, exhibiting robust growth driven by the increasing demand for advanced data analytics and AI-powered search capabilities. The market is expected to expand at a CAGR of 21.7% during the forecast period, propelling the market size to an anticipated USD 16.8 billion by 2033. This remarkable growth trajectory is primarily fueled by the proliferation of big data, the widespread adoption of AI and machine learning, and the growing necessity for real-time, context-aware search solutions across diverse industry verticals.
One of the primary growth factors for the Graph Database Vector Search market is the exponential increase in unstructured and semi-structured data generated by enterprises worldwide. Organizations are increasingly seeking efficient ways to extract meaningful insights from complex datasets, and graph databases paired with vector search capabilities are emerging as the preferred solution. These technologies enable organizations to model intricate relationships and perform semantic searches with unprecedented speed and accuracy. Additionally, the integration of AI and machine learning algorithms with graph databases is enhancing their ability to deliver context-rich, relevant results, thereby improving decision-making processes and business outcomes.
Another significant driver is the rising adoption of recommendation systems and fraud detection solutions across various sectors, particularly in BFSI, retail, and e-commerce. Graph database vector search platforms excel at identifying patterns, anomalies, and connections that traditional relational databases often miss. This capability is crucial for detecting fraudulent activities, building sophisticated recommendation engines, and powering knowledge graphs that underpin intelligent digital experiences. The growing need for personalized customer engagement and proactive risk mitigation is prompting organizations to invest heavily in these advanced technologies, further accelerating market growth.
Furthermore, the shift towards cloud-based deployment models is catalyzing the adoption of graph database vector search solutions. Cloud platforms offer scalability, flexibility, and cost-effectiveness, making it easier for organizations of all sizes to implement and scale graph-powered applications. The availability of managed services and API-driven architectures is reducing the complexity associated with deployment and maintenance, enabling faster time-to-value. As more enterprises migrate their data infrastructure to the cloud, the demand for cloud-native graph database vector search solutions is expected to surge, driving sustained market expansion.
Geographically, North America currently dominates the Graph Database Vector Search market, owing to its advanced IT infrastructure, high adoption rate of AI-driven technologies, and presence of leading technology vendors. However, rapid digital transformation initiatives across Europe and the Asia Pacific are positioning these regions as high-growth markets. The increasing focus on data-driven decision-making, coupled with supportive regulatory frameworks and government investments in AI and big data analytics, is expected to fuel robust growth in these regions over the forecast period.
The Component segment of the Graph Database Vector Search market is broadly categorized into software and services. The software sub-segment commands the largest share, driven by the relentless innovation in graph database technologies and the integration of advanced vector search functionalities. Organizations are increasingly deploying graph database software to manage complex data relationships, power semantic search, and enhance the performance of AI and machine learning applications. The software market is characterized by the proliferation of both open-source and proprietary solutions, with vendors
Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study; the year, 2010 or 2011, is the year the data were collected; SH indicates data for Science Hub; the date is the date of file preparation). The data in a file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in this file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. Environmental Toxicology and Chemistry 36(10): 2799-2813 (2017).
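A hedged sketch of loading one species file in R, assuming the readxl package; the file name below is hypothetical, and the worksheet layout follows the description above:

# File name is hypothetical; worksheet layout follows the description above.
library(readxl)

readme <- read_excel("ABCD_2010_SH_20120101.xlsx", sheet = 1)       # read-me worksheet
plots  <- read_excel("ABCD_2010_SH_20120101.xlsx", sheet = "data")  # one row per plot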
https://www.cognitivemarketresearch.com/privacy-policy
According to Cognitive Market Research, the global Graph Analytics market size will be USD 2,522 million in 2024 and will expand at a compound annual growth rate (CAGR) of 34.0% from 2024 to 2031.
Market Dynamics of the Graph Analytics Market
Key Drivers for Graph Analytics Market
Increasing Recognition of the Advantages of Graph Databases: One of the main drivers of the Graph Analytics market is the increasing recognition of the advantages of graph databases. Unlike traditional relational databases, graph databases excel at handling complex relationships and interconnected data, making them ideal for use cases such as fraud detection, recommendation engines, and social network analysis. Businesses are leveraging these capabilities to uncover insights and patterns that were previously difficult to detect. The rise of big data and the need for real-time analytics are further driving the adoption of graph databases, as they offer enhanced performance and scalability for large-scale data sets. Additionally, advancements in artificial intelligence and machine learning are amplifying the value of graph databases, enabling more sophisticated data modeling and predictive analytics.
Growing Uptake of Big Data Tools to Drive the Graph Analytics Market's Expansion in the Years Ahead.
Key Restraints for Graph Analytics Market
Limited Awareness and Understanding pose a serious threat to the Graph Analytics industry.
The market also faces significant difficulties related to data security and privacy.
Introduction to the Graph Analytics Market
The Graph Analytics Market is rapidly expanding, driven by the growing need for advanced data analysis techniques in various sectors. Graph analytics leverages graph structures to represent and analyze relationships and dependencies, providing deeper insights than traditional data analysis methods. Key factors propelling this market include the rise of big data, the increasing adoption of artificial intelligence and machine learning, and the demand for real-time data processing. Industries such as finance, healthcare, telecommunications, and retail are major contributors, utilizing graph analytics for fraud detection, personalized recommendations, network optimization, and more. Leading vendors are continually innovating to offer scalable, efficient solutions, incorporating advanced features like graph databases and visualization tools.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
With the user manual provided at the end of the research manuscript, and the Graph Input Data Example.xlsx file as a reference, the user provides all the graph semantic data required to evaluate all the performance criteria for the system. These criteria include the probability that the principal target can be reached, and the costs, elapsed times, and total vulnerability resulting from a penetration attempt by one or more intruders. This performance computation is accurate and efficient, requiring an insignificant amount of computation time. It also resolves all the statistical dependencies and probabilistic uncertainties believed to be an important challenge to a risk manager and his or her analysts. The user enters the graph topological data in this Excel file, thereby creating a topological model.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?:
This data set contains all the experimental raw data, analysis, and source files for the final figures reported in the manuscript "Can calmodulin bind to lipids of the cytosolic leaflet of plasma membranes?". It is divided into five (1-5) zipped folders, named after the technique used to obtain the data. Each of them, where applicable, consists of three different subfolders (raw data, analysed data, final graph). Read below for more details.
1) ConfocalMicroscopy
1a) Raw_Data: the raw images are provided in .dat and .tif formats, divided into folders (first by date, yymmdd, then within the same day by composition). Each folder contains a .txt file reporting the experimental details
1b) GUVs_Statistics
- GUVs_Statistics.txt explains how we generated the bar plot shown in Fig. 1E
1c) Final_Graph
- Figure_1B_1D.png is the image corresponding to figures 1B and 1D
- Figure1E_%ofGUVswithCaMAdsorbptions.csv is the x-y source file of the bar plot shown in figure 1E (% of GUVs that showed adsorption of CaM over the total number of measured GUVs)
- Where_To_Find_Representative_Images.txt states the folders where the raw images chosen for figure 1 can be found
2) FCS
2a) Raw_Data:
- 1_points: .ptu files
- 2_points: .ht3 files
- Raw_Data_Description.docx: which compositions and conditions correspond to which point in the two data sets
2b) Final_Graphs:
- Figure_2A.xlsx contains the x-y source file for figure 2A
2c) Analysis:
- FCS_Fits.xlsx: outcome of the global fitting procedure described in the .docx below (each group of points represents a certain composition and calcium concentration; see Raw_Data_Description.docx in FCS > Raw_Data)
- Notes_for_FCS_Analysis.docx contains a brief description of the analysis of the autocorrelation curves
3) GPLaurdan
3a) Raw Data: all the spectra are stored in folders named by date (yymmdd_lipidcomposition_Laurdan) and are in both .FS and .txt formats
3b) GP calculations: contains all the .xlsx files calculating the GP values from the raw emission and excitation spectra
3c) Final_Graphs
- Data_Processing_For_Fig_2D.csv contains the data processing from the GP values calculated from the spectra to the DeltaGP (GP with CaM minus GP without CaM) reported in fig. 2D
- Figure_2C_2D.xlsx contains the x-y source file for figures 2C and 2D
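The GP calculations in 3b presumably use the standard Laurdan generalized polarization formula, GP = (I440 - I490) / (I440 + I490); the exact emission bands used in these spreadsheets should be checked against the files. A one-line R helper:

# Standard Laurdan GP; the intensities here are hypothetical examples.
gp <- function(I440, I490) (I440 - I490) / (I440 + I490)
gp(1200, 800)  # returns 0.2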
4) LiveCellsImaging
4a) Intensity_Protrusions_vs_Cell_Body:
- contains all the .xlsx files calculating the intensity of the various images, named by date (yymmdd)
- all data from the individual Excel sheets are gathered in another Excel file to create a final graph
4b) Final_Graphs
- Figure_S2B.xlsx contains the x-y source file for the figure S2B
5) LiveCellImaging_Raw_Data: contains some of the images, given in .tif format. They are divided by date (yymmdd), and each date folder contains subfolders named by sample name and ionomycin concentration. Within the subfolders, the images are divided into folders distinguishing the data acquired before and after the ionomycin treatment and the incubation time.
6) The 211124_BioCev_Imaging_1 folder has the .jpg files of the time lapses shown in figs. 1A and S2.
7) 211124_BioCev_Imaging_2 and 8) 211124_BioCev_Imaging_3 contain the images of HeLa cells expressing EGFP-CaM after treatment with ionomycin at 200 nM (A1) and 1 µM (A2), respectively.
9) SPR
9a) Raw Data:
- SPR_Raw_Data.xlsx x/y exported sensorgrams
- the .jpg files exported from the software are also included, named by lipid composition
9b) Final_Graph:
- Fig.2B.xlsx contains the x-y source file for the figure 2B
9c) Analysis
- SPR_Analysis.xlsx: an Excel file documenting step by step (sheet by sheet) how we processed the raw data to obtain the final figure (details explained in the .docx below)
- Analysis of SPR data_notes.docx: a read-me with the detailed explanation
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
https://fred.stlouisfed.org/legal/#copyright-pre-approval
Graph and download economic data for Dow Jones Industrial Average (DJIA) from 2015-08-03 to 2025-08-01 about stock market, average, industry, and USA.
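The same series can also be pulled programmatically in R (a sketch assuming the quantmod package; the series ID DJIA comes from the entry above):

# Fetch the DJIA series from FRED and plot it.
library(quantmod)
djia <- getSymbols("DJIA", src = "FRED", auto.assign = FALSE)  # returns an xts object
plot(djia, main = "Dow Jones Industrial Average (FRED: DJIA)")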