59 datasets found
  1. dataset + target vs output

    • explore.openaire.eu
    Updated Dec 16, 2018
    Cite
    Developer (2018). dataset + target vs output [Dataset]. http://doi.org/10.5281/zenodo.2336579
    Explore at:
    Dataset updated
    Dec 16, 2018
    Authors
    Developer
    Description

    210 data points. Meaning of each Excel sheet:
    IN - input variable values for each data point (each data point is one row)
    TARGET - target variable values for each data point (each data point is one row)
    VARS - presents the units used for each input (independent) and output/target (dependent) variable
    TARGET vs OUTPUT - presents the 210 expected (experimental) values and the ones obtained by the proposed ANN
    Check the reference below (to be added when the paper is published): https://www.researchgate.net/publication/329932699_Neural_Networks_-_Shear_Strength_-_Corrugated_Web_Girders
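
    A hedged sketch of reading the sheets described above in R (the local file name and the TARGET/OUTPUT column names are assumptions; the sheet names come from the description):

    library(readxl)

    path    <- "dataset_target_vs_output.xlsx"   # hypothetical local file name
    inputs  <- read_excel(path, sheet = "IN")
    targets <- read_excel(path, sheet = "TARGET")
    tvo     <- read_excel(path, sheet = "TARGET vs OUTPUT")

    # assumed column names; RMSE between experimental values and ANN outputs
    rmse <- sqrt(mean((tvo$TARGET - tvo$OUTPUT)^2))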

  2. Graph Input Data Example.xlsx

    • figshare.com
    xlsx
    Updated Dec 26, 2018
    Cite
    Dr Corynen (2018). Graph Input Data Example.xlsx [Dataset]. http://doi.org/10.6084/m9.figshare.7506209.v1
    Explore at:
    xlsx
    Dataset updated
    Dec 26, 2018
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Dr Corynen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The various performance criteria applied in this analysis include the probability of reaching the ultimate target, the costs, elapsed times and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic and statistical data entered by a user, and required for the evaluation of the criteria. It also reports the results of all the computations.

  3. Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

    • dataverse.harvard.edu
    Updated Jul 8, 2024
    Cite
    Georgios Boumis; Brad Peter (2024). Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends [Dataset]. http://doi.org/10.7910/DVN/ZZDYM9
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Georgios Boumis; Brad Peter
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends

    TSMx is an R script that was developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds with the temporal range 2010–2019 and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart: that cell represents the slope for the temporal range 2004–2014. This publication entry also includes an Excel template that produces the same visualizations without a need to interact with any code, though minor modifications will need to be made to accommodate year ranges other than what is provided.

    TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624

    [Figure: TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).]
    TSMx R script:

    # import packages
    library(dplyr)
    library(readr)
    library(ggplot2)
    library(tibble)
    library(tidyr)
    library(forcats)
    library(Kendall)

    options(warn = -1) # disable warnings

    # read data (.csv file with "Year" and "Value" columns)
    data <- read_csv("EVI.csv")

    # prepare row/column names for output matrices
    years <- data %>% pull("Year")
    r.names <- years[-length(years)]
    c.names <- years[-1]
    years <- years[-length(years)]

    # initialize output matrices
    sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
    pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
    slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

    # function to return remaining years given a start year
    getRemain <- function(start.year) {
      years <- data %>% pull("Year")
      start.ind <- which(data[["Year"]] == start.year) + 1
      remain <- years[start.ind:length(years)]
      return(remain)
    }

    # function to subset data for a start/end year combination
    splitData <- function(end.year, start.year) {
      keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
      batch <- data[keep, ]
      return(batch)
    }

    # function to fit linear regression and return slope direction
    fitReg <- function(batch) {
      trend <- lm(Value ~ Year, data = batch)
      slope <- coefficients(trend)[[2]]
      return(sign(slope))
    }

    # function to fit linear regression and return slope magnitude
    fitRegv2 <- function(batch) {
      trend <- lm(Value ~ Year, data = batch)
      slope <- coefficients(trend)[[2]]
      return(slope)
    }

    # function to implement Mann-Kendall (MK) trend test and return significance
    # the test is implemented only for n >= 8
    getMann <- function(batch) {
      if (nrow(batch) >= 8) {
        mk <- MannKendall(batch[['Value']])
        pval <- mk[['sl']]
      } else {
        pval <- NA
      }
      return(pval)
    }

    # function to return slope direction for all combinations given a start year
    getSign <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      signs <- lapply(combs, fitReg)
      return(signs)
    }

    # function to return MK significance for all combinations given a start year
    getPval <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      pvals <- lapply(combs, getMann)
      return(pvals)
    }

    # function to return slope magnitude for all combinations given a start year
    getMagn <- function(start.year) {
      remaining <- getRemain(start.year)
      combs <- lapply(remaining, splitData, start.year = start.year)
      magns <- lapply(combs, fitRegv2)
      return(magns)
    }

    # retrieve slope direction, MK significance, and slope magnitude
    signs <- lapply(years, getSign)
    pvals <- lapply(years, getPval)
    magns <- lapply(years, getMagn)

    # fill-in output matrices
    dimension <- nrow(sign.matrix)
    for (i in 1:dimension) {
      sign.matrix[i, i:dimension] <- unlist(signs[i])
      pval.matrix[i, i:dimension] <- unlist(pvals[i])
      slope.matrix[i, i:dimension] <- unlist(magns[i])
    }
    sign.matrix <-...
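
    A minimal way to exercise the script above (a sketch: the EVI.csv name comes from the script itself, while the toy values and the TSMx.R file name are illustrative assumptions):

    library(readr)

    # toy annual series in the two-column format the script expects
    set.seed(1)
    toy <- data.frame(Year  = 2001:2019,
                      Value = 0.30 + 0.005 * (0:18) + rnorm(19, sd = 0.01))
    write_csv(toy, "EVI.csv")

    source("TSMx.R")   # hypothetical local name for the script above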

  4. Data from: Coral Point Count (CPCe) summary data by transect, West Hawaii, 2010-2011

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Coral Point Count (CPCe) summary data by transect, West Hawaii, 2010-2011 [Dataset]. https://catalog.data.gov/dataset/coral-point-count-cpce-summary-data-by-transect-west-hawaii-2010-2011
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Hawaii
    Description

    Coral Point Count with Excel extensions (CPCe; Kohler and Gill, 2006) was used to help calculate percent of coral cover or other benthic substrates from a randomly selected subset of seafloor photographs collected on the west Hawaii Island coast.

  5. Excel file with explanations of the columns in rows 1–2.

    • plos.figshare.com
    xlsx
    Updated Jun 3, 2023
    Cite
    Norbert Brunner; Manfred Kühleitner; Katharina Renner-Martin (2023). Excel file with explanations of the columns in rows 1–2. [Dataset]. http://doi.org/10.1371/journal.pone.0250515.s001
    Explore at:
    xlsx
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Norbert Brunner; Manfred Kühleitner; Katharina Renner-Martin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Columns A-D provide the bird IDs together with general information, columns E-Q provide the brood-specific and environmental data for each nestling (and nest), columns R-Y list the bird mass at days 1 to 15 (odd days only), and columns AA-AH provide for each bird the best-fit parameters of its five-parameter BP-model together with additional information about the model. (XLSX)
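
    A hedged sketch of pulling these column blocks into R (the file name is hypothetical; the column letters follow the description above, using readxl's cell_cols helper):

    library(readxl)

    path <- "bird_growth.xlsx"                           # hypothetical file name
    info <- read_excel(path, range = cell_cols("A:D"))   # bird IDs, general info
    mass <- read_excel(path, range = cell_cols("R:Y"))   # mass at days 1, 3, ..., 15
    fits <- read_excel(path, range = cell_cols("AA:AH")) # BP-model best-fit parameters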

  6. Millions of POI locations for 2000+ companies updated weekly

    • datarade.ai
    Updated Apr 10, 2023
    Cite
    Scrapehero (2023). Millions of POI locations for 2000+ companies updated weekly [Dataset]. https://datarade.ai/data-products/millions-of-poi-locations-for-1600-companies-updated-weekly-scrapehero
    Explore at:
    .json, .csv, .xls, .txt
    Dataset updated
    Apr 10, 2023
    Dataset authored and provided by
    Scrapehero
    Area covered
    United States
    Description

    https://store.scrapehero.com

    Location Data: high-quality locations for retail stores, hotels, car dealers, gas stations, and other points of interest. Download millions of accurate, verified, updated, and affordable Points of Interest (POI) locations instantly as an Excel spreadsheet.

    Accurate The datasets we provide undergo at least 10 stringent quality checks before we publish them

    Updated We have one of the best update cycles in the industry. Most datasets are updated weekly

    Instant Download the data instantly from our website for immediate use in your projects or use it as an API feed to integrate it with your applications

    We have the best pricing online. Shop around; we are certain you will not find better pricing for the latest location data anywhere online. We also offer subscriptions for unlimited, weekly-updated data.

  7. ckanext-excelforms

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-excelforms [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-excelforms
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    The excelforms extension for CKAN provides a mechanism for users to input data into Table Designer tables using Excel-based forms, enhancing data entry efficiency. This extension focuses on streamlining the process of adding data rows to tables within CKAN's Table Designer. A key component of the functionality is the ability to import multiple rows in a single operation, which significantly reduces the overhead associated with entering multiple data points.

    Key Features:
    Excel-Based Forms: users can enter data using familiar Excel spreadsheets, leveraging their existing skills and software.
    Table Designer Integration: designed to work seamlessly with CKAN's Table Designer, extending its functionality to include Excel-based data entry.
    Multiple Row Import: supports importing multiple rows of data at once, improving data entry efficiency, especially when dealing with large datasets.
    Data Mapping: simplifies the process of aligning Excel column headers to their corresponding data fields in tables.
    Improved Data Entry Speed: provides an alternative to manual data entry, resulting in faster table population and easier updates.

    Technical Integration: the excelforms extension integrates with CKAN by introducing new functionalities and workflows around the Table Designer plugin. The installation instructions specify that this plugin must be added before the tabledesigner plugin, as shown in the sketch below.

    Benefits & Impact: by enabling Excel-based data entry, the excelforms extension improves the user experience for those familiar with spreadsheet software. The ability to import multiple rows simultaneously significantly reduces the time and effort required to populate tables, particularly when dealing with large amounts of data. The impact is better data accessibility through the streamlining of data population workflows.
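
    A minimal sketch of the enablement step, assuming a standard CKAN ini configuration; the ordering (excelforms before tabledesigner) follows the installation note above:

    # in ckan.ini, list excelforms before the tabledesigner plugin
    # (other enabled plugins omitted from this sketch)
    ckan.plugins = excelforms tabledesigner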

  8. Full Excel model: Life-cycle environmental impacts of food & drink products

    • ora.ox.ac.uk
    sheet
    Updated Jan 1, 2018
    Cite
    Poore, J (2018). Full Excel model: Life-cycle environmental impacts of food & drink products [Dataset]. http://doi.org/10.5287/bodleian:0z9MYbMyZ
    Explore at:
    sheet (18266646)
    Dataset updated
    Jan 1, 2018
    Dataset provided by
    University of Oxford
    Authors
    Poore, J
    License

    https://ora.ox.ac.uk/terms_of_use

    Description

    Full Excel model providing life-cycle impacts of food and drink products. Contains all original inventory data and mid-point impact data, remodelling assumptions, and final standardised results. Requires Microsoft Excel 2007 or later to use.

  9. FOI-01727 - Datasets - Open Data Portal

    • opendata.nhsbsa.net
    Updated Feb 29, 2024
    + more versions
    Cite
    nhsbsa.net (2024). FOI-01727 - Datasets - Open Data Portal [Dataset]. https://opendata.nhsbsa.net/dataset/foi-01727
    Explore at:
    Dataset updated
    Feb 29, 2024
    Dataset provided by
    NHS Business Services Authority
    Description

    The number of COVID vaccinations carried out and payments made for these vaccinations to individual pharmacies, listed by their ODS code and with full postal address details. Could you provide the data for the month of January 2024 in EXCEL format please. The data should be:
    Column 1 - Administration Month
    Column 2 - ODS Code
    Column 3 - Pharmacy Name
    Column 4 - Pharmacy Trading Name
    Column 5 - Pharmacy Address
    Column 6 - Pharmacy Post Code
    Column 7 - Number of Vaccinations Claimed
    Column 8 - Number of Vaccinations Paid
    Column 9 - Payment Amount GB

    Response

    A copy of the information is attached. The NHSBSA calculates payments for Covid-19 vaccinations to Pharmacies and Primary Care Network (PCN) providers in England. Covid-19 vaccination data is keyed in via Point of Care (POC) systems and transferred to the NHSBSA Manage Your Service (MYS) application. Each month, vaccine providers submit claims to request payment based on the data that has been transferred into MYS. To be paid in a timely fashion, such claims must be submitted during a specified declaration submission period. Should claims be submitted outside of the submission period, they will be processed in the following period. This means that in some cases there is a difference between the number of vaccines that have been 'claimed' and the number that have been 'paid'. Both the number of 'claimed' and 'paid' vaccinations have been reported in this request.

    When considering the nature of the vaccine data, there are several ways it can be reported over time:
    Administration Month - the month in which the vaccine was administered to the patient.
    Payment Month - the month in which the payment was made to the vaccine dispenser. Note that all payments for Pharmacies are paid one month later than those for PCN providers.
    Keying Month - the month in which the vaccine record first appeared on the MYS system.
    Submission/Claim Month - the month in which the claim for payment for a vaccination occurred.

    For example, suppose that a PCN patient is given a Covid-19 vaccination dose 1 in January (Administration Month) and then the paper record of this is misplaced for a while. The record is found and keyed into a POC system during February (Keying Month). The provider is allowed to claim for keying during February in the first five days of March, but they're slightly late and authorise the claim on 7 March (Submission Month). As the claim is outside the submission window, it is not paid in March; it will instead be processed during April (Payment Month). Another example: a Pharmacy patient is given a Covid-19 vaccination dose 1 in January (Administration Month), the record is keyed in January (Keying Month) and submitted in February (Submission Month); payments are calculated in February, but as this is a pharmacy, the payment is held back and not paid until March (Payment Month).

    For the purposes of this request, we have chosen to report by Administration Month. Data included in this request is limited to vaccinations carried out by Pharmacies only. Data included in this request is also limited to vaccinations administered in January 2024. The latest data used is a snapshot of the MYS system data that was taken on 6 February 2024. This is the snapshot of data taken after the January 2024 submission period that was used to calculate payments. Pharmacy name and address are as held at this date.
    This payment data does not include any adjustments made by NHSBSA Provider Assurance as part of post-payment verification exercises. These adjustments are made at account level and may relate to several months of activity. Payment data includes payments made and those scheduled for payment in the future. Payments comprise an Item of Service (IoS) fee and potentially a supplementary fee. Payments do not relate to the value of the drugs dispensed. The total used for the payment calculation may not match the totals shown in 'live' POC systems or MYS, which continue to receive updates after the snapshot used to calculate payments was taken. Vaccination records are limited to those which have been associated with a declaration submission. This may include late submission declarations received after the deadline; such records are not processed until the next month. Please note that some vaccinations attract a supplementary fee, so it is not possible to determine the number of vaccinations by dividing the total paid by the basic IoS fee. It is possible for new records from old administration months to be entered in the future, thus the totals here for each administration month could change when more data is processed. Please note that this request and our response is published on our Freedom of Information disclosure log at: https://opendata.nhsbsa.net/dataset/foi-01727

  10. 5 Point Likert Scale Primary Data for Drivers of Institutional pressure, Product Stewardship and the Adoption Propensity of Green ICT in Malaysia

    • data.mendeley.com
    Updated Dec 4, 2019
    Cite
    Kazi Sirajul Islam (2019). 5 Point Likert Scale Primary Data for Drivers of Institutional pressure, Product Stewardship and the Adoption Propensity of Green ICT in Malaysia [Dataset]. http://doi.org/10.17632/wggvryfhsk.1
    Explore at:
    Dataset updated
    Dec 4, 2019
    Authors
    Kazi Sirajul Islam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Malaysia
    Description

    The data have been collected from survey questionnaires based on a 5-point Likert scale and entered into the SPSS software for screening. Later, these data were analysed with PLS software. The data consist of drivers of Institutional pressure, namely Coercive pressure, Mimetic pressure and Normative pressure, along with the variables Product Stewardship and the Adoption Propensity of Green ICT (Information Communication Technology) in Malaysia. Results show that, out of the three (3) drivers, only Normative Pressure is significant towards the Adoption Propensity of Green ICT in Malaysia. Product Stewardship is also significant towards the Adoption Propensity of Green ICT in Malaysia. Thus, the findings give policy makers a distinct direction regarding the right factors to consider in order to develop a Green ICT industry in Malaysia.

  11. Graph Database Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 22, 2024
    Cite
    Dataintelo (2024). Graph Database Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-graph-database-market
    Explore at:
    pptx, pdf, csv
    Dataset updated
    Sep 22, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Graph Database Market Outlook



    The global graph database market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 8.5 billion by 2032, growing at a CAGR of 21.2% from 2024 to 2032. The substantial growth of this market is driven primarily by increasing data complexity, advancements in data analytics technologies, and the rising need for more efficient database management systems.



    One of the primary growth factors for the graph database market is the exponential increase in data generation. As organizations generate vast amounts of data from various sources such as social media, e-commerce platforms, and IoT devices, the need for sophisticated data management and analysis tools becomes paramount. Traditional relational databases struggle to handle the complexity and interconnectivity of this data, leading to a shift towards graph databases which excel in managing such intricate relationships.



    Another significant driver is the growing adoption of artificial intelligence (AI) and machine learning (ML) technologies. These technologies rely heavily on connected data for predictive analytics and decision-making processes. Graph databases, with their inherent ability to model relationships between data points effectively, provide a robust foundation for AI and ML applications. This synergy between AI/ML and graph databases further accelerates market growth.



    Additionally, the increasing prevalence of personalized customer experiences across industries like retail, finance, and healthcare is fueling demand for graph databases. Businesses are leveraging graph databases to analyze customer behaviors, preferences, and interactions in real-time, enabling them to offer tailored recommendations and services. This enhanced customer experience translates to higher customer satisfaction and retention, driving further adoption of graph databases.



    From a regional perspective, North America currently holds the largest market share due to early adoption of advanced technologies and the presence of key market players. However, significant growth is also anticipated in the Asia-Pacific region, driven by rapid digital transformation, increasing investments in IT infrastructure, and growing awareness of the benefits of graph databases. Europe is also expected to witness steady growth, supported by stringent data management regulations and a strong focus on data privacy and security.



    Component Analysis



    The graph database market can be segmented into two primary components: software and services. The software segment holds the largest market share, driven by extensive adoption across various industries. Graph database software is designed to create, manage, and query graph databases, offering features such as scalability, high performance, and efficient handling of complex data relationships. The growth in this segment is propelled by continuous advancements and innovations in graph database technologies. Companies are increasingly investing in research and development to enhance the capabilities of their graph database software products, catering to the evolving needs of their customers.



    On the other hand, the services segment is also witnessing substantial growth. This segment includes consulting, implementation, and support services provided by vendors to help organizations effectively deploy and manage graph databases. As businesses recognize the benefits of graph databases, the demand for expert services to ensure successful implementation and integration into existing systems is rising. Additionally, ongoing support and maintenance services are crucial for the smooth operation of graph databases, driving further growth in this segment.



    The increasing complexity of data and the need for specialized expertise to manage and analyze it effectively are key factors contributing to the growth of the services segment. Organizations often lack the in-house skills required to harness the full potential of graph databases, prompting them to seek external assistance. This trend is particularly evident in large enterprises, where the scale and complexity of data necessitate robust support services.



    Moreover, the services segment is benefiting from the growing trend of outsourcing IT functions. Many organizations are opting to outsource their database management needs to specialized service providers, allowing them to focus on their core business activities. This shift towards outsourcing is further bolstering the demand for graph database services, driving market growth.



  12. PC-Urban Outdoordataset for 3D Point Cloud semantic segmentation

    • researchdata.edu.au
    Updated 2021
    + more versions
    Cite
    Ajmal Mian; Micheal Wise; Naveed Akhtar; Muhammad Ibrahim; Computer Science and Software Engineering (2021). PC-Urban Outdoordataset for 3D Point Cloud semantic segmentation [Dataset]. http://doi.org/10.21227/FVQD-K603
    Explore at:
    Dataset updated
    2021
    Dataset provided by
    IEEE DataPort
    The University of Western Australia
    Authors
    Ajmal Mian; Micheal Wise; Naveed Akhtar; Muhammad Ibrahim; Computer Science and Software Engineering
    Description

    The proposed dataset, termed PC-Urban (Urban Point Cloud), is captured with an Ouster LiDAR sensor with 64 channels. The sensor is installed on an SUV that drives through the downtown of Perth, Western Australia (WA), Australia. The dataset comprises over 4.3 billion points captured over 66K sensor frames. The labelled data is organized as registered and raw point cloud frames, where the former has a different number of registered consecutive frames. We provide 25 class labels in the dataset covering 23 million points and 5K instances. Labelling is performed with PC-Annotate and can easily be extended by end-users employing the same tool.

    The data is organized into unlabelled and labelled 3D point clouds. The unlabelled data is provided in .PCAP file format, which is the direct output format of the Ouster LiDAR sensor. Raw frames are extracted from the recorded .PCAP files in the form of Ply and Excel files using the Ouster Studio software. Labelled 3D point cloud data consists of registered or raw point clouds. A labelled point cloud is a combination of Ply, Excel, Labels and Summary files. A point cloud in a Ply file contains X, Y, Z values along with color information. An Excel file contains the X, Y, Z values, Intensity, Reflectivity, Ring, Noise, and Range of each point. These attributes can be useful in semantic segmentation using deep learning algorithms. The Label and Label Summary files have been explained in the previous section.

    One GB of raw data contains nearly 1,300 raw frames, whereas 66,425 frames are provided in the dataset, each comprising 65,536 points; hence, 4.3 billion points captured with the Ouster LiDAR sensor are provided. Annotation of 25 general outdoor classes is provided, which include car, building, bridge, tree, road, letterbox, traffic signal, light-pole, rubbish bin, cycles, motorcycle, truck, bus, bushes, road sign board, advertising board, road divider, road lane, pedestrians, side-path, wall, bus stop, water, zebra-crossing, and background. With the released data, a total of 143 scenes are annotated, which include both raw and registered frames.

  13. Dairy Supply Chain Sales Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    + more versions
    Cite
    Panagiotis Sarigiannidis (2024). Dairy Supply Chain Sales Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7853252
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Panagiotis Sarigiannidis
    Dimitrios Pliatsios
    Vasileios Argyriou
    Konstantinos Georgakidis
    Anna Triantafyllou
    Athanasios Liatifis
    Christos Chaschatzis
    Ilias Siniosoglou
    Dimitris Iatropoulos
    Thomas Lagkas
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    1. Introduction

    Sales data collection is a crucial aspect of any manufacturing industry as it provides valuable insights about the performance of products, customer behaviour, and market trends. By gathering and analysing this data, manufacturers can make informed decisions about product development, pricing, and marketing strategies in Internet of Things (IoT) business environments like the dairy supply chain.

    One of the most important benefits of the sales data collection process is that it allows manufacturers to identify their most successful products and target their efforts towards those areas. For example, if a manufacturer notices that a particular product is selling well in a certain region, this information could be utilised to develop new products or improve existing ones, and to optimise the supply chain to meet the changing needs of customers.

    This dataset includes information about 7 of MEVGAL’s products [1]. According to the above information the data published will help researchers to understand the dynamics of the dairy market and its consumption patterns, which is creating the fertile ground for synergies between academia and industry and eventually help the industry in making informed decisions regarding product development, pricing and market strategies in the IoT playground. The use of this dataset could also aim to understand the impact of various external factors on the dairy market such as the economic, environmental, and technological factors. It could help in understanding the current state of the dairy industry and identifying potential opportunities for growth and development.

    2. Citation

    Please cite the following papers when using this dataset:

    I. Siniosoglou, K. Xouveroudis, V. Argyriou, T. Lagkas, S. K. Goudos, K. E. Psannis and P. Sarigiannidis, "Evaluating the Effect of Volatile Federated Timeseries on Modern DNNs: Attention over Long/Short Memory," in the 12th International Conference on Circuits and Systems Technologies (MOCAST 2023), April 2023, Accepted

    3. Dataset Modalities

    The dataset includes data regarding the daily sales of a series of dairy product codes offered by MEVGAL. In particular, the dataset includes information gathered by the logistics division and agencies within the industrial infrastructures overseeing the production of each product code. The products included in this dataset represent the daily sales and logistics of a variety of yogurt-based stock. Each of the different files includes the logistics for that product on a daily basis for three years, from 2020 to 2022.

    3.1 Data Collection

    The process of building this dataset involves several steps to ensure that the data is accurate, comprehensive and relevant.

    The first step is to determine the specific data that is needed to support the business objectives of the industry, i.e., in this publication’s case the daily sales data.

    Once the data requirements have been identified, the next step is to implement an effective sales data collection method. In MEVGAL’s case this is conducted through direct communication and reports generated each day by representatives & selling points.

    It is also important for MEVGAL to ensure that the data collection process is conducted in an ethical and compliant manner, adhering to data privacy laws and regulations. The industry also has a data management plan in place to ensure that the data is securely stored and protected from unauthorised access.

    The published dataset consists of 13 features providing information about the date and the number of products that have been sold. Finally, the dataset was anonymised in consideration of the privacy requirements of the data owner (MEVGAL).

    File                  Period                  Number of Samples (days)
    product 1 2020.xlsx   01/01/2020–31/12/2020   363
    product 1 2021.xlsx   01/01/2021–31/12/2021   364
    product 1 2022.xlsx   01/01/2022–31/12/2022   365
    product 2 2020.xlsx   01/01/2020–31/12/2020   363
    product 2 2021.xlsx   01/01/2021–31/12/2021   364
    product 2 2022.xlsx   01/01/2022–31/12/2022   365
    product 3 2020.xlsx   01/01/2020–31/12/2020   363
    product 3 2021.xlsx   01/01/2021–31/12/2021   364
    product 3 2022.xlsx   01/01/2022–31/12/2022   365
    product 4 2020.xlsx   01/01/2020–31/12/2020   363
    product 4 2021.xlsx   01/01/2021–31/12/2021   364
    product 4 2022.xlsx   01/01/2022–31/12/2022   364
    product 5 2020.xlsx   01/01/2020–31/12/2020   363
    product 5 2021.xlsx   01/01/2021–31/12/2021   364
    product 5 2022.xlsx   01/01/2022–31/12/2022   365
    product 6 2020.xlsx   01/01/2020–31/12/2020   362
    product 6 2021.xlsx   01/01/2021–31/12/2021   364
    product 6 2022.xlsx   01/01/2022–31/12/2022   365
    product 7 2020.xlsx   01/01/2020–31/12/2020   362
    product 7 2021.xlsx   01/01/2021–31/12/2021   364
    product 7 2022.xlsx   01/01/2022–31/12/2022   365

    3.2 Dataset Overview

    The following table enumerates and explains the features included across all of the included files.

    Feature

    Description

    Unit

    Day

    day of the month

    -

    Month

    Month

    -

    Year

    Year

    -

    daily_unit_sales

    Daily sales - the amount of products, measured in units, that during that specific day were sold

    units

    previous_year_daily_unit_sales

    Previous Year’s sales - the amount of products, measured in units, that during that specific day were sold the previous year

    units

    percentage_difference_daily_unit_sales

    The percentage difference between the two above values

    %

    daily_unit_sales_kg

    The amount of products, measured in kilograms, that during that specific day were sold

    kg

    previous_year_daily_unit_sales_kg

    Previous Year’s sales - the amount of products, measured in kilograms, that during that specific day were sold, the previous year

    kg

    percentage_difference_daily_unit_sales_kg

    The percentage difference between the two above values

    kg

    daily_unit_returns_kg

    The percentage of the products that were shipped to selling points and were returned

    %

    previous_year_daily_unit_returns_kg

    The percentage of the products that were shipped to selling points and were returned the previous year

    %

    points_of_distribution

    The amount of sales representatives through which the product was sold to the market for this year

    previous_year_points_of_distribution

    The amount of sales representatives through which the product was sold to the market for the same day for the previous year

    Table 1 – Dataset Feature Description
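
    As a hedged illustration of how these features relate, one of the product files can be loaded and the percentage-difference column recomputed in R (readxl/dplyr and the exact column spellings inside the files are assumptions; the file name follows the table above):

    library(readxl)
    library(dplyr)

    sales <- read_excel("product 1 2020.xlsx")

    # recompute the year-over-year percentage difference described in Table 1
    sales <- sales %>%
      mutate(pct_diff_check =
               100 * (daily_unit_sales - previous_year_daily_unit_sales) /
               previous_year_daily_unit_sales)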

    4. Structure and Format

    4.1 Dataset Structure

    The provided dataset has the following structure:

    Name                  Type        Property
    Readme.docx           Report      A file that contains the documentation of the dataset.
    product X             Folder      A folder containing the data of product X.
    product X YYYY.xlsx   Data file   An Excel file containing the sales data of product X for year YYYY.

    Table 2 - Dataset File Description

    5. Acknowledgement

    This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 957406 (TERMINET).

    References

    [1] MEVGAL is a Greek dairy production company

  14. Manual snow course observations, raw met data, raw snow depth observations, locations, and associated metadata for Oregon sites

    • catalog.data.gov
    • datadiscoverystudio.org
    • +1 more
    Updated Jun 15, 2024
    Cite
    Climate Adaptation Science Centers (2024). Manual snow course observations, raw met data, raw snow depth observations, locations, and associated metadata for Oregon sites [Dataset]. https://catalog.data.gov/dataset/manual-snow-course-observations-raw-met-data-raw-snow-depth-observations-locations-and-ass
    Explore at:
    Dataset updated
    Jun 15, 2024
    Dataset provided by
    Climate Adaptation Science Centers
    Area covered
    Oregon
    Description

    OSU_SnowCourse Summary: Manual snow course observations were collected over WY 2012-2014 from four paired forest-open sites chosen to span a broad elevation range. Study sites were located in the upper McKenzie (McK) River watershed, approximately 100 km east of Corvallis, Oregon, on the western slope of the Cascade Range, and in the Middle Fork Willamette (MFW) watershed, located to the south of the McKenzie. The sites were designated based on elevation, with a range of 1110-1480 m. Distributed snow depth and snow water equivalent (SWE) observations were collected via monthly manual snow courses from 1 November through 1 April and bi-weekly thereafter. Snow courses spanned 500 m of forested terrain and 500 m of adjacent open terrain. Snow depth observations were collected approximately every 10 m, and SWE was measured every 100 m along the snow courses with a federal snow sampler. These data are raw observations and have not been quality controlled in any way. Distance along the transect was estimated in the field.

    OSU_SnowDepth Summary: 10-minute snow depth observations collected at OSU meteorological (met) stations in the upper McKenzie River watershed and the Middle Fork Willamette watershed during Water Years 2012-2014. Each meteorological tower was deployed to represent either a forested or an open area at a particular site, and generally the locations were paired, with a meteorological station deployed in the forest and in the open area at a single site. These data were collected in conjunction with manual snow course observations, and the meteorological stations were located in the approximate center of each forest or open snow course transect. These data have undergone basic quality control. See manufacturer specifications for individual instruments to determine sensor accuracy. This file was compiled from individual raw data files (named "RawData.txt" within each site and year directory) provided by OSU, along with metadata of site attributes. We converted the Excel-based timestamp (seconds since origin) to a date, changed the NaN flags for missing data to NA, and added site attributes such as site name and cover. Snow depth values in the raw data are negative (flipped, with some correction to use the height of the sensor as zero), so positive values indicate errors; we first replaced positive values with NA. Second, the sign of the data was switched to make depths positive. Then the smooth.m (MATLAB) function was used to roughly smooth the data, with a moving window of 50 points. Third, outliers were removed: all values higher than the smoothed values + 10 were replaced with NA. In some cases, further single-point outliers were removed.

    OSU_Met Summary: Raw, 10-minute meteorological observations collected at OSU met stations in the upper McKenzie River watershed and the Middle Fork Willamette watershed during Water Years 2012-2014. Each meteorological tower was deployed to represent either a forested or an open area at a particular site, and generally the locations were paired, with a meteorological station deployed in the forest and in the open area at a single site. These data were collected in conjunction with manual snow course observations, and the meteorological stations were located in the approximate center of each forest or open snow course transect. These stations were deployed to collect numerous meteorological variables, of which snow depth and wind speed are included here. These data are raw datalogger output and have not been quality controlled in any way. See manufacturer specifications for individual instruments to determine sensor accuracy. This file was compiled from individual raw data files (named "RawData.txt" within each site and year directory) provided by OSU, along with metadata of site attributes. We converted the Excel-based timestamp (seconds since origin) to a date, changed the NaN and 7999 flags for missing data to NA, and added site attributes such as site name and cover.

    OSU_Location Summary: Location metadata for manual snow course observations and meteorological sensors. These data are compiled from GPS data for which the horizontal accuracy is unknown, and from processed hemispherical photographs. They have not been quality controlled in any way.
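
    The snow depth cleaning steps described above (dropping invalid positive raw values, flipping the sign, a 50-point moving-window smooth, and the +10 outlier rule) can be sketched in R; the original workflow used MATLAB's smooth.m, so zoo::rollapply here is an approximation:

    library(zoo)

    qc_snow_depth <- function(raw) {
      raw[raw > 0] <- NA              # positive raw values are invalid
      depth <- -raw                   # flip sign so depths are positive
      # roughly smooth with a 50-point moving window (smooth.m analogue)
      smoothed <- rollapply(depth, width = 50, FUN = mean,
                            na.rm = TRUE, fill = NA)
      # outlier rule: anything more than 10 above the smoothed series
      depth[!is.na(smoothed) & !is.na(depth) & depth > smoothed + 10] <- NA
      depth
    }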

  15. Data used in the book "Analyzing Wimbledon - The Power of Statistics"

    • uvaauas.figshare.com
    xlsx
    Updated Feb 2, 2023
    Cite
    F. Klaassen; Jan R. Magnus (2023). Data used in the book "Analyzing Wimbledon - The Power of Statistics" [Dataset]. http://doi.org/10.21942/uva.21983555.v3
    Explore at:
    xlsx
    Dataset updated
    Feb 2, 2023
    Dataset provided by
    University of Amsterdam / Amsterdam University of Applied Sciences
    Authors
    F. Klaassen; Jan R. Magnus
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Excel file contains:

    Point-by-point data of singles matches at Wimbledon 1992-1995: 256 men's matches with 59,466 points, and 223 women's matches with 29,417 points;
    Match-level data of the same matches;
    Point-by-point data of three famous recent matches: Federer-Nadal, Clijsters-Williams, and Djokovic-Nadal.

  16. Point Bathymetry

    • nio-ne-pene-hub-srrb.hub.arcgis.com
    Updated Nov 24, 2021
    Cite
    Sahtu Renewable Resources Board (2021). Point Bathymetry [Dataset]. https://nio-ne-pene-hub-srrb.hub.arcgis.com/datasets/point-bathymetry
    Explore at:
    Dataset updated
    Nov 24, 2021
    Dataset authored and provided by
    Sahtu Renewable Resources Board
    Description

    The feature class comprises points and data generated/translated from an Excel spreadsheet containing bathymetric information from various Land & Water Board public registries. Additional water licences were identified, reviewed, and incorporated into the spreadsheet during the review process against all files on the Public Registry. The public registry sources are listed below:
    Inuvialuit Water Board
    Gwich’in Land and Water Board
    Sahtu Land and Water Board
    Wek’eezhi Land and Water Board
    Transboundary Licences
    Mackenzie Valley Land and Water Board

    Bathymetric information was collected by proponents under the NWT regulatory system and is contained within individual project files on the Land and Water Board (LWB) Public Registries. The data has been used for calculating water source capacity for water withdrawals and completing fish habitat assessments. Data compilation occurred in stages by Region and Activity. The first phase was completed in 2022, including available information from several Regions and Activities. The second phase includes the remainder of the Mackenzie Valley Land and Water Board activities and additional Water Licences added to the Public Registry after Phase 1.

    Data collection followed a common protocol. The LWB Public Registry website was accessed and filtered by Region, Authorization, and Activity. Each Water Licence webpage was then accessed and searched for bathymetric data. Information collected for each water source includes the regulating Land and Water Board, project and water licence number, report reference, preparer, collection date, water body name, coordinates, surface area, maximum depth, average depth, ice depth, lake volume, volume of ice, and lake volume under ice cover. Estimated data is indicated as such within the spreadsheet. The data has been spot-checked for accuracy, with approximately 10% of hyperlinks tested and data checked for coordinate conversion and transcription errors.

    An Excel spreadsheet was created containing the bathymetric information. The coordinate information in this spreadsheet was used to spatialize the points into this feature class. Pertinent attributes were included during this spatialization, and some standardization was applied to the data values, e.g., changing any Proponent value that was some variation of GNWT & INF (GNWT - INF, GNWT INF, GNWT Infrastructure, etc.) to "GNWT-INF", or fixing spelling errors in data values like "Exporation" to "Exploration" and "Resurces" to "Resources". No data values were altered in a way that would reflect anything different than what was there originally.
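
    A hedged sketch of the attribute standardization described above, in R with toy values (dplyr is an assumption; only the mapping logic is illustrated):

    library(dplyr)

    bathy <- data.frame(
      Proponent = c("GNWT - INF", "GNWT INF", "GNWT Infrastructure", "Acme Ltd"),
      Activity  = c("Exporation", "Resurces", "Exploration", "Exploration")
    )

    bathy <- bathy %>%
      mutate(
        # collapse GNWT & INF variants to the standard label
        Proponent = if_else(grepl("GNWT", Proponent), "GNWT-INF", Proponent),
        # fix known spelling errors without otherwise altering values
        Activity  = recode(Activity,
                           "Exporation" = "Exploration",
                           "Resurces"   = "Resources")
      )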

  17. Data from: An exploratory approach to estimate point emission sources

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 15, 2024
    Cite
    Rosa, Miguel (2024). An exploratory approach to estimate point emission sources [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7653236
    Explore at:
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Ferreira, Joana
    Lopes, Myriam
    Rafael, Sandra
    Reis, Johnny
    Relvas, Hélder
    Graça, Daniel
    Rosa, Miguel
    Lopes, Diogo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The main purpose of this dataset is to improve the spatial and temporal distribution of the public power, refineries, manufacturing, and construction emissions in the European emission inventories using Portugal as a case study. More information about it can be found in the published article "An exploratory approach to estimate point emission sources" (https://doi.org/10.1016/j.atmosenv.2023.120026). This dataset includes the following folders and files:

    1. Source_Info

      1.1 Sources_Info_NFR_1_A_1_a.xlsx: this excel file includes the public plant names, situation, region, location, start, fuel used, power (MW), technology, treatment of gas effluents, stack height (m), and facility ID.

      1.2 Sources_Info_NFR_1_A_1_b.xlsx: this excel file includes the refineries' names, situation, region, location, start, fuel power (MW), stack height (m), and facility ID.

      1.3 Sources_Info_NFR_1_A_2.xlsx: this excel file includes the manufacturer's names, situation, region, NFR code, energy balance (PT_code), capital (€), stack height (m), production index code (PT_code), and facility ID.

    2. Spatial_Location

      2.1. NFR_1_A_1_a.gdb: geodatabase with the public power locations and their facility ID (polygons and points shapefiles).

      2.2. NFR_1_A_1_b.gdb: geodatabase with the refineries' locations and their facility ID (polygons and points shapefiles).

      2.3. NFR_1_A_2.gdb: geodatabase with the manufacturer's locations and their facility ID (polygons and points shapefiles).

      2.4. UrbanAreas.gdb: geodatabase with the Portuguese urban areas (polygons shapefiles).

    3. Temporal_Profiles

      3.1. Hourly

      3.1.1. Hourly_NFR_1_A_1_a.xlsx: excel file with the hourly profiles of the public power sites in Portugal's mainland (using wood waste, natural gas, hard coal, another thermal, “Central de Ciclo Combinado do Pego”, and “Central Termoeléctrica de Lares”) and Madeira island (using natural gas, fuel, and urban solid waste).

      3.1.2. Hourly_NFR_1_A_2_construction.xlsx: excel file with the hourly profiles for construction activities by Portuguese municipalities (305 municipalities).

      3.2. Monthly

      3.2.1. Monthly_NFR_1_A_1_a.xlsx: excel file with the monthly profiles of the public power sites in the Azores (using fuel oil and diesel).

      3.2.2. Monthly_ProductionIndex.xlsx: excel file with the monthly profiles of the Portuguese industries (including information for twenty-five types of activities).

    4. Emission factors

    4.1. EF_NFR1_A_1_a.xlsx: excel file with the emission factors for the public power sector.

    4.2. EF_NFR1_A_2.xlsx: excel file with the emission factors for the refinery activities.

    4.3. EF_NFR1_A_b.xlsx: excel file with the emission factors for the manufacturing and construction combustion activities.

    4.4 Fuels_to_EF.xlsx: excel file linking the fuels with available emission factors.

    5. Data_Sources.docx: this file provides the sources of the dataset.

  18. Cross Regional Eucalyptus Growth and Environmental Data

    • data.mendeley.com
    Updated Oct 7, 2024
    + more versions
    Cite
    Christopher Erasmus (2024). Cross Regional Eucalyptus Growth and Environmental Data [Dataset]. http://doi.org/10.17632/2m9rcy3dr9.3
    Explore at:
    Dataset updated
    Oct 7, 2024
    Authors
    Christopher Erasmus
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset is provided in a single .xlsx file named "eucalyptus_growth_environment_data_V2.xlsx" and consists of fifteen sheets:

    Codebook: This sheet details the index, values, and descriptions for each field within the dataset, providing a comprehensive guide to understanding the data structure.

    ALL NODES: Contains measurements from all devices, totalling 102,916 data points. This sheet aggregates the data across all nodes.

    GWD1 to GWD10: These subset sheets include measurements from individual nodes, labelled according to the abbreviation “Generic Wireless Dendrometer” followed by device IDs 1 through 10. Each sheet corresponds to a specific node, representing measurements from ten trees (or nodes).

    Metadata: Provides detailed metadata for each node, including species, initial diameter, location, measurement frequency, battery specifications, and irrigation status. This information is essential for identifying and differentiating the nodes and their specific attributes.

    Missing Data Intervals: Details gaps in the data stream, including start and end dates and times when data was not uploaded. It includes information on the total duration of each missing interval and the number of missing data points.

    Missing Intervals Distribution: Offers a summary of missing data intervals and their distribution, providing insight into data gaps and reasons for missing data.

    All nodes utilize LoRaWAN for data transmission. Please note that intermittent data gaps may occur due to connectivity issues between the gateway and the nodes, as well as maintenance activities or experimental procedures.

    Software considerations: The provided R code named “Simple_Dendro_Imputation_and_Analysis.R” is a comprehensive analysis workflow that processes and analyses Eucalyptus growth and environmental data from the "eucalyptus_growth_environment_data_V2.xlsx" dataset. The script begins by loading necessary libraries, setting the working directory, and reading the data from the specified Excel sheet. It then combines date and time information into a unified DateTime format and performs data type conversions for relevant columns. The analysis focuses on a specified device, allowing for the selection of neighbouring devices for imputation of missing data. A loop checks for gaps in the time series and fills in missing intervals based on a defined threshold, followed by a function that imputes missing values using the average from nearby devices. Outliers are identified and managed through linear interpolation. The code further calculates vapor pressure metrics and applies temperature corrections to the dendrometer data. Finally, it saves the cleaned and processed data into a new Excel file while conducting dendrometer analysis using the dendRoAnalyst package, which includes visualizations and calculations of daily growth metrics and correlations with environmental factors such as vapour pressure deficit (VPD).
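
    A hedged miniature of the gap-filling idea described above (neighbour-mean fill followed by linear interpolation; the function and argument names are illustrative, and zoo is an assumption, not the dataset's own script):

    library(zoo)

    impute_from_neighbours <- function(target, neighbours) {
      # neighbours: numeric matrix, one column per nearby device,
      # rows aligned in time with the target series
      neighbour_mean <- rowMeans(neighbours, na.rm = TRUE)
      filled <- ifelse(is.na(target), neighbour_mean, target)
      na.approx(filled, na.rm = FALSE)   # linear interpolation for leftovers
    }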

  19. Graph Database Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jun 28, 2025
    Cite
    Growth Market Reports (2025). Graph Database Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/graph-database-market
    Explore at:
    pptx, csv, pdf
    Dataset updated
    Jun 28, 2025
    Authors
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Graph Database Market Outlook



    According to our latest research, the global graph database market size in 2024 stands at USD 2.92 billion, with a robust compound annual growth rate (CAGR) of 21.6% projected from 2025 to 2033. By the end of 2033, the market is expected to reach approximately USD 21.1 billion. The rapid expansion of this market is primarily driven by the rising need for advanced data analytics, real-time big data processing, and the growing adoption of artificial intelligence and machine learning across various industry verticals. As organizations continue to seek innovative solutions to manage complex and interconnected data, the demand for graph database technologies is accelerating at an unprecedented pace.



    One of the most significant growth factors for the graph database market is the exponential increase in data complexity and volume. Traditional relational databases often struggle to efficiently handle highly connected data, which is becoming more prevalent in modern business environments. Graph databases excel at managing relationships between data points, making them ideal for applications such as fraud detection, social network analysis, and recommendation engines. The ability to visualize and query data relationships in real-time provides organizations with actionable insights, enabling faster and more informed decision-making. This capability is particularly valuable in sectors like BFSI, healthcare, and e-commerce, where understanding intricate data connections can lead to substantial competitive advantages.



    Another key driver fueling market growth is the widespread digital transformation initiatives undertaken by enterprises worldwide. As businesses increasingly migrate to cloud-based infrastructures and adopt advanced analytics tools, the need for scalable and flexible database solutions becomes paramount. Graph databases offer seamless integration with cloud platforms, supporting both on-premises and cloud deployment models. This flexibility allows organizations to efficiently manage growing data workloads while ensuring security and compliance. Additionally, the proliferation of IoT devices and the surge in unstructured data generation further amplify the demand for graph database solutions, as they are uniquely equipped to handle dynamic and heterogeneous data sources.



    The integration of artificial intelligence and machine learning with graph databases is also a pivotal growth factor. AI-driven analytics require robust data models capable of uncovering hidden patterns and relationships within vast datasets. Graph databases provide the foundational infrastructure for such applications, enabling advanced features like predictive analytics, anomaly detection, and personalized recommendations. As more organizations invest in AI-powered solutions to enhance customer experiences and operational efficiency, the adoption of graph database technologies is expected to surge. Furthermore, continuous advancements in graph processing algorithms and the emergence of open-source graph database platforms are lowering entry barriers, fostering innovation, and expanding the market’s reach.



    From a regional perspective, North America currently dominates the graph database market, owing to the early adoption of advanced technologies and the presence of major industry players. However, the Asia Pacific region is anticipated to witness the highest growth rate over the forecast period, driven by rapid digitalization, increasing investments in IT infrastructure, and the rising demand for data-driven decision-making across emerging economies. Europe also holds a significant share, supported by stringent data privacy regulations and the growing emphasis on innovation across sectors such as finance, healthcare, and manufacturing. As organizations across all regions recognize the value of graph databases in unlocking business insights, the global market is poised for sustained growth.





    Component Analysis



    The graph database market is broadly segmented by component into software and services.

  20. Reference Model 3 Cost Breakdown (RM3: Wave Point Absorber)

    • data.openei.org
    • mhkdr.openei.org
    • +5 more
    data, website
    Updated Sep 30, 2014
    Cite
    Vincent Neary; Mirko Previsic; Scott Jenne; Kathleen Hallett (2014). Reference Model 3 Cost Breakdown (RM3: Wave Point Absorber) [Dataset]. http://doi.org/10.15473/1819894
    Explore at:
    website, data
    Dataset updated
    Sep 30, 2014
    Dataset provided by
    United States Department of Energy (http://energy.gov/)
    Sandia National Laboratories
    Open Energy Data Initiative (OEDI)
    Authors
    Vincent Neary; Mirko Previsic; Scott Jenne; Kathleen Hallett
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contains the Reference Model 3 (RM3) spreadsheets with the cost breakdown structure (CBS) for the levelized cost of energy (LCOE) calculations for a single RM3 device and multiple unit arrays. These spreadsheets are contained within an XLSX file and a spreadsheet editor such as Microsoft Excel is needed to open the file. This data was generated upon completion of the project on September 30, 2014.
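
    For orientation, the standard discounted LCOE formulation (a textbook sketch with illustrative numbers, not necessarily the CBS methodology used in these spreadsheets) looks like this in R:

    # textbook LCOE: discounted lifetime cost divided by discounted lifetime energy
    lcoe <- function(capex, opex_annual, energy_annual_kwh, rate, years) {
      t <- 1:years
      disc_cost   <- capex + sum(opex_annual / (1 + rate)^t)
      disc_energy <- sum(energy_annual_kwh / (1 + rate)^t)
      disc_cost / disc_energy   # $/kWh
    }

    lcoe(capex = 2e7, opex_annual = 1e6, energy_annual_kwh = 5e6,
         rate = 0.07, years = 20)   # all inputs illustrative only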

    The Reference Model Project (RMP), sponsored by the U.S. Department of Energy (DOE), was a partnered effort to develop open-source MHK point designs as reference models (RMs) to benchmark MHK technology performance and costs, and an open-source methodology for design and analysis of MHK technologies, including models for estimating their capital costs, operational costs, and levelized costs of energy. The point designs also served as open-source test articles for university researchers and commercial technology developers. The RMP project team, led by Sandia National Laboratories (SNL), included a partnership between DOE, three national laboratories, including the National Renewable Energy Laboratory (NREL), Pacific Northwest National Laboratory (PNNL), and Oak Ridge National Laboratory (ORNL), the Applied Research Laboratory of Penn State University, and Re Vision Consulting.

    Reference Model 3 (RM3) is a wave point absorber, also referred to as a wave power buoy, that was designed for a reference site located off the shore of Eureka in Humboldt County, California. The design of the device consists of a surface float that translates (oscillates) with wave motion relative to a vertical column spar buoy, which connects to a subsurface reaction plate. This two-body point absorber converts wave energy into electrical power predominantly from the device's heave oscillation induced by incident waves; the float is designed to oscillate up and down the vertical shaft up to 4 m. The bottom of the reaction plate is about 35 m below the water surface. The device is targeted for deployment in water depths of 40 m to 100 m. The point absorber is also connected to a mooring system to keep the floating device in position.
