100+ datasets found
  1. Data from: Potrika: Raw and Balanced Newspaper Datasets in the Bangla...

    • data.mendeley.com
    Updated Nov 4, 2022
    + more versions
    Cite
    Istiak Ahmad (2022). Potrika: Raw and Balanced Newspaper Datasets in the Bangla Language with Eight Topics and Five Attributes [Dataset]. http://doi.org/10.17632/v362rp78dc.3
    Explore at:
    Dataset updated
    Nov 4, 2022
    Authors
    Istiak Ahmad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Knowledge is central to human and scientific development. Natural Language Processing (NLP) allows automated analysis and creation of knowledge. Data is a crucial ingredient of NLP and machine learning. The scarcity of open datasets is a well-known problem in machine and deep learning research. This is very much the case for textual NLP datasets in English and other major world languages; for the Bangla language, the situation is even more challenging, and the number of large datasets for NLP research is practically nil. We hereby present Potrika, a large single-label Bangla news article textual dataset curated for NLP research from six popular online news portals in Bangladesh (Jugantor, Jaijaidin, Ittefaq, Kaler Kontho, Inqilab, and Somoyer Alo) for the period 2014-2020. The articles are classified into eight distinct categories (National, Sports, International, Entertainment, Economy, Education, Politics, and Science & Technology) and provide five attributes (News Article, Category, Headline, Publication Date, and Newspaper Source). The raw dataset contains 185.51 million words and 12.57 million sentences in 664,880 news articles. Moreover, using NLP augmentation techniques, we create from the raw (unbalanced) dataset another (balanced) dataset comprising 320,000 news articles with 40,000 articles in each of the eight news categories. Potrika contains both datasets (raw and balanced) to suit a wide range of NLP research. To the best of our knowledge, Potrika is by far the largest and most extensive dataset for news classification.

    Further details of the dataset, its collection, and usage for deep journalism including detection of the multi-perspective parameters for transportation can be found in our article here: https://doi.org/10.3390/su14095711.
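    As a minimal sketch of working with the dataset: the header names below mirror the five documented attributes, but the actual column names in the distributed CSV files are an assumption to verify against the Mendeley files. Tallying the category column is a quick way to confirm the balanced split (40,000 articles per category):

    ```python
    import csv
    from collections import Counter

    def category_counts(path: str) -> Counter:
        """Tally articles per category, e.g., to confirm the balanced split
        holds 40,000 articles in each of the eight categories.

        Assumes a CSV with a header row including a "category" column;
        the real column names should be checked against the dataset files.
        """
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            return Counter(row["category"] for row in reader)
    ```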

  2. Cloud Computing for Science Data Processing in Support of Emergency Response...

    • data.wu.ac.at
    xml
    Updated Sep 16, 2017
    Cite
    National Aeronautics and Space Administration (2017). Cloud Computing for Science Data Processing in Support of Emergency Response [Dataset]. https://data.wu.ac.at/schema/data_gov/ZjY5OThlZjYtOWNhMi00YTEwLTgyN2EtZGQyZjIwZGFjMDgx
    Explore at:
    Available download formats: xml
    Dataset updated
    Sep 16, 2017
    Dataset provided by
    NASA: http://nasa.gov/
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Cloud computing enables users to create virtual computers, each one with the optimal configuration of hardware and software for a job. The number of virtual computers can be increased to process large data sets or reduce processing time. Large scale scientific applications of the cloud, in many cases, are still in development.

    For example, in the event of an environmental crisis, such as the Deepwater Horizon oil spill, tornadoes, Mississippi River flooding, or a hurricane, up-to-date information is one of the most important commodities for decision makers. The volume of remote sensing data that needs to be processed to accurately retrieve ocean properties from satellite measurements can easily exceed a terabyte, even for a small region such as the Mississippi Sound. Often, with current infrastructure, the time required to download, process, and analyze these large volumes of remote sensing data limits the ability to provide timely information to emergency responders. The use of a cloud computing platform, like NASA’s Nebula, can help eliminate those barriers.

    NASA Nebula was developed as an open-source cloud computing platform to provide an easily quantifiable and improved alternative to building additional expensive data centers and to provide an easier way for NASA scientists and researchers to share large, complex data sets with external partners and the public. Nebula was designed as an Infrastructure-as-a-Service (IaaS) implementation that provided scalable computing and storage for science data and Web-based applications. Nebula IaaS allowed users to unilaterally provision, manage, and decommission computing capabilities (virtual machine instances, storage, etc.) on an as-needed basis through a Web interface or a set of command-line tools.

    This project demonstrated a novel way to conduct large scale scientific data processing utilizing NASA’s cloud computer, Nebula. Remote sensing data from the Deepwater Horizon oil spill site was analyzed to assess changes in concentration of suspended sediments in the area surrounding the spill site.

    Software for processing time series of satellite remote sensing data was packaged together with computer code that uses web services to download the data sets from a NASA data archive and distribution system. The new application package could be quickly deployed on a cloud computing platform when, and only for as long as, processing of the time-series data was required to support emergency response. A fast network connection between the cloud system and the data archive enabled remote processing of the satellite data without the need to download the input data to a local computer system: only the output data products were transferred for further analysis.

    NASA was a pioneer in cloud computing by having established its own private cloud computing data center called Nebula in 2009 at the Ames Research Center (Ames). Nebula provided high-capacity computing and data storage services to NASA Centers, Mission Directorates, and external customers. In 2012, NASA shut down Nebula based on the results of a 5-month test that benchmarked Nebula’s capabilities against those of Amazon and Microsoft. The test found that public clouds were more reliable and cost effective and offered much greater computing capacity and better IT support services than Nebula.

  3. Global Data Acquisition Computer Boards Market Research Report: By Type...

    • wiseguyreports.com
    Updated Jun 11, 2024
    Cite
    (2024). Global Data Acquisition Computer Boards Market Research Report: By Type (Single-Board Computers (SBCs), Embedded Data Acquisition Systems, Multi-Channel Data Acquisition Cards, Data Logging Systems, Panel-Mount Data Acquisition Devices), By Signal Type (Analog Input, Digital Input, Analog Output, Digital Output, Counter/Timer), By Sampling Rate (Low Speed (0-10 kHz), Medium Speed (10-100 kHz), High Speed (100 kHz-1 MHz), Very High Speed (over 1 MHz)), By Purpose (Machine Control, Process Monitoring, Vibration Analysis, Data Logging, Power Analysis), By Industry (Manufacturing, Energy, Automotive, Medical, Aerospace & Defense) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/data-acquisition-computer-boards-market
    Explore at:
    Dataset updated
    Jun 11, 2024
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jun 1, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 5.86 (USD Billion)
    MARKET SIZE 2024: 6.37 (USD Billion)
    MARKET SIZE 2032: 12.4 (USD Billion)
    SEGMENTS COVERED: Architecture, Data Acquisition Type, Application, Form Factor, Number of Channels, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: 1. Rising demand for embedded systems; 2. Growing adoption of IoT devices; 3. Increasing automation in industries; 4. Advancements in data analytics; 5. Government initiatives for smart cities
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: National Instruments Corporation, Keysight Technologies Inc., Advantech Co. Ltd., Spectrum Instrumentation Corp., Plex Systems Corp., HBM GmbH, Meilhaus Electronic GmbH, GaGe Applied Technologies Inc., Agilent Technologies Inc., Dataforth Corp.
    MARKET FORECAST PERIOD: 2024 - 2032
    KEY MARKET OPPORTUNITIES: 1. Growing demand for IoT devices; 2. Rise of industrial automation; 3. Increasing use of sensors and data analytics; 4. Expansion of cloud and edge computing; 5. Government regulations and standards
    COMPOUND ANNUAL GROWTH RATE (CAGR): 8.68% (2024 - 2032)
  4. Single-person Portrait Matting Dataset

    • kaggle.com
    zip
    Updated Aug 29, 2024
    + more versions
    Cite
    maadaa.ai (2024). Single-person Portrait Matting Dataset [Dataset]. https://www.kaggle.com/datasets/maadaaai/single-person-portrait-matting-dataset
    Explore at:
    Available download formats: zip (29333761 bytes)
    Dataset updated
    Aug 29, 2024
    Authors
    maadaa.ai
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Description

    Single-person Portrait Matting Dataset (MD-Image-003)

    Introduction

    Our "Single-person Portrait Matting Dataset" is a pivotal resource for the fashion, media, and social media industries, providing finely labeled portrait images that capture a wide range of postures and hairstyles from various countries. With a focus on high-resolution images exceeding 1080 x 1080 pixels, this dataset is tailored for applications requiring detailed segmentation, including hair, ears, fingers, and other intricate portrait features.

    If you are interested in the full version of the dataset, featuring 50k annotated images, please visit our website maadaa.ai and leave a request.

    Specification

    Dataset ID: MD-Image-003
    Dataset Name: Single-person Portrait Matting Dataset
    Data Type: Image
    Volume: About 50k
    Data Collection: Internet-collected person portrait images with variable posture and hairstyle, covering multiple countries. Image resolution >1080 x 1080 pixels.
    Annotation: Contour Segmentation, Segmentation
    Annotation Notes: Fine labeling of portrait areas, including hair, ears, fingers, and other details.
    Application Scenarios: Media & Entertainment, Internet, Social Media, Fashion & Apparel


    About maadaa.ai

    Since 2015, maadaa.ai has been dedicated to delivering specialized AI data services. Our key offerings include:

    • Data Collection: Comprehensive data gathering tailored to your needs.

    • Data Annotation: High-quality annotation services for precise data labeling.

    • Off-the-Shelf Datasets: Ready-to-use datasets to accelerate your projects.

    • Annotation Platform: Maid-X is our data annotation platform built for efficient data annotation.

    We cater to various sectors, including automotive, healthcare, retail, and more, ensuring our clients receive the best data solutions for their AI initiatives.

  5. Kvasir Dataset

    • kaggle.com
    Updated Mar 17, 2022
    + more versions
    Cite
    Meet Nagadia (2022). Kvasir Dataset [Dataset]. https://www.kaggle.com/datasets/meetnagadia/kvasir-dataset
    Explore at:
    Croissant: a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Mar 17, 2022
    Dataset provided by
    Kaggle: http://kaggle.com/
    Authors
    Meet Nagadia
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Description


    Automatic detection of diseases by use of computers is an important, but still unexplored field of research. Such innovations may improve medical practice and refine health care systems all over the world. However, datasets containing medical images are hardly available, making reproducibility and comparison of approaches almost impossible. Here, we present Kvasir, a dataset containing images from inside the gastrointestinal (GI) tract. The collection of images is classified into three important anatomical landmarks and three clinically significant findings. In addition, it contains two categories of images related to endoscopic polyp removal. Sorting and annotation of the dataset is performed by medical doctors (experienced endoscopists). In this respect, Kvasir is important for research on both single- and multi-disease computer-aided detection. By providing it, we invite and enable multimedia researchers to enter the medical domain of detection and retrieval.

    Data Collection by original author

    The data is collected using endoscopic equipment at Vestre Viken Health Trust (VV) in Norway. The VV consists of 4 hospitals and provides health care to 470,000 people. One of these hospitals (the Bærum Hospital) has a large gastroenterology department from where training data have been collected and will be provided, making the dataset larger in the future. Furthermore, the images are carefully annotated by one or more medical experts from VV and the Cancer Registry of Norway (CRN). The CRN provides new knowledge about cancer through research on cancer. It is part of South-Eastern Norway Regional Health Authority and is organized as an independent institution under Oslo University Hospital Trust. CRN is responsible for the national cancer screening programmes with the goal to prevent cancer death by discovering cancers or pre-cancerous lesions as early as possible.

    Applications of the Dataset

    Our vision is that the available data may eventually help researchers to develop systems that improve the health-care system in the context of disease detection in videos of the GI tract. Such a system may automate video analysis and endoscopic findings detection in the esophagus, stomach, bowel and rectum. Important results will include higher detection accuracies, reduced manual labor for medical personnel, reduced average cost, less patient discomfort and possibly increased willingness to undertake the examination. In the end, the improved screening will probably significantly reduce mortality and number of luminal GI disease incidents. With respect to direct use in the multimedia research areas, the main application area of Kvasir is automatic detection, classification and localization of endoscopic pathological findings in an image captured in the GI tract. Thus, the provided dataset can be used in several scenarios where the aim is to develop and evaluate algorithmic analysis of images. Using the same collection of data, researchers can more easily compare approaches and experimental results, and results can more easily be reproduced. In particular, in the area of image retrieval and object detection, Kvasir will play an important initial role: the image collection can be divided into training and test sets for the development of, and experiments on, various image retrieval and object localization methods, including search-based systems, neural networks, video analysis, information retrieval, machine learning, object detection, deep learning, computer vision, data fusion and big data processing.

    Provenance

    The Kvasir dataset was created within the Norwegian FRINATEK project "EONS" (#231687) at Simula Research Laboratory, Norway.

    Data Documentation

    The data and the detailed description and usage instructions are published online at the dataset web-page http://datasets.simula.no/kvasir/

    Work Flows:

    The images provided can be used to develop, test, and compare different image recognition and classification approaches with respect to their specific procedures.

    Suggested Metrics

    Looking at the list of related work in this area, there are a lot of different metrics used, with potentially different names when used in the medical area and the computer science (information retrieval) area. Here, we provide a small list of the most important metrics. For future research, in addition to describing the dataset with respect to total number of images, total number of images in each class and total number of positives, it might be good to provide as many of the metrics below as possible in order to enable an indirect comparison with older work:

    • True positive (TP) The number of correctly identified samples. The number of frames with an endoscopic finding which correctly is identified as a frame with an endoscopic finding.
    • True negative (TN) The number of correctly identified negative samples, i.e., frames wi...
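    The metrics suggested above can be made concrete. The following sketch derives the usual summary metrics from the four confusion-matrix counts; names and formulas are the standard ones, not taken from the dataset itself:

    ```python
    def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
        """Derive the usual detection metrics from confusion-matrix counts.

        Following the definitions above: tp counts frames with an endoscopic
        finding correctly identified as such; tn counts correctly identified
        negative frames; fp and fn are the corresponding misclassifications.
        """
        total = tp + tn + fp + fn
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0        # a.k.a. sensitivity
        specificity = tn / (tn + fp) if tn + fp else 0.0
        accuracy = (tp + tn) / total if total else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"precision": precision, "recall": recall,
                "specificity": specificity, "accuracy": accuracy, "f1": f1}
    ```

    Reporting these alongside the raw counts, as the text suggests, enables indirect comparison with older work even when papers differ in which metrics they quote.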
  6. Promset: An annoted dataset for translating natural language to PromQl

    • data.mendeley.com
    Updated Aug 5, 2025
    + more versions
    Cite
    DAVE CHEDJOUN (2025). Promset: An annoted dataset for translating natural language to PromQl [Dataset]. http://doi.org/10.17632/mfy9ntjy7p.1
    Explore at:
    Dataset updated
    Aug 5, 2025
    Authors
    DAVE CHEDJOUN
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PromSet is an annotated dataset designed to support natural language processing (NLP) research for system monitoring. It is particularly suited to applications involving the training and evaluation of large language models to translate queries expressed in natural language into their equivalent in PromQL, the query language used by the Prometheus monitoring tool.

    An initial dataset was constructed from the results of our experiments on Prometheus, during which we created a set of queries and their natural language descriptions. We then added additional data by collecting PromQL queries and their descriptions from various web sources. This raw data was curated, reviewed, corrected, and enriched with Gemini, resulting in a high-quality dataset suitable for research and development.

    The dataset contains a total of 4,350 manually curated pairs, each linking an English description to a corresponding PromQL expression. It is provided in CSV format, with two fields: description (a human-readable query) and promql (its equivalent in PromQL syntax). Each record represents a concrete and practical monitoring scenario, such as metric aggregation, label filtering, or time-based calculations. In many cases, a single PromQL query is associated with multiple English-language descriptions, increasing linguistic variation and enabling more robust model training.

    By bridging the gap between human-readable instructions and machine-interpretable PromQL syntax, Promset enables the development of intelligent systems capable of automatically understanding and generating monitoring queries. This facilitates the creation of more intuitive observability tools, streamlines DevOps workflows, and opens new avenues in research on natural language-to-code translation.
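    Given the documented CSV layout (two fields, description and promql), loading and grouping the pairs is straightforward with the standard library. The grouping step reflects the fact, noted above, that one PromQL query may carry several English descriptions; the field names are as documented, while everything else here is a sketch:

    ```python
    import csv
    from collections import defaultdict
    from typing import Dict, List

    def load_pairs(path: str) -> List[Dict[str, str]]:
        """Read (description, promql) pairs from the dataset's CSV file."""
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def group_by_query(pairs: List[Dict[str, str]]) -> Dict[str, List[str]]:
        """Group the English descriptions by their PromQL expression,
        e.g., to build paraphrase sets for more robust model training."""
        grouped: Dict[str, List[str]] = defaultdict(list)
        for row in pairs:
            grouped[row["promql"]].append(row["description"])
        return dict(grouped)
    ```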

  7. Forensic Computer Workstation Market Report | Global Forecast From 2025 To...

    • dataintelo.com
    csv, pdf, pptx
    Updated Oct 16, 2024
    + more versions
    Cite
    Dataintelo (2024). Forensic Computer Workstation Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/forensic-computer-workstation-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Oct 16, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Forensic Computer Workstation Market Outlook



    The global forensic computer workstation market size was valued at USD 1.2 billion in 2023 and is projected to reach USD 2.6 billion by 2032, growing at a Compound Annual Growth Rate (CAGR) of 8.5% during the forecast period. The substantial growth of this market is primarily driven by the increasing incidence of cybercrimes and the escalating demand for advanced forensic tools to combat digital threats and support legal investigations.



    One major growth factor for the forensic computer workstation market is the rising complexity of cybercrimes. As cyber threats become more sophisticated, there is an increased need for advanced forensic workstations that can efficiently handle and analyze large volumes of digital data. These workstations are crucial for identifying, preserving, and presenting digital evidence in a forensically sound manner. The growth of the digital landscape and the proliferation of Internet-of-Things (IoT) devices have further complicated cyber investigations, thus driving the demand for robust forensic solutions.



    Another significant growth factor is the expanding regulatory frameworks and compliance requirements across various industries. Organizations, particularly in sectors like finance, healthcare, and government, are mandated to adhere to stringent data protection and privacy laws. The need to ensure data integrity and the ability to respond to breaches promptly has led to an increased investment in forensic computer workstations. These systems are essential for conducting thorough digital investigations and ensuring compliance with legal and regulatory standards.



    The continuous advancements in forensic technology also play a pivotal role in the market's growth. The development of new software tools and hardware capabilities has enhanced the efficiency and effectiveness of digital forensics. Innovations such as machine learning, artificial intelligence, and advanced analytics are being integrated into forensic workstations, enabling faster and more accurate analysis of digital evidence. These technological advancements are expected to further boost the adoption of forensic computer workstations across various sectors.



    Regionally, North America is anticipated to hold the largest share of the forensic computer workstation market during the forecast period. This dominance can be attributed to the high incidence of cybercrimes, the presence of leading market players, and the region's robust legal and regulatory framework. Additionally, the increasing adoption of digital forensics by law enforcement agencies and the growing awareness of cyber threats among enterprises are driving market growth in this region.



    Component Analysis



    The forensic computer workstation market can be segmented by component into hardware, software, and services. Each segment plays a crucial role in the overall functionality and effectiveness of forensic investigations. The hardware component includes high-performance computers, data storage devices, and specialized peripherals that are essential for handling and analyzing digital evidence. The demand for powerful hardware systems is driven by the need for rapid processing and the ability to manage large datasets, which are critical in complex investigations.



    On the software side, forensic tools and applications are indispensable for performing various tasks such as data recovery, evidence analysis, and reporting. The software component includes a broad range of tools that cater to different aspects of digital forensics, from file carving and data imaging to network analysis and malware detection. The continuous development of new and improved software solutions is a key factor contributing to the growth of this segment. For instance, the integration of artificial intelligence and machine learning algorithms into forensic software has significantly enhanced the accuracy and speed of forensic analysis.



    The services segment encompasses a wide array of professional services that support the implementation and operation of forensic computer workstations. These services include installation, maintenance, training, and consulting. The services segment is crucial for ensuring that organizations can effectively leverage forensic workstations and tools to conduct thorough investigations. As the complexity of cybercrimes increases, the demand for specialized forensic services is also expected to grow, providing a significant boost to this market segment.



    Furthermore, the synergy between hardware,

  8. Large-scale Labeled Faces (LSLF) Dataset.zip

    • figshare.com
    Updated Jun 1, 2023
    Cite
    Tarik Alafif; Zeyad Hailat; Melih Aslan; Xuewen Chen (2023). Large-scale Labeled Faces (LSLF) Dataset.zip [Dataset]. http://doi.org/10.6084/m9.figshare.13077329.v1
    Explore at:
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Figshare: http://figshare.com/
    Authors
    Tarik Alafif; Zeyad Hailat; Melih Aslan; Xuewen Chen
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Our LSLF dataset consists of 1,195,976 labeled face images for 11,459 individuals. These images are stored in JPEG format with a total size of 5.36 GB. Individuals have a minimum of 1 face image and a maximum of 1,157 face images. The average number of face images per individual is 104. Each image is automatically named as (PersonName VideoNumber FrameNumber ImageNumber) and stored in the related individual folder.
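    The naming convention above can be parsed mechanically. This sketch assumes an underscore separates the four fields, which is not confirmed by the description and should be checked against the actual files:

    ```python
    from typing import NamedTuple

    class FaceImage(NamedTuple):
        person_name: str
        video_number: int
        frame_number: int
        image_number: int

    def parse_face_filename(stem: str, sep: str = "_") -> FaceImage:
        """Split a file-name stem like 'John_Doe_3_120_7' into the four
        documented fields. rsplit keeps separator characters inside the
        person name intact; the underscore separator is an assumption."""
        person, video, frame, image = stem.rsplit(sep, 3)
        return FaceImage(person, int(video), int(frame), int(image))
    ```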

  9. Data from: A large EEG database with users' profile information for motor...

    • data.europa.eu
    • zenodo.org
    unknown
    Updated Jan 8, 2023
    + more versions
    Cite
    Zenodo (2023). A large EEG database with users' profile information for motor imagery Brain-Computer Interface research [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-7554429?locale=en
    Explore at:
    Available download formats: unknown
    Dataset updated
    Jan 8, 2023
    Dataset authored and provided by
    Zenodo: http://zenodo.org/
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context: We share a large database containing electroencephalographic signals from 87 human participants, with more than 20,800 trials in total representing about 70 hours of recording. It was collected during brain-computer interface (BCI) experiments and organized into 3 datasets (A, B, and C) that were all recorded following the same protocol: right- and left-hand motor imagery (MI) tasks during one single-day session. It includes the performance of the associated BCI users, detailed information about the users' demographic, personality, and cognitive profiles, and the experimental instructions and codes (executed in the open-source platform OpenViBE). Such a database could prove useful for various studies, including but not limited to: 1) studying the relationships between BCI users' profiles and their BCI performances, 2) studying how EEG signal properties vary for different users' profiles and MI tasks, 3) using the large number of participants to design cross-user BCI machine learning algorithms, or 4) incorporating users' profile information into the design of EEG signal classification algorithms. Sixty participants (Dataset A) performed the first experiment, designed to investigate the impact of experimenters' and users' gender on MI-BCI user training outcomes, i.e., users' performance and experience (Pillette et al.). Twenty-one participants (Dataset B) performed the second one, designed to examine the relationship between users' online performance (i.e., classification accuracy) and the characteristics of the chosen user-specific Most Discriminant Frequency Band (MDFB) (Benaroch et al.). The only difference between the two experiments lies in the algorithm used to select the MDFB. Dataset C contains 6 additional participants who completed one of the two experiments described above.

    Physiological signals were measured using a g.USBAmp (g.tec, Austria), sampled at 512 Hz, and processed online using OpenViBE 2.1.0 (Dataset A) and OpenViBE 2.2.0 (Dataset B). For Dataset C, participants C83 and C85 were recorded with OpenViBE 2.1.0 and the remaining 4 participants with OpenViBE 2.2.0. Experiments were recorded at Inria Bordeaux Sud-Ouest, France.

    Duration: Each participant's folder contains approximately 48 minutes of EEG recording: six 7-minute runs and a 6-minute baseline.

    Documents:
    • Instructions: checklist read by experimenters during the experiments.
    • Questionnaires: the Mental Rotation test used, and the translation of 4 questionnaires, notably the Demographic and Social information, the Pre- and Post-session questionnaires, and the Index of Learning Styles (English and French versions).
    • Performance: the online OpenViBE BCI classification performances obtained by each participant, provided for each run, as well as answers to all questionnaires.
    • Scenarios/scripts: the set of OpenViBE scenarios used to perform each step of the MI-BCI protocol, e.g., acquire training data, calibrate the classifier, or run the online MI-BCI.

    Database: raw signals. Dataset A: N=60 participants; Dataset B: N=21 participants; Dataset C: N=6 participants.
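    From the stated 512 Hz sampling rate and the run durations, the expected per-channel sample counts follow by simple arithmetic; a small sanity-check sketch (the recording durations are taken from the description, everything else is illustration):

    ```python
    SAMPLE_RATE_HZ = 512  # g.USBAmp sampling rate given in the description

    def expected_samples(minutes: float, rate_hz: int = SAMPLE_RATE_HZ) -> int:
        """Per-channel sample count for a recording of the given duration."""
        return int(minutes * 60 * rate_hz)

    # Per the description: six 7-minute runs plus a 6-minute baseline.
    run_samples = expected_samples(7)       # samples per channel in one run
    session_samples = 6 * run_samples + expected_samples(6)
    ```

    Checks like this are useful for spotting truncated recordings when iterating over the participant folders.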

  10. YouTube - Bounding Boxes

    • kaggle.com
    zip
    Updated Jun 13, 2020
    Cite
    G_R_S (2020). YouTube - Bounding Boxes [Dataset]. https://www.kaggle.com/danoozy44/youtube-bounding-boxes
    Explore at:
    Available download formats: zip (92189243 bytes)
    Dataset updated
    Jun 13, 2020
    Authors
    G_R_S
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    YouTube
    Description

    Note: This dataset was obtained from the Google Research site.

    YouTube-BoundingBoxes is a large-scale data set of video URLs with densely-sampled high-quality single-object bounding box annotations.

    The data set consists of approximately 380,000 15-20s video segments extracted from 240,000 different publicly visible YouTube videos, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera.

    All these video segments were human-annotated with high precision classifications and bounding boxes at 1 frame per second.

    Our goal with the public release of this dataset is to help advance the state of the art of machine learning for video understanding.

    • The data set consists of 10.5 million human annotations on video frames.
    • The data set contains 5.6 million tight bounding boxes around tracked objects in video frames.
    • The data set consists of 380,000 15-20s video segments extracted from 240,000 different publicly visible YouTube videos.
    • The use of a cascade of increasingly precise human annotators ensures a measured label accuracy above 95% for every class and tight bounding boxes around the tracked objects.
    • The objects tracked in the video segments belong to 23 different classes.
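The per-frame annotations described above are distributed as CSV rows; a minimal parsing sketch. The column order used here (youtube_id, timestamp_ms, class_id, class_name, object_id, object_presence, xmin, xmax, ymin, ymax, with coordinates normalized to [0, 1]) is an assumption to be checked against the downloaded files:

```python
import csv
from typing import Iterable, Iterator, NamedTuple

class Box(NamedTuple):
    youtube_id: str
    timestamp_ms: int   # boxes are sampled at 1 frame per second
    class_name: str
    present: bool       # whether the object is visible in this frame
    xmin: float
    xmax: float
    ymin: float
    ymax: float         # coordinates assumed normalized to [0, 1]

def parse_rows(lines: Iterable[str]) -> Iterator[Box]:
    """Parse annotation rows into Box records (column order is an assumption)."""
    for row in csv.reader(lines):
        yt_id, ts, _class_id, class_name, _obj_id, presence, x0, x1, y0, y1 = row
        yield Box(yt_id, int(ts), class_name, presence == "present",
                  float(x0), float(x1), float(y0), float(y1))

# Hypothetical row in the assumed layout:
box = next(parse_rows(["AAAAAAAAAAA,1000,8,dog,0,present,0.1,0.5,0.2,0.8"]))
print(box.class_name, box.xmax - box.xmin)
```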
  11. Single Board Computer Market - by Service (Customization, System integration...

    • zionmarketresearch.com
    pdf
    Updated Nov 22, 2025
    Cite
    Zion Market Research (2025). Single Board Computer Market - by Service (Customization, System integration and Aftersales), By Processor (ARM, x86, Atom, PowerPC), By End-Use (Industrial Automation, Aerospace & Defence, Transportation, Medical, and Entertainment), By Application (Test & Measurement, Communication, Data Processing, and Research): Global Industry Perspective, Comprehensive Analysis and Forecast 2024 - 2032 [Dataset]. https://www.zionmarketresearch.com/report/single-board-computer-market
    Explore at:
    pdf. Available download formats
    Dataset updated
    Nov 22, 2025
    Dataset authored and provided by
    Zion Market Research
    License

    https://www.zionmarketresearch.com/privacy-policy

    Time period covered
    2022 - 2030
    Area covered
    Global
    Description

    The Single Board Computer Market is set to expand from $3.17 billion in 2023 to $4.98 billion by 2032, with an anticipated CAGR of around 4.6% from 2024 to 2032.

  12. Colored Flowers in Bangladesh

    • kaggle.com
    zip
    Updated Apr 13, 2025
    Cite
    Jocelyn Dumlao (2025). Colored Flowers in Bangladesh [Dataset]. https://www.kaggle.com/datasets/jocelyndumlao/colored-flowers-in-bangladesh
    Explore at:
    zip (2914530750 bytes). Available download formats
    Dataset updated
    Apr 13, 2025
    Authors
    Jocelyn Dumlao
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    Bangladesh
    Description

    ColoredFlowersBD: A Comprehensive Image Dataset of Colored Flowers in Bangladesh for Identification and Classification Using Machine Learning and Computer Vision

    Description

    Type of data: 720 x 720 px images of colored flowers. Data format: JPEG

    Dataset contents: Original images of different varieties of colored flowers in Bangladesh from single flower and bulk-flower perspectives.

    Number of classes: Thirteen colored flower varieties - (1) Chandramallika, (2) Cosmos Phul, (3) Gada, (4) Golap, (5) Jaba, (6) Kagoj Phul, (7) Noyontara, (8) Radhachura, (9) Rangan, (10) Salvia, (11) Sandhyamani, (12) Surjomukhi, and (13) Zinnia.

    Total number of images in the dataset: 7,993.

    Distribution of instances:
    • Chandramallika: 620 images (306 single, 314 bulk)
    • Cosmos Phul: 620 images (307 single, 313 bulk)
    • Gada: 617 images (304 single, 313 bulk)
    • Golap: 605 images (302 single, 303 bulk)
    • Jaba: 604 images (300 single, 304 bulk)
    • Kagoj Phul: 612 images (301 single, 311 bulk)
    • Noyontara: 609 images (303 single, 306 bulk)
    • Radhachura: 617 images (309 single, 308 bulk)
    • Rangan: 606 images (305 single, 301 bulk)
    • Salvia: 634 images (313 single, 321 bulk)
    • Sandhyamani: 615 images (305 single, 310 bulk)
    • Surjomukhi: 621 images (310 single, 311 bulk)
    • Zinnia: 613 images (307 single, 306 bulk)

    Dataset size: The total size of the dataset is 2.79 GB and the compressed ZIP file size is 2.71 GB.

    Data acquisition process: Images of colored flowers are captured using a high-definition smartphone camera from different angles and two perspectives: single-flower and bulk-flower.

    Data source location: Plant nurseries, local gardens, and flower shops located in different areas of Dhaka and Gazipur districts of Bangladesh.

    Where applicable: Training and evaluating machine learning and deep learning models to distinguish colored flower varieties in Bangladesh to support automated identification and classification systems of various colored flowers which can be utilized in areas of computer vision, botanical research, floral biodiversity monitoring, agriculture and horticulture, environmental conservation, AI-based flower recognition, educational resources, food industry, pollination and ecology research, aesthetic and design applications.
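For the training and evaluation use cases just described, the images can be indexed into (path, label) pairs with the standard library alone. A sketch assuming the archive unpacks into one folder per class named after the thirteen varieties, with JPEG files inside (the actual directory layout should be checked against the download):

```python
from pathlib import Path

# The thirteen classes listed in the description.
CLASSES = ["Chandramallika", "Cosmos Phul", "Gada", "Golap", "Jaba",
           "Kagoj Phul", "Noyontara", "Radhachura", "Rangan", "Salvia",
           "Sandhyamani", "Surjomukhi", "Zinnia"]

def index_dataset(root: str) -> list:
    """Collect (image_path, class_index) pairs from an assumed
    one-folder-per-class layout; 7,993 pairs in total are expected."""
    samples = []
    for label, name in enumerate(CLASSES):
        for img in sorted(Path(root, name).glob("*.jpg")):
            samples.append((img, label))
    return samples
```

The resulting list can feed any framework's dataset abstraction (e.g., a PyTorch `Dataset` or a tf.data pipeline).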

    Categories

    Chemistry, Biochemistry, Ecology, Pharmacology, Horticulture, Computer Vision, Environmental Science, Plant Biology, Botany, Image Processing, Pharmaceutical Science, Object Detection, Machine Learning, Biodiversity, Image Classification, Pharmaceutical Industry, Medicinal and Aromatic Plants, Flower, Beverage Industry, Food Industry, Deep Learning, Crafts (Arts), Agriculture

    Acknowledgements & Source

    Md Hasanul Ferdaus, Rizvee Hassan Prito, Masud Ahmed, Tuly Rahman, Riya Saha, Kazi Minhazul Goni Sami, Md Sajjad Hossain

    Data Source: Mendeley Dataset

  13. PICMG Full-size Single Board Computer Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jan 12, 2025
    Cite
    Data Insights Market (2025). PICMG Full-size Single Board Computer Report [Dataset]. https://www.datainsightsmarket.com/reports/picmg-full-size-single-board-computer-1671100
    Explore at:
    ppt, pdf, doc. Available download formats
    Dataset updated
    Jan 12, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The PICMG full-size single board computer (SBC) market is estimated to be valued at approximately USD XX million in 2025. It is projected to expand at a CAGR of XX% from 2025 to 2033, reaching USD XX million by the end of the forecast period. This growth can be attributed to the increasing adoption of these computers in various industries, such as industrial automation, medical, and transportation. Key market trends include the growing demand for ruggedized SBCs for use in harsh environments, the integration of advanced technologies such as artificial intelligence (AI), and the increasing popularity of modular SBCs. The market is segmented by application (industrial automation, medical, transportation, etc.), type (x86-based, ARM-based, etc.), and company (Advantech, Axiomtek, ADLINK, etc.). The Asia Pacific region is expected to hold the largest market share during the forecast period due to the presence of a large number of manufacturing industries and the growing adoption of automation in the region.

  14. Secure Single Board Computers Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). Secure Single Board Computers Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/secure-single-board-computers-market
    Explore at:
    pptx, pdf, csv. Available download formats
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Secure Single Board Computers Market Outlook



    According to our latest research, the global Secure Single Board Computers (SBC) market size reached USD 2.13 billion in 2024, supported by robust demand across industrial, defense, and healthcare sectors. The market is expected to grow at a CAGR of 8.9% from 2025 to 2033, reaching a forecasted value of USD 4.61 billion by 2033. This growth is primarily driven by increasing cybersecurity concerns, the proliferation of IoT and edge computing applications, and the need for compact, high-performance computing platforms in mission-critical environments.
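The figures above are internally consistent: the implied compound annual growth rate from USD 2.13 billion (2024) to USD 4.61 billion (2033), over nine growth years, comes out close to the stated 8.9%:

```python
# Figures from the report: USD 2.13B (2024) -> USD 4.61B (2033), nine growth years.
start, end, years = 2.13, 4.61, 9

# CAGR = (end / start)^(1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # close to the reported 8.9%
```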




    The rapid expansion of industrial automation and the Industrial Internet of Things (IIoT) is a significant growth factor for the Secure Single Board Computers market. As factories and manufacturing units integrate more connected devices, the risk of cyberattacks rises, making secure computing platforms essential. Secure SBCs, equipped with advanced security features such as Trusted Platform Modules (TPM), secure boot, and hardware encryption, are increasingly being adopted to ensure data integrity and protect critical operational processes. The demand for real-time data processing and secure communication channels in smart factories further amplifies the need for robust and secure single board computers, making this segment a major driver for market growth.




    Another key growth driver is the defense and aerospace sector, where secure single board computers are indispensable for mission-critical applications. These sectors require computing platforms that can withstand harsh environments while ensuring the highest levels of data security. The integration of secure SBCs in unmanned aerial vehicles (UAVs), communication systems, and weapons control platforms is growing due to their compact form factor, reliability, and advanced security functionalities. Regulatory mandates and the increasing sophistication of cyber threats targeting defense infrastructure have accelerated the adoption of secure SBCs, positioning this market segment for sustained long-term growth.




    Healthcare is also emerging as a pivotal sector for the Secure Single Board Computers market. The digitization of medical devices, telemedicine platforms, and patient monitoring systems has heightened the need for secure and reliable computing solutions. Secure SBCs help safeguard sensitive patient data, ensure the integrity of diagnostic results, and maintain compliance with regulatory frameworks such as HIPAA. As healthcare providers continue to invest in digital transformation and connected medical technologies, the demand for secure SBCs with robust encryption and tamper-proof features is expected to surge, further boosting market expansion.




    From a regional perspective, North America currently dominates the Secure Single Board Computers market, owing to its advanced industrial base, strong defense sector, and high adoption of cutting-edge technologies. However, Asia Pacific is expected to witness the fastest growth during the forecast period, driven by rapid industrialization, increasing investments in smart manufacturing, and expanding defense budgets in countries such as China, India, and Japan. Europe also presents significant growth opportunities due to stringent data protection regulations and a strong focus on industrial automation. Latin America and the Middle East & Africa, while smaller in market share, are gradually increasing their adoption of secure SBCs as digital transformation initiatives gain momentum.





    Product Type Analysis



    The Product Type segment of the Secure Single Board Computers market is categorized into ARM-based, x86-based, PowerPC-based, and others. ARM-based SBCs have gained significant traction due to their energy efficiency, scalability, and widespread use in embedded and IoT applications. These platforms are favored for their low power consumption and ability to deliver reliable performance in compact form factors, making them ideal for industrial automation, consumer electronic

  15. Mushroom Disease Dataset (Healthy, Single Infected & Mixed Infected)

    • data.mendeley.com
    Updated Jun 30, 2025
    Cite
    Abdullah Mazumdar (2025). Mushroom Disease Dataset (Healthy, Single Infected & Mixed Infected) [Dataset]. http://doi.org/10.17632/jrbx34k77g.2
    Explore at:
    Dataset updated
    Jun 30, 2025
    Authors
    Abdullah Mazumdar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Mushroom cultivation bags, categorized into healthy, single-infected, and mixed-infected classes, were photographed at the Mushroom Development Institute in Savar, Dhaka, Bangladesh. The dataset comprises a total of 761 high-resolution images, including 299 healthy samples, 147 single-infected samples, and 315 mixed-infected samples. The infected images depict contamination caused by green mold, black mold, or a combination of both. The mixed-infected category represents bags affected by multiple pathogens or overlapping infection patterns, typically involving both types of mold. All images were originally captured in HEIC format using an iPhone 11 Pro Max and were later converted to JPG format for standardization and broader compatibility.

  16. Central Vehicle Computer Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 29, 2025
    Cite
    Growth Market Reports (2025). Central Vehicle Computer Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/central-vehicle-computer-market
    Explore at:
    csv, pdf, pptx. Available download formats
    Dataset updated
    Aug 29, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Central Vehicle Computer Market Outlook



    According to our latest research, the global Central Vehicle Computer market size reached USD 6.42 billion in 2024, driven primarily by the rapid adoption of advanced automotive electronics and the increasing demand for centralized computing architectures in modern vehicles. The market is expected to exhibit a robust CAGR of 14.1% from 2025 to 2033, reaching a forecasted value of approximately USD 21.25 billion by 2033. The growth trajectory is underpinned by the automotive industry's shift towards electric and autonomous vehicles, as well as the integration of complex functionalities such as ADAS, infotainment, and vehicle connectivity into a single computing unit.



    One of the primary growth factors for the Central Vehicle Computer market is the escalating complexity of vehicle electronics, which necessitates a shift from distributed electronic control units (ECUs) to centralized computing platforms. Modern vehicles, particularly electric and hybrid models, are increasingly reliant on software-driven functionalities for safety, comfort, and connectivity. The consolidation of multiple ECUs into a central vehicle computer not only reduces wiring complexity and weight but also enhances system reliability and supports over-the-air updates. This transition enables automakers to streamline vehicle architecture, improve cybersecurity, and facilitate the integration of advanced driver-assistance systems (ADAS), thereby fueling market growth.



    Another significant driver is the surging demand for electric vehicles (EVs) and the ongoing advancements in autonomous driving technologies. Central vehicle computers play a crucial role in managing the intricate interplay between various vehicle subsystems, such as powertrain, battery management, infotainment, and body control. As EV adoption accelerates globally, automakers are prioritizing centralized computing solutions to optimize energy efficiency, enable real-time data processing, and ensure seamless communication between vehicle components. Additionally, the proliferation of autonomous and semi-autonomous vehicles is propelling the need for high-performance central vehicle computers capable of handling massive data streams from sensors, cameras, and radars, further amplifying market expansion.



    Furthermore, the growing emphasis on connected car ecosystems and the integration of IoT technologies into automotive platforms are catalyzing the demand for central vehicle computers. These systems serve as the nerve center for connectivity features, facilitating vehicle-to-everything (V2X) communication, predictive maintenance, and personalized user experiences. Automakers and technology providers are forging strategic partnerships to develop scalable, future-proof central computing platforms that can accommodate evolving regulatory standards and consumer preferences. The convergence of automotive and digital technologies is expected to create lucrative opportunities for market participants, particularly in regions with robust R&D capabilities and supportive regulatory frameworks.



    Computer Engineering plays a pivotal role in the development and optimization of central vehicle computers. As vehicles become increasingly reliant on sophisticated software and hardware systems, the demand for skilled computer engineers is on the rise. These professionals are essential in designing and implementing the algorithms and architectures that enable real-time data processing and seamless integration of various vehicle subsystems. Their expertise in areas such as embedded systems, cybersecurity, and machine learning is crucial for advancing the capabilities of central vehicle computers, ensuring they can meet the growing demands of modern automotive applications.



    Regionally, Asia Pacific dominates the Central Vehicle Computer market, accounting for the largest revenue share in 2024, followed by Europe and North America. The region's leadership is attributed to the presence of leading automotive manufacturers, rapid urbanization, and strong government initiatives promoting electric mobility and smart transportation infrastructure. China, Japan, and South Korea are at the forefront of technological innovation, investing heavily in next-generation vehicle electronics and autonomous driving solutions. Meanwhile, Europe is witnessing substantial growth, driven by stringent emission

  17. Executive Functioning Data

    • openneuro.org
    Updated Dec 9, 2022
    Cite
    Tracy Brandmeyer; Arnaud Delorme (2022). Executive Functioning Data [Dataset]. http://doi.org/10.18112/openneuro.ds004350.v1.1.1
    Explore at:
    Dataset updated
    Dec 9, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tracy Brandmeyer; Arnaud Delorme
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Executive Functioning Tasks

    The data of this dataset was collected as part of an executive functioning battery consisting of three separate tasks:

    1) N-Back (NB)

    2) Sustained Attention to Response Task (SART)

    3) Local Global (LG)

    Details of the original experiment in which these tasks were conducted can be found here (https://doi.org/10.3389/fnhum.2020.00246).

    Experiment Design: Two sessions of each task were conducted on the first and last day of the neurofeedback experiment with 24 participants (mentioned above).

    [N-Back (NB)] Participants performed a visual sequential letter n-back working memory task, with memory load ranging from 1-back to 3-back. The visual stimuli consisted of a sequence of 4 letters (A, B, C, D) presented in black on a gray background. Participants observed the stimuli on a visual display and responded using the spacebar on a provided keyboard. In the 1-back condition, the target was any letter identical to the one presented in the immediately preceding trial. In the 2-back and 3-back conditions, the target was any letter identical to the one presented two or three trials back, respectively. The stimuli were presented on screen for a duration of 1 s, after which a fixation cross was presented for 500 ms. Participants responded to each stimulus by pressing the spacebar with their right hand upon target presentation. If the spacebar was not pressed within 1500 ms of stimulus presentation, a new stimulus was presented. Each n-back condition (1-, 2-, and 3-back) consisted of the presentation of 280 stimuli selected randomly from the 4-letter pool.
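The target rule described above is straightforward to operationalize. A minimal sketch of one n-back block (the stimulus count and letter pool follow the description; the exact randomization procedure is an assumption):

```python
import random

LETTERS = "ABCD"  # the 4-letter stimulus pool from the description

def make_nback_trials(n_back: int, n_trials: int = 280, seed: int = 0):
    """Generate a random letter sequence and mark targets: a trial is a target
    when its letter matches the one presented n_back trials earlier."""
    rng = random.Random(seed)
    seq = [rng.choice(LETTERS) for _ in range(n_trials)]
    targets = [i >= n_back and seq[i] == seq[i - n_back]
               for i in range(n_trials)]
    return seq, targets

seq, targets = make_nback_trials(n_back=2)  # one 2-back block of 280 stimuli
# With 4 equiprobable letters, roughly a quarter of eligible trials are targets.
```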

    [Sustained Attention to Response Task (SART)] Participants were presented with a series of single numerical digits (randomly selected from 0 to 9; the same digit could not be presented twice in a row) and instructed to press the spacebar for each digit, except when presented with the digit 3. Each number was presented for 400 ms in white on a gray background. The inter-stimulus interval was 2 s irrespective of the button press, and a fixation cross was present at all times except when the digits were presented. Participants performed the SART for approximately 10 minutes, corresponding to 250 digit presentations.

    [Local Global (LG)] Participants were shown large letters (H and T) on a computer screen. The large letters were made up of an aggregate of smaller letters that could be congruent (i.e., a large H made of small Hs or a large T made of small Ts) or incongruent (a large H made of small Ts or a large T made of small Hs) with respect to the large letter. The small letters were 0.8 cm high and the large letters were 8 cm high on the computer screen. A fixation cross was present at all times except when the stimulus letters were presented. Letters were shown on the computer screen until the subject responded. After each response, there was a delay of 1 s before the next stimulus was presented. Before each sequence of letters, instructions were shown on the screen indicating whether participants should respond to the small (local condition) or large (global condition) letters. Participants were instructed to categorize either the large or the small letters and to press the letter H or T on the computer keyboard to indicate their choice.

    Data Processing: Data processing was performed in MATLAB and EEGLAB. The EEG data was average referenced and down-sampled from 2048 Hz to 256 Hz. A high-pass filter at 1 Hz using an elliptical non-linear filter was applied, and the data was then average referenced.
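The same processing steps can be sketched outside of MATLAB. A rough SciPy equivalent of the pipeline described above, for a channels-by-samples array (filter order, ripple parameters, and zero-phase application are assumptions; the dataset's exact EEGLAB filter settings are not stated):

```python
import numpy as np
from scipy import signal

def preprocess(eeg: np.ndarray, fs_in: int = 2048, fs_out: int = 256) -> np.ndarray:
    """Average-reference, down-sample 2048 Hz -> 256 Hz, then apply a 1 Hz
    elliptic high-pass, mirroring the steps described in the dataset notes."""
    ref = eeg - eeg.mean(axis=0, keepdims=True)              # average reference
    down = signal.resample_poly(ref, fs_out, fs_in, axis=1)  # 2048 -> 256 Hz
    # Order-4 elliptic high-pass at 1 Hz; order and ripple are assumptions.
    sos = signal.ellip(4, 0.5, 40, 1.0, btype="highpass", fs=fs_out, output="sos")
    return signal.sosfiltfilt(sos, down, axis=1)
```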

    Note: The data files in this dataset were converted into the .set format for EEGLAB. The .bdf files that were converted for each of the tasks can be found in the sourcedata folder.

    Exclusion Note: The second run of NB in session 1 of sub-11 and the run of SART in session 1 of sub-18 were both excluded due to issues with conversion to .set format. However, the .bdf files of these runs can be found in the sourcedata folder.

  18. Data from: LGM: Large Multi-View Gaussian Model for High-Resolution 3D...

    • researchdata.ntu.edu.sg
    Updated Sep 26, 2024
    Cite
    Jiaxiang Tang; Zhaoxi Chen; Xiaokang Chen; Tengfei Wang; Gang Zeng; Ziwei Liu (2024). LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation [Dataset]. http://doi.org/10.21979/N9/27JLJB
    Explore at:
    Dataset updated
    Sep 26, 2024
    Dataset provided by
    DR-NTU (Data)
    Authors
    Jiaxiang Tang; Zhaoxi Chen; Xiaokang Chen; Tengfei Wang; Gang Zeng; Ziwei Liu
    License

    https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/27JLJB

    Dataset funded by
    Nanyang Technological University
    Ministry of Education (MOE)
    RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative
    Description

    3D content creation has achieved significant progress in terms of both quality and speed. Although current feed-forward models can produce 3D objects in seconds, their resolution is constrained by the intensive computation required during training. In this paper, we introduce Large Multi-view Gaussian Model (LGM), a novel framework designed to generate high-resolution 3D models from text prompts or single-view images. Our key insights are two-fold: (1) 3D Representation: We propose multi-view Gaussian features as an efficient yet powerful representation, which can then be fused together for differentiable rendering. (2) 3D Backbone: We present an asymmetric U-Net as a high-throughput backbone operating on multi-view images, which can be produced from text or single-view image input by leveraging multi-view diffusion models. Extensive experiments demonstrate the high fidelity and efficiency of our approach. Notably, we maintain the fast speed to generate 3D objects within 5 seconds while boosting the training resolution to 512, thereby achieving high-resolution 3D content generation.

  19. United States - Producer Price Index by Industry: Electronic Computer...

    • tradingeconomics.com
    csv, excel, json, xml
    Updated Mar 1, 2020
    Cite
    TRADING ECONOMICS (2020). United States - Producer Price Index by Industry: Electronic Computer Manufacturing: Single User Computers, Microprocessor Based, General Purpose [Dataset]. https://tradingeconomics.com/united-states/producer-price-index-by-industry-electronic-computer-manufacturing-single-user-computers-microprocessor-based-general-purpose-fed-data.html
    Explore at:
    excel, xml, json, csv. Available download formats
    Dataset updated
    Mar 1, 2020
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1976 - Dec 31, 2025
    Area covered
    United States
    Description

    United States - Producer Price Index by Industry: Electronic Computer Manufacturing: Single User Computers, Microprocessor Based, General Purpose was 102.32700 Index Jun 2007=100 in August of 2025, according to the United States Federal Reserve. Historically, United States - Producer Price Index by Industry: Electronic Computer Manufacturing: Single User Computers, Microprocessor Based, General Purpose reached a record high of 1033.50000 in January of 2005 and a record low of 87.90000 in December of 2020. Trading Economics provides the current actual value, an historical data chart and related indicators for United States - Producer Price Index by Industry: Electronic Computer Manufacturing: Single User Computers, Microprocessor Based, General Purpose - last updated from the United States Federal Reserve on December of 2025.

  20. UA_L-DoTT: University of Alabama's Large Dataset of Trains and Trucks -...

    • plus.figshare.com
    bin
    Updated May 30, 2023
    Cite
    Maxwell Eastepp; Lauren Faris; Kenneth Ricks (2023). UA_L-DoTT: University of Alabama's Large Dataset of Trains and Trucks - Dataset Repository [Dataset]. http://doi.org/10.25452/figshare.plus.19311938.v1
    Explore at:
    bin. Available download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare+
    Authors
    Maxwell Eastepp; Lauren Faris; Kenneth Ricks
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    UA_L-DoTT (University of Alabama's Large Dataset of Trains and Trucks) is a collection of camera images and 3D LiDAR point cloud scans from five different data sites. Four of the data sites targeted trains on railways and the last targeted trucks on a four-lane highway. Low light conditions were present at one of the data sites, showcasing unique differences between individual sensor data. The final data site utilized a mobile platform, which created a large variety of viewpoints in images and point clouds. The dataset consists of 97,397 raw images, 11,415 corresponding labeled text files, 354,334 raw point clouds, 77,860 corresponding labeled point clouds, and 33 timestamp files. These timestamps correlate images to point cloud scans via POSIX time. The data was collected with a sensor suite consisting of five different LiDAR sensors and a camera, providing various viewpoints and features of the same targets due to the variance in operational characteristics of the sensors. The inclusion of both raw and labeled data allows users to get started immediately with the labeled subset, or to label additional raw data as needed. This large dataset is beneficial to any researcher interested in applying machine learning to cameras, LiDARs, or both. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Army.
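Because the timestamp files correlate images to point-cloud scans via POSIX time, pairing the two streams reduces to a nearest-timestamp search. A minimal sketch (the matching tolerance is an assumption; choose it from the sensors' actual frame rates):

```python
import bisect

def match_nearest(image_ts, cloud_ts, tol=0.05):
    """For each image timestamp, find the closest point-cloud timestamp
    (both in POSIX seconds); drop pairs further apart than `tol` seconds."""
    cloud_ts = sorted(cloud_ts)
    pairs = []
    for t in image_ts:
        i = bisect.bisect_left(cloud_ts, t)
        candidates = cloud_ts[max(0, i - 1):i + 1]  # neighbors on either side
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= tol:
            pairs.append((t, best))
    return pairs

print(match_nearest([1.0, 2.0], [0.98, 1.52, 2.04]))  # [(1.0, 0.98), (2.0, 2.04)]
```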

Data from: Potrika: Raw and Balanced Newspaper Datasets in the Bangla Language with Eight Topics and Five Attributes

Further details of the dataset, its collection, and usage for deep journalism including detection of the multi-perspective parameters for transportation can be found in our article here: https://doi.org/10.3390/su14095711.
