25 datasets found
  1. Data from: Data sharing by scientists: practices and perceptions

    • datasetcatalog.nlm.nih.gov
    • search.dataone.org
    • +2 more
    Updated Jul 7, 2011
    Cite
    Aydinoglu, Arsev Umur; Douglass, Kimberly; Tenopir, Carol; Wu, Lei; Frame, Mike; Manoff, Maribeth; Read, Eleanor; Allard, Suzie (2011). Data sharing by scientists: practices and perceptions [Dataset]. http://doi.org/10.5061/dryad.6t94p
    Explore at:
    Dataset updated
    Jul 7, 2011
    Authors
    Aydinoglu, Arsev Umur; Douglass, Kimberly; Tenopir, Carol; Wu, Lei; Frame, Mike; Manoff, Maribeth; Read, Eleanor; Allard, Suzie
    Description

    Background: Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers: data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method, allowing for verification of results and extending research from prior results.

    Methodology/Principal Findings: A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management in either the short or the long term. If certain conditions are met (such as formal citation and sharing reprints), respondents agree they are willing to share their data. There are also significant differences in data management practices and approaches based on primary funding agency, subject discipline, age, work focus, and world region.

    Conclusions/Significance: Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as in the researchers themselves. New mandates for data management plans from NSF and other federal agencies, and worldwide attention to the need to share and preserve data, could lead to changes. Large-scale programs, such as the NSF-sponsored DataNET (including projects like DataONE), will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.

  2. Lifesciences Data Mining and Visualization Market Report | Global Forecast...

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 5, 2024
    Cite
    Dataintelo (2024). Lifesciences Data Mining and Visualization Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-lifesciences-data-mining-and-visualization-market
    Explore at:
    Available download formats: pptx, pdf, csv
    Dataset updated
    Sep 5, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Lifesciences Data Mining and Visualization Market Outlook



    The global market size for Lifesciences Data Mining and Visualization was valued at approximately USD 1.5 billion in 2023 and is projected to reach around USD 4.3 billion by 2032, growing at a compound annual growth rate (CAGR) of 12.5% during the forecast period. The growth of this market is driven by the increasing demand for sophisticated data analysis tools in the life sciences sector, advancements in analytical technologies, and the rising volume of complex biological data generated from research and clinical trials.
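    As a quick sanity check on the figures above, compounding the 2023 base at the stated CAGR over the nine years to 2032 reproduces the quoted projection (a back-of-the-envelope sketch; the report's own forecasting model is not public):

    ```python
    def project(base, cagr, years):
        """Compound a base value forward at a constant annual growth rate."""
        return base * (1 + cagr) ** years

    # USD billion: 1.5 in 2023, compounded at 12.5% for 9 years to 2032
    value_2032 = project(1.5, 0.125, 9)
    print(round(value_2032, 2))  # ~4.33, consistent with "around USD 4.3 billion"
    ```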



    One of the primary growth factors for the Lifesciences Data Mining and Visualization market is the burgeoning amount of data generated from various life sciences applications, such as genomics, proteomics, and clinical trials. With the advent of high-throughput technologies, researchers and healthcare professionals are now capable of generating vast amounts of data, which necessitates the use of advanced data mining and visualization tools to derive actionable insights. These tools not only help in managing and interpreting large datasets but also in uncovering hidden patterns and relationships, thereby accelerating research and development processes.



    Another significant driver is the increasing adoption of artificial intelligence (AI) and machine learning (ML) algorithms in the life sciences domain. These technologies have proven to be invaluable in enhancing data analysis capabilities, enabling more precise and predictive modeling of biological systems. By integrating AI and ML with data mining and visualization platforms, researchers can achieve higher accuracy in identifying potential drug targets, understanding disease mechanisms, and personalizing treatment plans. This trend is expected to continue, further propelling the market's growth.



    Moreover, the rising emphasis on personalized medicine and the need for precision in healthcare is fueling the demand for data mining and visualization tools. Personalized medicine relies heavily on the analysis of individual genetic, proteomic, and metabolomic profiles to tailor treatments specifically to patients' unique characteristics. The ability to visualize these complex datasets in an understandable and actionable manner is critical for the successful implementation of personalized medicine strategies, thereby boosting the demand for advanced data analysis tools.



    From a regional perspective, North America is anticipated to dominate the Lifesciences Data Mining and Visualization market, owing to the presence of a robust healthcare infrastructure, significant investments in research and development, and a high adoption rate of advanced technologies. The European market is also expected to witness substantial growth, driven by increasing government initiatives to support life sciences research and the presence of leading biopharmaceutical companies. The Asia Pacific region is projected to experience the fastest growth, attributed to the expanding healthcare sector, rising investments in biotechnology research, and the increasing adoption of data analytics solutions.



    Component Analysis



    The Lifesciences Data Mining and Visualization market is segmented by component into software and services. The software segment is expected to hold a significant share of the market, driven by the continuous advancements in data mining algorithms and visualization techniques. Software solutions are critical in processing large volumes of complex biological data, facilitating real-time analysis, and providing intuitive visual representations that aid in decision-making. The increasing integration of AI and ML into these software solutions is further enhancing their capabilities, making them indispensable tools in life sciences research.



    The services segment, on the other hand, is projected to grow at a considerable rate, as organizations seek specialized expertise to manage and interpret their data. Services include consulting, implementation, and maintenance, as well as training and support. The demand for these services is driven by the need to ensure optimal utilization of data mining software and to keep up with the rapid pace of technological advancements. Moreover, many life sciences organizations lack the in-house expertise required to handle large-scale data analytics projects, thereby turning to external service providers for assistance.



    Within the software segment, there is a growing trend towards the development of integrated platforms that combine multiple functionalities, such as data collection, pre

  3. Cloud Computing for Science Data Processing in Support of Emergency Response...

    • data.amerigeoss.org
    • data.wu.ac.at
    html
    Updated Jul 27, 2019
    Cite
    United States[old] (2019). Cloud Computing for Science Data Processing in Support of Emergency Response [Dataset]. https://data.amerigeoss.org/tl/dataset/cloud-computing-for-science-data-processing-in-support-of-emergency-response
    Explore at:
    Available download formats: html
    Dataset updated
    Jul 27, 2019
    Dataset provided by
    United States[old]
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Cloud computing enables users to create virtual computers, each one with the optimal configuration of hardware and software for a job. The number of virtual computers can be increased to process large data sets or reduce processing time. Large scale scientific applications of the cloud, in many cases, are still in development.

    For example, in the event of an environmental crisis, such as the Deepwater Horizon oil spill, tornadoes, Mississippi River flooding, or a hurricane, up-to-date information is one of the most important commodities for decision makers. The volume of remote sensing data that needs to be processed to accurately retrieve ocean properties from satellite measurements can easily exceed a terabyte, even for a small region such as the Mississippi Sound. Often, with current infrastructure, the time required to download, process, and analyze such large volumes of remote sensing data limits the ability to deliver timely information to emergency responders. The use of a cloud computing platform, like NASA's Nebula, can help eliminate those barriers.

    NASA Nebula was developed as an open-source cloud computing platform to provide an easily quantifiable and improved alternative to building additional expensive data centers and to provide an easier way for NASA scientists and researchers to share large, complex data sets with external partners and the public. Nebula was designed as an Infrastructure-as-a-Service (IaaS) implementation that provided scalable computing and storage for science data and Web-based applications. Nebula IaaS allowed users to unilaterally provision, manage, and decommission computing capabilities (virtual machine instances, storage, etc.) on an as-needed basis through a Web interface or a set of command-line tools.

    This project demonstrated a novel way to conduct large scale scientific data processing utilizing NASA’s cloud computer, Nebula. Remote sensing data from the Deepwater Horizon oil spill site was analyzed to assess changes in concentration of suspended sediments in the area surrounding the spill site.

    Software for processing time series of satellite remote sensing data was packaged together with computer code that uses web services to download the data sets from a NASA data archive and distribution system. The new application package could be quickly deployed on a cloud computing platform when, and only for as long as, processing of the time series data was required to support emergency response. A fast network connection between the cloud system and the data archive enabled remote processing of the satellite data without the need to download the input data to a local computer system: only the output data products are transferred for further analysis.
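    The workflow described above, fetch granules near the archive, derive small products in the cloud, and ship only the products out, can be sketched as follows. The function names, granule ids, and the sediment index here are illustrative stand-ins, not NASA's actual retrieval code:

    ```python
    def fetch_granule(granule_id):
        # Placeholder for a web-service download from the data archive;
        # here it returns synthetic reflectance values for the sketch.
        return [0.01 * (granule_id + i) for i in range(5)]

    def suspended_sediment_index(reflectance):
        # Toy proxy: mean reflectance as an index. A real retrieval would
        # apply a calibrated bio-optical algorithm.
        return sum(reflectance) / len(reflectance)

    def process_time_series(granule_ids):
        # Only these small derived products leave the cloud platform;
        # the large input granules are never downloaded locally.
        return {gid: round(suspended_sediment_index(fetch_granule(gid)), 4)
                for gid in granule_ids}

    products = process_time_series([100, 101, 102])
    ```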

    NASA was a pioneer in cloud computing by having established its own private cloud computing data center called Nebula in 2009 at the Ames Research Center (Ames). Nebula provided high-capacity computing and data storage services to NASA Centers, Mission Directorates, and external customers. In 2012, NASA shut down Nebula based on the results of a 5-month test that benchmarked Nebula’s capabilities against those of Amazon and Microsoft. The test found that public clouds were more reliable and cost effective and offered much greater computing capacity and better IT support services than Nebula.

  4. Population and Housing Census 2005 - Palau

    • microdata.pacificdata.org
    Updated Aug 18, 2013
    + more versions
    Cite
    Office of Planning and Statistics (2013). Population and Housing Census 2005 - Palau [Dataset]. https://microdata.pacificdata.org/index.php/catalog/27
    Explore at:
    Dataset updated
    Aug 18, 2013
    Dataset authored and provided by
    Office of Planning and Statistics
    Time period covered
    2005
    Area covered
    Palau
    Description

    Abstract

    The 2005 Republic of Palau Census of Population and Housing will be used to give a snapshot of Republic of Palau's population and housing at the mid-point of the decade. This Census is also important because it measures the population at the beginning of the implementation of the Compact of Free Association. The information collected in the census is needed to plan for the needs of the population. The government uses the census figures to allocate funds for public services in a wide variety of areas, such as education, housing, and job training. The figures also are used by private businesses, academic institutions, local organizations, and the public in general to understand who we are and what our situation is, in order to prepare better for our future needs.

    The fundamental purpose of a census is to provide information on the size, distribution and characteristics of a country's population. The census data are used for policymaking, planning and administration, as well as in management and evaluation of programmes in education, labour force, family planning, housing, health, transportation and rural development. A basic administrative use is in the demarcation of constituencies and allocation of representation to governing bodies. The census is also an invaluable resource for research, providing data for scientific analysis of the composition and distribution of the population and for statistical models to forecast its future growth. The census provides business and industry with the basic data they need to appraise the demand for housing, schools, furnishings, food, clothing, recreational facilities, medical supplies and other goods and services.

    Geographic coverage

    A hierarchical geographic presentation shows the geographic entities in a superior/subordinate structure in census products. This structure is derived from the legal, administrative, or areal relationships of the entities. The hierarchical structure is depicted in report tables by means of indentation. The following structure is used for the 2005 Census of the Republic of Palau:

    Republic of Palau > State > Hamlet/Village > Enumeration District > Block

    Analysis unit

    Individuals, Families, Households, General Population

    Universe

    The Census covered all the households and respective residents in the entire country.

    Kind of data

    Census/enumeration data [cen]

    Sampling procedure

    Not applicable to a full enumeration census.

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    The 2005 Palau Census of Population and Housing comprises three parts: 1. Housing - one form for each household; 2. Population - one form for each member of the household; 3. People who have left home - one form for each household.

    Cleaning operations

    Full-scale processing and editing activities comprised eight separate sessions, conducted either together with U.S. Census Bureau experts or separately under their remote guidance, to bring all datasets to the publishing stage.

    Processing operation was handled with care to produce a set of data that describes the population as clearly and accurately as possible. To meet this objective, questionnaires were reviewed and edited during field data collection operations by crew leaders for consistency, completeness, and acceptability. Questionnaires were also reviewed by census clerks in the census office for omissions, certain inconsistencies, and population coverage. For example, write-in entries such as "Don't know" or "NA" were considered unacceptable in certain quantities and/or in conjunction with other data omissions.
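    The review step described above can be illustrated with a minimal edit check; the placeholder answers and field names are hypothetical, not the actual census edit specification:

    ```python
    # Write-in entries treated as unacceptable in this sketch
    UNACCEPTABLE_WRITE_INS = {"don't know", "na", ""}

    def flag_for_followup(questionnaire):
        """Return the items needing telephone or personal-visit follow-up."""
        return [item for item, answer in questionnaire.items()
                if str(answer).strip().lower() in UNACCEPTABLE_WRITE_INS]

    needs_followup = flag_for_followup(
        {"age": 34, "occupation": "NA", "relationship": "head"})
    ```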

    As a result of this review operation, a telephone or personal visit follow-up was made to obtain missing information. Potential coverage errors were included in the follow-up, as well as questionnaires with omissions or inconsistencies beyond the completeness and quality tolerances specified in the review procedures.

    Subsequent to field operations, remaining incomplete or inconsistent information on the questionnaires was assigned using imputation procedures during the final automated edit of the collected data. Allocations, or computer assignments of acceptable data in place of unacceptable entries or blanks, were needed most often when an entry for a given item was lacking or when the information reported for a person or housing unit on that item was inconsistent with other information for that same person or housing unit. As in previous censuses, the general procedure for changing unacceptable entries was to assign an entry for a person or housing unit that was consistent with entries for persons or housing units with similar characteristics. The assignment of acceptable data in place of blanks or unacceptable entries enhanced the usefulness of the data.
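    The allocation procedure reads like classic hot-deck imputation, which can be sketched as follows (the matching key and field names are illustrative, not the Census Bureau's actual edit rules):

    ```python
    def hot_deck_impute(records, key_fields, target):
        """Fill missing `target` values from the most recently seen record
        sharing the same values on `key_fields` (hot-deck allocation)."""
        donors = {}  # most recent acceptable value per matching key
        for rec in records:
            key = tuple(rec[f] for f in key_fields)
            if rec.get(target) is None:
                if key in donors:
                    rec[target] = donors[key]
                    rec["allocated"] = True
            else:
                donors[key] = rec[target]
        return records

    people = [
        {"age_group": "30-39", "state": "Koror", "employment": "employed"},
        {"age_group": "30-39", "state": "Koror", "employment": None},
    ]
    hot_deck_impute(people, ["age_group", "state"], "employment")
    ```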

    Another way to make corrections during the computer editing process is substitution. Substitution is the assignment of a full set of characteristics for a person or housing unit. Because of the detailed field operations, substitution was not needed for the 2005 Census.

    Sampling error estimates

    Sampling Error is not applicable to full enumeration censuses.

    Data appraisal

    In any large-scale statistical operation, such as the 2005 Census of the Republic of Palau, human- and machine-related errors were anticipated. These errors are commonly referred to as nonsampling errors. Such errors include not enumerating every household or every person in the population, not obtaining all required information from the respondents, obtaining incorrect or inconsistent information, and recording information incorrectly. In addition, errors can occur during the field review of the enumerators' work, during clerical handling of the census questionnaires, or during the electronic processing of the questionnaires.

    To reduce various types of nonsampling errors, a number of techniques were implemented during the planning, data collection, and data processing activities. Quality assurance methods were used throughout the data collection and processing phases of the census to improve the quality of the data.

  5. International Journal of Engineering and Advanced Technology Acceptance Rate...

    • researchhelpdesk.org
    Updated May 1, 2022
    Cite
    Research Help Desk (2022). International Journal of Engineering and Advanced Technology Acceptance Rate - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/acceptance-rate/552/international-journal-of-engineering-and-advanced-technology
    Explore at:
    Dataset updated
    May 1, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    International Journal of Engineering and Advanced Technology Acceptance Rate - ResearchHelpDesk - International Journal of Engineering and Advanced Technology (IJEAT), Online ISSN 2249-8958, is a bi-monthly international journal published in February, April, June, August, October, and December by Blue Eyes Intelligence Engineering & Sciences Publication (BEIESP), Bhopal (M.P.), India, since 2011. It is an academic, online, open-access, double-blind, peer-reviewed international journal. It aims to publish original, theoretical and practical advances in Computer Science & Engineering, Information Technology, Electrical and Electronics Engineering, Electronics and Telecommunication, Mechanical Engineering, Civil Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. All submitted papers are reviewed by the IJEAT board of committee.

    Aim of IJEAT: to disseminate original, scientific, theoretical, or applied research in the field of Engineering and allied fields; to provide a platform for publishing results and research with a strong empirical component; to bridge the significant gap between research and practice by promoting the publication of original, novel, industry-relevant research; and to solicit original and unpublished research papers based on theoretical or experimental works.

    Scope of IJEAT: IJEAT covers all topics of all engineering branches, among them Computer Science & Engineering, Information Technology, Electronics & Communication, Electrical and Electronics, Electronics and Telecommunication, Civil Engineering, Mechanical Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. The main topics include, but are not limited to:

    1. Smart Computing and Information Processing: Signal and Speech Processing Image Processing and Pattern Recognition WSN Artificial Intelligence and machine learning Data mining and warehousing Data Analytics Deep learning Bioinformatics High Performance computing Advanced Computer networking Cloud Computing IoT Parallel Computing on GPU Human Computer Interactions
    2. Recent Trends in Microelectronics and VLSI Design: Process & Device Technologies Low-power design Nanometer-scale integrated circuits Application specific ICs (ASICs) FPGAs Nanotechnology Nano electronics and Quantum Computing
    3. Challenges of Industry and their Solutions, Communications: Advanced Manufacturing Technologies Artificial Intelligence Autonomous Robots Augmented Reality Big Data Analytics and Business Intelligence Cyber Physical Systems (CPS) Digital Clone or Simulation Industrial Internet of Things (IIoT) Manufacturing IOT Plant Cyber security Smart Solutions – Wearable Sensors and Smart Glasses System Integration Small Batch Manufacturing Visual Analytics Virtual Reality 3D Printing
    4. Internet of Things (IoT): Internet of Things (IoT) & IoE & Edge Computing Distributed Mobile Applications Utilizing IoT Security, Privacy and Trust in IoT & IoE Standards for IoT Applications Ubiquitous Computing Block Chain-enabled IoT Device and Data Security and Privacy Application of WSN in IoT Cloud Resources Utilization in IoT Wireless Access Technologies for IoT Mobile Applications and Services for IoT Machine/ Deep Learning with IoT & IoE Smart Sensors and Internet of Things for Smart City Logic, Functional programming and Microcontrollers for IoT Sensor Networks, Actuators for Internet of Things Data Visualization using IoT IoT Application and Communication Protocol Big Data Analytics for Social Networking using IoT IoT Applications for Smart Cities Emulation and Simulation Methodologies for IoT IoT Applied for Digital Contents
    5. Microwaves and Photonics: Microwave filter Micro Strip antenna Microwave Link design Microwave oscillator Frequency selective surface Microwave Antenna Microwave Photonics Radio over fiber Optical communication Optical oscillator Optical Link design Optical phase lock loop Optical devices
    6. Computation Intelligence and Analytics: Soft Computing Advance Ubiquitous Computing Parallel Computing Distributed Computing Machine Learning Information Retrieval Expert Systems Data Mining Text Mining Data Warehousing Predictive Analysis Data Management Big Data Analytics Big Data Security
    7. Energy Harvesting and Wireless Power Transmission: Energy harvesting and transfer for wireless sensor networks Economics of energy harvesting communications Waveform optimization for wireless power transfer RF Energy Harvesting Wireless Power Transmission Microstrip Antenna design and application Wearable Textile Antenna Luminescence Rectenna
    8. Advance Concept of Networking and Database: Computer Network Mobile Adhoc Network Image Security Application Artificial Intelligence and machine learning in the Field of Network and Database Data Analytic High performance computing Pattern Recognition
    9. Machine Learning (ML) and Knowledge Mining (KM): Regression and prediction Problem solving and planning Clustering Classification Neural information processing Vision and speech perception Heterogeneous and streaming data Natural language processing Probabilistic Models and Methods Reasoning and inference Marketing and social sciences Data mining Knowledge Discovery Web mining Information retrieval Design and diagnosis Game playing Streaming data Music Modelling and Analysis Robotics and control Multi-agent systems Bioinformatics Social sciences Industrial, financial and scientific applications of all kind
    10. Advanced Computer networking: Computational Intelligence Data Management, Exploration, and Mining Robotics Artificial Intelligence and Machine Learning Computer Architecture and VLSI Computer Graphics, Simulation, and Modelling Digital System and Logic Design Natural Language Processing and Machine Translation Parallel and Distributed Algorithms Pattern Recognition and Analysis Systems and Software Engineering Nature Inspired Computing Signal and Image Processing Reconfigurable Computing Cloud, Cluster, Grid and P2P Computing Biomedical Computing Advanced Bioinformatics Green Computing Mobile Computing Nano Ubiquitous Computing Context Awareness and Personalization, Autonomic and Trusted Computing Cryptography and Applied Mathematics Security, Trust and Privacy Digital Rights Management Networked-Driven Multicourse Chips Internet Computing Agricultural Informatics and Communication Community Information Systems Computational Economics, Digital Photogrammetric Remote Sensing, GIS and GPS Disaster Management e-governance, e-Commerce, e-business, e-Learning Forest Genomics and Informatics Healthcare Informatics Information Ecology and Knowledge Management Irrigation Informatics Neuro-Informatics Open Source: Challenges and opportunities Web-Based Learning: Innovation and Challenges Soft computing Signal and Speech Processing Natural Language Processing
    11. Communications: Microstrip Antenna Microwave Radar and Satellite Smart Antenna MIMO Antenna Wireless Communication RFID Network and Applications 5G Communication 6G Communication
    12. Algorithms and Complexity: Sequential, Parallel And Distributed Algorithms And Data Structures Approximation And Randomized Algorithms Graph Algorithms And Graph Drawing On-Line And Streaming Algorithms Analysis Of Algorithms And Computational Complexity Algorithm Engineering Web Algorithms Exact And Parameterized Computation Algorithmic Game Theory Computational Biology Foundations Of Communication Networks Computational Geometry Discrete Optimization
    13. Software Engineering and Knowledge Engineering: Software Engineering Methodologies Agent-based software engineering Artificial intelligence approaches to software engineering Component-based software engineering Embedded and ubiquitous software engineering Aspect-based software engineering Empirical software engineering Search-Based Software engineering Automated software design and synthesis Computer-supported cooperative work Automated software specification Reverse engineering Software Engineering Techniques and Production Perspectives Requirements engineering Software analysis, design and modelling Software maintenance and evolution Software engineering tools and environments Software engineering decision support Software design patterns Software product lines Process and workflow management Reflection and metadata approaches Program understanding and system maintenance Software domain modelling and analysis Software economics Multimedia and hypermedia software engineering Software engineering case study and experience reports Enterprise software, middleware, and tools Artificial intelligent methods, models, techniques Artificial life and societies Swarm intelligence Smart Spaces Autonomic computing and agent-based systems Autonomic computing Adaptive Systems Agent architectures, ontologies, languages and protocols Multi-agent systems Agent-based learning and knowledge discovery Interface agents Agent-based auctions and marketplaces Secure mobile and multi-agent systems Mobile agents SOA and Service-Oriented Systems Service-centric software engineering Service oriented requirements engineering Service oriented architectures Middleware for service based systems Service discovery and composition Service level

  6. AutoML Market Analysis, Size, and Forecast 2025-2029: North America (US and...

    • technavio.com
    Updated Jul 11, 2025
    Cite
    Technavio (2025). AutoML Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, Italy, and UK), APAC (China, India, Japan, and South Korea), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/automl-market-industry-analysis
    Explore at:
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Canada, France, South Korea, Japan, Italy, Germany, United Kingdom, United States, Global
    Description


    AutoML Market Size 2025-2029

    The AutoML market size is forecast to increase by USD 13.53 billion at a CAGR of 44.8% between 2024 and 2029.

    The market is experiencing significant growth due to the increasing democratization of AI technology, making machine learning more accessible to businesses of all sizes. Simultaneously, the talent shortage in data science continues to persist, driving the demand for automated machine learning solutions. A notable trend in the market is the fusion of predictive AutoML with generative AI, enabling lifecycle automation and streamlining the machine learning process. However, the lack of transparency and trust in complex models poses a significant challenge for businesses, as they strive to ensure the accuracy and reliability of their AI applications.
    Companies seeking to capitalize on market opportunities must address these challenges by focusing on explainable AI and building robust, trustworthy models. Navigating the complex landscape of AutoML requires strategic planning and a deep understanding of the latest trends and developments in AI technology. AutoML applications span various industries, from finance to healthcare, and the market is witnessing a shift towards scalable, cloud-based systems. Ensuring robust data security and privacy measures is essential for companies to maintain customer trust and comply with regulatory requirements.
    

    What will be the Size of the AutoML Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.

    The market is witnessing significant advancements with the increasing adoption of AutoML software platforms and model monitoring tools to streamline machine learning processes. Real-time AutoML is gaining traction, enabling businesses to make instant decisions based on data. Open-source AutoML tools offer flexibility and cost savings, while model retraining strategies and deployment strategies ensure models remain accurate and effective. API integration with AutoML platforms facilitates seamless workflows, and frameworks provide a solid foundation for building customized solutions.

    Data cleaning and feature engineering are also crucial steps in the data analytics process to ensure data accuracy and quality. Batch processing and libraries cater to large-scale data needs, while model versioning and pipelines ensure data consistency and traceability. Despite its benefits, AutoML faces limitations, including data version control challenges and the need for continuous model optimization. Overall, the market is evolving rapidly, offering businesses innovative solutions to tackle complex data challenges. Data science platforms provide essential tools for data cleaning and data transformation, ensuring data integrity for big data analytics.
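
    The model-monitoring and retraining workflow described above can be sketched in a few lines of plain Python. The threshold, metric, and registry structure below are illustrative assumptions, not taken from any specific AutoML product:

```python
# Illustrative sketch of a model-monitoring/retraining policy. The
# tolerance value and the registry layout are assumptions for this example.

def should_retrain(live_accuracy: float, baseline_accuracy: float,
                   tolerance: float = 0.05) -> bool:
    """Trigger retraining when live accuracy drifts below the baseline
    by more than the tolerance."""
    return (baseline_accuracy - live_accuracy) > tolerance

# A tiny model registry for versioning: each retrain stores a new version.
registry: list[dict] = []

def register_model(version: int, accuracy: float) -> None:
    registry.append({"version": version, "accuracy": accuracy})

register_model(1, 0.92)          # initial deployment
if should_retrain(live_accuracy=0.84, baseline_accuracy=0.92):
    register_model(2, 0.91)      # retrained model becomes version 2

print(registry[-1]["version"])   # latest deployed version
```

    Real platforms attach richer metadata (training data hashes, pipeline versions) to each registry entry; the drift check here stands in for the model-monitoring tools mentioned above.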

    How is this AutoML Industry segmented?

    The AutoML industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Type
    
      Services
      Platforms
      Software tools
    
    
    Deployment
    
      Cloud
      On-premises
    
    
    Application
    
      Data processing
      Model selection
      Hyperparameter tuning
    
    
    Sector
    
      BFSI
      Retail and e-commerce
      Manufacturing
      Healthcare
    
    
    End-user
    
      Large enterprises
      SMEs
    
    
    Geography
    
      North America
    
        US
        Canada
    
    
      Europe
    
        France
        Germany
        Italy
        UK
    
    
      APAC
    
        China
        India
        Japan
        South Korea
    
    
      Rest of World (ROW)
    

    By Type Insights

    The Services segment is estimated to witness significant growth during the forecast period. The global automated machine learning (AutoML) market is witnessing significant growth due to the increasing adoption of advanced technologies such as natural language processing, gradient boosting algorithms, neural network architecture, and cross-validation techniques. AutoML platforms are increasingly being utilized by businesses to automate various machine learning tasks, including anomaly detection, data preprocessing, model selection, and hyperparameter optimization. These platforms employ various methods like Bayesian optimization, clustering algorithms, decision tree ensembles, and ensemble learning to improve model performance. Data augmentation techniques and dimensionality reduction are essential components of AutoML model training, ensuring data scalability and model accuracy. Statistical analysis and time series analysis provide valuable insights, while ETL processes streamline data integration.

    Deep learning algorithms, image classification models, and time series forecasting are some of the advanced applications of AutoML. Model performance metrics like accuracy and precision are critical in evaluating the effectiveness of these models.

  7.

    Cloud Real-Time Analytics Market Market Research Report 2033

    • researchintelo.com
    csv, pdf, pptx
    Updated Jul 24, 2025
    Research Intelo (2025). Cloud Real-Time Analytics Market Market Research Report 2033 [Dataset]. https://researchintelo.com/report/cloud-real-time-analytics-market-market
    Explore at:
    Available download formats: pptx, pdf, csv
    Dataset updated
    Jul 24, 2025
    Dataset authored and provided by
    Research Intelo
    License

    https://researchintelo.com/privacy-and-policy

    Time period covered
    2024 - 2033
    Area covered
    Global
    Description

    Cloud Real-Time Analytics Market Outlook



    According to our latest research, the global cloud real-time analytics market size in 2024 stands at USD 12.7 billion, driven by the escalating demand for instantaneous data-driven decision-making across industries. The market is poised for robust growth, registering a CAGR of 20.8% from 2025 to 2033. By the end of 2033, the market is forecasted to reach an impressive USD 84.6 billion. This surge is attributed to the exponential increase in cloud adoption, the proliferation of IoT devices, and the growing need for advanced analytics solutions that can handle massive data streams in real time, as per our latest research findings.



    One of the primary growth factors for the cloud real-time analytics market is the rapid digital transformation initiatives undertaken by enterprises worldwide. Organizations are increasingly leveraging cloud-based analytics to gain actionable insights from data generated by various digital touchpoints such as social media, web applications, and connected devices. The agility and scalability offered by cloud platforms enable businesses to process and analyze large volumes of data with minimal latency, which is essential for applications like fraud detection, customer personalization, and operational optimization. Moreover, the cost-effectiveness of cloud deployment compared to traditional on-premises solutions is further accelerating market adoption, especially among small and medium enterprises seeking to remain competitive.



    Another significant growth driver is the evolution of artificial intelligence and machine learning technologies, which are being seamlessly integrated into cloud real-time analytics platforms. These advanced technologies empower enterprises to move beyond descriptive analytics to predictive and prescriptive analytics, enhancing their ability to anticipate trends, mitigate risks, and optimize performance in real time. The increasing complexity of cyber threats and the need for proactive risk management have also led to a surge in demand for real-time analytics in sectors such as BFSI, healthcare, and government. Additionally, the proliferation of 5G networks and edge computing is expected to further boost the adoption of cloud real-time analytics by enabling faster data processing closer to the source.



    The shift towards hybrid and multi-cloud architectures is also playing a pivotal role in the expansion of the cloud real-time analytics market. Enterprises are increasingly adopting hybrid cloud models to balance data security, compliance, and scalability requirements. This hybrid approach enables organizations to process sensitive data within private clouds while leveraging the computational power of public clouds for large-scale analytics. The flexibility offered by hybrid and multi-cloud strategies is particularly beneficial for industries with stringent regulatory requirements, such as healthcare and finance. Furthermore, strategic partnerships between cloud service providers and analytics vendors are fostering innovation and expanding the capabilities of real-time analytics solutions.



    From a regional perspective, North America continues to dominate the cloud real-time analytics market, accounting for the largest share in 2024 due to the presence of leading technology providers, high cloud adoption rates, and a mature digital infrastructure. Europe is following closely, driven by the increasing focus on data privacy and regulatory compliance, while Asia Pacific is emerging as the fastest-growing region, fueled by rapid industrialization, digitalization, and government initiatives to promote smart cities and digital economies. Latin America and the Middle East & Africa are also witnessing growing adoption, albeit at a slower pace, as organizations in these regions gradually embrace cloud-based analytics to enhance operational efficiency and customer engagement.



    Component Analysis



    The cloud real-time analytics market by component is segmented into software and services, each playing a critical role in driving the adoption and value proposition of real-time analytics solutions. The software segment encompasses analytics platforms, data integration tools, visualization software, and machine learning engines that enable organizations to derive actionable insights from real-time data streams. With the increasing complexity of data sources and the need for advanced analytics capabilities, vendors are continuously enhancing their software offerings wit

  8.

    STEM-NER-60k

    • data.uni-hannover.de
    zip
    Updated May 24, 2022
    TIB (2022). STEM-NER-60k [Dataset]. https://data.uni-hannover.de/dataset/stem-ner-60k
    Explore at:
    Available download formats: zip (36715541)
    Dataset updated
    May 24, 2022
    Dataset authored and provided by
    TIB
    License

    Attribution-ShareAlike 3.0 (CC BY-SA 3.0): https://creativecommons.org/licenses/by-sa/3.0/
    License information was derived automatically

    Description

    A Large-scale Dataset of STEM Science as PROCESS, METHOD, MATERIAL, and DATA Named Entities

    This repository hosts data as a follow-up study to the following publications

    D'Souza, J., Hoppe, A., Brack, A., Jaradeh, M., Auer, S., & Ewerth, R. (2020). The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources. In Proceedings of The 12th Language Resources and Evaluation Conference (pp. 2192–2203). European Language Resources Association.

    Brack, A., D’Souza, J., Hoppe, A., Auer, S., Ewerth, R. (2020). Domain-Independent Extraction of Scientific Concepts from Research Articles. In: , et al. Advances in Information Retrieval. ECIR 2020. Lecture Notes in Computer Science, vol 12035. Springer, Cham. https://doi.org/10.1007/978-3-030-45439-5_17

    Supporting dataset link https://data.uni-hannover.de/dataset/stem-ecr-v1-0

    Description

    Roughly 60,000 titles and abstracts of scholarly articles with a CC-BY redistributable license were downloaded from Elsevier. The articles spanned the 10 STEM domains that were the most prolific on Elsevier, viz. Agriculture, Astronomy, Biology, Chemistry, Computer Science, Earth Science, Engineering, Material Science, and Mathematics. The STEM NER system reported in the publications above was applied to these articles, producing an automatically extracted dataset of four typed entities: Process, Method, Material, and Data.

    What does this repository contain?

    Aggregated lists of Process, Method, Material, and Data entities with their respective occurrence counts, extracted from 59,984 scholarly publications and organized by the 10 STEM domains considered.
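
    Such per-domain aggregation of occurrence counts can be reproduced in a few lines. The (domain, type, surface form) tuple format below is an assumption about how the extracted entities might be keyed; the actual storage format of STEM-NER-60k may differ:

```python
from collections import Counter, defaultdict

# Hypothetical extracted entities as (domain, entity_type, surface_form);
# invented here purely to illustrate the aggregation step.
extracted = [
    ("Chemistry", "MATERIAL", "graphene"),
    ("Chemistry", "MATERIAL", "graphene"),
    ("Chemistry", "PROCESS", "oxidation"),
    ("Biology", "METHOD", "PCR"),
]

# Aggregate occurrence counts per domain, as in the published lists.
per_domain: dict[str, Counter] = defaultdict(Counter)
for domain, etype, surface in extracted:
    per_domain[domain][(etype, surface)] += 1

print(per_domain["Chemistry"][("MATERIAL", "graphene")])  # 2
```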

    Additionally, the list of Elsevier CC-BY articles used in this study is provided in the raw-data directory of the repository.


  9.

    English/Turkish Wikipedia Named-Entity Recognition and Text Categorization...

    • data.mendeley.com
    Updated Feb 9, 2017
    + more versions
    H. Bahadir Sahin (2017). English/Turkish Wikipedia Named-Entity Recognition and Text Categorization Dataset [Dataset]. http://doi.org/10.17632/cdcztymf4k.1
    Explore at:
    Dataset updated
    Feb 9, 2017
    Authors
    H. Bahadir Sahin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    TWNERTC and EWNERTC are collections of automatically categorized and annotated sentences obtained from Turkish and English Wikipedia for named-entity recognition and text categorization.

    Firstly, we construct large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The final gazetteers have 77 domains (categories) and more than 1,000 fine-grained entity types for both languages. The Turkish gazetteers contain approximately 300K named entities and the English gazetteers approximately 23M named entities.

    By leveraging the large-scale gazetteers and linked Wikipedia articles, we construct TWNERTC and EWNERTC. Since the categorization and annotation processes are automated, the raw collections are prone to ambiguity. Hence, we introduce two noise reduction methodologies, (a) domain-dependent and (b) domain-independent, and produce two additional versions by post-processing the raw collections. As a result, there are three versions of TWNERTC and EWNERTC: (a) raw, (b) domain-dependent post-processed, and (c) domain-independent post-processed. The Turkish collections have approximately 700K sentences per version (the count varies between versions), while the English collections contain more than 7M sentences.

    We also introduce "Coarse-Grained NER" versions of the same datasets. We reduce the fine-grained types to "organization", "person", "location", and "misc" by mapping each fine-grained type to the most similar coarse-grained one. Note that this process also eliminated many domains and fine-grained annotations due to a lack of information for coarse-grained NER. Hence, the "Coarse-Grained NER" labelled datasets contain only 25 domains, and the number of sentences is smaller than in the "Fine-Grained NER" versions.
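
    The fine-to-coarse reduction described above amounts to a lookup table plus a rule for unmapped types. The specific fine-grained type names below are hypothetical examples, not taken from the actual gazetteers:

```python
# Sketch of mapping fine-grained types to the four coarse-grained NER
# labels; the fine-grained names here are invented for illustration.
FINE_TO_COARSE = {
    "football_player": "person",
    "city":            "location",
    "university":      "organization",
    "award":           "misc",
}

def to_coarse(fine_type: str):
    """Return the coarse label, or None when no mapping exists (such
    annotations were dropped from the coarse-grained datasets)."""
    return FINE_TO_COARSE.get(fine_type)

print(to_coarse("city"))        # location
print(to_coarse("chemical"))    # None -> annotation eliminated
```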

    All processes are explained in our published white paper for Turkish; the major methods (gazetteer creation, automatic categorization/annotation, noise reduction) are unchanged for English.

  10.

    Facilitating Collaborative Science using HydroShare and Jupyter Notebooks

    • search.dataone.org
    • hydroshare.org
    • +1more
    Updated Dec 5, 2021
    Anthony Castronova; Dandong Yin; Lorne Leonard; Christina Bandaragoda; David Tarboton (2021). Facilitating Collaborative Science using HydroShare and Jupyter Notebooks [Dataset]. https://search.dataone.org/view/sha256%3A3f6505545f97268126b194857ee29668c7b603c0f22243d0b9dab8aee8a69752
    Explore at:
    Dataset updated
    Dec 5, 2021
    Dataset provided by
    Hydroshare
    Authors
    Anthony Castronova; Dandong Yin; Lorne Leonard; Christina Bandaragoda; David Tarboton
    Description

    Jupyter notebooks are becoming popular among the geoscience community for their ability to clearly present, disseminate, and describe scientific findings in a transparent and reproducible manner. This also makes them a desirable mechanism for sharing scientific data and workflows and collaborating with colleagues during the research process, especially when addressing large-scale cross-disciplinary geoscience issues. This work extends Jupyter notebooks to operate in a pre-configured cloud environment that is integrated with HydroShare for its data sharing and collaboration functionality; notebooks are executed on the Resourcing Open Geospatial Education and Research (ROGER) supercomputer hosted in the CyberGIS center. This design enables researchers to address problems that are often larger in scale than can be handled on a typical desktop computer. Additionally, the integration of these technologies enables researchers to collaborate on notebook workflows that execute in the cloud and are shared through the HydroShare platform. The goals of this work are to establish an open source platform for domain scientists to (1) conduct data-intensive and computationally intensive collaborative research, and (2) organize data-driven educational material via classroom modules, workshops, or training courses. This presentation discusses recent efforts towards achieving these goals and describes the architectural design of the notebook server in an effort to support collaborative and reproducible science.

  11.

    Data from: Evaluating community science sampling for microplastics in shore...

    • ottawa-riverkeeper-open-data-ork-so.hub.arcgis.com
    Updated Aug 30, 2024
    Ottawa Riverkeeper - Garde-rivière des Outaouais (2024). Evaluating community science sampling for microplastics in shore sediments of large river watersheds [Dataset]. https://ottawa-riverkeeper-open-data-ork-so.hub.arcgis.com/datasets/evaluating-community-science-sampling-for-microplastics-in-shore-sediments-of-large-river-watersheds
    Explore at:
    Dataset updated
    Aug 30, 2024
    Dataset authored and provided by
    Ottawa Riverkeeper - Garde-rivière des Outaouais
    Description

    A community science project in the Ottawa River Watershed in Canada involved volunteers in collecting sediment from 68 locations across 750 km. The project saw a 91% return rate of distributed kits, with 42 volunteers participating. Analysis revealed relatively low particle concentrations, influenced by factors such as the watershed's large size, lower population density, and the Ottawa River's characteristics. The study highlighted the advantages of community science in large-scale freshwater research but emphasized the need for careful research design and strict quality control, particularly in lab sample processing. Community science is a valuable method for large-scale microplastic sampling.

  12.

    International Journal of Engineering and Advanced Technology FAQ -...

    • researchhelpdesk.org
    Updated May 28, 2022
    + more versions
    Research Help Desk (2022). International Journal of Engineering and Advanced Technology FAQ - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/faq/552/international-journal-of-engineering-and-advanced-technology
    Explore at:
    Dataset updated
    May 28, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    International Journal of Engineering and Advanced Technology FAQ - ResearchHelpDesk - International Journal of Engineering and Advanced Technology (IJEAT), Online ISSN 2249-8958, is a bi-monthly international journal published in February, April, June, August, October, and December by Blue Eyes Intelligence Engineering & Sciences Publication (BEIESP), Bhopal (M.P.), India, since 2011. It is an academic, online, open-access, double-blind, peer-reviewed international journal. It aims to publish original, theoretical and practical advances in Computer Science & Engineering, Information Technology, Electrical and Electronics Engineering, Electronics and Telecommunication, Mechanical Engineering, Civil Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. All submitted papers are reviewed by the IJEAT board of committee.

    Aim of IJEAT: to disseminate original, scientific, theoretical or applied research in the field of Engineering and allied fields; to provide a platform for publishing results and research with a strong empirical component; to bridge the significant gap between research and practice by promoting the publication of original, novel, industry-relevant research; to publish original, theoretical and practical advances in the engineering disciplines listed above; and to solicit original and unpublished research papers, based on theoretical or experimental works, for publication globally.

    Scope of IJEAT: IJEAT covers all topics of all engineering branches, among them Computer Science & Engineering, Information Technology, Electronics & Communication, Electrical and Electronics, Electronics and Telecommunication, Civil Engineering, Mechanical Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. The main topics include, but are not limited to:

    1. Smart Computing and Information Processing: Signal and Speech Processing Image Processing and Pattern Recognition WSN Artificial Intelligence and machine learning Data mining and warehousing Data Analytics Deep learning Bioinformatics High Performance computing Advanced Computer networking Cloud Computing IoT Parallel Computing on GPU Human Computer Interactions
    2. Recent Trends in Microelectronics and VLSI Design: Process & Device Technologies Low-power design Nanometer-scale integrated circuits Application specific ICs (ASICs) FPGAs Nanotechnology Nano electronics and Quantum Computing
    3. Challenges of Industry and their Solutions, Communications: Advanced Manufacturing Technologies Artificial Intelligence Autonomous Robots Augmented Reality Big Data Analytics and Business Intelligence Cyber Physical Systems (CPS) Digital Clone or Simulation Industrial Internet of Things (IIoT) Manufacturing IOT Plant Cyber security Smart Solutions – Wearable Sensors and Smart Glasses System Integration Small Batch Manufacturing Visual Analytics Virtual Reality 3D Printing
    4. Internet of Things (IoT): Internet of Things (IoT) & IoE & Edge Computing Distributed Mobile Applications Utilizing IoT Security, Privacy and Trust in IoT & IoE Standards for IoT Applications Ubiquitous Computing Block Chain-enabled IoT Device and Data Security and Privacy Application of WSN in IoT Cloud Resources Utilization in IoT Wireless Access Technologies for IoT Mobile Applications and Services for IoT Machine/ Deep Learning with IoT & IoE Smart Sensors and Internet of Things for Smart City Logic, Functional programming and Microcontrollers for IoT Sensor Networks, Actuators for Internet of Things Data Visualization using IoT IoT Application and Communication Protocol Big Data Analytics for Social Networking using IoT IoT Applications for Smart Cities Emulation and Simulation Methodologies for IoT IoT Applied for Digital Contents
    5. Microwaves and Photonics: Microwave filter Micro Strip antenna Microwave Link design Microwave oscillator Frequency selective surface Microwave Antenna Microwave Photonics Radio over fiber Optical communication Optical oscillator Optical Link design Optical phase lock loop Optical devices
    6. Computation Intelligence and Analytics: Soft Computing Advance Ubiquitous Computing Parallel Computing Distributed Computing Machine Learning Information Retrieval Expert Systems Data Mining Text Mining Data Warehousing Predictive Analysis Data Management Big Data Analytics Big Data Security
    7. Energy Harvesting and Wireless Power Transmission: Energy harvesting and transfer for wireless sensor networks Economics of energy harvesting communications Waveform optimization for wireless power transfer RF Energy Harvesting Wireless Power Transmission Microstrip Antenna design and application Wearable Textile Antenna Luminescence Rectenna
    8. Advance Concept of Networking and Database: Computer Network Mobile Adhoc Network Image Security Application Artificial Intelligence and machine learning in the Field of Network and Database Data Analytic High performance computing Pattern Recognition
    9. Machine Learning (ML) and Knowledge Mining (KM): Regression and prediction Problem solving and planning Clustering Classification Neural information processing Vision and speech perception Heterogeneous and streaming data Natural language processing Probabilistic Models and Methods Reasoning and inference Marketing and social sciences Data mining Knowledge Discovery Web mining Information retrieval Design and diagnosis Game playing Streaming data Music Modelling and Analysis Robotics and control Multi-agent systems Bioinformatics Social sciences Industrial, financial and scientific applications of all kind
    10. Advanced Computer networking: Computational Intelligence Data Management, Exploration, and Mining Robotics Artificial Intelligence and Machine Learning Computer Architecture and VLSI Computer Graphics, Simulation, and Modelling Digital System and Logic Design Natural Language Processing and Machine Translation Parallel and Distributed Algorithms Pattern Recognition and Analysis Systems and Software Engineering Nature Inspired Computing Signal and Image Processing Reconfigurable Computing Cloud, Cluster, Grid and P2P Computing Biomedical Computing Advanced Bioinformatics Green Computing Mobile Computing Nano Ubiquitous Computing Context Awareness and Personalization, Autonomic and Trusted Computing Cryptography and Applied Mathematics Security, Trust and Privacy Digital Rights Management Networked-Driven Multicourse Chips Internet Computing Agricultural Informatics and Communication Community Information Systems Computational Economics, Digital Photogrammetric Remote Sensing, GIS and GPS Disaster Management e-governance, e-Commerce, e-business, e-Learning Forest Genomics and Informatics Healthcare Informatics Information Ecology and Knowledge Management Irrigation Informatics Neuro-Informatics Open Source: Challenges and opportunities Web-Based Learning: Innovation and Challenges Soft computing Signal and Speech Processing Natural Language Processing
    11. Communications: Microstrip Antenna Microwave Radar and Satellite Smart Antenna MIMO Antenna Wireless Communication RFID Network and Applications 5G Communication 6G Communication
    12. Algorithms and Complexity: Sequential, Parallel And Distributed Algorithms And Data Structures Approximation And Randomized Algorithms Graph Algorithms And Graph Drawing On-Line And Streaming Algorithms Analysis Of Algorithms And Computational Complexity Algorithm Engineering Web Algorithms Exact And Parameterized Computation Algorithmic Game Theory Computational Biology Foundations Of Communication Networks Computational Geometry Discrete Optimization
    13. Software Engineering and Knowledge Engineering: Software Engineering Methodologies Agent-based software engineering Artificial intelligence approaches to software engineering Component-based software engineering Embedded and ubiquitous software engineering Aspect-based software engineering Empirical software engineering Search-Based Software engineering Automated software design and synthesis Computer-supported cooperative work Automated software specification Reverse engineering Software Engineering Techniques and Production Perspectives Requirements engineering Software analysis, design and modelling Software maintenance and evolution Software engineering tools and environments Software engineering decision support Software design patterns Software product lines Process and workflow management Reflection and metadata approaches Program understanding and system maintenance Software domain modelling and analysis Software economics Multimedia and hypermedia software engineering Software engineering case study and experience reports Enterprise software, middleware, and tools Artificial intelligent methods, models, techniques Artificial life and societies Swarm intelligence Smart Spaces Autonomic computing and agent-based systems Autonomic computing Adaptive Systems Agent architectures, ontologies, languages and protocols Multi-agent systems Agent-based learning and knowledge discovery Interface agents Agent-based auctions and marketplaces Secure mobile and multi-agent systems Mobile agents SOA and Service-Oriented Systems Service-centric software engineering Service oriented requirements engineering Service oriented architectures Middleware for service based systems Service discovery and composition Service level agreements (drafting,

  13.

    The global Graph Analytics market size is USD 2522 million in 2024 and will...

    • cognitivemarketresearch.com
    pdf,excel,csv,ppt
    Cognitive Market Research, The global Graph Analytics market size is USD 2522 million in 2024 and will expand at a compound annual growth rate (CAGR) of 34.0% from 2024 to 2031. [Dataset]. https://www.cognitivemarketresearch.com/graph-analytics-market-report
    Explore at:
    Available download formats: pdf, excel, csv, ppt
    Dataset authored and provided by
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    According to Cognitive Market Research, the global Graph Analytics market size will be USD 2522 million in 2024 and will expand at a compound annual growth rate (CAGR) of 34.0% from 2024 to 2031.

    Key Dynamics of Graph Analytics Market

    Key Drivers of Graph Analytics Market

    Increasing Demand for Immediate Big Data Insights: Organizations are progressively depending on graph analytics to handle extensive amounts of interconnected data for instantaneous insights. This is essential for applications such as fraud detection, recommendation systems, and customer behavior analysis, particularly within the finance, retail, and social media industries.

    Rising Utilization in Fraud Detection and Cybersecurity: Graph analytics facilitates the discovery of intricate relationships within transactional data, aiding in the identification of anomalies, insider threats, and fraudulent patterns. Its capacity to analyze nodes and edges in real-time is leading to significant adoption in cybersecurity and banking sectors.

    Progress in AI and Machine Learning Integration: Graph analytics platforms are progressively merging with AI and ML algorithms to improve predictive functionalities. This collaboration fosters enhanced pattern recognition, network analysis, and more precise forecasting across various sectors including healthcare, logistics, and telecommunications.
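
    As a minimal illustration of the node-and-edge analysis described under fraud detection, a shared-counterparty check can be written with nothing but an adjacency map. The account names, edges, and threshold below are invented for the sketch; production systems use graph databases and far richer signals:

```python
from collections import defaultdict
from itertools import combinations

# Toy transaction graph: each edge links an account to a counterparty
# (invented data, purely for illustration).
edges = [("A", "X"), ("A", "Y"), ("B", "X"), ("B", "Y"), ("C", "Z")]
adj = defaultdict(set)
for src, dst in edges:
    adj[src].add(dst)
    adj[dst].add(src)

def suspicious_pairs(accounts, min_shared: int = 2):
    """Flag account pairs sharing many counterparties, a simple
    graph-analytics signal sometimes used in fraud-ring screening."""
    flagged = []
    for a, b in combinations(sorted(accounts), 2):
        if len(adj[a] & adj[b]) >= min_shared:
            flagged.append((a, b))
    return flagged

print(suspicious_pairs(["A", "B", "C"]))  # [('A', 'B')]
```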

    Key Restrains for Graph Analytics Market

    High Implementation and Infrastructure Expenses: Establishing a graph analytics system necessitates sophisticated infrastructure, storage, and processing capabilities. These substantial expenses may discourage small and medium-sized enterprises from embracing graph-based solutions, particularly in the absence of a clear return on investment.

    Challenges in Data Modeling and Querying: In contrast to conventional relational databases, graph databases demand specialized expertise for schema design, data modeling, and query languages such as Cypher or Gremlin. This significant learning curve hampers adoption in organizations lacking technical expertise.

    Concerns Regarding Data Privacy and Security: Since graph analytics frequently involves the examination of sensitive personal and behavioral data, it presents regulatory and privacy challenges. Complying with data protection regulations like GDPR becomes increasingly difficult when handling large-scale, interconnected datasets.

    Key Trends in Graph Analytics Market

    Increased Utilization in Supply Chain and Logistics Optimization: Graph analytics is increasingly being adopted in logistics for the purpose of mapping routes, managing supplier relationships, and pinpointing bottlenecks. The implementation of real-time graph-based decision-making is enhancing both efficiency and resilience within global supply chains.

    Growth of Cloud-Based Graph Analytics Platforms: Cloud service providers such as AWS, Azure, and Google Cloud are broadening their support for graph databases and analytics solutions. This shift minimizes initial infrastructure expenses and facilitates scalable deployments for enterprises of various sizes.

    Advent of Explainable AI (XAI) in Graph Analytics: The need for explainability is becoming a significant priority in graph analytics. Organizations are pursuing transparency regarding how graph algorithms reach their conclusions, particularly in regulated sectors, which is increasing the demand for tools that offer inherent interpretability and traceability.

    Introduction of the Graph Analytics Market

    The Graph Analytics Market is rapidly expanding, driven by the growing need for advanced data analysis techniques in various sectors. Graph analytics leverages graph structures to represent and analyze relationships and dependencies, providing deeper insights than traditional data analysis methods. Key factors propelling this market include the rise of big data, the increasing adoption of artificial intelligence and machine learning, and the demand for real-time data processing. Industries such as finance, healthcare, telecommunications, and retail are major contributors, utilizing graph analytics for fraud detection, personalized recommendations, network optimization, and more. Leading vendors are continually innovating to offer scalable, efficient solutions, incorporating advanced features like graph databases and visualization tools.
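    The relationship-centric analysis described above (flagging fraud by examining nodes and edges) can be sketched minimally in plain Python. This is a toy illustration, not any vendor's API: the account ids, the density threshold, and the helper names are all assumptions.

```python
from collections import defaultdict, deque

def connected_components(edges):
    """Group nodes into connected components via breadth-first search."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

def suspicious_clusters(edges, min_size=3, min_density=0.8):
    """Flag components whose internal edge density suggests a collusion ring.

    Density = internal edges / possible edges; a tightly interconnected
    group of accounts is a classic graph-analytics fraud signal.
    """
    edge_set = {frozenset(e) for e in edges}
    flagged = []
    for comp in connected_components(edges):
        n = len(comp)
        if n < min_size:
            continue
        internal = sum(1 for e in edge_set if e <= comp)
        if internal / (n * (n - 1) / 2) >= min_density:
            flagged.append(comp)
    return flagged

# Toy transaction graph: accounts A-C all trade with each other,
# while D and E share a single ordinary payment.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("D", "E")]
print(suspicious_clusters(edges))  # the A-B-C triangle is flagged
```

    A relational database would need self-joins to express the same question; representing the data as a graph makes the dense-cluster pattern a direct structural query.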

  14.

    AI in Geospatial Analytics Market Market Research Report 2033

    • researchintelo.com
    csv, pdf, pptx
    Updated Jul 24, 2025
    Cite
    Research Intelo (2025). AI in Geospatial Analytics Market Market Research Report 2033 [Dataset]. https://researchintelo.com/report/ai-in-geospatial-analytics-market-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Jul 24, 2025
    Dataset authored and provided by
    Research Intelo
    License

    https://researchintelo.com/privacy-and-policy

    Time period covered
    2024 - 2033
    Area covered
    Global
    Description

    AI in Geospatial Analytics Market Outlook



    According to our latest research, the AI in Geospatial Analytics market size reached USD 9.2 billion in 2024 globally, driven by the increasing adoption of artificial intelligence to analyze and interpret geospatial data across various industries. The market is expected to grow at a robust CAGR of 19.8% from 2025 to 2033, reaching a projected value of USD 41.7 billion by 2033. This remarkable growth is underpinned by the rising need for real-time location-based insights, advancements in AI algorithms, and the proliferation of high-resolution satellite imagery and IoT devices. As per our latest research, the integration of AI with geospatial analytics is transforming decision-making processes in sectors such as urban planning, agriculture, defense, and environmental monitoring.
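    As a rough illustration of how such projections relate, a compound annual growth rate (CAGR) links a starting value, an ending value, and a horizon in years. The sample numbers below are illustrative only, not taken from the report:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values over `years`."""
    return (end / start) ** (1 / years) - 1

def project(start, rate, years):
    """Forward-project a value at a constant compound annual growth rate."""
    return start * (1 + rate) ** years

# Illustrative figures: a market doubling over five years
# implies a CAGR of roughly 14.9%.
print(round(cagr(100, 200, 5) * 100, 1))  # -> 14.9
```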



    One of the most significant growth factors for the AI in Geospatial Analytics market is the exponential increase in the volume and variety of geospatial data generated from satellites, drones, and IoT sensors. Organizations are leveraging AI-driven geospatial analytics to efficiently process and analyze these massive datasets, extracting actionable insights that drive operational efficiency and strategic planning. The capability of AI to automate feature extraction, pattern recognition, and predictive modeling has enabled businesses and government agencies to make faster and more informed decisions. Furthermore, the integration of machine learning and deep learning techniques with geospatial data is enabling the development of sophisticated models for land-use classification, disaster response, and urban infrastructure management.
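    One standard instance of automated feature extraction from satellite bands, widely used as an input to land-use classification, is the normalized difference vegetation index (NDVI). The sketch below is generic, with hypothetical pixel values and threshold, and is not the method of any specific platform:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), in [-1, 1]; higher means denser vegetation."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def classify(nir_band, red_band, threshold=0.3):
    """Label each pixel 'vegetation' or 'other' by thresholding its NDVI."""
    return [
        "vegetation" if ndvi(n, r) > threshold else "other"
        for n, r in zip(nir_band, red_band)
    ]

# Toy 3-pixel scene: healthy vegetation reflects strongly in near-infrared.
nir = [0.60, 0.30, 0.05]
red = [0.10, 0.25, 0.04]
print(classify(nir, red))  # -> ['vegetation', 'other', 'other']
```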



    Another key driver is the growing demand for real-time geospatial intelligence across critical applications such as disaster management, transportation, and security. AI-powered geospatial analytics platforms are enabling authorities to monitor and respond to natural disasters, optimize logistics routes, and enhance situational awareness in defense operations. The ability to analyze spatial data in real time is proving invaluable for emergency response teams, urban planners, and logistics providers, who require up-to-date information to make timely decisions. Additionally, the increasing use of AI in monitoring environmental changes, such as deforestation and climate change, is supporting sustainability initiatives and regulatory compliance.



    The rapid advancements in cloud computing and edge AI are also contributing to the growth of the AI in Geospatial Analytics market. Cloud-based geospatial analytics solutions offer scalable processing power and storage, enabling organizations to handle large-scale spatial datasets without significant infrastructure investments. Edge AI, on the other hand, facilitates real-time analytics at the source of data generation, reducing latency and bandwidth requirements. The convergence of AI, cloud, and geospatial technologies is fostering new business models and service offerings, making advanced geospatial analytics accessible to a broader range of industries, including agriculture, utilities, and BFSI.



    From a regional perspective, North America currently leads the AI in Geospatial Analytics market, accounting for the largest share in 2024, followed by Europe and Asia Pacific. The presence of major technology providers, strong government initiatives, and early adoption of AI-driven geospatial solutions are driving market growth in these regions. Asia Pacific is expected to witness the highest growth rate during the forecast period, fueled by rapid urbanization, infrastructure development, and increasing investments in smart city projects. Meanwhile, Latin America and the Middle East & Africa are gradually embracing AI-powered geospatial analytics, particularly in sectors such as agriculture, oil & gas, and disaster management, albeit at a slower pace due to infrastructural and regulatory challenges.



    Component Analysis



    The AI in Geospatial Analytics market by component is segmented into software, hardware, and services, each playing a vital role in the overall ecosystem. Software forms the backbone of geospatial analytics, encompassing platforms and tools that leverage AI algorithms for spatial data processing, visualization, and predictive modeling. The software segment is witnessing rapid innovation, with vendors introducing user-friendly interfaces, automated feature extraction capabilities, and integration with GIS and remote sensing platforms. Cloud-based software solutions are gaining

  15. AI Data Management Market Analysis, Size, and Forecast 2025-2029: North...

    • technavio.com
    pdf
    Updated Jul 19, 2025
    Cite
    Technavio (2025). AI Data Management Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, Italy, and UK), APAC (China, India, Japan, and South Korea), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/ai-data-management-market-industry-analysis
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jul 19, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2025 - 2029
    Area covered
    Canada, United States
    Description


    AI Data Management Market Size 2025-2029

    The AI data management market size is forecast to increase by USD 51.04 billion at a CAGR of 19.7% between 2024 and 2029.

    The market is experiencing significant growth, driven by the proliferation of generative AI and large language models. These technologies are increasingly being adopted across industries, leading to an exponential increase in data generation and the need for efficient data management solutions. The ascendancy of data-centric AI and the industrialization of data curation are further trends shaping the market, as is cloud computing: cloud-based solutions offer quick deployment, flexibility, and scalability. The market also faces challenges, however. Extreme data complexity and quality assurance at scale pose significant obstacles: ensuring data accuracy, completeness, and consistency across vast datasets is a daunting task that requires sophisticated data management tools and techniques. Companies that invest in solutions addressing these challenges effectively can gain a competitive edge, improve operational efficiency, and unlock new revenue streams.
    

    What will be the Size of the AI Data Management Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.

    The market for AI data management continues to evolve, with applications spanning various sectors, from finance to healthcare and retail. The model training process involves intricate data preprocessing steps, feature selection techniques, and data pipeline design to ensure optimal model performance. Real-time data processing and anomaly detection techniques are crucial for effective model monitoring systems, while data access management and data security measures ensure data privacy compliance. Data lifecycle management, including data validation techniques, metadata management strategy, and data lineage management, is essential for maintaining data quality.

    Data governance framework and data versioning system enable effective data governance strategy and data privacy compliance. For instance, a leading retailer reported a 20% increase in sales due to implementing data quality monitoring and AI model deployment. The industry anticipates a 25% growth in the market size by 2025, driven by the continuous unfolding of market activities and evolving patterns. Data integration tools, data pipeline design, data bias detection, data visualization tools, and data encryption techniques are key components of this dynamic landscape. Statistical modeling methods and predictive analytics models rely on cloud data solutions and big data infrastructure for efficient data processing.
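    The data quality monitoring and validation mentioned above can be made concrete with a small sketch that scores a batch of records for completeness and consistency. The field names and the non-negative-amount rule below are hypothetical:

```python
def quality_report(records, required_fields):
    """Per-field completeness plus a simple consistency rule (non-negative amounts)."""
    n = len(records)
    completeness = {
        f: sum(1 for r in records if r.get(f) is not None) / n
        for f in required_fields
    }
    # Consistency rule: 'amount', when present, must be non-negative.
    consistent = sum(
        1 for r in records if r.get("amount") is None or r["amount"] >= 0
    ) / n
    return {"completeness": completeness, "consistency": consistent}

records = [
    {"id": 1, "amount": 10.0, "region": "EU"},
    {"id": 2, "amount": -5.0, "region": "NA"},  # violates the consistency rule
    {"id": 3, "amount": 7.5, "region": None},   # missing region
]
report = quality_report(records, ["id", "amount", "region"])
print(report["completeness"]["region"])  # 2 of 3 records have a region
```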

    How is this AI Data Management Industry segmented?

    The AI data management industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Component
    
      Platform
      Software tools
      Services
    
    
    Technology
    
      Machine learning
      Natural language processing
      Computer vision
      Context awareness
    
    
    End-user
    
      BFSI
      Retail and e-commerce
      Healthcare and life sciences
      Manufacturing
      Others
    
    
    Geography
    
      North America
    
        US
        Canada
    
    
      Europe
    
        France
        Germany
        Italy
        UK
    
    
      APAC
    
        China
        India
        Japan
        South Korea
    
    
      Rest of World (ROW)
    

    By Component Insights

    The Platform segment is estimated to witness significant growth during the forecast period. In the dynamic and evolving world of data management, integrated platforms have emerged as a foundational and increasingly dominant category. These platforms offer a unified environment for managing both data and AI workflows, addressing the strategic imperative for enterprises to break down silos between data engineering, data science, and machine learning operations. The market trajectory is heavily influenced by the rise of the data lakehouse architecture, which combines the scalability and cost efficiency of data lakes with the performance and management features of data warehouses. Data preprocessing techniques and validation rules ensure data accuracy and consistency, while data access control maintains security and privacy.

    Machine learning models, model performance evaluation, and anomaly detection algorithms drive insights and predictions, with feature engineering methods and real-time data streaming enabling continuous learning. Data lifecycle management, data quality metrics, and data governance policies ensure data integrity and compliance. Cloud data warehousing and data lake architecture facilitate efficient data storage and

  16.

    Showing Life Opportunities 2019-2020, Data from Experiment 1: Municipality...

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +1more
    Updated Jan 8, 2024
    Cite
    David McKenzie (2024). Showing Life Opportunities 2019-2020, Data from Experiment 1: Municipality of Quito and Educational Zone 2 - Ecuador [Dataset]. https://microdata.worldbank.org/index.php/catalog/6110
    Explore at:
    Dataset updated
    Jan 8, 2024
    Dataset provided by
    Francisco Flores
    Mathis Schulte
    Thomas Astebro
    Bruno Crepon
    Guido Buenstorf
    Mona Mensmann
    David McKenzie
    Igor Asanov
    Time period covered
    2019 - 2021
    Area covered
    Ecuador
    Description

    Abstract

    Opportunity-focused, high-growth entrepreneurship and science-led innovation are crucial for continued economic growth and productivity. Working in these fields offers the opportunity for rewarding and high-paying careers. However, the majority of youth in developing countries do not consider either as job options, affecting their choices of what to study. Youth may not select these educational and career paths due to lack of knowledge, lack of appropriate skills, and lack of role models. We provide a scalable approach to overcoming these constraints through an online education course for secondary school students that covers entrepreneurial soft skills, scientific methods, and interviews with role models.

    The study comprises three experimental trials conducted before and during the COVID-19 pandemic in different regions of Ecuador. This catalog entry includes data from Experiment 1: Educational Zone 2/Municipality of Quito 2019-2020. The data from the other two experiments are also available in the catalog.

    Experiment 1: Educational Zone 2/Municipality of Quito 2019-2020

    In the course of the Showing Life Opportunities project, we conducted a randomized controlled trial in high schools in Educational Zone 2, Ecuador, and the Municipality of Quito, Ecuador, in 2019-2020; students finished the program in July 2020. The intervention is an online education course that covers entrepreneurial soft skills, scientific methods, and interviews with role models. The course is taken by students at school (some students finished the program at school during the COVID-19 outbreak). We work with mostly 14-19-year-old students (16,570 students). The experimental program covers 126 schools in Educational Zone 2 and 11 schools in the Municipality of Quito. We randomly assign schools either to the treatment group (receiving the entrepreneurship courses online) or to the placebo-control group (receiving placebo online courses from the standard curricula). We also cross-randomize the role models and evaluate a set of nimble interventions to increase take-up.

    The details of the intervention can be found in the AEA RCT Registry: Asanov, Igor and David McKenzie. 2020. Showing Life Opportunities: Increasing opportunity-driven entrepreneurship and STEM careers through online courses in schools. AEA RCT Registry. July 19.
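    The school-level randomization described above can be sketched generically as a seeded shuffle into two arms. The function and school ids below are illustrative assumptions, not the study's actual assignment code:

```python
import random

def assign_schools(school_ids, seed=42):
    """Randomly split schools into treatment and placebo-control arms."""
    rng = random.Random(seed)   # fixed seed makes the assignment reproducible
    ids = sorted(school_ids)    # deterministic order before shuffling
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Toy example with 10 hypothetical school ids.
arms = assign_schools([f"school_{i:03d}" for i in range(10)])
print(len(arms["treatment"]), len(arms["control"]))  # -> 5 5
```

    Randomizing at the school (cluster) level, rather than per student, avoids contamination between treated and control students within the same school.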

    Geographic coverage

    Experiment 1: Municipality of Quito and Educational Zone 2

    Educational Zone 2 has its administrative headquarters in the city of Tena, Napo province. It covers the provinces of Napo, Orellana, and Pichincha; 8 districts (15D01, 22D01, 17D10, 17D11, 15D02, 17D12, 22D02, 22D03); 16 cantons; and 68 parishes, with an area of 39,542.58 km². Educational Zone 2 stretches from the eastern to the western border of Ecuador. We cover students aged 14-18 in schools that have sufficient internet access and offer classes at the K10, K11, or K12 level. We included the Municipality of Quito in the study to enrich the coverage of the program by having a large (capital) city in the sample.

    Analysis unit

    Student

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    All students in the selected schools who were present in class filled out the baseline questionnaire.

    Mode of data collection

    Internet [int]

    Research instrument

    Questionnaires. We administer three main sets of questionnaires. A. Internet (online-based survey).

    The survey consists of a multi-topic questionnaire administered to the students through an online learning platform, at school during normal educational hours before the COVID-19 pandemic or at home during the pandemic. We collect the following information:

    1. Subject-specific knowledge tests: Spanish, English, Statistics, Personal Initiative (endline only), Negotiations (endline only).
    2. Career intentions, preferences, beliefs, expectations, and attitudes: STEM and entrepreneurial intentions, preferences, beliefs, expectations, and attitudes.
    3. Psychological characteristics: Personal Initiative, Negotiations, General Cognitions (General Self-Efficacy, Youth Self-Efficacy, Perceived Subsidiary Self-Efficacy Scale, Self-Regulatory Focus, Short Grit Scale), Entrepreneurial Cognitions (Business Self-Efficacy, Identifying Opportunities, Business Attitudes, Social Entrepreneurship Standards).
    4. Behavior in (incentivized) games: other-regarding preferences (dictator game), tendency to cooperate (Prisoner's Dilemma), perseverance (triangle game), preference for honesty, creativity (unscramble game).
    5. Other background information: socioeconomic level, language spoken, risk and time preferences, trust level, parents' background, Big Five personality traits of the student, cognitive abilities. Background information (5) is collected only at the baseline.

    B. First follow-up phone-based survey, Zone 2, summer (phone-based). The survey replicates by phone a shorter version of the internet-based survey above. We collect the following information: 1. Subject-specific knowledge tests.
    2. Career intentions, preferences, beliefs, expectations, and attitudes. 3. Psychological characteristics.

    C. (Second) Follow-up Phone-Based Survey, Winter, Zone 2, Highlands Educational Regime.

    We administer a multi-topic questionnaire by phone to capture the first life outcomes of students who have finished school. We collect the following information:

    1. Life Outcome 1 - Education: a set of questions measuring learning success, career/study intentions, propensity to plan and to approach others with studying tasks, and entrepreneurial intentions.
    2. Life Outcome 2 - Labor: a set of questions measuring employment status and income, job-search behavior, time devoted to working/business, salary expectations and knowledge about careers, and self-initiated contributions to the family.
    3. Personal Initiative/Negotiations and other measures: a set of questions measuring level of personal initiative, negotiation strategies, pregnancy rate, gender stereotypes, math/STEM self-efficacy, gender attitudes, and parent-student communication effects.

    Cleaning operations

    Data Editing

    A. Internet, online-based surveys. We extracted the raw data generated on the online platform from each experiment and prepared it for research purposes, applying several pre-processing steps: 1. We transformed the raw platform data into a format readable by standard statistical software (R/Stata). 2. We extracted the answer to each item for each student in each survey (baseline, midline, endline). 3. We removed duplicated students and duplicated answers for each item in each survey, based on administrative data, performance, and information given by students on the platform. 4. For the baseline survey, we standardized items/scales while also keeping the raw items.
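    Cleaning steps 3 and 4 above (removing duplicate students, then standardizing scale items while keeping the raw values) can be illustrated with a generic sketch; the column names and values are hypothetical:

```python
from statistics import mean, stdev

def dedupe(rows, key="student_id"):
    """Keep the first occurrence of each student id (step 3, simplified)."""
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

def standardize(rows, item):
    """Z-score an item across students while keeping the raw value (step 4)."""
    values = [r[item] for r in rows]
    mu, sd = mean(values), stdev(values)
    for r in rows:
        r[item + "_std"] = (r[item] - mu) / sd
    return rows

rows = [
    {"student_id": 1, "grit": 3.0},
    {"student_id": 2, "grit": 4.0},
    {"student_id": 1, "grit": 3.0},  # duplicate submission
    {"student_id": 3, "grit": 5.0},
]
clean = standardize(dedupe(rows), "grit")
print([round(r["grit_std"], 2) for r in clean])  # -> [-1.0, 0.0, 1.0]
```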

    B. Phone-based surveys. The phone-based surveys were collected with the help of an advanced CATI kit. The data contain all cases (call attempts) and an indicator of whether the survey was effective. The data are cleaned and ready for analysis; they are anonymized but contain a unique anonymous student id for merging across datasets.

  17. Amount of data created, consumed, and stored 2010-2023, with forecasts to...

    • statista.com
    Updated Jun 30, 2025
    Cite
    Statista (2025). Amount of data created, consumed, and stored 2010-2023, with forecasts to 2028 [Dataset]. https://www.statista.com/statistics/871513/worldwide-data-created/
    Explore at:
    Dataset updated
    Jun 30, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    May 2024
    Area covered
    Worldwide
    Description

    The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, reaching *** zettabytes in 2024. Over the following five years, up to 2028, global data creation is projected to grow to more than *** zettabytes. In 2020, the amount of data created and replicated reached a new high. The growth was higher than previously expected, driven by increased demand during the COVID-19 pandemic, as more people worked and learned from home and used home entertainment options more often.

    Storage capacity also growing

    Only a small percentage of this newly created data is kept, though: just * percent of the data produced and consumed in 2020 was saved and retained into 2021. In line with the strong growth of the data volume, the installed base of storage capacity is forecast to increase at a compound annual growth rate of **** percent over the forecast period from 2020 to 2025. In 2020, the installed base of storage capacity reached *** zettabytes.

  18.

    Jennifer D'Souza (2022). Dataset: STEM-NER-60k....

    • service.tib.eu
    Updated Apr 26, 2022
    Cite
    Jennifer D'Souza (2022). Dataset: STEM-NER-60k. https://doi.org/10.25835/heyid7l7 [Dataset]. https://service.tib.eu/ldmservice/dataset/luh-stem-ner-60k
    Explore at:
    Dataset updated
    Apr 26, 2022
    License

    Attribution-ShareAlike 3.0 (CC BY-SA 3.0): https://creativecommons.org/licenses/by-sa/3.0/
    License information was derived automatically

    Description

    A Large-scale Dataset of STEM Science as PROCESS, METHOD, MATERIAL, and DATA Named Entities

    This repository hosts data as a follow-up study to the following publications:

    D'Souza, J., Hoppe, A., Brack, A., Jaradeh, M., Auer, S., & Ewerth, R. (2020). The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources. In Proceedings of The 12th Language Resources and Evaluation Conference (pp. 2192–2203). European Language Resources Association.

    Brack, A., D'Souza, J., Hoppe, A., Auer, S., & Ewerth, R. (2020). Domain-Independent Extraction of Scientific Concepts from Research Articles. In: Advances in Information Retrieval. ECIR 2020. Lecture Notes in Computer Science, vol 12035. Springer, Cham. https://doi.org/10.1007/978-3-030-45439-5_17

    Supporting dataset link: https://data.uni-hannover.de/dataset/stem-ecr-v1-0

    Description: Roughly 60,000 titles and abstracts of scholarly articles with a CC-BY redistributable license were downloaded from Elsevier. The articles spanned the 10 STEM domains that were the most prolific on Elsevier, viz. Agriculture, Astronomy, Biology, Chemistry, Computer Science, Earth Science, Engineering, Material Science, and Mathematics. The STEM NER system reported in the publications above was applied to these articles, producing an automatically extracted dataset of four typed entities: Process, Method, Material, and Data.

    What does this repository contain?

  19.

    Exploring the Relationship between the Engineering and Physical Sciences and...

    • figshare.com
    docx
    Updated Jun 6, 2023
    Cite
    Ludo Waltman; Anthony F. J. van Raan; Sue Smart (2023). Exploring the Relationship between the Engineering and Physical Sciences and the Health and Life Sciences by Advanced Bibliometric Methods [Dataset]. http://doi.org/10.1371/journal.pone.0111530
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 6, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Ludo Waltman; Anthony F. J. van Raan; Sue Smart
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We investigate the extent to which advances in the health and life sciences (HLS) are dependent on research in the engineering and physical sciences (EPS), particularly physics, chemistry, mathematics, and engineering. The analysis combines two different bibliometric approaches. The first approach to analyze the ‘EPS-HLS interface’ is based on term map visualizations of HLS research fields. We consider 16 clinical fields and five life science fields. On the basis of expert judgment, EPS research in these fields is studied by identifying EPS-related terms in the term maps. In the second approach, a large-scale citation-based network analysis is applied to publications from all fields of science. We work with about 22,000 clusters of publications, each representing a topic in the scientific literature. Citation relations are used to identify topics at the EPS-HLS interface. The two approaches complement each other. The advantages of working with textual data compensate for the limitations of working with citation relations and the other way around. An important advantage of working with textual data is in the in-depth qualitative insights it provides. Working with citation relations, on the other hand, yields many relevant quantitative statistics. We find that EPS research contributes to HLS developments mainly in the following five ways: new materials and their properties; chemical methods for analysis and molecular synthesis; imaging of parts of the body as well as of biomaterial surfaces; medical engineering mainly related to imaging, radiation therapy, signal processing technology, and other medical instrumentation; mathematical and statistical methods for data analysis. In our analysis, about 10% of all EPS and HLS publications are classified as being at the EPS-HLS interface. This percentage has remained more or less constant during the past decade.
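    The citation-based clustering in the second approach can be caricatured in a few lines: treat citation relations as edges and group linked publications into clusters. The toy publication ids below are invented, and union-find over connected components is a deliberate simplification of the far more refined ~22,000-topic clustering the study actually uses:

```python
def citation_clusters(citations):
    """Union-find over citation pairs: each resulting set approximates a topic."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for citing, cited in citations:
        parent[find(citing)] = find(cited)  # union the two publications

    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return sorted(clusters.values(), key=len, reverse=True)

# Toy network: p1-p3 are linked by citations (one topic), p4-p5 form another.
links = [("p1", "p2"), ("p2", "p3"), ("p4", "p5")]
print(citation_clusters(links))  # largest cluster first
```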

  20.

    International Journal of Engineering and Advanced Technology Impact Factor...

    • researchhelpdesk.org
    Updated Feb 23, 2022
    Cite
    Research Help Desk (2022). International Journal of Engineering and Advanced Technology Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/552/international-journal-of-engineering-and-advanced-technology
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    International Journal of Engineering and Advanced Technology Impact Factor 2024-2025 - ResearchHelpDesk - International Journal of Engineering and Advanced Technology (IJEAT), Online ISSN 2249-8958, is a bi-monthly international journal published in February, April, June, August, October, and December by Blue Eyes Intelligence Engineering & Sciences Publication (BEIESP), Bhopal (M.P.), India, since 2011. It is an academic, online, open-access, double-blind, peer-reviewed international journal. It aims to publish original, theoretical, and practical advances in Computer Science & Engineering, Information Technology, Electrical and Electronics Engineering, Electronics and Telecommunication, Mechanical Engineering, Civil Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. All submitted papers are reviewed by the board of committee of IJEAT.

    Aims of IJEAT: disseminate original, scientific, theoretical, or applied research in the field of Engineering and allied fields; provide a platform for publishing results and research with a strong empirical component; bridge the significant gap between research and practice by promoting the publication of original, novel, industry-relevant research; and solicit original and unpublished research papers, based on theoretical or experimental work, for publication globally.

    Scope of IJEAT: IJEAT covers all topics of all engineering branches, including Computer Science & Engineering, Information Technology, Electronics & Communication, Electrical and Electronics, Electronics and Telecommunication, Civil Engineering, Mechanical Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. The main topics include, but are not limited to:

    1. Smart Computing and Information Processing: Signal and Speech Processing, Image Processing and Pattern Recognition, WSN, Artificial Intelligence and Machine Learning, Data Mining and Warehousing, Data Analytics, Deep Learning, Bioinformatics, High-Performance Computing, Advanced Computer Networking, Cloud Computing, IoT, Parallel Computing on GPU, Human-Computer Interactions
    2. Recent Trends in Microelectronics and VLSI Design: Process & Device Technologies, Low-Power Design, Nanometer-Scale Integrated Circuits, Application-Specific ICs (ASICs), FPGAs, Nanotechnology, Nanoelectronics and Quantum Computing
    3. Challenges of Industry and Their Solutions, Communications: Advanced Manufacturing Technologies, Artificial Intelligence, Autonomous Robots, Augmented Reality, Big Data Analytics and Business Intelligence, Cyber-Physical Systems (CPS), Digital Clone or Simulation, Industrial Internet of Things (IIoT), Manufacturing IoT, Plant Cybersecurity, Smart Solutions (Wearable Sensors and Smart Glasses), System Integration, Small-Batch Manufacturing, Visual Analytics, Virtual Reality, 3D Printing
    4. Internet of Things (IoT): IoT & IoE & Edge Computing, Distributed Mobile Applications Utilizing IoT, Security, Privacy and Trust in IoT & IoE, Standards for IoT Applications, Ubiquitous Computing, Blockchain-Enabled IoT Device and Data Security and Privacy, Application of WSN in IoT, Cloud Resource Utilization in IoT, Wireless Access Technologies for IoT, Mobile Applications and Services for IoT, Machine/Deep Learning with IoT & IoE, Smart Sensors and Internet of Things for Smart Cities, Logic, Functional Programming and Microcontrollers for IoT, Sensor Networks and Actuators for the Internet of Things, Data Visualization Using IoT, IoT Application and Communication Protocols, Big Data Analytics for Social Networking Using IoT, IoT Applications for Smart Cities, Emulation and Simulation Methodologies for IoT, IoT Applied to Digital Content
    5. Microwaves and Photonics: Microwave Filters, Microstrip Antennas, Microwave Link Design, Microwave Oscillators, Frequency-Selective Surfaces, Microwave Antennas, Microwave Photonics, Radio over Fiber, Optical Communication, Optical Oscillators, Optical Link Design, Optical Phase-Locked Loops, Optical Devices
    6. Computational Intelligence and Analytics: Soft Computing, Advanced Ubiquitous Computing, Parallel Computing, Distributed Computing, Machine Learning, Information Retrieval, Expert Systems, Data Mining, Text Mining, Data Warehousing, Predictive Analysis, Data Management, Big Data Analytics, Big Data Security
    7. Energy Harvesting and Wireless Power Transmission: Energy Harvesting and Transfer for Wireless Sensor Networks, Economics of Energy-Harvesting Communications, Waveform Optimization for Wireless Power Transfer, RF Energy Harvesting, Wireless Power Transmission, Microstrip Antenna Design and Applications, Wearable Textile Antennas, Luminescence, Rectennas
    8. Advanced Concepts of Networking and Databases: Computer Networks, Mobile Ad Hoc Networks, Image Security Applications, Artificial Intelligence and Machine Learning in the Field of Networks and Databases, Data Analytics, High-Performance Computing, Pattern Recognition
    9. Machine Learning (ML) and Knowledge Mining (KM): Regression and Prediction, Problem Solving and Planning, Clustering, Classification, Neural Information Processing, Vision and Speech Perception, Heterogeneous and Streaming Data, Natural Language Processing, Probabilistic Models and Methods, Reasoning and Inference, Marketing and Social Sciences, Data Mining, Knowledge Discovery, Web Mining, Information Retrieval, Design and Diagnosis, Game Playing, Streaming Data, Music Modelling and Analysis, Robotics and Control, Multi-Agent Systems, Bioinformatics, Social Sciences, Industrial, Financial and Scientific Applications of All Kinds
    10. Advanced Computer Networking: Computational Intelligence, Data Management, Exploration, and Mining, Robotics, Artificial Intelligence and Machine Learning, Computer Architecture and VLSI, Computer Graphics, Simulation, and Modelling, Digital Systems and Logic Design, Natural Language Processing and Machine Translation, Parallel and Distributed Algorithms, Pattern Recognition and Analysis, Systems and Software Engineering, Nature-Inspired Computing, Signal and Image Processing, Reconfigurable Computing, Cloud, Cluster, Grid and P2P Computing, Biomedical Computing, Advanced Bioinformatics, Green Computing, Mobile Computing, Nano-Ubiquitous Computing, Context Awareness and Personalization, Autonomic and Trusted Computing, Cryptography and Applied Mathematics, Security, Trust and Privacy, Digital Rights Management, Network-Driven Multicore Chips, Internet Computing, Agricultural Informatics and Communication, Community Information Systems, Computational Economics, Digital Photogrammetric Remote Sensing, GIS and GPS, Disaster Management, e-Governance, e-Commerce, e-Business, e-Learning, Forest Genomics and Informatics, Healthcare Informatics
Information Ecology and Knowledge Management Irrigation Informatics Neuro-Informatics Open Source: Challenges and opportunities Web-Based Learning: Innovation and Challenges Soft computing Signal and Speech Processing Natural Language Processing 11. Communications Microstrip Antenna Microwave Radar and Satellite Smart Antenna MIMO Antenna Wireless Communication RFID Network and Applications 5G Communication 6G Communication 12. Algorithms and Complexity Sequential, Parallel And Distributed Algorithms And Data Structures Approximation And Randomized Algorithms Graph Algorithms And Graph Drawing On-Line And Streaming Algorithms Analysis Of Algorithms And Computational Complexity Algorithm Engineering Web Algorithms Exact And Parameterized Computation Algorithmic Game Theory Computational Biology Foundations Of Communication Networks Computational Geometry Discrete Optimization 13. Software Engineering and Knowledge Engineering Software Engineering Methodologies Agent-based software engineering Artificial intelligence approaches to software engineering Component-based software engineering Embedded and ubiquitous software engineering Aspect-based software engineering Empirical software engineering Search-Based Software engineering Automated software design and synthesis Computer-supported cooperative work Automated software specification Reverse engineering Software Engineering Techniques and Production Perspectives Requirements engineering Software analysis, design and modelling Software maintenance and evolution Software engineering tools and environments Software engineering decision support Software design patterns Software product lines Process and workflow management Reflection and metadata approaches Program understanding and system maintenance Software domain modelling and analysis Software economics Multimedia and hypermedia software engineering Software engineering case study and experience reports Enterprise software, middleware, and tools Artificial 
intelligent methods, models, techniques Artificial life and societies Swarm intelligence Smart Spaces Autonomic computing and agent-based systems Autonomic computing Adaptive Systems Agent architectures, ontologies, languages and protocols Multi-agent systems Agent-based learning and knowledge discovery Interface agents Agent-based auctions and marketplaces Secure mobile and multi-agent systems Mobile agents SOA and Service-Oriented Systems Service-centric software engineering Service oriented requirements engineering Service oriented architectures Middleware for service based systems Service discovery and composition Service level

Cite
Aydinoglu, Arsev Umur; Douglass, Kimberly; Tenopir, Carol; Wu, Lei; Frame, Mike; Manoff, Maribeth; Read, Eleanor; Allard, Suzie (2011). Data sharing by scientists: practices and perceptions [Dataset]. http://doi.org/10.5061/dryad.6t94p

Data from: Data sharing by scientists: practices and perceptions


Dataset updated: Jul 7, 2011
Authors: Aydinoglu, Arsev Umur; Douglass, Kimberly; Tenopir, Carol; Wu, Lei; Frame, Mike; Manoff, Maribeth; Read, Eleanor; Allard, Suzie
Description

Background: Scientific research in the 21st century is more data-intensive and collaborative than in the past. It is important to study researchers' data practices: data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method, allowing for verification of results and extending research from prior results.

Methodology/Principal Findings: A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management in either the short or the long term. If certain conditions are met (such as formal citation and sharing of reprints), respondents agree they are willing to share their data. There are also significant differences in data management practices and approaches based on primary funding agency, subject discipline, age, work focus, and world region.

Conclusions/Significance: Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as in the researchers themselves. New mandates for data management plans from NSF and other federal agencies, and worldwide attention to the need to share and preserve data, could lead to changes. Large-scale programs, such as the NSF-sponsored DataNET (including projects like DataONE), will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.
