54 datasets found
  1. Data_Sheet_1_ImputEHR: A Visualization Tool of Imputation for the Prediction...

    • frontiersin.figshare.com
    • figshare.com
    pdf
    Updated Jun 1, 2023
    Yi-Hui Zhou; Ehsan Saghapour (2023). Data_Sheet_1_ImputEHR: A Visualization Tool of Imputation for the Prediction of Biomedical Data.PDF [Dataset]. http://doi.org/10.3389/fgene.2021.691274.s001
    Dataset provided by
    Frontiers
    Authors
    Yi-Hui Zhou; Ehsan Saghapour
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Electronic health records (EHRs) have been widely adopted in recent years, but often include a high proportion of missing data, which can create difficulties in implementing machine learning and other tools of personalized medicine. Completed datasets are preferred for a number of analysis methods, and successful imputation of missing EHR data can improve interpretation and increase our power to predict health outcomes. However, the most popular imputation methods generally require scripting skills and are implemented across various packages with differing syntax, so a full suite of methods is out of reach for all but experienced data scientists. Moreover, imputation is often treated as a separate exercise from exploratory data analysis, when it should be considered part of the data exploration process. We have created a new graphical tool, ImputEHR, which is Python-based and allows implementation of a range of simple and sophisticated (e.g., gradient-boosted tree-based and neural network) data imputation approaches. In addition to imputation, the tool enables data exploration for informed decision-making, as well as machine learning prediction tools for response data selected by the user. Although the approach works for any missing-data problem, the tool is primarily motivated by problems encountered for EHR and other biomedical data. We illustrate the tool using multiple real datasets, providing performance measures of imputation and downstream predictive analysis.
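
    As a rough illustration of the kind of gradient-boosted imputation the description refers to (a minimal sketch assuming scikit-learn, not ImputEHR itself; the data and parameters are synthetic and illustrative):

    # Gradient-boosted iterative imputation of a matrix with missing values.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import HistGradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))              # toy stand-in for an EHR feature matrix
    X[rng.random(X.shape) < 0.2] = np.nan      # knock out ~20% of entries at random

    imputer = IterativeImputer(
        estimator=HistGradientBoostingRegressor(),  # tree-based model fit per column
        max_iter=10,
        random_state=0,
    )
    X_completed = imputer.fit_transform(X)     # completed matrix for downstream models
    print(np.isnan(X_completed).sum())         # 0 -> every missing entry was filled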

  2. GeoThermalCloud: Cloud Fusion of Big Data and Multi-Physics Models using...

    • gdr.openei.org
    • data.openei.org
    • +3more
    code, text_document
    Updated Apr 4, 2022
    Bulbul Ahmmed (2022). GeoThermalCloud: Cloud Fusion of Big Data and Multi-Physics Models using Machine Learning for Discovery, Exploration and Development of Hidden Geothermal Resources [Dataset]. http://doi.org/10.15121/1869828
    Dataset provided by
    Office of Energy Efficiency and Renewable Energy (http://energy.gov/eere)
    Stanford University
    Geothermal Data Repository
    Authors
    Bulbul Ahmmed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Geothermal exploration and production are challenging, expensive, and risky. GeoThermalCloud uses machine learning to predict the location of hidden geothermal resources. This submission includes a training dataset for the GeoThermalCloud neural network, developed under the project Machine Learning for Discovery, Exploration, and Development of Hidden Geothermal Resources.

  3. Data from: Data-Driven Approach Considering Imbalance in Data Sets and...

    • acs.figshare.com
    zip
    Updated Apr 10, 2025
    Wataru Takahara; Ryuto Baba; Yosuke Harashima; Tomoaki Takayama; Shogo Takasuka; Yuichi Yamaguchi; Akihiko Kudo; Mikiya Fujii (2025). Data-Driven Approach Considering Imbalance in Data Sets and Experimental Conditions for Exploration of Photocatalysts [Dataset]. http://doi.org/10.1021/acsomega.4c06997.s001
    Dataset provided by
    ACS Publications
    Authors
    Wataru Takahara; Ryuto Baba; Yosuke Harashima; Tomoaki Takayama; Shogo Takasuka; Yuichi Yamaguchi; Akihiko Kudo; Mikiya Fujii
    Description

    In the field of data-driven material development, an imbalance in data sets where data points are concentrated in certain regions often causes difficulties in building regression models when machine learning methods are applied. One example of inorganic functional materials facing such difficulties is photocatalysts. Therefore, advanced data-driven approaches are expected to help efficiently develop novel photocatalytic materials even if an imbalance exists in data sets. We propose a two-stage machine learning model aimed at handling imbalanced data sets without data thinning. In this study, we used two types of data sets that exhibit the imbalance: the Materials Project data set (openly shared due to its public domain data) and the in-house metal-sulfide photocatalyst data set (not openly shared due to the confidentiality of experimental data). This two-stage machine learning model consists of the following two parts: the first regression model, which predicts the target quantitatively, and the second classification model, which determines the reliability of the values predicted by the first regression model. We also propose a search scheme for variables related to the experimental conditions based on the proposed two-stage machine learning model. This scheme is designed for photocatalyst exploration, taking experimental conditions into account as the optimal set of variables for these conditions is unknown. The proposed two-stage machine learning model improves the prediction accuracy of the target compared with that of the one-stage model.
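
    The two-stage idea can be sketched roughly as follows (a toy illustration on synthetic data, not the authors' implementation; the reliability cutoff is arbitrary): a first model regresses the target, and a second model classifies whether the first model's prediction is likely to be reliable.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                    # toy material descriptors
    y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)    # toy target property

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Stage 1: quantitative prediction of the target.
    reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    pred_tr = reg.predict(X_tr)

    # Stage 2: label training points "reliable" when the stage-1 error is small,
    # then learn to predict that label from the descriptors (illustrative cutoff).
    reliable = (np.abs(pred_tr - y_tr) < 0.5).astype(int)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, reliable)

    pred_te = reg.predict(X_te)        # stage-1 prediction
    trust_te = clf.predict(X_te)       # 1 = prediction deemed reliable
    print(pred_te[:5], trust_te[:5])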

  4. Big Data In Oil Gas Exploration Production Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Feb 21, 2025
    Pro Market Reports (2025). Big Data In Oil Gas Exploration Production Market Report [Dataset]. https://www.promarketreports.com/reports/big-data-in-oil-gas-exploration-production-market-20330
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Market Analysis: The global Big Data in Oil & Gas Exploration & Production market is projected to surge from $674.52 million in 2025 to $1,664.15 million by 2033, registering a CAGR of 7.43% during the forecast period. The rising adoption of advanced technologies such as machine learning, data analytics, and cloud computing in oil and gas exploration and production is driving market growth. These technologies enable companies to improve data-driven decision-making and optimize operations, leading to increased efficiency and reduced costs.

    Key Trends and Dynamics: The market is segmented by application, technology, deployment type, end use, and region. The upstream segment accounted for the dominant share in 2025 due to the growing need for data analytics and machine learning techniques in reservoir characterization, drilling optimization, and production monitoring. Artificial intelligence (AI) is emerging as a key trend, with applications including predictive maintenance, automated data analysis, and optimization of exploration and production processes. Cloud-based deployments are gaining traction, providing cost savings and scalability benefits to the industry.

    Recent Developments: Recent developments highlight a significant trend toward digital transformation and advanced analytics. Companies like Halliburton and Schlumberger are increasingly integrating AI-driven solutions to enhance exploration efficiency and reduce operational costs. Additionally, Amazon Web Services and Microsoft are expanding their cloud services tailored for the oil and gas sector, enabling companies like TotalEnergies and Baker Hughes to leverage seamless data integration and analytics. Notably, several organizations are focusing on mergers and acquisitions to strengthen their data capabilities; for instance, IBM's acquisition of cloud-based analytics firms enhances its position in the market. The growth of data analytics technologies is also reflected in the valuation of companies such as Oracle and GE Oil and Gas, which are witnessing increased investments. Moreover, Weatherford and HPE are targeting collaborations to optimize data management solutions for upstream operations, potentially impacting efficiency and decision-making processes across the sector. The collective movement toward embracing big data technologies signifies a robust shift in the oil and gas industry's approach to exploration and production, ultimately driving competitive advantages and operational improvements.

    Key drivers for this market are: enhanced reservoir management; predictive maintenance solutions; real-time data analytics; improved drilling efficiency; and AI-driven exploration techniques. Potential restraints include: data integration challenges; regulatory compliance pressures; advanced analytics demand; cost optimization requirements; and real-time decision-making needs.

  5. Artificial Intelligence Space Exploration Market Report | Global Forecast...

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Dataintelo (2025). Artificial Intelligence Space Exploration Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/artificial-intelligence-space-exploration-market
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Artificial Intelligence in Space Exploration Market Outlook



    The artificial intelligence in space exploration market is projected to witness significant growth, with a market size valued at approximately USD 2.5 billion in 2023 and expected to grow to USD 6.8 billion by 2032, reflecting a robust compound annual growth rate (CAGR) of 11.3% during the forecast period. This growth can be attributed to the increasing demand for advanced technologies to enhance the efficiency and effectiveness of space missions. As space exploration becomes more complex, AI technologies are poised to revolutionize how we explore and understand the universe, providing unprecedented capabilities in terms of automation, data processing, and mission planning.



    One of the key growth factors driving the market is the increasing volume of data generated from space missions, which requires sophisticated AI systems for efficient analysis and interpretation. As the number of satellites in orbit grows and space missions become more frequent, the amount of data collected is unparalleled. AI technologies, particularly machine learning and data analytics, are critical in processing this data to derive meaningful insights, optimizing operational efficiency, and improving decision-making processes. Additionally, AI's ability to enhance predictive maintenance of spacecraft systems significantly reduces operational costs and extends the lifespan of these expensive assets.



    Another growth factor is the rising interest and investment from commercial space companies. These enterprises are leveraging AI to gain a competitive edge in satellite operations and spacecraft navigation. By employing AI-driven technologies, companies can automate routine operations, reduce human error, and enhance the overall reliability of their missions. Furthermore, AI assists in mission planning and execution, which are crucial for the success of commercial space endeavors. With the continuous support from private investments and the increasing involvement of startups in the space sector, AI in space exploration is set to expand its market influence significantly.



    Additionally, government agencies and research institutions are investing heavily in AI to further their space exploration goals. By integrating AI technologies into their operations, they aim to improve mission outcomes, enhance safety, and reduce costs. These institutions are also collaborating internationally to develop AI applications for space exploration, fostering innovation and sharing of critical technological advancements. Such collaborations are expected to boost the adoption of AI in space exploration, promoting the development of new applications and technologies that can address emerging challenges and opportunities in space missions.



    Space Mining is emerging as a pivotal aspect of the future of space exploration, offering the potential to unlock vast resources beyond Earth. As the demand for rare minerals and metals increases, space mining presents a promising solution to resource scarcity on our planet. The development of AI technologies is crucial in this domain, enabling the automation of mining operations on asteroids and other celestial bodies. AI-driven systems can efficiently analyze geological data to identify resource-rich areas, optimize extraction processes, and ensure the safety and sustainability of space mining activities. This advancement not only supports the economic viability of space missions but also paves the way for new industries and opportunities in the space sector.



    Regionally, North America dominates the market, driven by the strong presence of key players and significant investments in AI and space exploration technologies. The region's well-established infrastructure and government support through organizations like NASA play a crucial role in market growth. Meanwhile, Asia Pacific is expected to witness the highest growth rate, with countries like China and India increasing their focus on space technologies and AI integration. These countries are investing in developing their space capabilities and have ambitious plans for future space missions, creating substantial opportunities for AI technology vendors.



    Technology Analysis



    Machine learning stands at the forefront of AI technologies utilized in space exploration, offering powerful capabilities for data processing, anomaly detection, and predictive analytics. It plays a crucial role in automating spacecraft operations, enabling real-time decision-making, an

  6. Data from: Appendices for Geothermal Exploration Artificial Intelligence...

    • datasets.ai
    • data.openei.org
    • +2more
    10, 57
    Updated Sep 11, 2024
    Department of Energy (2024). Appendices for Geothermal Exploration Artificial Intelligence Report [Dataset]. https://datasets.ai/datasets/appendices-for-geothermal-exploration-artificial-intelligence-report
    Dataset authored and provided by
    Department of Energy
    Description

    The Geothermal Exploration Artificial Intelligence project uses machine learning to spot geothermal indicators in land maps, with the aim of remotely detecting geothermal sites for energy applications, including identifying viable locations for enhanced geothermal systems (EGS). This submission includes the appendices and reports formerly attached to the Geothermal Exploration Artificial Intelligence Quarterly and Final Reports.

    The appendices include methodologies, results, and some of the data used to train the Geothermal Exploration AI. The methodology reports explain how specific anomaly detection modes were selected for use with the Geo Exploration AI and why each mode is useful for finding geothermal sites; some methodology reports also include small amounts of code. Results from these reports describe the accuracy of the methods for the selected sites (Brady, Desert Peak, and Salton Sea). Data from these detection modes can be found in some of the reports, such as the Mineral Markers Maps, but most of the raw data is included in the DOE database covering the Brady, Desert Peak, and Salton Sea geothermal sites.

  7. International Journal of Engineering and Advanced Technology Publication fee...

    • researchhelpdesk.org
    Updated Jun 25, 2022
    Research Help Desk (2022). International Journal of Engineering and Advanced Technology Publication fee - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/publication-fee/552/international-journal-of-engineering-and-advanced-technology
    Dataset authored and provided by
    Research Help Desk
    Description

    International Journal of Engineering and Advanced Technology Publication fee - ResearchHelpDesk - The International Journal of Engineering and Advanced Technology (IJEAT), Online ISSN 2249-8958, is a bi-monthly international journal published in February, April, June, August, October, and December by Blue Eyes Intelligence Engineering & Sciences Publication (BEIESP), Bhopal (M.P.), India, since 2011. It is an academic, online, open-access, double-blind, peer-reviewed international journal that publishes original, theoretical, and practical advances in Computer Science & Engineering, Information Technology, Electrical and Electronics Engineering, Electronics and Telecommunication, Mechanical Engineering, Civil Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. All submitted papers are reviewed by the IJEAT committee. The aims of IJEAT are to: disseminate original, scientific, theoretical or applied research in engineering and allied fields; provide a platform for publishing results and research with a strong empirical component; bridge the significant gap between research and practice by promoting the publication of original, novel, industry-relevant research; and solicit original and unpublished research papers, based on theoretical or experimental work, for publication globally. Scope of IJEAT: the journal covers all topics of all engineering branches, including Computer Science & Engineering, Information Technology, Electronics & Communication, Electrical and Electronics, Electronics and Telecommunication, Civil Engineering, Mechanical Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. The main topics include, but are not limited to: 1. Smart Computing and Information Processing Signal and Speech Processing Image Processing and Pattern Recognition WSN Artificial Intelligence and machine learning Data mining and warehousing Data Analytics Deep learning Bioinformatics High Performance computing Advanced Computer networking Cloud Computing IoT Parallel Computing on GPU Human Computer Interactions 2. Recent Trends in Microelectronics and VLSI Design Process & Device Technologies Low-power design Nanometer-scale integrated circuits Application specific ICs (ASICs) FPGAs Nanotechnology Nano electronics and Quantum Computing 3. 
Challenges of Industry and their Solutions, Communications Advanced Manufacturing Technologies Artificial Intelligence Autonomous Robots Augmented Reality Big Data Analytics and Business Intelligence Cyber Physical Systems (CPS) Digital Clone or Simulation Industrial Internet of Things (IIoT) Manufacturing IOT Plant Cyber security Smart Solutions – Wearable Sensors and Smart Glasses System Integration Small Batch Manufacturing Visual Analytics Virtual Reality 3D Printing 4. Internet of Things (IoT) Internet of Things (IoT) & IoE & Edge Computing Distributed Mobile Applications Utilizing IoT Security, Privacy and Trust in IoT & IoE Standards for IoT Applications Ubiquitous Computing Block Chain-enabled IoT Device and Data Security and Privacy Application of WSN in IoT Cloud Resources Utilization in IoT Wireless Access Technologies for IoT Mobile Applications and Services for IoT Machine/ Deep Learning with IoT & IoE Smart Sensors and Internet of Things for Smart City Logic, Functional programming and Microcontrollers for IoT Sensor Networks, Actuators for Internet of Things Data Visualization using IoT IoT Application and Communication Protocol Big Data Analytics for Social Networking using IoT IoT Applications for Smart Cities Emulation and Simulation Methodologies for IoT IoT Applied for Digital Contents 5. Microwaves and Photonics Microwave filter Micro Strip antenna Microwave Link design Microwave oscillator Frequency selective surface Microwave Antenna Microwave Photonics Radio over fiber Optical communication Optical oscillator Optical Link design Optical phase lock loop Optical devices 6. Computation Intelligence and Analytics Soft Computing Advance Ubiquitous Computing Parallel Computing Distributed Computing Machine Learning Information Retrieval Expert Systems Data Mining Text Mining Data Warehousing Predictive Analysis Data Management Big Data Analytics Big Data Security 7. Energy Harvesting and Wireless Power Transmission Energy harvesting and transfer for wireless sensor networks Economics of energy harvesting communications Waveform optimization for wireless power transfer RF Energy Harvesting Wireless Power Transmission Microstrip Antenna design and application Wearable Textile Antenna Luminescence Rectenna 8. Advance Concept of Networking and Database Computer Network Mobile Adhoc Network Image Security Application Artificial Intelligence and machine learning in the Field of Network and Database Data Analytic High performance computing Pattern Recognition 9. Machine Learning (ML) and Knowledge Mining (KM) Regression and prediction Problem solving and planning Clustering Classification Neural information processing Vision and speech perception Heterogeneous and streaming data Natural language processing Probabilistic Models and Methods Reasoning and inference Marketing and social sciences Data mining Knowledge Discovery Web mining Information retrieval Design and diagnosis Game playing Streaming data Music Modelling and Analysis Robotics and control Multi-agent systems Bioinformatics Social sciences Industrial, financial and scientific applications of all kind 10. 
Advanced Computer networking Computational Intelligence Data Management, Exploration, and Mining Robotics Artificial Intelligence and Machine Learning Computer Architecture and VLSI Computer Graphics, Simulation, and Modelling Digital System and Logic Design Natural Language Processing and Machine Translation Parallel and Distributed Algorithms Pattern Recognition and Analysis Systems and Software Engineering Nature Inspired Computing Signal and Image Processing Reconfigurable Computing Cloud, Cluster, Grid and P2P Computing Biomedical Computing Advanced Bioinformatics Green Computing Mobile Computing Nano Ubiquitous Computing Context Awareness and Personalization, Autonomic and Trusted Computing Cryptography and Applied Mathematics Security, Trust and Privacy Digital Rights Management Networked-Driven Multicourse Chips Internet Computing Agricultural Informatics and Communication Community Information Systems Computational Economics, Digital Photogrammetric Remote Sensing, GIS and GPS Disaster Management e-governance, e-Commerce, e-business, e-Learning Forest Genomics and Informatics Healthcare Informatics Information Ecology and Knowledge Management Irrigation Informatics Neuro-Informatics Open Source: Challenges and opportunities Web-Based Learning: Innovation and Challenges Soft computing Signal and Speech Processing Natural Language Processing 11. Communications Microstrip Antenna Microwave Radar and Satellite Smart Antenna MIMO Antenna Wireless Communication RFID Network and Applications 5G Communication 6G Communication 12. Algorithms and Complexity Sequential, Parallel And Distributed Algorithms And Data Structures Approximation And Randomized Algorithms Graph Algorithms And Graph Drawing On-Line And Streaming Algorithms Analysis Of Algorithms And Computational Complexity Algorithm Engineering Web Algorithms Exact And Parameterized Computation Algorithmic Game Theory Computational Biology Foundations Of Communication Networks Computational Geometry Discrete Optimization 13. 
Software Engineering and Knowledge Engineering Software Engineering Methodologies Agent-based software engineering Artificial intelligence approaches to software engineering Component-based software engineering Embedded and ubiquitous software engineering Aspect-based software engineering Empirical software engineering Search-Based Software engineering Automated software design and synthesis Computer-supported cooperative work Automated software specification Reverse engineering Software Engineering Techniques and Production Perspectives Requirements engineering Software analysis, design and modelling Software maintenance and evolution Software engineering tools and environments Software engineering decision support Software design patterns Software product lines Process and workflow management Reflection and metadata approaches Program understanding and system maintenance Software domain modelling and analysis Software economics Multimedia and hypermedia software engineering Software engineering case study and experience reports Enterprise software, middleware, and tools Artificial intelligent methods, models, techniques Artificial life and societies Swarm intelligence Smart Spaces Autonomic computing and agent-based systems Autonomic computing Adaptive Systems Agent architectures, ontologies, languages and protocols Multi-agent systems Agent-based learning and knowledge discovery Interface agents Agent-based auctions and marketplaces Secure mobile and multi-agent systems Mobile agents SOA and Service-Oriented Systems Service-centric software engineering Service oriented requirements engineering Service oriented architectures Middleware for service based systems Service discovery and composition Service level

  8. Exploration of machine learning techniques in predicting multiple sclerosis...

    • plos.figshare.com
    docx
    Updated Jun 1, 2023
    Yijun Zhao; Brian C. Healy; Dalia Rotstein; Charles R. G. Guttmann; Rohit Bakshi; Howard L. Weiner; Carla E. Brodley; Tanuja Chitnis (2023). Exploration of machine learning techniques in predicting multiple sclerosis disease course [Dataset]. http://doi.org/10.1371/journal.pone.0174866
    Dataset provided by
    PLOS ONE
    Authors
    Yijun Zhao; Brian C. Healy; Dalia Rotstein; Charles R. G. Guttmann; Rohit Bakshi; Howard L. Weiner; Carla E. Brodley; Tanuja Chitnis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Objective: To explore the value of machine learning methods for predicting multiple sclerosis disease course.
    Methods: 1693 CLIMB study patients were classified as increased EDSS ≥ 1.5 (worsening) or not (non-worsening) at up to five years after baseline visit. Support vector machines (SVM) were used to build the classifier, and compared to logistic regression (LR) using demographic, clinical and MRI data obtained at years one and two to predict EDSS at five years follow-up.
    Results: Baseline data alone provided little predictive value. Clinical observation for one year improved overall SVM sensitivity to 62% and specificity to 65% in predicting worsening cases. The addition of one year MRI data improved sensitivity to 71% and specificity to 68%. Use of non-uniform misclassification costs in the SVM model, weighting towards increased sensitivity, improved predictions (up to 86%). Sensitivity, specificity, and overall accuracy improved minimally with additional follow-up data. Predictions improved within specific groups defined by baseline EDSS. LR performed more poorly than SVM in most cases. Race, family history of MS, and brain parenchymal fraction ranked highly as predictors of the non-worsening group. Brain T2 lesion volume ranked highly as predictive of the worsening group.
    Interpretation: SVM incorporating short-term clinical and brain MRI data, class imbalance corrective measures, and classification costs may be a promising means to predict MS disease course, and for selection of patients suitable for more aggressive treatment regimens.
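
    The "non-uniform misclassification costs" idea can be approximated with class weights in a standard SVM implementation. Below is a hedged sketch (synthetic data, illustrative weights, not the study's code) of weighting the rare "worsening" class to raise sensitivity:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)
    n = 400
    X = rng.normal(size=(n, 10))                               # toy clinical/MRI features
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 1).astype(int)   # imbalanced "worsening" label

    # Penalize mistakes on the worsening class (label 1) five times as heavily.
    svm = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0})
    pred = cross_val_predict(svm, X, y, cv=5)

    print("sensitivity:", recall_score(y, pred, pos_label=1))
    print("specificity:", recall_score(y, pred, pos_label=0))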

  9. Data from: GeoThermalCloud framework for fusion of big data and...

    • catalog.data.gov
    • gdr.openei.org
    • +2more
    Updated Jan 20, 2025
    Los Alamos National Laboratory (2025). GeoThermalCloud framework for fusion of big data and multi-physics models in Nevada and Southwest New Mexico [Dataset]. https://catalog.data.gov/dataset/geothermalcloud-framework-for-fusion-of-big-data-and-multi-physics-models-in-nevada-and-so-31a4e
    Dataset provided by
    Los Alamos National Laboratory
    Area covered
    Southwestern New Mexico, New Mexico
    Description

    Our GeoThermalCloud framework is designed to process geothermal datasets using a novel toolbox for unsupervised and physics-informed machine learning called SmartTensors. More information about GeoThermalCloud can be found at the GeoThermalCloud GitHub Repository. More information about SmartTensors can be found at the SmartTensors GitHub Repository and the SmartTensors page at LANL.gov. Links to these pages are included in this submission. GeoThermalCloud.jl is a repository containing all the data and codes required to demonstrate applications of machine learning methods for geothermal exploration. GeoThermalCloud.jl includes:
    - site data
    - simulation scripts
    - jupyter notebooks
    - intermediate results
    - code outputs
    - summary figures
    - readme markdown files
    GeoThermalCloud.jl showcases the machine learning analyses performed for the following geothermal sites:
    - Brady: geothermal exploration of the Brady geothermal site, Nevada
    - SWNM: geothermal exploration of the Southwest New Mexico (SWNM) region
    - GreatBasin: geothermal exploration of the Great Basin region, Nevada
    Reports, research papers, and presentations summarizing these machine learning analyses are also available and will be posted soon.

  10. Data from: An unbiased machine learning exploration reveals gene sets...

    • data.niaid.nih.gov
    Updated Jul 28, 2021
    + more versions
    Fu Q; Agarwal D; Deng K; Matheson R; Wei L; Ran Q; Yang H; Deng S; Markmann JF (2021). An unbiased machine learning exploration reveals gene sets predictive of allograft tolerance after kidney transplantation [Dataset]. https://data.niaid.nih.gov/resources?id=gse166865
    Dataset provided by
    Massachusetts General Hospital
    Authors
    Fu Q; Agarwal D; Deng K; Matheson R; Wei L; Ran Q; Yang H; Deng S; Markmann JF
    Variables measured
    Transcriptomics
    Description

    Efforts at finding potential biomarkers of tolerance after kidney transplantation have been hindered by limited sample size, as well as the complicated mechanisms underlying tolerance and the potential risk of rejection after immunosuppressant withdrawal. In this work, three different publicly available genome-wide expression data sets of peripheral blood lymphocyte (PBL) from 63 tolerant patients were used to compare 14 different machine learning models for their ability to predict spontaneous kidney graft tolerance. We found that the Best Subset Selection (BSS) regression approach was the most powerful with a sensitivity of 91.7% and a specificity of 93.8% in the test group, and a specificity of 86.1% and a sensitivity of 80% in the validation group. A feature set with five genes (HLA-DOA, TCL1A, EBF1, CD79B, and PNOC) was identified using the BSS model. EBF1 downregulation was also an independent factor predictive of graft rejection and graft loss. An AUC value of 84.4% was achieved using the two-gene signature (EBF1 and HLA-DOA) as an input to our classifier. Overall, our systematic machine learning exploration suggests novel biological targets that might affect tolerance to renal allografts, and provides clinical insights that can potentially guide patient selection for immunosuppressant withdrawal. Total of 31 Tolerant (TOL) participants, 39 Standard Immunotherapy (SI) participants, and 24 Healthy Controls (HC)
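
    A rough illustration of the best-subset-selection pattern described above (synthetic expression values and labels; only the five gene names come from the description, and GENE_X/GENE_Y are hypothetical padding columns):

    from itertools import combinations
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    genes = ["HLA-DOA", "TCL1A", "EBF1", "CD79B", "PNOC", "GENE_X", "GENE_Y"]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(70, len(genes)))        # toy expression matrix
    y = rng.integers(0, 2, size=70)              # toy tolerant / non-tolerant labels

    best_subset, best_auc = None, -np.inf
    for k in range(1, 6):                        # exhaustively score subsets of up to 5 genes
        for idx in combinations(range(len(genes)), k):
            auc = cross_val_score(LogisticRegression(max_iter=1000),
                                  X[:, list(idx)], y, cv=5, scoring="roc_auc").mean()
            if auc > best_auc:
                best_subset, best_auc = idx, auc

    print([genes[i] for i in best_subset], round(best_auc, 3))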

  11. Global Mining Digitalisation Market Research Report: By Digital Technology...

    • wiseguyreports.com
    Updated Jul 23, 2024
    Wiseguy Research Consultants Pvt Ltd (2024). Global Mining Digitalisation Market Research Report: By Digital Technology (Data Analytics, Cloud Computing, Artificial Intelligence, Blockchain, Machine Learning), By Application (Resource Exploration, Mine Planning and Design, Production Optimization, Safety Management, Environmental Monitoring), By Deployment Model (On-Premise, SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service)) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/mining-digitalisation-market
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 7, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 24.35 (USD Billion)
    MARKET SIZE 2024: 26.49 (USD Billion)
    MARKET SIZE 2032: 51.9 (USD Billion)
    SEGMENTS COVERED: Digital Technology, Application, Deployment Model, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: 1. Automation and AI: improving efficiency, productivity, and safety; 2. Data analytics and IoT: enhancing decision-making, optimizing processes; 3. Digital workforce: augmenting remote work capabilities, skills development; 4. Cloud and edge computing: enabling real-time access, scalability; 5. Sustainability and decarbonization: reducing environmental impact, improving operational efficiency
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: IBM, Honeywell International, SAP, Schneider Electric, GE Digital, Microsoft, Hexagon AB, Siemens, Rockwell Automation, ABB, PTC, Emerson Electric, Oracle Corporation, Google
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: 1. Predictive maintenance; 2. Remote monitoring; 3. Autonomous vehicles; 4. Smart sensors; 5. Data analytics
    COMPOUND ANNUAL GROWTH RATE (CAGR): 8.77% (2025 - 2032)

  12. Mining Exploration Software Report

    • archivemarketresearch.com
    ppt
    Updated Mar 9, 2025
    Archive Market Research (2025). Mining Exploration Software Report [Dataset]. https://www.archivemarketresearch.com/reports/mining-exploration-software-54837
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global mining exploration software market is experiencing steady growth, projected to reach $233.6 million in 2025 and maintain a compound annual growth rate (CAGR) of 3.2% from 2025 to 2033. This growth is fueled by several key factors. Increased demand for efficient and accurate geological data analysis is driving adoption of advanced software solutions. The mining industry's ongoing digital transformation, focused on improving operational efficiency and reducing exploration costs, is another significant driver. Furthermore, the rising complexity of mining operations and the need for sophisticated visualization tools for interpreting vast datasets are contributing to market expansion. The integration of artificial intelligence (AI) and machine learning (ML) into mining exploration software is creating new opportunities for improved resource discovery and optimized project planning, thereby boosting market growth. Segmentation analysis reveals significant demand across both 2D and 3D software solutions, with a particularly strong emphasis on applications tailored for mine and underground mining operations. Competition is robust, with numerous established and emerging players vying for market share, including AVEVA, AnyLogic, Datamine, Maptek Vulcan, and others, each offering unique software capabilities and specialized services. The regional distribution of the market reveals significant activity across North America, Europe, and the Asia-Pacific region. North America, particularly the United States and Canada, benefits from a large, established mining industry and a strong technological infrastructure. Europe's presence is robust, driven by activity in countries such as the UK and Germany. The Asia-Pacific region is witnessing substantial growth, propelled by large-scale mining projects in countries including China, India, and Australia. While several factors contribute to the market's overall growth, potential restraints include the high initial investment costs associated with implementing new software solutions and the need for specialized technical expertise to operate and maintain these systems. However, the long-term benefits of improved efficiency and reduced exploration risk are expected to outweigh these limitations, ensuring sustained market expansion throughout the forecast period.

  13. Data Visualization Tool Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Feb 21, 2025
    Pro Market Reports (2025). Data Visualization Tool Market Report [Dataset]. https://www.promarketreports.com/reports/data-visualization-tool-market-18228
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Data Visualization Tool Market is expected to reach a value of 25.83 Billion by 2033, expanding at a CAGR of 8.41% during 2023-2033. The market growth is driven by increasing demand for data-driven decision-making, rising adoption of cloud-based and hybrid deployment models, and advancements in artificial intelligence (AI) and machine learning (ML) technologies. Key trends influencing the market include the growing popularity of self-service data visualization tools, the adoption of augmented analytics for enhanced insights, and the increasing demand for data storytelling capabilities. The market is segmented by deployment type (on-premise, cloud-based, and hybrid), organization size (SMEs and large enterprises), vertical (IT and telecommunications, manufacturing, retail, healthcare, and banking and financial services), functionality (data exploration and analysis, dashboarding and reporting, and data storytelling), data source (structured data, semi-structured data, and unstructured data), and region (North America, South America, Europe, Middle East & Africa, and Asia Pacific). The Asia Pacific region is expected to witness the fastest growth due to the increasing adoption of data visualization tools in emerging economies. Key drivers for this market are: demand for real-time analytics; cloud-based deployment; advanced visualization techniques; integration with AI/ML; and growing adoption in healthcare and life sciences. Potential restraints include: increased cloud adoption; growing demand for real-time data insights; and burgeoning adoption in the BFSI sector.

  14. Data from: Reward-based option competition in human dorsal stream and...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1more
    zip
    Updated Feb 6, 2024
    Michael Hallquist; Kai Hwang; Beatriz Luna; Alexandre Dombrovski (2024). Reward-based option competition in human dorsal stream and transition from stochastic exploration to exploitation in continuous space [Dataset]. http://doi.org/10.5061/dryad.hmgqnk9qc
    Dataset provided by
    University of Iowa
    University of Pittsburgh
    University of North Carolina at Chapel Hill
    Authors
    Michael Hallquist; Kai Hwang; Beatriz Luna; Alexandre Dombrovski
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Primates exploring and exploiting a continuous sensorimotor space rely on dynamic maps in the dorsal stream. Two complementary perspectives exist on how these maps encode rewards. Reinforcement learning models integrate rewards incrementally over time, efficiently resolving the exploration/exploitation dilemma. Working memory buffer models explain rapid plasticity of parietal maps but lack a plausible exploration/exploitation policy. The reinforcement learning model presented here unifies both accounts, enabling rapid, information-compressing map updates and efficient transition from exploration to exploitation. As predicted by our model, activity in human fronto-parietal dorsal stream regions, but not in MT+, tracks the number of competing options, as preferred options are selectively maintained on the map while spatiotemporally distant alternatives are compressed out. When valuable new options are uncovered, posterior beta1/alpha oscillations desynchronize within 0.4-0.7 s, consistent with option encoding by competing beta1-stabilized subpopulations. Altogether, outcomes matching locally cached reward representations rapidly update parietal maps, biasing choices toward often-sampled, rewarded options.

    Methods

    fMRI acquisition: Neuroimaging data during the clock task were acquired in a Siemens Tim Trio 3T scanner for the original study and a Siemens Tim Prisma 3T scanner for the replication study at the Magnetic Resonance Research Center, University of Pittsburgh. Due to participant-dependent variation in response times on the task, each fMRI run varied in length from 3.15 to 5.87 minutes (M = 4.57 minutes, SD = 0.52). Functional imaging data for the original/replication study were acquired using a simultaneous multislice sequence sensitive to BOLD contrast, TR = 1.0/0.6 s, TE = 30/27 ms, flip angle = 55/45°, multiband acceleration factor = 5/5, voxel size = 2.3/3.1 mm3. We also obtained a sagittal MPRAGE T1-weighted scan, voxel size = 1/1 mm3, TR = 2.2/2.3 s, TE = 3.58/3.35 ms, GRAPPA 2/2x acceleration. The anatomical scan was used for coregistration and nonlinear transformation to functional and stereotaxic templates. We also acquired gradient echo fieldmap images (TEs = 4.93/4.47 ms and 7.39/6.93 ms) for each subject to mitigate inhomogeneity-related distortions in the functional MRI data.

    Preprocessing of fMRI data: Anatomical scans were registered to the MNI152 template (82) using both affine (ANTS SyN) and nonlinear (FSL FNIRT) transformations. Functional images were preprocessed using tools from NiPy (83), AFNI (version 19.0.26) (84), and the FMRIB software library (FSL version 6.0.1) (85). First, slice timing and motion coregistration were performed simultaneously using a four-dimensional registration algorithm implemented in NiPy (86). Non-brain voxels were removed from functional images by masking voxels with low intensity and by the ROBEX brain extraction algorithm (87). We reduced distortion due to susceptibility artifacts using fieldmap correction implemented in FSL FUGUE. Participants' functional images were aligned to their anatomical scan using the white matter segmentation of each image and a boundary-based registration algorithm (88), augmented by fieldmap unwarping coefficients. Given the low contrast between gray and white matter in echoplanar scans with fast repetition times, we first aligned functional scans to a single-band fMRI reference image with better contrast. The reference image was acquired using the same scanning parameters, but without multiband acceleration. Functional scans were then warped into MNI152 template space (2.3 mm output resolution) in one step using the concatenation of functional-reference, fieldmap unwarping, reference-structural, and structural-MNI152 transforms. Images were spatially smoothed using a 5 mm full-width at half maximum (FWHM) kernel and a nonlinear smoother implemented in FSL SUSAN. To reduce head motion artifacts, we then conducted an independent component analysis for each run using FSL MELODIC. The spatiotemporal components were then passed to a classification algorithm, ICA-AROMA, validated to identify and remove motion-related artifacts (89). Components identified as noise were regressed out of the data using FSL regfilt (non-aggressive regression approach). ICA-AROMA has performed very well in head-to-head comparisons of alternative strategies for reducing head motion artifacts (90). We then applied a .008 Hz temporal high-pass filter to remove slow-frequency signal changes (91); the same filter was applied to all regressors in GLM analyses. Finally, we renormalized each voxel time series to have a mean of 100 to provide similar scaling of voxelwise regression coefficients across runs and participants.

    Treatment of head motion: In addition to mitigating head motion-related artifacts using ICA-AROMA, we excluded runs in which more than 10% of volumes had a framewise displacement (FD) of 0.9 mm or greater, as well as runs in which head movement exceeded 5 mm at any point in the acquisition. This led to the exclusion of 11 runs total, yielding 549 total usable runs across participants. Furthermore, in voxelwise GLMs, we included the mean time series from deep cerebral white matter and the ventricles, as well as first derivatives of these signals, as confound regressors (90).

    MEG data acquisition: MEG data were acquired using an Elekta Neuromag VectorView MEG system (Elekta Oy, Helsinki, Finland) in a three-layer magnetically shielded room. The system comprised 306 sensors, with 204 planar gradiometers and 102 magnetometers. In this project we only included data from the gradiometers, as data from magnetometers added noise and had a different amplitude scale. MEG data were recorded continuously with a sampling rate of 1000 Hz. We measured head position relative to the MEG sensors throughout the recording period using 4 continuous head position indicators (cHPI) that continuously emit sinusoidal signals, and head movements were corrected offline during preprocessing. To monitor saccades and eye blinks, we used two bipolar electrode pairs to record vertical and horizontal electrooculogram (EOG).

    Preprocessing of MEG data: Flat or noisy channels were identified with manual inspections, and all data were preprocessed using the temporal signal space separation (TSSS) method (92, 93). TSSS suppresses environmental artifacts from outside the MEG helmet and performs head movement correction by aligning sensor-level data to a common reference (94). This realignment allowed sensor-level data to be pooled across subjects for group analyses of sensor-space data. Cardiac and ocular artifacts were then removed using an independent component analysis by decomposing MEG sensor data into independent components (ICs) using the infomax algorithm (95). Each IC was then correlated with ECG and EOG recordings, and an IC was designated as an artifact if the absolute value of the correlation was at least three standard deviations higher than the mean of all correlations. The non-artifact ICs were projected back to the sensor space to reconstruct the signals for analysis. After preprocessing, data were epoched to the onset of feedback, with a window from -0.7 to 1.0 seconds. Trials with gradiometer peak-to-peak amplitudes exceeding 3000 fT/cm were excluded. Please note that the following processing step has NOT been applied to the MEG data: "For each sensor, we computed the time-frequency decomposition of activity on each trial by convolving time-domain signals with Morlet wavelet, stepping from 2 to 40 Hz in logarithmic scale using 6 wavelet cycles".
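
    A minimal sketch of the stated run-exclusion rule (thresholds taken from the description above; the motion traces and the helper name are synthetic and illustrative, not the study's data or code):

    import numpy as np

    def exclude_run(framewise_displacement_mm, absolute_motion_mm):
        """Return True if a run should be dropped under the stated criteria."""
        too_many_spikes = np.mean(framewise_displacement_mm >= 0.9) > 0.10  # >10% of volumes
        gross_movement = np.max(np.abs(absolute_motion_mm)) > 5.0           # >5 mm at any point
        return too_many_spikes or gross_movement

    rng = np.random.default_rng(0)
    fd = rng.gamma(shape=2.0, scale=0.1, size=300)       # toy per-volume FD trace (mm)
    motion = np.cumsum(rng.normal(0, 0.05, size=300))    # toy absolute displacement trace (mm)
    print(exclude_run(fd, motion))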

  15. Data and Code for: Deep Learning for Economists

    • openicpsr.org
    delimited
    Updated Nov 13, 2024
    Melissa Dell (2024). Data and Code for: Deep Learning for Economists [Dataset]. http://doi.org/10.3886/E210922V1
    Dataset provided by
    American Economic Association
    Authors
    Melissa Dell
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Time period covered
    1877 - 2012
    Area covered
    United States, United Kingdom
    Description

    Deep learning provides powerful methods to impute structured information from large-scale, unstructured text and image datasets. For example, economists might wish to detect the presence of economic activity in satellite images, or to measure the topics or entities mentioned in social media, the congressional record, or firm filings. This review introduces deep neural networks, covering methods such as classifiers, regression models, generative AI, and embedding models. Applications include classification, document digitization, record linkage, and methods for data exploration in massive-scale text and image corpora. When suitable methods are used, deep learning models can be cheap to tune and can scale affordably to problems involving millions or billions of data points. The review is accompanied by a regularly updated companion website, EconDL (https://econdl.github.io/), with user-friendly demo notebooks, software resources, and a knowledge base that provides technical details and additional applications.
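
    As a toy illustration of one application mentioned above, record linkage, the sketch below matches firm names across two lists by cosine similarity of vector representations; embed() is a crude stand-in (character-trigram hashing), where a real pipeline would use a learned embedding model. All names and data are hypothetical.

    import numpy as np

    def embed(texts, dim=64, seed=0):
        # Stand-in embedding: random projection of hashed character trigrams.
        rng = np.random.default_rng(seed)
        proj = rng.normal(size=(2**12, dim))
        vecs = np.zeros((len(texts), dim))
        for i, t in enumerate(texts):
            for j in range(len(t) - 2):
                vecs[i] += proj[hash(t[j:j + 3]) % 2**12]
        return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

    firms_a = ["Standard Oil Co. of New Jersey", "General Electric Company"]
    firms_b = ["General Electric Co", "Standard Oil (New Jersey)"]

    sim = embed(firms_a) @ embed(firms_b).T      # cosine similarity matrix
    matches = sim.argmax(axis=1)                 # best match in list B for each record in A
    for a, j in zip(firms_a, matches):
        print(a, "->", firms_b[j])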

  16. Taranaki Basin Curated Well Logs

    • zenodo.org
    • explore.openaire.eu
    • +1more
    application/gzip
    Updated May 20, 2020
    Breno W.S.R. de Carvalho; Matheus Oliveira; Maiana Avalone; Júlio Hoffimann; Daniela Szwarcman; Jorge Guevara Diaz; Bianca Zadrozny (2020). Taranaki Basin Curated Well Logs [Dataset]. http://doi.org/10.5281/zenodo.3832955
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Breno W.S.R. de Carvalho; Matheus Oliveira; Maiana Avalone; Júlio Hoffimann; Daniela Szwarcman; Jorge Guevara Diaz; Bianca Zadrozny
    License

    https://cdla.io/sharing-1-0

    Description

    Machine learning (ML) models are being widely used in the geosciences for various tasks involving well log data, including prediction of missing well log curves, picking of stratigraphic surfaces, facies classification, and segmentation of different rock types. Even though various ML applications have been proposed in the literature, it is difficult to reproduce and advance the prior art without having access to the data and preprocessing steps used. In fact, there is an increasing need for benchmark cases to assess past and future solutions. The present dataset integrates well log data curated from the 2016 New Zealand Petroleum Exploration Public Data Pack.

  17. Data from: Mineral prospectivity mapping using machine learning techniques...

    • hub.arcgis.com
    • metal-earth-hub-laurentianu.hub.arcgis.com
    • +1more
    Updated Sep 21, 2023
    MetalEarth (2023). Mineral prospectivity mapping using machine learning techniques for gold exploration in the Larder Lake area, Ontario, Canada [Dataset]. https://hub.arcgis.com/documents/1be05051de7c498c97e7cb267076b435
    Dataset authored and provided by
    MetalEarth
    Description

    A mineral prospectivity map (MPM) focusing on gold mineralization in the Larder Lake region of Northern Ontario, Canada, has been produced in this study. We applied the Random Forest (RF) algorithm to 32 predictor maps integrating geophysical, geochemical, and geological datasets from various sources that represent vectors to gold mineralization. It is evident from the efficiency-of-classification curves that the generated MPMs are robust. The unsupervised algorithms K-means and principal component analysis (PCA) were used to investigate and visualize the clustering structure of the large geochemical and geophysical datasets. We used RQ-mode PCA to compute variable and object loadings simultaneously, which allows observations and variables to be displayed at the same scale. PCA biplots of the Larder Lake geochemical data show that Au is strongly correlated with W, S, Pb and K, but inversely correlated with Fe, Mn, Co, Mg, Ca, and Ni. The known gold mineralization locations were well classified by RF, with an accuracy of 95.63%. Furthermore, a partial least squares-discriminant analysis (PLS-DA) model combining 3D geophysical clusters and geochemical compositions indicates that the Au-rich areas are characterized by low-to-mid resistivity and low susceptibility. We conclude that the Larder Lake-Cadillac deformation zone (LLCDZ) is relatively more fertile than the Lincoln-Nipissing shear zone (LNSZ) with respect to gold mineralization due to deeper penetrating faults. The intersection of the LLCDZ and a network of high-angle NE-trending cross faults acts as a key conduit for gold endowment in the Larder Lake area. This study innovatively combined multivariate geological, geochemical, and geophysical datasets via machine learning algorithms, which improves identification of geochemical anomalies and interpretation of spatial features associated with gold mineralization.
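
    A hedged sketch of random-forest prospectivity scoring in the spirit of the description (all layers, labels, and counts are synthetic; the study's 32 predictor maps and known-occurrence data are not reproduced here):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_cells, n_layers = 10_000, 32                 # raster cells x stacked predictor maps
    X = rng.normal(size=(n_cells, n_layers))       # toy geophysical/geochemical/geological layers

    labels = np.full(n_cells, -1)                  # -1 = unlabeled cell
    idx = rng.choice(n_cells, 250, replace=False)
    labels[idx[:50]] = 1                           # cells with known gold occurrences
    labels[idx[50:]] = 0                           # cells assumed barren

    train = labels >= 0
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(X[train], labels[train])

    prospectivity = rf.predict_proba(X)[:, 1]      # per-cell probability of mineralization
    print(prospectivity[:5])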

  18. m

    Drilling Data Management Systems Report

    • marketreportanalytics.com
    doc, ppt
    Updated Apr 3, 2025
    Cite
    Market Report Analytics (2025). Drilling Data Management Systems Report [Dataset]. https://www.marketreportanalytics.com/reports/drilling-data-management-systems-56593
    Explore at:
    ppt, docAvailable download formats
    Dataset updated
    Apr 3, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Variables measured
    Market Size
    Description

    The global market for Drilling Data Management Systems (DDMS) is experiencing robust growth, driven by increasing demand for enhanced operational efficiency and safety in oil and gas exploration and production. The integration of advanced technologies like artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) within DDMS is revolutionizing data analysis and interpretation, leading to better decision-making and reduced operational costs. This market is segmented by application (oil and gas industries), type (hardware, software, and services), and geography. Major players like Schlumberger, Halliburton, and BHGE are actively investing in research and development to improve the capabilities of their DDMS offerings, resulting in a highly competitive landscape. The increasing complexity of drilling operations, coupled with stringent regulatory requirements for data security and environmental compliance, further fuels the demand for sophisticated DDMS solutions. We project a market size of approximately $3 billion in 2025, with a CAGR of around 8% over the forecast period (2025-2033). This growth is anticipated across all segments, although the software segment is expected to grow fastest due to the increasing adoption of cloud-based solutions and data analytics platforms. North America and Europe currently hold significant market share, but the Asia-Pacific region is expected to see substantial growth in the coming years, driven by increasing exploration and production activities in countries like China and India.

    Restraints to growth primarily stem from the high initial investment costs associated with implementing DDMS, particularly for smaller operators. Furthermore, integrating DDMS with existing legacy systems can be complex and time-consuming, posing a challenge to seamless adoption. However, the long-term benefits of improved efficiency, reduced downtime, and enhanced safety outweigh these initial challenges, driving market expansion. Continued advances in cloud computing, big data analytics, and cybersecurity solutions will further propel the market's growth trajectory, making DDMS an indispensable tool for optimizing drilling operations in the oil and gas industry. The emergence of specialized solutions catering to specific drilling challenges, coupled with the growing adoption of advanced analytics techniques for predictive maintenance, is reshaping the competitive dynamics and further solidifying the future of the DDMS market.

  19. Data from: Machine learning-guided high throughput nanoparticle design

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Apr 9, 2024
    Cite
    Ana Ortiz-Perez; Derek van Tilborg; Roy van der Meel; Francesca Grisoni; Lorenzo Albertazzi (2024). Machine learning-guided high throughput nanoparticle design [Dataset]. http://doi.org/10.5281/zenodo.8289605
    Explore at:
    zipAvailable download formats
    Dataset updated
    Apr 9, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Ana Ortiz-Perez; Derek van Tilborg; Roy van der Meel; Francesca Grisoni; Lorenzo Albertazzi
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Widefield microscopy high-content images used for this study. Contains all the intermediate Excel reports resulting from image analysis and processing (a minimal sketch of one active-learning cycle follows the list):

    • 00_Initial Dataset (DoE): contains all image data used to determine the labels for the first active learning cycle. Nanoparticle formulations were suggested using design of experiments (DoE).
    • 01_ML_Iteration01 (Exploration): contains all image data used to determine the labels for the formulations suggested by the first active learning cycle
    • 02_ML_Iteration02 (Exploitation): contains all image data used to determine the labels for the formulations suggested by the second active learning cycle
    • 03_ML_Iteration03 (Exploration): contains all image data used to determine the labels for the last (model validation) experiment. Includes the subsets of particles predicted with low and high uptake.
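
    The iteration folders above trace an explore/exploit active-learning loop: a model trained on the formulations labeled so far proposes the next batch to synthesize and image. The minimal Python sketch below illustrates one such cycle; the file names, feature encoding, and acquisition rules (highest predicted uptake for exploitation, highest ensemble disagreement for exploration) are illustrative assumptions, not the authors' exact pipeline.

    # Minimal sketch of one active-learning cycle on nanoparticle formulations.
    # File names, features, and acquisition rules are illustrative assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    labeled = pd.read_csv("labeled_formulations.csv")    # hypothetical: formulation features + measured uptake
    pool = pd.read_csv("candidate_formulations.csv")     # hypothetical: unlabeled candidate formulations

    feature_cols = [c for c in labeled.columns if c != "uptake"]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(labeled[feature_cols], labeled["uptake"])

    # Mean prediction and per-candidate disagreement across the ensemble's trees.
    tree_preds = np.stack([t.predict(pool[feature_cols].values) for t in model.estimators_])
    mean_uptake = tree_preds.mean(axis=0)
    uncertainty = tree_preds.std(axis=0)

    # Exploitation: propose candidates with the highest predicted uptake.
    exploit_batch = pool.iloc[np.argsort(-mean_uptake)[:10]]
    # Exploration: propose candidates the ensemble is least certain about.
    explore_batch = pool.iloc[np.argsort(-uncertainty)[:10]]
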
  20. w

    Global Standalone Analytics Sandbox Market Research Report: By Deployment...

    • wiseguyreports.com
    Updated Aug 6, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Standalone Analytics Sandbox Market Research Report: By Deployment Type (On-premise, Cloud-based), By Data Source (Structured data only, Unstructured data only, Both structured and unstructured data), By Industry (Financial services, Healthcare, Retail, Manufacturing, Government), By Functionality (Data exploration and visualization, Statistical analysis, Machine learning and predictive modeling) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/cn/reports/standalone-analytics-sandbox-market
    Explore at:
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 8, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 0.69 (USD Billion)
    MARKET SIZE 2024: 0.78 (USD Billion)
    MARKET SIZE 2032: 2.3 (USD Billion)
    SEGMENTS COVERED: Deployment Type, Data Source, Industry, Functionality, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: 1. Growing Adoption of Data-Driven Decision-Making; 2. Rise of Complex Data Environments; 3. Increasing Demand for Data Security and Governance; 4. Proliferation of Cloud-Based Analytics Solutions; 5. Growing Focus on Data Privacy and Compliance
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Domo, Oracle, ThoughtSpot, Databricks, Looker, TIBCO, Microsoft, SAP, Snowflake, Google, Tableau, Qlik, Alteryx, IBM, SAS
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: Cloud-based deployments, Data governance and security, Real-time analytics, Machine learning and AI, Self-service analytics
    COMPOUND ANNUAL GROWTH RATE (CAGR): 14.4% (2025 - 2032)
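
    As a back-of-the-envelope check, the quoted 14.4% CAGR is roughly consistent with growing the 2024 market size of 0.78 USD billion to the 2032 figure of 2.3 USD billion over eight years:

    # Implied CAGR from the 2024 and 2032 market-size figures quoted above.
    start, end, years = 0.78, 2.3, 2032 - 2024
    implied_cagr = (end / start) ** (1 / years) - 1
    print(f"implied CAGR: {implied_cagr:.1%}")   # ~14.5%, close to the reported 14.4%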