5 datasets found
  1. Data from: Subsurface Characterization and Machine Learning Predictions at...

    • catalog.data.gov
    • gdr.openei.org
    • +4 more
    Updated Jan 20, 2025
    Cite
    National Renewable Energy Laboratory (2025). Subsurface Characterization and Machine Learning Predictions at Brady Hot Springs Results [Dataset]. https://catalog.data.gov/dataset/subsurface-characterization-and-machine-learning-predictions-at-brady-hot-springs-results-6c85f
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Description

    Geothermal power plants typically show decreasing heat and power production rates over time. Mitigation strategies include optimizing the management of existing wells - increasing or decreasing the fluid flow rates across the wells - and drilling new wells at appropriate locations. The latter is expensive, time-consuming, and subject to many engineering constraints, but the former is a viable mechanism for periodic adjustment of the available fluid allocations. This entry provides data and supporting literature from a study describing a new approach that combines reservoir modeling and machine learning to produce models enabling such mitigation strategies. The computational approach translates sets of potential flow rates for the active wells into reservoir-wide estimates of produced energy and discovers optimal flow allocations among the studied sets. In our computational experiments, we use collections of simulations for a specific reservoir (which capture subsurface characterization and realize history matching) along with machine learning models that predict temperature and pressure time series for production wells. We evaluate this approach using an "open-source" reservoir we have constructed that captures many of the characteristics of Brady Hot Springs, a commercially operational geothermal field in Nevada, USA. Selected results from a reservoir model of Brady Hot Springs itself are presented to show successful application to an existing system. In both cases, energy predictions prove highly accurate: no observed prediction error exceeds 3.68% for temperatures or 4.75% for pressures, and cumulative energy estimates have prediction errors below 4.04%. A typical reservoir simulation for Brady Hot Springs completes in approximately 4 hours, whereas our machine learning models yield accurate 20-year predictions for temperatures, pressures, and produced energy in 0.9 seconds. This work demonstrates how the models and techniques from our study can be applied to achieve rapid exploration of controlled parameters and optimization of other geothermal reservoirs.

    The dataset includes a synthetic, yet realistic, model of a geothermal reservoir, referred to as the open-source reservoir (OSR). OSR is a 10-well system (4 injection wells and 6 production wells) that resembles Brady Hot Springs at a high level but has a number of deliberately modified characteristics (which renders any similarity in specific characteristics, such as temperatures and pressures, purely coincidental). We study OSR through CMG simulations with a wide range of flow allocation scenarios. The dataset contains 101 simulated scenarios covering the period between 2020 and 2040, a link to the published paper about this project (where we focus on the machine learning work for predicting OSR's energy production from the simulation data), and a link to the GitHub repository where we have published the code we developed (see the repository's readme file for instructions on how to run the code). Additional links point to associated work led by the USGS to identify geologic factors associated with well productivity in geothermal fields.

    The high-level steps for applying the same modeling + ML process to other geothermal reservoirs are:
    1. Develop a geologic model of the geothermal field. The locations of faults, upflow zones, aquifers, etc. need to be accounted for as accurately as possible.
    2. Convert the geologic model to a reservoir model usable in a reservoir simulator such as CMG STARS, TETRAD, or FALCON.
    3. Using native-state modeling, evaluate the initial temperature and pressure distributions; these become the initial conditions for dynamic reservoir simulations.
    4. Using history matching with tracers and available production data, tune the model to represent the subsurface reservoir as accurately as possible.
    5. Run a large number of simulations using the history-matched reservoir model. Each simulation assumes a different wellbore flow rate allocation across the injection and production wells, where the individual flow rates do not violate the practical constraints of the corresponding wells.
    6. Train ML models on the simulation data. The code in our GitHub repository demonstrates how these models can be trained and evaluated.
    7. Use the trained ML models to evaluate a large set of candidate flow allocations and select the optimal ones, i.e., those producing the largest amount of thermal energy over the modeled period. The referenced paper provides more details about this optimization process.
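    Steps 5-7 above amount to building a fast surrogate for the reservoir simulator and then searching over candidate flow allocations. Below is a minimal sketch of that search loop; the toy surrogate, the well count, and the flow-rate ranges and budget are invented placeholders, not the study's trained temperature/pressure models or actual constraints:

```python
import itertools

# Illustrative stand-in for a trained ML surrogate: maps a flow
# allocation (kg/s per injection well) to a predicted thermal energy.
# The real models in the study predict temperature/pressure time series
# per production well; this toy function simply rewards balanced flows.
def predicted_energy(allocation):
    total = sum(allocation)
    imbalance = max(allocation) - min(allocation)
    return total * 10.0 - imbalance * 3.0  # arbitrary units

# Evaluate many candidate allocations and keep the one with the highest
# predicted energy, subject to an assumed per-well range (20-60 kg/s)
# and an assumed fixed total injection budget.
candidate_rates = range(20, 61, 10)   # per-well options, kg/s
TOTAL_BUDGET = 160                    # total injected flow, kg/s

best = max(
    (a for a in itertools.product(candidate_rates, repeat=4)
     if sum(a) == TOTAL_BUDGET),
    key=predicted_energy,
)
print(best, predicted_energy(best))   # (40, 40, 40, 40) 1600.0
```

    Because the surrogate evaluates in microseconds rather than hours, exhaustively scoring thousands of candidate allocations becomes practical, which is the core advantage the study reports (0.9 s versus ~4 h per scenario).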

  2. Data_Sheet_1_Synthetic artificial intelligence using generative adversarial...

    • frontiersin.figshare.com
    docx
    Updated Jun 22, 2023
    Cite
    Zhaoran Wang; Gilbert Lim; Wei Yan Ng; Tien-En Tan; Jane Lim; Sing Hui Lim; Valencia Foo; Joshua Lim; Laura Gutierrez Sinisterra; Feihui Zheng; Nan Liu; Gavin Siew Wei Tan; Ching-Yu Cheng; Gemmy Chui Ming Cheung; Tien Yin Wong; Daniel Shu Wei Ting (2023). Data_Sheet_1_Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration.docx [Dataset]. http://doi.org/10.3389/fmed.2023.1184892.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 22, 2023
    Dataset provided by
    Frontiers
    Authors
    Zhaoran Wang; Gilbert Lim; Wei Yan Ng; Tien-En Tan; Jane Lim; Sing Hui Lim; Valencia Foo; Joshua Lim; Laura Gutierrez Sinisterra; Feihui Zheng; Nan Liu; Gavin Siew Wei Tan; Ching-Yu Cheng; Gemmy Chui Ming Cheung; Tien Yin Wong; Daniel Shu Wei Ting
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction
    Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, screening for AMD is resource-intensive and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for effective detection of various eye diseases from retinal fundus images, but developing such robust systems requires large datasets, which can be limited by disease prevalence and patient privacy. In the case of AMD, the advanced phenotype is often too scarce for DL analysis, which may be tackled by generating synthetic images using Generative Adversarial Networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale.

    Methods
    To build our GAN models, a total of 125,012 fundus photos from a real-world non-AMD phenotypical dataset were used. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale, respectively.

    Results and discussion
    The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust: the residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61–0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that can fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
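    The grading statistics reported above (overall accuracy and Cohen's kappa) can be computed directly from paired labels. A minimal sketch, with invented toy labels standing in for the residents' real-vs-synthetic gradings:

```python
# Toy labels, invented for illustration: 1 = "real", 0 = "synthetic".
truth  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]   # ground truth
graded = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]   # a grader's calls

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    """Agreement beyond chance between grader and ground truth."""
    n = len(y_true)
    po = accuracy(y_true, y_pred)                  # observed agreement
    labels = set(y_true) | set(y_pred)
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n)
             for c in labels)                      # chance agreement
    return (po - pe) / (1 - pe)

print(accuracy(truth, graded))       # 0.7
print(cohens_kappa(truth, graded))   # 0.4
```

    A kappa near the study's 0.320 indicates only weak agreement beyond chance, which is why the residents' gradings are described as having "limited ability" to separate real from synthetic images.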

  3. Details of the datasets.

    • plos.figshare.com
    xls
    Updated Feb 7, 2025
    Cite
    Gülcan Gencer; Kerem Gencer (2025). Details of the datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0318657.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Feb 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Gülcan Gencer; Kerem Gencer
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background
    Retinal problems are critical because they can cause severe vision loss if not treated. Traditional methods for diagnosing retinal disorders often rely heavily on manual interpretation of optical coherence tomography (OCT) images, which can be time-consuming and dependent on the expertise of ophthalmologists. This leads to challenges in early diagnosis, especially as retinal diseases like diabetic macular edema (DME), drusen, and choroidal neovascularization (CNV) become more prevalent. OCT helps ophthalmologists diagnose patients more accurately by allowing for early detection. This paper offers an SE (Squeeze-and-Excitation)-enhanced hybrid model for detecting retinal disorders, including DME, drusen, and CNV, from OCT images using artificial intelligence and deep learning.

    Methods
    The model integrates SE blocks with the EfficientNetB0 and Xception architectures, both of which perform strongly in image classification tasks. EfficientNetB0 achieves high accuracy with fewer parameters through model scaling strategies, while Xception offers powerful feature extraction using depthwise separable convolutions. Combining these architectures enhances both the efficiency and the classification performance of the model, enabling more accurate detection of retinal disorders from OCT images. Additionally, SE blocks increase the representational ability of the network by adaptively recalibrating per-channel feature responses.

    Results
    The combined features from EfficientNetB0 and Xception are processed via fully connected layers and classified using the softmax function. The methodology was tested on the UCSD and Duke OCT datasets and produced excellent results. The proposed SE-enhanced hybrid model outperformed the current best-known approaches, with accuracy rates of 99.58% on the UCSD dataset and 99.18% on the Duke dataset.

    Conclusion
    These findings emphasize the model's ability to effectively diagnose retinal disorders from OCT images and indicate substantial promise for developing computer-aided diagnostic tools in ophthalmology.
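    The per-channel recalibration that SE blocks perform can be sketched in a few lines. This is a NumPy illustration with random (untrained) weights and an assumed reduction ratio of 4, not the paper's actual EfficientNetB0/Xception implementation:

```python
import numpy as np

def se_block(feature_map, reduction=4, seed=0):
    """Squeeze-and-Excitation recalibration for an (H, W, C) feature map.

    Minimal sketch: in the real model the two fully connected layers
    have learned weights; here they are random for illustration.
    """
    rng = np.random.default_rng(seed)
    h, w, c = feature_map.shape
    squeezed = feature_map.mean(axis=(0, 1))        # squeeze: (C,) global pool
    w1 = rng.standard_normal((c, c // reduction))   # excitation FC 1
    w2 = rng.standard_normal((c // reduction, c))   # excitation FC 2
    hidden = np.maximum(squeezed @ w1, 0.0)         # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gate, (C,)
    return feature_map * scale                      # rescale each channel

x = np.ones((8, 8, 16))
y = se_block(x)
print(y.shape)   # (8, 8, 16)
```

    The output keeps the input's shape; each channel is multiplied by a sigmoid gate in (0, 1), which is the "adaptive recalibration of per-channel feature responses" described above.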

  4. Artificial Intelligence (AI) In Construction Market By Application (Field...

    • verifiedmarketresearch.com
    Updated Nov 6, 2024
    Cite
    VERIFIED MARKET RESEARCH (2024). Artificial Intelligence (AI) In Construction Market By Application (Field Management, Project Management), Industry Type (Heavy Construction, Institutional Commercials), & Region for 2024-2031 [Dataset]. https://www.verifiedmarketresearch.com/product/artificial-intelligence-ai-in-construction-market/
    Explore at:
    Dataset updated
    Nov 6, 2024
    Dataset provided by
    Verified Market Research (https://www.verifiedmarketresearch.com/)
    Authors
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2024 - 2031
    Area covered
    Global
    Description

    Artificial Intelligence (AI) In Construction Market size was valued at USD 1.53 Billion in 2024 and is projected to reach USD 14.21 Billion by 2031, growing at a CAGR of 36.00% during the forecast period 2024-2031.

    Global Artificial Intelligence (AI) In Construction Market Drivers

    Technological Progress
    • Data Availability and Big Data Analytics: The construction sector generates enormous amounts of data from sources such as Building Information Modeling (BIM), drones, and Internet of Things (IoT) sensors. AI uses this data to improve decision-making, streamline workflows, and offer predictive insights. AI applications become more reliable and accurate when big data analytics is used to handle and analyze complex datasets.
    • Automation and Machine Learning: Advances in machine learning algorithms enable more complex and precise predictive models. AI automation increases efficiency by optimizing processes including resource allocation, project management, and scheduling. AI-powered robotics are also being used to increase safety and decrease human error in jobs like welding, demolition, and bricklaying.
    • Computer Vision: This technology is particularly transformative in construction. AI-powered computer vision can monitor site progress, ensure safety compliance, and detect defects in real time. Drones and cameras equipped with AI analyze construction sites to provide actionable insights, improving quality control and reducing costly rework.

    Economic Factors
    • Cost Reduction: AI helps significantly reduce the costs associated with construction projects. Through predictive maintenance, AI minimizes downtime and extends the life of equipment. Optimized resource management ensures materials are used efficiently, reducing waste and costs. Furthermore, AI-driven project management tools can prevent delays and their associated costs by identifying potential issues early.
    • Competitive Advantage: Companies adopting AI technologies gain a competitive edge by enhancing their efficiency, reducing operational costs, and delivering projects faster. This is increasingly important in a highly competitive industry where margins are often tight. Early adopters of AI in construction are likely to set industry benchmarks and attract more business.

    Operational Efficiencies
    • Enhanced Productivity: AI streamlines construction processes by automating repetitive tasks, improving scheduling, and optimizing workflows. This increases productivity and allows human workers to focus on more complex, value-added activities. AI also enhances the accuracy of labor forecasting and deployment, ensuring optimal use of human resources.
    • Improved Safety: Safety is a critical concern in construction. AI technologies, such as wearable devices and computer vision, monitor worker movements and site conditions in real time to detect hazards and prevent accidents. AI-driven predictive analytics can foresee potential safety issues, allowing proactive measures to mitigate risks.
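    The valuation and the 2031 projection are tied together by the standard compound-annual-growth-rate relationship. A quick check with the published figures (small differences from the stated 36.00% typically reflect rounding or base-year conventions in market reports):

```python
# CAGR implied by the published figures (USD billions).
# 2024 to 2031 spans 7 growth intervals.
start_value, end_value, years = 1.53, 14.21, 7

cagr = (end_value / start_value) ** (1 / years) - 1
projected = start_value * (1 + 0.36) ** years   # value using the stated 36% CAGR

print(f"implied CAGR: {cagr:.1%}")       # ~37.5%
print(f"value at 36%: {projected:.2f}")  # ~13.17
```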

  5. DataSheet_1_Convolutional Neural Net-Based Cassava Storage Root Counting...

    • frontiersin.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    John Atanbori; Maria Elker Montoya-P; Michael Gomez Selvaraj; Andrew P. French; Tony P. Pridmore (2023). DataSheet_1_Convolutional Neural Net-Based Cassava Storage Root Counting Using Real and Synthetic Images.pdf [Dataset]. http://doi.org/10.3389/fpls.2019.01516.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Frontiers
    Authors
    John Atanbori; Maria Elker Montoya-P; Michael Gomez Selvaraj; Andrew P. French; Tony P. Pridmore
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state of the art in many image processing and analysis tasks, there are currently few Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, that represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regression model, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model, made possible by generating synthetic images with a conditional Generative Adversarial Network (GAN) to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low-cost aeroponic growth system. The resulting system first predicts the age group ('young' or 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects the appropriate model to predict the number of storage roots. We achieve 91% accuracy in predicting the age of storage roots, and 86% and 71% overall percentage agreement in counting 'old' and 'young' storage roots, respectively. Thus we demonstrate that synthetically generated cassava root images can supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
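    The two-stage structure described above (classify the age group first, then dispatch to the count model trained for that group) can be sketched as follows; the threshold classifier, feature names, and scaling-based count models are invented placeholders, not the paper's trained CNNs:

```python
# Minimal sketch of a two-stage count pipeline. All features and
# thresholds below are illustrative, not measured cassava values.
def predict_age_group(image_features):
    # Stand-in for the CNN age classifier ('young' vs 'old' roots).
    return "old" if image_features["mean_root_width"] > 3.0 else "young"

def count_storage_roots(image_features):
    # Stage 1: predict the age group; stage 2: dispatch to the
    # image-to-count model trained for that group.
    count_models = {
        "young": lambda f: round(f["root_pixels"] / 450),  # invented scale
        "old":   lambda f: round(f["root_pixels"] / 800),
    }
    group = predict_age_group(image_features)
    return group, count_models[group](image_features)

group, n = count_storage_roots({"mean_root_width": 4.2, "root_pixels": 4000})
print(group, n)   # old 5
```

    Routing each image to a model specialized for its age group is the design choice the paper reports, since 'young' and 'old' roots differ enough in appearance that a single count model performs worse.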
