16 datasets found
  1. Code and dataset for publication "Laser Wakefield Accelerator modelling with...

    • zenodo.org
    zip
    Updated Jan 8, 2023
    Cite
    M. J. V. Streeter; M. J. V. Streeter (2023). Code and dataset for publication "Laser Wakefield Accelerator modelling with Variational Neural Networks" [Dataset]. http://doi.org/10.5281/zenodo.7510352
    Available download formats: zip
    Dataset updated
    Jan 8, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    M. J. V. Streeter; M. J. V. Streeter
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and code for reproducing figures in published work.

    High Power Laser Science and Engineering

    https://doi.org/10.1017/hpl.2022.47

    The code used various Python packages, including TensorFlow.

    The conda environment was created with (on 6th Jan 2022):
    conda create --name tf tensorflow notebook tensorflow-probability pandas tqdm scikit-learn matplotlib seaborn protobuf opencv scipy scikit-image scikit-optimize Pillow PyAbel libclang flatbuffers gast --channel conda-forge

  2. Data from: Computational analyses of dynamic visual courtship display reveal...

    • dataone.org
    • data.niaid.nih.gov
    • +2more
    Updated Jul 21, 2025
    Cite
    Noori Choi; Eileen Hebets; Dustin Wilgers (2025). Computational analyses of dynamic visual courtship display reveal diet-dependent and plastic male signaling in Rabidosa rabida wolf spiders [Dataset]. http://doi.org/10.5061/dryad.sbcc2frb6
    Dataset updated
    Jul 21, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Noori Choi; Eileen Hebets; Dustin Wilgers
    Time period covered
    Jan 1, 2023
    Description

    It has long been a challenge to quantify the variation in dynamic motions to understand how those displays function in animal communication. The traditional approach is dependent on labor-intensive manual identification/annotation by experts. However, recent progress in computational techniques provides researchers with toolsets for rapid, objective, and reproducible quantification of dynamic visual displays. In the present study, we investigated the effects of diet manipulation on dynamic visual components of male courtship displays of Rabidosa rabida wolf spiders using machine learning algorithms. Our results suggest that (i) the computational approach can provide insight into the variation in the dynamic visual display between high- and low-diet males which is not clearly shown with the traditional approach, and (ii) males may plastically alter their courtship display according to the body size of females they encounter. Through the present study, we add an example of the utili...

    Raw data - We recorded male courtship with a Photron Fastcam 1024 PCI 100k high-speed camera (Photron USA, San Diego, CA, USA) and a Sony DCR-HC65 NTSC Handycam (Sony Electronics Inc., USA). Then, we analyzed the movement of the foreleg and pedipalps during the selected courtship bouts using ProAnalyst Lite software (Xcitex Inc., Woburn, Massachusetts, USA). We first set the x-axis and y-axis by where the pedipalp tip was in contact with the substrate (y-position 0) and the most posterior point of the abdomen (x-position 0) at the beginning of the courtship bout. When the foreleg or pedipalps did not move during the courtship bout, the location of the joint was recorded as the location of the parts at the cocked position. Where the image was blurred, the location of blurred points was estimated based on the previous or subsequent frames or other parts in the current frame.

    # Computational analyses of the courtship dance of male wolf spiders

    • 4 Python codes, 1 R code and 4 CSV files are included.
    1. 0_raw_data_process.py
    • fills the non-observed values with the initial position of each feature
    • creates gif and png figures to describe the visual display
    • requires the following packages:
      • numpy, pandas, seaborn, matplotlib, math
    2. 1_rabidosa_pose_cluster.py
    • conducts clustering of foreleg postures from each frame
    • using UMAP and HDBSCAN
    • requires the following packages:
      • umap, hdbscan, pickle, pandas, numpy, tensorflow, seaborn, matplotlib, scipy, sklearn
    3. 2_rabidosa_LSTM.py
    • trains and saves an LSTM model of the dynamic visual display of male R. rabida
    • clusters visual displays using UMAP and HDBSCAN
    • requires the following packages:
      • umap, hdbscan, pickle, pandas, numpy, tensorflow, seaborn, matplotlib, tsaug, sklearn
    4. 3_trad_clustering.py
    • clusters visual displays using traditional features with UMAP and HDBSCAN
    • require the...
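The gap-filling step of 0_raw_data_process.py can be sketched with pandas; the column names and coordinates below are hypothetical, assuming tracked body-part positions with NaN marking non-observed frames:

```python
import numpy as np
import pandas as pd

# Hypothetical tracked coordinates; NaN marks frames where a part was not observed.
coords = pd.DataFrame({
    "foreleg_x": [1.0, np.nan, 1.2, np.nan],
    "foreleg_y": [0.5, 0.6, np.nan, 0.7],
})

# Fill non-observed values with the initial position of each feature,
# i.e. the value recorded in the first frame.
filled = coords.fillna(coords.iloc[0])
print(filled)
```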
  3. Neural Networks in Friction Factor Analysis of Smooth Pipe Bends

    • data.mendeley.com
    Updated Dec 19, 2022
    + more versions
    Cite
    Adarsh Vasa (2022). Neural Networks in Friction Factor Analysis of Smooth Pipe Bends [Dataset]. http://doi.org/10.17632/sjvbwh5ckg.1
    Dataset updated
    Dec 19, 2022
    Authors
    Adarsh Vasa
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PROGRAM SUMMARY
    No. of lines in distributed program, including test data, etc.: 481
    No. of bytes in distributed program, including test data, etc.: 14540.8
    Distribution format: .py, .csv
    Programming language: Python
    Computer: Any workstation or laptop computer running TensorFlow, Google Colab, Anaconda, Jupyter, pandas, NumPy, Microsoft Azure and Alteryx
    Operating system: Windows, Mac OS, Linux

    Nature of problem: Navier-Stokes equations are solved numerically in ANSYS Fluent using the Reynolds stress model for turbulence. The simulated values of friction factor are validated against theoretical and experimental data obtained from the literature. Artificial neural networks are then used for a prediction-based augmentation of the friction factor. The capabilities of the neural networks are discussed with regard to computational cost and domain limitations.

    Solution method: The simulation data is obtained through Reynolds stress modelling of fluid flow through a pipe. This data is augmented using an artificial neural network model that predicts both within and outside the data domain.

    Restrictions: The code used in this research is limited to smooth pipe bends, in which friction factor is analysed using a steady state incompressible fluid flow.

    Runtime: The artificial neural network produces results within a span of 20 seconds for three-dimensional geometry, using the allocated free computational resources of Google Colaboratory cloud-based computing system.
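The dataset's network is built with TensorFlow; as a rough stand-in, the prediction setup can be sketched with scikit-learn's MLPRegressor on synthetic data. The Blasius-like correlation, input ranges and layer sizes below are illustrative, not the paper's model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training data (illustrative only): Reynolds number and bend
# curvature ratio as inputs, a Blasius-like friction factor as target.
Re = rng.uniform(1e4, 1e5, 500)
curvature = rng.uniform(0.02, 0.2, 500)
f = 0.316 * Re**-0.25 * (1 + 0.5 * curvature)  # toy correlation, not the paper's

X = np.column_stack([Re, curvature])
scaler = StandardScaler().fit(X)

# Small dense network as a stand-in for the TensorFlow model.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), f)

# Predict friction factor for an unseen (Re, curvature) point.
pred = model.predict(scaler.transform([[5e4, 0.1]]))
print(pred)
```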

  4. API Database of Python frameworks & Labeled Issues

    • data.niaid.nih.gov
    Updated Aug 4, 2021
    Cite
    Anonymous Authors (2021). API Database of Python frameworks & Labeled Issues [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_2756358
    Dataset updated
    Aug 4, 2021
    Authors
    Anonymous Authors
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PyLibAPIs.7z: contains public API data (a MongoDB dump) for these frameworks:

    TensorFlow

    Keras

    scikit-learn

    Pandas

    Flask

    Django

    Label.xlsx: contains issues and their labels

  5. Data product and code for: Spatiotemporal Distribution of Dissolved...

    • zenodo.org
    nc, zip
    Updated Dec 30, 2024
    Cite
    Tobias Ehmen; Tobias Ehmen; Neill Mackay; Neill Mackay; Andrew Watson; Andrew Watson (2024). Data product and code for: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning [Dataset]. http://doi.org/10.5281/zenodo.14575969
    Available download formats: nc, zip
    Dataset updated
    Dec 30, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tobias Ehmen; Tobias Ehmen; Neill Mackay; Neill Mackay; Andrew Watson; Andrew Watson
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data product and code for: Ehmen et al.: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning

    Note that due to the data limit on Zenodo only a compressed version of the ensemble mean is uploaded here (compressed_DIC_mean_15fold_ensemble_aveRMSE7.46_0.15TTcasts_1990-2023.nc). Individual ensemble members can be generated through the weight and scaler files found in weights_and_scalers_DIC_paper.zip and the code "ResNet_DIC_loading_past_prediction_2024-12-28.py" (see description below).

    EN4_thickness_GEBCO.nc contains the scaling factors used in "plot_carbon_inventory_for_ensemble_2024-01-27.py" (see description below).
    DIC_paper_code_Ehmen_et_al.zip contains the python code used to generate products and figures.

    Prerequisites: Python running the modules tensorflow, shap, xarray, pandas and scipy. Plots additionally use matplotlib, cartopy, seaborn, statsmodels, gsw and cmocean.

    The main scripts used to generate reconstructions are “ResNet_DIC_2024-12-28.py” (for new training runs) and “ResNet_DIC_loading_past_prediction_2024-12-28.py” (for already trained past weight and scaler files). Usage:

    • Assign the correct directories in the function “create_directories” according to your own system. You won’t need the same if-statements for individual platforms and computers
    • Download the most recent version of GLODAP and store it in the directory chosen in “create_directories”. Check if the filename is the same as used in “import_GLODAP_dataset”. Unless the GLODAP creators change their naming system of the columns, newer versions can be used instead of GLODAPv2.2023
    • Download the HOT, BATS and Drake Passage time series and ensure the filenames are the same as in “import_time_series_data”. Store them in the time series directory chosen in “create_directories”. This is optional and the time series prediction can be commented out.
    • Download EN4 analysis files for the years you want and store them in the EN4 analysis directory chosen in “create_directories”. For the reconstruction to be created from all available EN4 analysis files, the variable prediction_to_file needs to be True, otherwise only a single time slice will be predicted (but not saved) for testing and plotting.
    • If you want to generate reconstructions from pre-trained models, make sure the “scalers” and “weight_files” subdirectories are correctly stored in the “training” directory defined in “create_directories”.
    • Store the synthetic dataset of ECCO-Darwin values at GLODAP locations in the directory chosen in “create_directories”. For predicting the full model fields ECCO-Darwin needs to be in a csv-style format (for use in pandas dataframes), i.e. the multi-dimensional data needs to be flattened. Store these altered csv-style files in the directory chosen in “create_directories”
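The flattening described in the last step can be sketched with numpy and pandas; the grid, variable names and DIC values below are placeholders:

```python
import numpy as np
import pandas as pd

# Placeholder 3D model field on a (depth x lat x lon) grid.
depth = [0.0, 10.0]
lat = [-60.0, 0.0, 60.0]
lon = [0.0, 120.0, 240.0]
field = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)

# Flatten to a csv-style long format usable in pandas dataframes:
# one row per (depth, lat, lon) grid point.
d, la, lo = np.meshgrid(depth, lat, lon, indexing="ij")
df = pd.DataFrame({
    "depth": d.ravel(),
    "lat": la.ravel(),
    "lon": lo.ravel(),
    "DIC": field.ravel(),
})
# df.to_csv("ecco_darwin_flat.csv", index=False)  # filename is illustrative
print(df.shape)
```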

    Once a reconstruction has been generated the following scripts found in the subdirectory “working_with_finished_reconstructions” can be used:

    • ensemble_create_mean_and_std_2023-11-27.py: this creates an ensemble mean from ideally 15 ensemble members (number can be adjusted, if less reconstruction files are found than this number it is adjusted automatically). For DIC it also calculates the uncertainty following the method by Keppler et al. 2023.
    • plot_carbon_inventory_for_ensemble_2024-01-27.py: plots the carbon inventory change for DIC from both ensemble mean and the individual ensemble members. The most important settings are the default. Other options include plotting the seasonal change, others are not supported in this version as they require additional files not supplied here.
    • depth_slices_and_zonal_means_full_prediction_2024-07-05.py: creates several world maps for individual depths and zonal means for the Indian, Atlantic and Pacific Ocean.
    • Hovmoeller_plots_from_predictions_2024-05-02.py: generates simplified Hovmöller plots from individual reconstructions.
    • DIC_comparison_with_other_products_2024-06-27: interpolates and compares this product with climatologies and products from other studies. These need to be downloaded first. Products can be excluded if they are removed from the list “files_to_compare”.
  6. Fracture network segmentation

    • darus.uni-stuttgart.de
    Updated Jul 7, 2021
    Cite
    Dongwon Lee; Karadimitriou Nikolaos; Holger Steeb (2021). Fracture network segmentation [Dataset]. http://doi.org/10.18419/DARUS-1847
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 7, 2021
    Dataset provided by
    DaRUS
    Authors
    Dongwon Lee; Karadimitriou Nikolaos; Holger Steeb
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    This dataset contains the codes to reproduce the five different segmentation results of the paper Lee et al. (2021). The original dataset, before applying these segmentation codes, can be found in Ruf & Steeb (2020). The segmentation methods adopted to identify the micro-fractures within the original dataset are Local threshold, Sato, Chan-Vese, Random forest and a U-net model. The Local threshold, Sato and U-net models are written in Python. The codes require Python 3.7.7 or above with the tensorflow, keras, pandas, scipy, scikit and numpy libraries. The workflow of the Chan-Vese method is implemented in Matlab 2018b. The result of the Random forest method can be reproduced with the uploaded trained model in the open-source program ImageJ and the trainableWeka library. For further details of operation, please refer to the readme.txt file.

  7. Bird Species Image Classification Dataset

    • kaggle.com
    Updated Jun 11, 2025
    Cite
    Evil Spirit05 (2025). Bird Species Image Classification Dataset [Dataset]. https://www.kaggle.com/datasets/evilspirit05/birds-species-prediction
    Available download formats: Croissant
    Dataset updated
    Jun 11, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Evil Spirit05
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset contains high-quality images of six distinct bird species, curated for use in image classification, computer vision, and biodiversity research tasks. Each bird species included in this dataset is well-represented, making it ideal for training and evaluating deep learning models.

    Label  Species Name        Image Count
    1      American Goldfinch  143
    2      Emperor Penguin     139
    3      Downy Woodpecker    137
    4      Flamingo            132
    5      Carmine Bee-eater   131
    6      Barn Owl            129

    📂 Dataset Highlights:
    • Total Images: 811
    • Classes: 6 unique bird species
    • Balanced Labels: Nearly equal distribution across classes
    • Use Cases: Image classification, model benchmarking, transfer learning, educational projects, biodiversity analysis

    🧠 Potential Applications:
    • Training deep learning models like CNNs for bird species recognition
    • Fine-tuning pre-trained models using a small and balanced dataset
    • Educational projects in ornithology and computer vision
    • Biodiversity and wildlife conservation tech solutions

    🛠️ Suggested Tools:
    • Python (Pandas, NumPy, Matplotlib)
    • TensorFlow / PyTorch for model development
    • OpenCV for image preprocessing
    • Streamlit for creating interactive demos
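For the transfer-learning and benchmarking use cases above, a common first step is a stratified train/test split over the image paths; the folder layout and file names below are hypothetical:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file listing; real paths come from the downloaded dataset folders.
df = pd.DataFrame({
    "filepath": [f"birds/{s}/{i}.jpg"
                 for s in ["American_Goldfinch", "Barn_Owl"] for i in range(10)],
    "label": ["American_Goldfinch"] * 10 + ["Barn_Owl"] * 10,
})

# Stratified split preserves the near-balanced class distribution in both subsets.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42)
print(train_df["label"].value_counts().to_dict())
```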

  8. Image enhancement code: time-resolved tomograms of EICP application using 3D...

    • darus.uni-stuttgart.de
    Updated Feb 7, 2023
    Cite
    Dongwon Lee; Holger Steeb (2023). Image enhancement code: time-resolved tomograms of EICP application using 3D U-net [Dataset]. http://doi.org/10.18419/DARUS-2991
    Available download formats: Croissant
    Dataset updated
    Feb 7, 2023
    Dataset provided by
    DaRUS
    Authors
    Dongwon Lee; Holger Steeb
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    This dataset contains the codes to reproduce the results of "Time resolved micro-XRCT dataset of Enzymatically Induced Calcite Precipitation (EICP) in sintered glass bead columns", cf. https://doi.org/10.18419/darus-2227. The code takes "low-dose" images as input, which contain many artifacts and noise as a trade-off of fast data acquisition (6 min per dataset versus 3 hours per dataset in the normal "high-dose" configuration). These low-quality images can be improved with the help of a pre-trained model. The pre-trained model provided here was trained on pairs of "high-dose" and "low-dose" data from the above-mentioned EICP application. Examples of the training, input and output data used can also be found in this dataset. Although only limited examples are shown here, we would like to emphasize that the workflow and codes can be further extended to general image enhancement applications. The code requires Python 3.7.7 or above with packages such as the tensorflow, keras, pandas, scipy, scikit, numpy and patchify libraries. For further details of operation, please refer to the readme.txt file.
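The dependency list includes patchify for tiling tomogram slices before feeding them to the U-net; the same idea can be sketched in plain numpy (the 256x256 slice and 64-pixel patch size below are assumptions):

```python
import numpy as np

def to_patches(img, patch):
    """Split a 2D image into non-overlapping (patch x patch) tiles.

    Assumes the image dimensions are multiples of `patch`; the repository
    itself uses the patchify library for this step.
    """
    h, w = img.shape
    return (img.reshape(h // patch, patch, w // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch, patch))

# Placeholder tomogram slice; real data comes from the linked XRCT dataset.
img = np.zeros((256, 256), dtype=np.float32)
patches = to_patches(img, 64)
print(patches.shape)  # (16, 64, 64)
```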

  9. Supporting data for “Deep learning methods and applications to digital...

    • datahub.hku.hk
    Updated Oct 3, 2024
    Cite
    Shichao Ma (2024). Supporting data for “Deep learning methods and applications to digital health” [Dataset]. http://doi.org/10.25442/hku.27060427.v1
    Dataset updated
    Oct 3, 2024
    Dataset provided by
    HKU Data Repository
    Authors
    Shichao Ma
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This repository contains three folders holding either the data or the source code for the three main chapters (Chapters 3, 4 and 5) of the thesis:

    1) Dataset (Chapter 3): /PhysioNet2016 contains the phonocardiogram signals used in Chapters 3 and 4 as the upstream pretraining data; this is a public dataset. /SourceCode includes all the statistical analysis and visualization scripts for Chapter 3. Yaseen_dataset and PASCAL contain phonocardiogram signals with pathological features: Yaseen_dataset serves as the downstream fine-tuning dataset in Chapter 3, while PASCAL serves as the secondary testing dataset in Chapter 3.
    2) Dataset (Chapter 4): /SourceCode includes all the statistical analysis and visualization scripts for Chapter 4.
    3) Dataset (Chapter 5): PAD-UFES-20_processed contains dermatology images processed from the PAD-UFES-20 dataset, which is a public dataset; it is used in Chapter 5. /SourceCode includes all the statistical analysis and visualization scripts for Chapter 5.

    Several packages are mandatory to run the source code, including: Python > 3.6 (3.11 preferred), TensorFlow > 2.16, Keras > 3.3, NumPy > 1.26, Pandas > 2.2, SciPy > 1.13.

  10. DeepB3Pred - code

    • figshare.com
    zip
    Updated Sep 17, 2025
    + more versions
    Cite
    Saleh Musleh (2025). DeepB3Pred - code [Dataset]. http://doi.org/10.6084/m9.figshare.30149158.v1
    Available download formats: zip
    Dataset updated
    Sep 17, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Saleh Musleh
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    DeepB3Pred uses the following dependencies: MATLAB 2018a, Python 3.10, numpy, scipy, pandas, scikit-learn, catboost 1.1.1, gc_forset, xgboost 1.5.0, tensorflow 1.15.0, Keras 2.1.6.

    Guiding principles: The data contains a training dataset and a testing dataset. The training dataset TR_BB.fasta includes BBB_pos and BBB_neg training samples; the testing dataset TS_BB includes BBB_pos and BBB_neg testing samples.

    Feature_Extraction: CPSR is the implementation of component protein sequence representation, MCTD is the implementation of composition-transition and distribution, and GSFE is the implementation of graphical and statistical-based feature engineering.

    Classifier: DeepB3Pred.py is the implementation of the proposed method to predict B3PPs and non-B3PPs.

  11. Emotion Prediction with Quantum5 Neural Network AI

    • kaggle.com
    zip
    Updated Oct 19, 2025
    Cite
    EMİRHAN BULUT (2025). Emotion Prediction with Quantum5 Neural Network AI [Dataset]. https://www.kaggle.com/datasets/emirhanai/emotion-prediction-with-semi-supervised-learning
    Available download formats: zip (2332683 bytes)
    Dataset updated
    Oct 19, 2025
    Authors
    EMİRHAN BULUT
    License

    CC0 1.0 Universal, https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Emotion Prediction with Quantum5 Neural Network AI Machine Learning - By Emirhan BULUT

    V1

    I have created artificial intelligence software that can predict emotion from the text you write, using the semi-supervised learning method and the RC algorithm. The code is very simple and focused on solving the problem. I aim to create a second version of the software using an RNN (recurrent neural network). I hope this serves as an example you can use in your theses and projects.

    V2

    I decided to apply a technique I had developed to the emotion dataset I had previously used with semi-supervised learning. This technique is produced according to Quantum5 laws. I developed artificial intelligence software that can predict emotion with Quantum5 neural networks, and I share it with all humanity as open source on Kaggle. It is my first open-source NLP project with Quantum technology. Developing an NLP system with Quantum technology is very exciting!

    Happy learning!

    Emirhan BULUT

    Head of AI and AI Inventor

    Emirhan BULUT. (2022). Emotion Prediction with Quantum5 Neural Network AI [Data set]. Kaggle. https://doi.org/10.34740/KAGGLE/DS/2129637

    The coding language used:

    Python 3.9.8

    Libraries Used:

    Keras

    Tensorflow

    NumPy

    Pandas

    Scikit-learn (SKLEARN)


    Developer Information:

    Name-Surname: Emirhan BULUT

    Contact (Email) : emirhan@isap.solutions

    LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/

    Kaggle: https://www.kaggle.com/emirhanai

    Official Website: https://www.emirhanbulut.com.tr

  12. Image Classification by CNN

    • kaggle.com
    zip
    Updated Mar 4, 2024
    Cite
    Harsh Jaglan (2024). Image Classification by CNN [Dataset]. https://www.kaggle.com/datasets/harshjaglan01/image-classification-by-cnn/code
    Available download formats: zip (311627190 bytes)
    Dataset updated
    Mar 4, 2024
    Authors
    Harsh Jaglan
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Automated Flower Identification Using Convolutional Neural Networks

    This project aims to develop a model for identifying five different flower species (rose, tulip, sunflower, dandelion, daisy) using Convolutional Neural Networks (CNNs).

    Description

    The dataset consists of 5,000 images (1,000 images per class) collected from various online sources. The model achieved an accuracy of 98.58% on the test set.

    Usage

    This project requires Python 3.x and the following libraries:

    • TensorFlow: for building neural networks
    • numpy: for numerical computing and array operations
    • pandas: for data manipulation and analysis
    • matplotlib: for creating visualizations such as line plots, bar plots, and histograms
    • seaborn: for advanced data visualization and statistically-informed graphics
    • scikit-learn: for machine learning algorithms and model training

    To run the project:

    Clone this repository.

    Install the required libraries.
    Run the Jupyter Notebook: jupyter notebook flower_classification.ipynb

    Additional Information
    Link to code: https://github.com/Harshjaglan01/flower-classification-cnn
    License: MIT License

  13. Global Machine Learning Framework Market Research Report: By Application...

    • wiseguyreports.com
    Updated Aug 23, 2025
    + more versions
    Cite
    (2025). Global Machine Learning Framework Market Research Report: By Application (Natural Language Processing, Computer Vision, Speech Recognition, Predictive Analytics, Reinforcement Learning), By Deployment Mode (Cloud-Based, On-Premises, Hybrid), By End User (BFSI, Healthcare, Retail, IT & Telecom, Manufacturing), By Framework Type (Open Source, Commercial, Proprietary) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2035 [Dataset]. https://www.wiseguyreports.com/cn/reports/machine-learning-framework-market
    Dataset updated
    Aug 23, 2025
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Aug 25, 2025
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2023
    REGIONS COVERED: North America, Europe, APAC, South America, MEA
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2024: 4.65 (USD Billion)
    MARKET SIZE 2025: 5.51 (USD Billion)
    MARKET SIZE 2035: 30.0 (USD Billion)
    SEGMENTS COVERED: Application, Deployment Mode, End User, Framework Type, Regional
    COUNTRIES COVERED: US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA
    KEY MARKET DYNAMICS: rising demand for automation, increasing big data utilization, growth in AI adoption, need for real-time analytics, surge in cloud-based services
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: NVIDIA, Apache, Microsoft, H2O.ai, Google, Alteryx, Oracle, C3.ai, Pandas, DataRobot, Facebook, Amazon, RapidMiner, Keras, TensorFlow, IBM
    MARKET FORECAST PERIOD: 2025 - 2035
    KEY MARKET OPPORTUNITIES: Increased demand for automation, Growth in AI-based applications, Expansion in edge computing, Rising need for real-time data processing, Surge in personalized customer experiences
    COMPOUND ANNUAL GROWTH RATE (CAGR): 18.4% (2025 - 2035)
  14. Images used for training, validation, and testing.

    • kaggle.com
    Updated Mar 15, 2024
    Cite
    Chrysthian Chrisley (2024). Images used for training, validation, and testing. [Dataset]. https://www.kaggle.com/datasets/chrysthian/images-used-for-training-validation-and-testing
    Available download formats: Croissant
    Dataset updated
    Mar 15, 2024
    Dataset provided by
    Kaggle
    Authors
    Chrysthian Chrisley
    License

    Attribution-ShareAlike 3.0 (CC BY-SA 3.0), https://creativecommons.org/licenses/by-sa/3.0/
    License information was derived automatically

    Description

    Imports:

    # All Imports
    import os
    from matplotlib import pyplot as plt
    import pandas as pd
    from sklearn.preprocessing import LabelEncoder
    import seaborn as sns
    import matplotlib.image as mpimg
    import cv2
    import numpy as np
    import pickle
    
    # TensorFlow and Keras: layers, models, optimizers and losses
    import tensorflow as tf
    from tensorflow import keras
    from keras import Sequential
    from keras.layers import *
    
    # Optimizer
    from keras.optimizers import Adamax
    
    # PreTrained Model
    from keras.applications import *
    
    #Early Stopping
    from keras.callbacks import EarlyStopping
    import warnings 
    

    Warnings Suppression | Configuration

    # Warnings Remove 
    warnings.filterwarnings("ignore")
    
    # Define the base path for the training folder
    base_path = 'jaguar_cheetah/train'
    
    # Weights file
    weights_file = 'Model_train_weights.weights.h5'
    
    # Path to the saved or to save the model:
    model_file = 'Model-cheetah_jaguar_Treined.keras'
    
    # Model history
    history_path = 'training_history_cheetah_jaguar.pkl'
    
    # Initialize lists to store file paths and labels
    filepaths = []
    labels = []
    
    # Iterate over folders and files within the training directory
    for folder in ['Cheetah', 'Jaguar']:
      folder_path = os.path.join(base_path, folder)
      for filename in os.listdir(folder_path):
        file_path = os.path.join(folder_path, filename)
        filepaths.append(file_path)
        labels.append(folder)
    
    # Create the TRAINING dataframe
    file_path_series = pd.Series(filepaths , name= 'filepath')
    Label_path_series = pd.Series(labels , name = 'label')
    df_train = pd.concat([file_path_series ,Label_path_series ] , axis = 1)
    
    
    # Define the base path for the test folder
    directory = "jaguar_cheetah/test"
    
    filepath =[]
    label = []
    
    folds = os.listdir(directory)
    
    for fold in folds:
      f_path = os.path.join(directory , fold)
      
      imgs = os.listdir(f_path)
      
      for img in imgs:
        
        img_path = os.path.join(f_path , img)
        filepath.append(img_path)
        label.append(fold)
        
    # Create the TEST dataframe
    file_path_series = pd.Series(filepath , name= 'filepath')
    Label_path_series = pd.Series(label , name = 'label')
    df_test = pd.concat([file_path_series ,Label_path_series ] , axis = 1)
    
    # Display the first rows of the dataframe for verification
    #print(df_train)
    
    # Folders with Training and Test files
    data_dir = 'jaguar_cheetah/train'
    test_dir = 'jaguar_cheetah/test'
    
    # Image size 256x256
    IMAGE_SIZE = (256,256) 
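    The scan-and-collect pattern used above to build the dataframes can be checked on a toy directory tree. This sketch is illustrative only: the paths and file counts are invented, not the actual dataset.

```python
import os
import tempfile

import pandas as pd

# Build a toy directory tree mirroring the expected layout:
# <base>/Cheetah/*.jpg and <base>/Jaguar/*.jpg
base = tempfile.mkdtemp()
for folder, n in [('Cheetah', 3), ('Jaguar', 2)]:
    os.makedirs(os.path.join(base, folder))
    for i in range(n):
        open(os.path.join(base, folder, f'img_{i}.jpg'), 'w').close()

# Same loop structure as the training-dataframe code above
filepaths, labels = [], []
for folder in ['Cheetah', 'Jaguar']:
    folder_path = os.path.join(base, folder)
    for filename in os.listdir(folder_path):
        filepaths.append(os.path.join(folder_path, filename))
        labels.append(folder)

df = pd.concat([pd.Series(filepaths, name='filepath'),
                pd.Series(labels, name='label')], axis=1)
print(df['label'].value_counts().to_dict())  # {'Cheetah': 3, 'Jaguar': 2}
```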
    

    Train | Test

    #print('Training Images:')
    
    # Create the TRAIN dataframe
    train_ds = tf.keras.utils.image_dataset_from_directory(
      data_dir,
      validation_split=0.1,
      subset='training',
      seed=123,
      image_size=IMAGE_SIZE,
      batch_size=32)
    
    #Validation Data
    #print('Validation Images:')
    validation_ds = tf.keras.utils.image_dataset_from_directory(
      data_dir, 
      validation_split=0.1,
      subset='validation',
      seed=123,
      image_size=IMAGE_SIZE,
      batch_size=32)
    
    print('Testing Images:')
    test_ds = tf.keras.utils.image_dataset_from_directory(
      test_dir, 
      seed=123,
      image_size=IMAGE_SIZE,
      batch_size=32)
    
    # Extract labels
    train_labels = train_ds.class_names
    test_labels = test_ds.class_names
    validation_labels = validation_ds.class_names
    
    # Encode labels
    # Defining the class labels (must match the directory names exactly)
    class_labels = ['Cheetah', 'Jaguar'] 
    
    # Instantiate (encoder) LabelEncoder
    label_encoder = LabelEncoder()
    
    # Fit the label encoder on the class labels
    label_encoder.fit(class_labels)
    
    # Transform the labels for the training dataset
    train_labels_encoded = label_encoder.transform(train_labels)
    
    # Transform the labels for the validation dataset
    validation_labels_encoded = label_encoder.transform(validation_labels)
    
    # Transform the labels for the testing dataset
    test_labels_encoded = label_encoder.transform(test_labels)
    
    # Normalize the pixel values
    
    # Train files 
    train_ds = train_ds.map(lambda x, y: (x / 255.0, y))
    # Validate files
    validation_ds = validation_ds.map(lambda x, y: (x / 255.0, y))
    # Test files
    test_ds = test_ds.map(lambda x, y: (x / 255.0, y))
    
    #TRAINING VISUALIZATION
    #Count the occurrences of each category in the column
    count = df_train['label'].value_counts()
    
    # Create a figure with 2 subplots
    fig, axs = plt.subplots(1, 2, figsize=(12, 6), facecolor='white')
    
    # Plot a pie chart on the first subplot
    palette = sns.color_palette("viridis")
    sns.set_palette(palette)
    axs[0].pie(count, labels=count.index, autopct='%1.1f%%', startangle=140)
    axs[0].set_title('Distribution of Training Categories')
    
    # Plot a bar chart on the second subplot
    sns.barplot(x=count.index, y=count.values, ax=axs[1], palette="viridis")
    axs[1].set_title('Count of Training Categories')
    
    # Adjust the layout
    plt.tight_layout()
    
    # Visualize
    plt.show()
    
    # TEST VISUALIZATION
    count = df_test['label'].value_counts()
    
    # Create a figure with 2 subplots
    fig, axs = plt.subplots(1, 2, figsize=(12, 6), facecolor='white')
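    The listing truncates before the model itself is built, but the `weights_file`, `model_file` and `history_path` variables defined earlier suggest the training step that follows. A minimal sketch, assuming a small Sequential CNN with the Adamax optimizer and EarlyStopping imported above; the actual architecture and hyperparameters are not shown in the listing and are assumptions here.

```python
import tensorflow as tf
from tensorflow import keras
from keras import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.callbacks import EarlyStopping

# Hypothetical small CNN; the real architecture is not shown in the listing
model = Sequential([
    keras.Input(shape=(256, 256, 3)),
    Conv2D(16, 3, activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(1, activation='sigmoid'),  # binary output: Cheetah vs Jaguar
])
model.compile(optimizer='adamax', loss='binary_crossentropy',
              metrics=['accuracy'])

# Early stopping as imported above; patience value is an assumption
early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)

# Training and persistence would then plausibly look like:
# history = model.fit(train_ds, validation_data=validation_ds,
#                     epochs=30, callbacks=[early_stop])
# model.save_weights(weights_file)
# model.save(model_file)
# with open(history_path, 'wb') as f:
#     pickle.dump(history.history, f)
```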
    
  15. Cryptocurrency Prediction Artificial Intelligence

    • kaggle.com
    zip
    Updated Aug 5, 2025
    EMİRHAN BULUT (2025). Cryptocurrency Prediction Artificial Intelligence [Dataset]. https://www.kaggle.com/datasets/emirhanai/cryptocurrency-prediction-artificial-intelligence/versions/172
    Explore at:
    zip(319102 bytes)Available download formats
    Dataset updated
    Aug 5, 2025
    Authors
    EMİRHAN BULUT
    License

    http://www.gnu.org/licenses/agpl-3.0.html

    Description

    Cryptocurrency-Prediction-with-Artificial-Intelligence

    First version. Cryptocurrency Prediction with Artificial Intelligence (Deep Learning via LSTM Neural Networks) - Emirhan BULUT. I developed cryptocurrency prediction software using deep learning with LSTM neural networks. It predicted the fall of the XRP/USDT pair on December 28, 2021 with 98.5% accuracy. The completed software scores an MAE of 0.009179626158151918, an MSE of 0.0002120391943355104, and 98.35% accuracy.

    The XRP/USDT forecast for December 28, 2021 was correctly made from Binance data.

    The source code and documentation are shared free of charge, as open source, on GitHub and on my personal website.

    Happy learning!

    Emirhan BULUT

    Senior Artificial Intelligence Engineer & Inventor

    The coding language used:

    Python 3.9.8

    Libraries Used:

    Tensorflow - Keras

    NumPy

    Matplotlib

    Pandas

    Scikit-learn - (SKLEARN)
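    An LSTM forecaster of the kind described above consumes fixed-length windows of past prices and predicts the next step. A minimal sketch of that windowing step; the series values and window length are invented for illustration and this is not the author's code:

```python
import numpy as np

def make_windows(series, window):
    """Slice a 1-D price series into (samples, window) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

prices = np.linspace(1.0, 2.0, 10)   # stand-in for XRP/USDT closing prices
X, y = make_windows(prices, window=3)

print(X.shape, y.shape)  # (7, 3) (7,)

# For an LSTM layer, inputs are reshaped to (samples, timesteps, features)
X_lstm = X.reshape(-1, 3, 1)
```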

    Prediction chart: https://raw.githubusercontent.com/emirhanai/Cryptocurrency-Prediction-with-Artificial-Intelligence/main/XRP-1%20-%20PREDICTION.png

    Developer Information:

    Name-Surname: Emirhan BULUT

    Contact (Email) : emirhan@isap.solutions

    LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/

    Kaggle: https://www.kaggle.com/emirhanai

    Official Website: https://www.emirhanbulut.com.tr

  16. Age and Sex Prediction by Artificial Intelligence

    • kaggle.com
    zip
    Updated Nov 7, 2025
    EMİRHAN BULUT (2025). Age and Sex Prediction by Artificial Intelligence [Dataset]. https://www.kaggle.com/datasets/emirhanai/age-and-sex-prediction-by-artificial-intelligence/discussion
    Explore at:
    zip(7673754 bytes)Available download formats
    Dataset updated
    Nov 7, 2025
    Authors
    EMİRHAN BULUT
    License

    http://www.gnu.org/licenses/agpl-3.0.html

    Description

    Age and Sex Prediction from Image - Convolutional Neural Network with Artificial Intelligence

    I developed artificial intelligence software that predicts your age and gender from a photo, with a 93% accuracy rate. I am 21 years old, and it predicted my age exactly. I tuned the algorithm and wrote the code: a deep learning system built from convolutional layers of a convolutional neural network. I am pleased to offer this software to the community. Doctoral students can use it in their theses, and companies can deploy it. Upload your photo, and it will guess your age and gender!

    Kind regards,

    Emirhan BULUT

    Head of AI & AI Inventor

    The coding language used:

    Python 3.9.8

    Libraries Used:

    TensorFlow

    Keras

    OpenCV

    Matplotlib

    NumPy

    Pandas

    Scikit-learn - (SKLEARN)
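    A joint age-and-sex predictor needs two targets per image: a binary sex label and a numeric age. A minimal sketch of that target encoding; the labels, ages, and the 0-116 age range (UTKFace-style) are assumptions for illustration, not the published code:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical per-image annotations
sexes = ['male', 'female', 'female', 'male']
ages = np.array([21, 34, 5, 60], dtype=np.float32)

# Binary sex target via LabelEncoder (classes sorted: female -> 0, male -> 1)
sex_target = LabelEncoder().fit_transform(sexes)

# Age regression target scaled to [0, 1], assuming a 0-116 age range
age_target = ages / 116.0

print(sex_target.tolist())  # [1, 0, 0, 1]
```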

    Illustration: https://raw.githubusercontent.com/emirhanai/Age-and-Sex-Prediction-from-Image---Convolutional-Neural-Network-with-Artificial-Intelligence/main/Age%20and%20Sex%20Prediction%20from%20Image%20-%20Convolutional%20Neural%20Network%20with%20Artificial%20Intelligence.png

    Developer Information:

    Name-Surname: Emirhan BULUT

    Contact (Email) : emirhan@isap.solutions

    LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/

    Kaggle: https://www.kaggle.com/emirhanai

    Official Website: https://www.emirhanbulut.com.tr

